New navigation model and docs refactoring #321

Open · wants to merge 26 commits into main

Commits (26)
fe346df
Update sidebars.js
jovezhong Jun 25, 2025
a73c610
Merge branch 'main' into nav/2025.6
jovezhong Jun 25, 2025
d44674f
[sidebar] add labels for builtin integration, avoid showing ext strea…
jovezhong Jun 26, 2025
ddbe791
delete kafka-source and its reference
jovezhong Jun 26, 2025
de88654
enable docusaurus v4 features, change logo to 160x32
jovezhong Jun 26, 2025
483e4ed
mount external-stream page
jovezhong Jun 26, 2025
8bdb650
Delete getting-started.md
jovezhong Jun 26, 2025
5d90b6f
remove stream-generator and workspace
jovezhong Jun 26, 2025
6de54e5
remove install.md and its references
jovezhong Jun 26, 2025
d6753f1
merge proton-create-view.md and view.md
jovezhong Jun 26, 2025
60446b2
Delete proton-create-stream
jovezhong Jun 26, 2025
f940b8d
Delete integration-metabase.md
jovezhong Jun 26, 2025
e3d4276
Delete issues.md
jovezhong Jun 26, 2025
591f66d
big change of the nav tree
jovezhong Jun 26, 2025
5121bff
WIP new nav model
jovezhong Jun 27, 2025
8158fab
create new folder for stream processing
jovezhong Jun 28, 2025
b154698
WIP faq for TPE and Proton and compare page
jovezhong Jun 30, 2025
6d08f93
update glossary
jovezhong Jul 1, 2025
aacfaac
combine/clean up files, also add missing.js to check unmounted md
jovezhong Jul 1, 2025
495d49e
big update to showcases
jovezhong Jul 1, 2025
8dbd706
update proton vs TPE doc, refine faq and howto
jovezhong Jul 1, 2025
680a26c
Create list-pages.ts
jovezhong Jul 1, 2025
48d8be6
Update list-pages.ts
jovezhong Jul 1, 2025
bf53b81
finish TODO and move mv page up
jovezhong Jul 2, 2025
c0d27a2
fix typo and update dic.txt
jovezhong Jul 2, 2025
24aff1c
Update sidebars.js
jovezhong Jul 2, 2025
8 changes: 7 additions & 1 deletion docs/alert.md
@@ -2,7 +2,13 @@

Timeplus provides out-of-the-box charts and dashboards. You can also create [sinks](/destination) to send downsampled data to Kafka or other message buses, or notify others via email/Slack. You can even send new messages to Kafka, then consume those messages promptly in a downstream system. This could be a solution for alerting and automation.

Since it's a common use case to define and manage alerts, Timeplus started supporting alerts out-of-box.
Since it's a common use case to define and manage alerts, Timeplus supports alerting out of the box.

:::warning
Starting from Timeplus Enterprise v2.9, the alerting feature will be provided by the core SQL engine, with increased performance and stability, as well as SQL-based manageability.

The previous alerting feature will be deprecated in future releases.
:::
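
To make the sink-based approach described above concrete, here is a minimal sketch that routes aggregated alert messages to a Kafka topic via an external stream and a materialized view; the stream name `server_metrics`, its columns, the broker address, and the threshold are illustrative assumptions rather than part of the product docs.

```sql
-- Hypothetical Kafka topic that a downstream notifier consumes.
CREATE EXTERNAL STREAM cpu_alerts_topic (raw string)
SETTINGS type = 'kafka', brokers = 'kafka:9092', topic = 'cpu-alerts';

-- Materialized view that continuously writes alert messages to the topic.
CREATE MATERIALIZED VIEW cpu_alert_mv INTO cpu_alerts_topic AS
SELECT concat('high cpu on ', host, ': ', to_string(avg_cpu)) AS raw
FROM (
    SELECT window_start, host, avg(cpu) AS avg_cpu
    FROM tumble(server_metrics, 1m)
    GROUP BY window_start, host
)
WHERE avg_cpu > 90;
```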

## Create New Alert Rule

7 changes: 7 additions & 0 deletions docs/append-stream.md
@@ -0,0 +1,7 @@
# Append Stream

By default, the streams in Timeplus are Append Streams:
* They are designed to handle a continuous flow of data, where new events are added to the end of the stream.
* The data is saved in columnar storage, optimized for high-throughput and low-latency reads and writes.
* Older data can be purged automatically by setting a retention policy, which helps manage storage costs and keeps the stream size manageable.
* They offer limited capabilities to update or delete existing data, since the primary focus is on appending new data (a minimal DDL sketch follows this list).
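
A minimal DDL sketch of an append stream with a retention policy; the stream name, columns, and the exact retention clauses and values are illustrative assumptions and should be checked against your Timeplus version.

```sql
CREATE STREAM web_events
(
  event_id string,
  user_id  string,
  payload  string
)
-- Purge historical columnar data after 7 days (assumed TTL form).
TTL to_datetime(_tp_time) + INTERVAL 7 DAY
-- Keep the streaming log (NativeLog) for 1 day (assumed setting name).
SETTINGS logstore_retention_ms = 86400000;
```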
6 changes: 3 additions & 3 deletions docs/proton-architecture.md → docs/architecture.md
@@ -2,15 +2,15 @@

## High Level Architecture

The following diagram depicts the high level architecture of Proton.
The following diagram depicts the high-level architecture of the Timeplus SQL engine, starting from a single-node deployment.

![Proton Architecture](/img/proton-high-level-arch.gif)
![Architecture](/img/proton-high-level-arch.gif)

All of the components and functionalities are built into a single binary.

## Data Storage

Users can create a stream by using `CREATE STREAM ...` [DDL SQL](/proton-create-stream). Every stream has 2 parts at storage layer by default:
Users can create a stream by using `CREATE STREAM ...` [DDL SQL](/sql-create-stream). Every stream has two parts at the storage layer by default (see the sketch after this list):

1. the real-time streaming data part, backed by Timeplus NativeLog
2. the historical data part, backed by ClickHouse historical data store.
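
As a rough sketch of how the two storage parts surface in SQL (the stream name and columns are illustrative): an unbounded query tails the streaming part, while wrapping the stream in `table()` scans only the historical part.

```sql
-- Writes land in the NativeLog first, then are persisted to the historical store.
CREATE STREAM device_metrics (device string, temperature float32);

-- Unbounded streaming query: continuously reads new events from the NativeLog.
SELECT device, temperature FROM device_metrics;

-- Bounded historical query: scans the historical data part only.
SELECT count() FROM table(device_metrics);
```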
6 changes: 3 additions & 3 deletions docs/changelog-stream.md
@@ -1,4 +1,4 @@
# Changelog Stream
# Changelog Key Value Stream

When you create a stream with the mode `changelog_kv`, the data in the stream is no longer append-only. When you query the stream directly, only the latest version for the same primary key(s) will be shown. Data can be updated or deleted. You can use a Changelog Stream in a JOIN, either on the left or on the right; Timeplus will automatically choose the latest version.
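
A hedged DDL sketch of a `changelog_kv` stream; the column names and types are illustrative, and the same stream can be created from the Console without SQL.

```sql
-- The primary key drives versioning: querying the stream returns only the
-- latest version per product_id, and rows can be updated or deleted.
CREATE STREAM dim_products
(
  product_id string,
  price      float32
)
PRIMARY KEY (product_id)
SETTINGS mode = 'changelog_kv';
```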

@@ -17,7 +17,7 @@ In this example, you create a stream `dim_products` in `changelog_kv` mode with

:::info

The rest of this page assumes you are using Timeplus Console. If you are using Proton, you can create the stream with DDL. [Learn more](/proton-create-stream#changelog-stream)
The rest of this page assumes you are using Timeplus Console. If you are using Proton, you can create the stream with DDL.

:::

@@ -403,7 +403,7 @@ Debezium also reads all existing rows and generates messages like this

### Load data to Timeplus

You can follow this [guide](/kafka-source) to add 2 data sources to load data from Kafka or Redpanda. For example:
You can follow this [guide](/proton-kafka) to add 2 external streams to load data from Kafka or Redpanda. For example:

* Data source name `s1` to load data from topic `doc.public.dim_products` and put in a new stream `rawcdc_dim_products`
* Data source name `s2` to load data from topic `doc.public.orders` and put in a new stream `rawcdc_orders` (see the DDL sketch below)
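
In DDL terms, each of these roughly corresponds to a Kafka external stream along the lines of the sketch below; the broker address is an assumption, and the second source for the orders topic would follow the same pattern.

```sql
-- External stream s1: reads raw Debezium CDC messages from the dim_products topic.
-- A materialized view (or the Console wizard) can then copy the events into
-- the new stream rawcdc_dim_products; s2 / doc.public.orders works the same way.
CREATE EXTERNAL STREAM s1 (raw string)
SETTINGS type = 'kafka',
         brokers = 'redpanda:9092',
         topic = 'doc.public.dim_products';
```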
15 changes: 15 additions & 0 deletions docs/compare.md
@@ -0,0 +1,15 @@
# Timeplus Proton vs. Timeplus Enterprise

Timeplus Proton powers unified streaming and data processing on a single database node. Its commercial counterpart, Timeplus Enterprise, supports advanced deployment strategies and includes enterprise-ready features.

| Features | **Timeplus Proton** | **Timeplus Enterprise** |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Deployment** | <ul><li>Single-node Docker image</li><li>Single binary on Mac/Linux</li></ul> | <ul><li>Single node</li><li>Multi-node Cluster for high availability and horizontal scalability, with data replication and distributed query processing</li><li>[Kubernetes Helm Chart](/k8s-helm)</li></ul> |
| **Data Processing** | <ul><li>[Streaming SQL](/stream-query)</li><li>[Historical Data Processing](/history)</li><li>[Append Stream](/append-stream), Random streams, Mutable Stream v1 ([Versioned Stream](/versioned-stream))</li><li>Streaming transformation, [join](/joins), [aggregation](/streaming-aggregations), [tumble/hop/session windows](/streaming-windows)</li><li>User-Defined Functions: [JavaScript](/js-udf), [Remote](/remote-udf), [SQL](/sql-udf)</li></ul> | <ul><li>Everything in Timeplus Proton</li><li>Auto-scaling Materialized View</li><li>[Mutable Stream v2](/mutable-stream) with row-based storage for 3x performance and efficiency, with support for coalescing and better handling of high-cardinality data mutations</li><li>Support more [EMIT](/streaming-aggregations#emit) strategies and spill-to-disk capabilities when memory is scarce</li><li>[Python UDF](/py-udf)</li><li>[Dynamic Dictionary](/sql-create-dictionary) based on MySQL/Postgres or Mutable streams</li><li>[Tiered Storage](/tiered-storage) using S3 or HDD</li><li>[Just-In-Time compilation](/jit)</li></ul> |
| **External Systems** | <ul><li>External streams to [Apache Kafka, Confluent Cloud, Redpanda](/proton-kafka), [Apache Pulsar](/pulsar-external-stream), and [Remote Timeplus](/timeplus-external-stream)</li><li>[Streaming ingestion via REST API (compact mode only)](/proton-ingest-api)</li><li>[External table to ClickHouse](/proton-clickhouse-external-table)</li></ul> | <ul><li>Everything in Timeplus Proton</li><li>External streams to [HTTP API](/http-external), External tables to [MySQL](/mysql-external-table), [PostgreSQL](/pg-external-table), [MongoDB](/mongo-external), [Amazon S3](/s3-external), [Apache Iceberg](/iceberg) </li><li>Hundreds of connectors from [Redpanda Connect](/redpanda-connect), e.g. WebSocket, HTTP Stream, NATS</li><li>CSV upload</li><li>[Streaming ingestion via REST API (with API key and flexible modes)](/ingest-api)</li></ul> |
| **Web Console** | | <ul><li>[RBAC](/rbac), Pipeline Wizard, SQL Exploration, Data Lineage, Cluster Monitoring, Troubleshooting and Manageability</li></ul> |
| **Support** | <ul><li>Community support from GitHub and Slack</li></ul> | <ul><li>Enterprise support via email, Slack, and Zoom, with an SLA</li></ul> |

These details are subject to change, but we'll do our best to make sure they accurately represent the latest roadmaps for Timeplus Proton and Timeplus Enterprise.

[Contact us](mailto:[email protected]) for more details or to schedule a demo.
21 changes: 0 additions & 21 deletions docs/confluent-cloud-source.md

This file was deleted.

3 changes: 0 additions & 3 deletions docs/enterprise-releases.md

This file was deleted.

8 changes: 4 additions & 4 deletions docs/enterprise-v2.5.md
@@ -15,9 +15,9 @@ Key highlights of this release:
* Connecting to various input or output systems via Redpanda Connect. [Learn more](/redpanda-connect).
* Creating and managing users in the Web Console. You can change the password and assign the user either Administrator or Read-only role.
* New [migrate](/cli-migrate) subcommand in [timeplus CLI](/cli-reference) for data migration and backup/restore.
* Materialized views auto-rebalancing in the cluster mode. [Learn more](/proton-create-view#auto-balancing).
* Materialized views auto-rebalancing in the cluster mode. [Learn more](/view#auto-balancing).
* Approximately 30% faster data ingestion and replication in the cluster mode.
* Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/query-syntax#emit_on_update).
* Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).

## Supported OS {#os}
|Deployment Type| OS |
@@ -125,9 +125,9 @@ Compared to the [2.4.23](/enterprise-v2.4#2_4_23) release:
* new type of [External Streams for Apache Pulsar](/pulsar-external-stream).
* for bare metal installation, previously you could log in with the username `default` and an empty password. To improve security, this user has been removed.
* enhancement for nullable data types in streaming and historical queries.
* Materialized views auto-rebalancing in the cluster mode.[Learn more](/proton-create-view#auto-balancing).
* Materialized views auto-rebalancing in the cluster mode. [Learn more](/view#auto-balancing).
* Approximately 30% faster data ingestion and replication in the cluster mode.
* Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/query-syntax#emit_on_update).
* Performance improvement for [ASOF JOIN](/joins) and [EMIT ON UPDATE](/streaming-aggregations#emit_on_update).
* timeplus_web 1.4.33 -> 2.0.6
* UI to add/remove user or change role and password. This works for both single node and cluster.
* UI for inputs/outputs from Redpanda Connect.
4 changes: 2 additions & 2 deletions docs/enterprise-v2.6.md
@@ -245,15 +245,15 @@ Compared to the [2.5.12](/enterprise-v2.5#2_5_12) release:
* timeplusd 2.4.27 -> 2.5.10
* Performance Enhancements:
* Introduced hybrid hash table technology for streaming SQL with JOINs or aggregations. Configure via `SETTINGS default_hash_table='hybrid'` to optimize memory usage for large data streams.
* Improved performance for [EMIT ON UPDATE](/query-syntax#emit_on_update) queries. Memory optimization available through `SETTINGS optimize_aggregation_emit_on_updates=false`.
* Improved performance for [EMIT ON UPDATE](/streaming-aggregations#emit_on_update) queries. Memory optimization available through `SETTINGS optimize_aggregation_emit_on_updates=false`.
* Enhanced read/write performance for ClickHouse external tables with configurable `pooled_connections` setting (default: 3000).
* Monitoring and Management:
* Added [system.stream_state_log](/system-stream-state-log) and [system.stream_metric_log](/system-stream-metric-log) system streams for comprehensive resource monitoring.
* Implemented Kafka offset tracking in [system.stream_state_log](/system-stream-state-log), exportable via [timeplus diag](/cli-diag) command.
* A `_tp_sn` column is added to each stream (except external streams or random streams), as the sequence number in the unified streaming and historical storage. This column is used for data replication among the cluster. By default, it is hidden in the query results. You can show it by setting `SETTINGS asterisk_include_tp_sn_column=true`. This setting is required when you use `INSERT..SELECT` SQL to copy data between streams: `INSERT INTO stream2 SELECT * FROM stream1 SETTINGS asterisk_include_tp_sn_column=true`.
* New Features:
* Support for continuous data writing to remote Timeplus deployments via setting a [Timeplus external stream](/timeplus-external-stream) as the target in a materialized view.
* New [EMIT PERIODIC .. REPEAT](/query-syntax#emit_periodic_repeat) syntax for emitting the last aggregation result even when there is no new event.
* New [EMIT PERIODIC .. REPEAT](/streaming-aggregations#emit_periodic_repeat) syntax for emitting the last aggregation result even when there is no new event (a short sketch appears after this list).
* Able to create or drop databases via SQL in a cluster. The web console will be enhanced to support different databases in the next release.
* Historical data of a stream can be removed by `TRUNCATE STREAM stream_name`.
* Able to add new columns to a stream via `ALTER STREAM stream_name ADD COLUMN column_name data_type`, in both a single node or multi-node cluster.
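
A short sketch of the `EMIT PERIODIC .. REPEAT` behavior noted above; the stream name is an illustrative assumption.

```sql
-- Emits the running count every 5 seconds; with REPEAT, the last result is
-- re-emitted even if no new events arrived during that interval.
SELECT count() AS total_orders
FROM orders_stream
EMIT PERIODIC 5s REPEAT;
```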
2 changes: 1 addition & 1 deletion docs/enterprise-v2.7.md
@@ -124,7 +124,7 @@ Component versions:
Compared to the [2.7.5](#2_7_5) release:
* timeplusd 2.7.37 -> 2.7.45
* added new setting [mv_preferred_exec_node](/sql-create-materialized-view#mv_preferred_exec_node) while creating materialized view
* added new EMIT policy `EMIT ON UPDATE WITH DELAY`. The SQL syntax for EMIT has been refactored. [Learn more](/query-syntax#emit)
* added new EMIT policy `EMIT ON UPDATE WITH DELAY`. The SQL syntax for EMIT has been refactored. [Learn more](/streaming-aggregations#emit)
* fixed global aggregation with `EMIT ON UPDATE` in multi-shard environments
* fixed concurrency issues in hybrid aggregation
* support incremental checkpoints for hybrid hash join
8 changes: 4 additions & 4 deletions docs/enterprise-v2.8.md
@@ -11,7 +11,7 @@ Each component maintains its own version numbers. The version number for each Ti

## Key Highlights
Key highlights of this release:
* New Compute Node server role to [run materialized views elastically](/proton-create-view#autoscaling_mv) with checkpoints on S3 storage.
* New Compute Node server role to [run materialized views elastically](/view#autoscaling_mv) with checkpoints on S3 storage.
* Timeplus can read or write data in Apache Iceberg tables. [Learn more](/iceberg)
* Timeplus can read or write PostgreSQL tables directly via [PostgreSQL External Table](/pg-external-table) or look up data via [dictionaries](/sql-create-dictionary#source_pg).
* Use S3 as the [tiered storage](/tiered-storage) for streams.
@@ -62,9 +62,9 @@ Compared to the [2.8.0 (Preview)](#2_8_0) release:
* When using `CREATE OR REPLACE FORMAT SCHEMA` to update an existing schema, and using `DROP FORMAT SCHEMA` to delete a schema, Timeplus will clean up the Protobuf schema cache to avoid misleading errors.
* Support writing Kafka message timestamp via [_tp_time](/proton-kafka#_tp_time)
* Enable IPv6 support for KeyValueService
* Simplified the [EMIT syntax](/query-syntax#emit) to make it easier to read and use.
* Support [EMIT ON UPDATE WITH DELAY](/query-syntax#emit_on_update_with_delay)
* Support [EMIT ON UPDATE](/query-syntax#emit_on_update) for multiple shards
* Simplified the [EMIT syntax](/streaming-aggregations#emit) to make it easier to read and use.
* Support [EMIT ON UPDATE WITH DELAY](/streaming-aggregations#emit_on_update_with_delay)
* Support [EMIT ON UPDATE](/streaming-aggregations#emit_on_update) for multiple shards
* Transfer leadership to preferred node after election
* Pin materialized view execution node [Learn more](/sql-create-materialized-view#mv_preferred_exec_node)
* Improve async checkpointing