diff --git a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190225.md b/src/current/_includes/releases/v19.1/v19.1.0-beta.20190225.md deleted file mode 100644 index 9320109d3db..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190225.md +++ /dev/null @@ -1,78 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -Since our initial launch, Cockroach Labs has used semantic versioning in our release cycle guidelines. Two years, one major release, and n patch fixes later, we're making the switch to Calendar Versioning. This means subscribers to our release notes will see quite the jump in today's version numbering, from last week's 2.1.5 to today's 19.1 beta. You can read more about the switch [here](https://www.cockroachlabs.com/blog/calendar-versioning/). - -

General changes

- -- Records for completed jobs are cleaned up automatically after two weeks. [#34725][#34725] -- [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v19.1/show-jobs) now returns only running and recently finished jobs. Older jobs can still be inspected via the `crdb_internal.jobs` table, as shown below. [#34829][#34829] {% comment %}doc{% endcomment %} - -
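A minimal sketch of the `SHOW JOBS` change above; the column list and `LIMIT` are illustrative:

```sql
-- Only running and recently finished jobs:
SHOW JOBS;

-- Jobs older than the SHOW JOBS window remain queryable here:
SELECT job_id, job_type, status, created
FROM crdb_internal.jobs
ORDER BY created DESC
LIMIT 10;
```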

Enterprise edition changes

- -- `nodelocal://` storage paths for [`BACKUP`](https://www.cockroachlabs.com/docs/v19.1/backup), [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore), and [`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import) may include a node ID in the Host part of the URI. It is not currently used (any node can be sent work and will look in its local IO directory) but will likely be required in the future (see the sketch below). [#34797][#34797] {% comment %}doc{% endcomment %} -- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) targeting cloud sinks now partition files into date folders. [#34813][#34813] {% comment %}doc{% endcomment %} -- The new `kv.rangefeed.concurrent_catchup_iterators` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) limits the number of rangefeed catchup iterators a store will allow concurrently before queueing. [#34890][#34890] {% comment %}doc{% endcomment %} -- The [`CHANGEFEED` `experimental_avro`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed#options) format now supports SQL columns of type `DATE`, `TIME`, `UUID`, `INET`, and `JSONB`. [#34918][#34918] {% comment %}doc{% endcomment %} - -
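A sketch of the first and third items above; the database name, node ID, and setting value are illustrative, not recommendations:

```sql
-- Node ID (here, node 1) in the Host part of a nodelocal URI;
-- accepted today, likely required in the future:
BACKUP DATABASE bank TO 'nodelocal://1/backups/2019-02-25';

-- Cap concurrent rangefeed catchup iterators per store:
SET CLUSTER SETTING kv.rangefeed.concurrent_catchup_iterators = 16;
```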

SQL language changes

- -- Virtual tables in `pg_catalog` and `information_schema` now support `COMMENT ON` like regular tables, as shown below. [#33697][#33697] - -
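For example (the comment text is arbitrary):

```sql
-- Virtual tables now accept comments like regular tables:
COMMENT ON TABLE pg_catalog.pg_tables IS 'PostgreSQL-compatible table listing';
```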

Bug fixes

- -- Fixed a bug that could cause a Raft log to grow very large, which in turn could prevent replication changes. [#34502][#34502] -- Prevented down nodes from obstructing log truncation on ranges of which they are a member. This problem could cause replication to fail due to an overly large Raft log. [#34712][#34712] -- Fixed a panic when subtracting an array containing null from a JSON datum. [#34757][#34757] -- Fixed a panic during some `UNION ALL` operations with projections, filters, or renders directly on top of the `UNION ALL`. [#34762][#34762] -- Fixed a panic when the subquery in `UPDATE SET (a,b) = (...subquery...)` returns no rows. [#34804][#34804] -- Fixed a rare panic ("close of closed channel") when shutting down a server. [#34823][#34823] -- Fixed a deadlock during [`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import) and [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore) that caused all writes on a node to stop. [#34830][#34830] -- Fixed a panic during planning of certain complex join queries. [#34843][#34843] -- Fixed a panic when [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) is run on clusters containing nodes with different versions of CockroachDB. [#34842][#34842] -- Fixed a bug where servers would endlessly try to refresh table statistics on dropped tables. [#34884][#34884] -- CockroachDB now lists only tables in `pg_catalog.pg_tables`, for compatibility with PostgreSQL. [#34857][#34857] -- Fixed a panic when using [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v19.1/explain-analyze) with certain mutation queries. [#34991][#34991] - -

Performance improvements

- -- [Subqueries](https://www.cockroachlabs.com/docs/v19.1/subqueries) used with `EXISTS` or as a scalar value now avoid fetching more rows than needed to decide the outcome. [#34801][#34801] - -

Doc updates

- -- Documented the built-in [`ycsb` workload](https://www.cockroachlabs.com/docs/v19.1/cockroach-workload). [#4343][#4343] - -
- -

Contributors

- -This release includes 102 merged PRs by 22 authors. We would like to thank the following contributors from the CockroachDB community: - -- Jaewan Park - -
- -[#33697]: https://github.com/cockroachdb/cockroach/pull/33697 -[#34301]: https://github.com/cockroachdb/cockroach/pull/34301 -[#34502]: https://github.com/cockroachdb/cockroach/pull/34502 -[#34712]: https://github.com/cockroachdb/cockroach/pull/34712 -[#34725]: https://github.com/cockroachdb/cockroach/pull/34725 -[#34757]: https://github.com/cockroachdb/cockroach/pull/34757 -[#34762]: https://github.com/cockroachdb/cockroach/pull/34762 -[#34797]: https://github.com/cockroachdb/cockroach/pull/34797 -[#34801]: https://github.com/cockroachdb/cockroach/pull/34801 -[#34804]: https://github.com/cockroachdb/cockroach/pull/34804 -[#34813]: https://github.com/cockroachdb/cockroach/pull/34813 -[#34823]: https://github.com/cockroachdb/cockroach/pull/34823 -[#34829]: https://github.com/cockroachdb/cockroach/pull/34829 -[#34830]: https://github.com/cockroachdb/cockroach/pull/34830 -[#34842]: https://github.com/cockroachdb/cockroach/pull/34842 -[#34843]: https://github.com/cockroachdb/cockroach/pull/34843 -[#34857]: https://github.com/cockroachdb/cockroach/pull/34857 -[#34884]: https://github.com/cockroachdb/cockroach/pull/34884 -[#34890]: https://github.com/cockroachdb/cockroach/pull/34890 -[#34906]: https://github.com/cockroachdb/cockroach/pull/34906 -[#34918]: https://github.com/cockroachdb/cockroach/pull/34918 -[#34991]: https://github.com/cockroachdb/cockroach/pull/34991 -[#4343]: https://github.com/cockroachdb/docs/pull/4343 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190304.md b/src/current/_includes/releases/v19.1/v19.1.0-beta.20190304.md deleted file mode 100644 index 2e37beeac79..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190304.md +++ /dev/null @@ -1,80 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Enterprise edition changes

- -- Added a GSS auth method configurable by the `server.host_based_authentication.configuration` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings). This makes it possible to use an external enterprise directory system, such as Active Directory, for authentication in a CockroachDB cluster. Detailed usage guidance is coming soon. [#34772][#34772] {% comment %}doc{% endcomment %} - -

SQL language changes

- -- CockroachDB now supports the syntax [`TIME(6)`](https://www.cockroachlabs.com/docs/v19.1/time), [`TIMESTAMP(6)`](https://www.cockroachlabs.com/docs/v19.1/timestamp), and `TIMESTAMPTZ(6)` / `TIMESTAMP(6) WITH TIME ZONE`, for compatibility with PostgreSQL. Only the value `6` is supported, which is also the default in PostgreSQL. When used for a table column definition, the precision is not stored, and it is not possible to distinguish types with and without specified precisions in the introspection metadata. [#35128][#35128] {% comment %}doc{% endcomment %} -- [`CHECK`](https://www.cockroachlabs.com/docs/v19.1/check) constraints can now be applied to columns when they are first added to a table with [`ALTER TABLE ... ADD COLUMN`](https://www.cockroachlabs.com/docs/v19.1/add-column). [#35018][#35018] {% comment %}doc{% endcomment %} -- CockroachDB no longer supports `AS OF SYSTEM TIME` interval expressions less than 1 microsecond in the past. [#34547][#34547] {% comment %}doc{% endcomment %} -- When the JSON `?` [operator](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) is used to compare a JSON string and a string that are equal, CockroachDB now returns `true`, for compatibility with PostgreSQL. [#35005][#35005] -- Specific implementations of join can now be forced by inserting `HASH`, `MERGE`, or `LOOKUP` between the type of join (`INNER | LEFT | RIGHT | FULL`) and the `JOIN` keyword (see the example below). [#35183][#35183] {% comment %}doc{% endcomment %} -- CockroachDB now supports `SHOW SEQUENCES` to list the sequences in a given database or the current database, alongside `SHOW TABLES`, which was already able to list both tables and views. [#35215][#35215] -- Added the `sql.stats.experimental_automatic_collection.fraction_idle` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) to control the throttling of automatic statistics. [#34928][#34928] {% comment %}doc{% endcomment %} - -
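A sketch of the join-hint and precision syntax above, using hypothetical `orders` and `customers` tables:

```sql
-- Force a hash join by inserting HASH between the join type and JOIN:
SELECT *
FROM orders AS o
INNER HASH JOIN customers AS c ON o.customer_id = c.id;

-- Precision syntax accepted for PostgreSQL compatibility; only 6 is
-- allowed, and the precision itself is not stored:
CREATE TABLE events (id INT PRIMARY KEY, occurred_at TIMESTAMPTZ(6));
```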

Admin UI changes

- -- Added a [debug page](https://www.cockroachlabs.com/docs/v19.1/admin-ui-debug-pages) that breaks down CPU usage by query (some restrictions apply). [#35147][#35147] - -

Bug fixes

- -- The columns `confupdtype`, `confdeltype`, and `confmatchtype` in `pg_constraint` now report the [foreign key constraint](https://www.cockroachlabs.com/docs/v19.1/foreign-key) parameters properly, for compatibility with PostgreSQL clients that use them. [#35052][#35052] -- Fixed a panic that could occur when using `logspy` tracing in some circumstances. [#34936][#34936] -- Fixed a panic related to cached plans. [#35027][#35027] -- Fixed [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) to run at the correct timestamp when it is specified with [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v19.1/as-of-system-time). [#35139][#35139] -- CockroachDB again properly reports an error when a database used during `PREPARE` no longer exists by the time `EXECUTE` is used. [#35151][#35151] -- The logical plans collected for display in the web UI now hide the details of which key ranges are scanned in table lookups. [#34902][#34902] -- Fixed a panic that could occur via certain patterns of folding `CASE` statements containing `NULL` values. [#35188][#35188] -- Fixed a bug that returned errors for valid [`UPDATE`](https://www.cockroachlabs.com/docs/v19.1/update)s while an index-creation schema change was in progress. [#35157][#35157] - -

Performance improvements

- -- Increased write throughput for workloads that write large numbers of intents by coalescing intent resolution requests across transactions. [#34803][#34803] -- Reduced write-amplification during bulk-loading ([`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import) and [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore)). [#34886][#34886] -- Increased the default rebalancing and up-replication rate from 2 MB/s to 8 MB/s. [#35100][#35100] -- Reduced the impact of bulk data ingestion on foreground traffic by controlling RocksDB flushes. [#34800][#34800] - -

Doc updates

- -- Added much more guidance on [troubleshooting cluster setup](https://www.cockroachlabs.com/docs/v19.1/cluster-setup-troubleshooting) and [troubleshooting SQL behavior](https://www.cockroachlabs.com/docs/v19.1/query-behavior-troubleshooting). [#4223](https://github.com/cockroachdb/docs/pull/4223) -- Added a summary of [Enterprise features](https://www.cockroachlabs.com/docs/v19.1/enterprise-licensing#enterprise-features). [#4418](https://github.com/cockroachdb/docs/pull/4418) -- Documented CockroachDB's partial support for [IntelliJ IDEA](https://www.cockroachlabs.com/docs/v19.1/intellij-idea). [#4391](https://github.com/cockroachdb/docs/pull/4391) -- Clarified the guidance on [preparing to decommission nodes](https://www.cockroachlabs.com/docs/v19.1/remove-nodes). [#4406](https://github.com/cockroachdb/docs/pull/4406) - -
- -

Contributors

- -This release includes 89 merged PRs by 18 authors. We would like to thank the following contributors from the CockroachDB community: - -- David López (first-time contributor) -- lanzao (first-time contributor) - -
- -[#34547]: https://github.com/cockroachdb/cockroach/pull/34547 -[#34772]: https://github.com/cockroachdb/cockroach/pull/34772 -[#34800]: https://github.com/cockroachdb/cockroach/pull/34800 -[#34803]: https://github.com/cockroachdb/cockroach/pull/34803 -[#34886]: https://github.com/cockroachdb/cockroach/pull/34886 -[#34902]: https://github.com/cockroachdb/cockroach/pull/34902 -[#34928]: https://github.com/cockroachdb/cockroach/pull/34928 -[#34936]: https://github.com/cockroachdb/cockroach/pull/34936 -[#35005]: https://github.com/cockroachdb/cockroach/pull/35005 -[#35018]: https://github.com/cockroachdb/cockroach/pull/35018 -[#35027]: https://github.com/cockroachdb/cockroach/pull/35027 -[#35052]: https://github.com/cockroachdb/cockroach/pull/35052 -[#35100]: https://github.com/cockroachdb/cockroach/pull/35100 -[#35128]: https://github.com/cockroachdb/cockroach/pull/35128 -[#35139]: https://github.com/cockroachdb/cockroach/pull/35139 -[#35147]: https://github.com/cockroachdb/cockroach/pull/35147 -[#35151]: https://github.com/cockroachdb/cockroach/pull/35151 -[#35157]: https://github.com/cockroachdb/cockroach/pull/35157 -[#35183]: https://github.com/cockroachdb/cockroach/pull/35183 -[#35188]: https://github.com/cockroachdb/cockroach/pull/35188 -[#35200]: https://github.com/cockroachdb/cockroach/pull/35200 -[#35215]: https://github.com/cockroachdb/cockroach/pull/35215 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190318.md b/src/current/_includes/releases/v19.1/v19.1.0-beta.20190318.md deleted file mode 100644 index 0b65dc46469..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-beta.20190318.md +++ /dev/null @@ -1,136 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -In addition to bug fixes and various general, enterprise, SQL, and Admin UI enhancements, this release includes several major highlights: - -- [**Managed CockroachDB Console**](https://cockroachlabs.cloud/): Paid managed CockroachDB customers can now sign in to their organization's account, view connection string details, and add and edit their list of allowed IPs in the management console. -- [**GSSAPI with Kerberos Authentication**](https://www.cockroachlabs.com/docs/v19.1/gssapi_authentication): CockroachDB now supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory. This feature requires an [Enterprise License](https://www.cockroachlabs.com/docs/v19.1/enterprise-licensing). -- [**Query Optimizer Hints**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#join-hints): The cost-based optimizer now supports hint syntax to force the use of a merge, hash, or lookup join. This lets you override the cost-based optimizer's join algorithm selection in cases where you have information about your data that the cost-based optimizer does not yet have. -- [**Correlated Subqueries**](https://www.cockroachlabs.com/docs/v19.1/subqueries#correlated-subqueries): Most correlated subqueries are decorrelated and processed by the cost-based optimizer. However, for those that cannot be decorrelated, CockroachDB now emits an "apply" operator that executes a sub-plan for every row in its input. This allows CockroachDB to execute a large number of additional correlated subqueries that could not be executed in v2.1 (see the example below). - -
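To illustrate the correlated-subquery highlight with a hypothetical `customers`/`orders` schema (a simple `EXISTS` like this is typically decorrelated, but the same shape works where the "apply" operator is required):

```sql
-- The inner query references c.id from the outer query:
SELECT c.name
FROM customers AS c
WHERE EXISTS (
    SELECT 1 FROM orders AS o WHERE o.customer_id = c.id
);
```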

General changes

- -- The [cluster settings](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) `timeseries.storage.10s_resolution_ttl` and `timeseries.storage.30m_resolution_ttl` have been renamed to `timeseries.storage.resolution_10s.ttl` and `timeseries.storage.resolution_30m.ttl` for ease of use in SQL clients. Any value set using the previous setting name in existing clusters is migrated over to the new name; subsequent changes using the old name will be ignored. [#34248][#34248] {% comment %}doc{% endcomment %} - -

Enterprise edition changes

- -- Added the `debug encryption-active-key` command. [#35234][#35234] -- The `changefeed.min_high_water` metric has been deprecated in favor of `changefeed.max_behind_nanos`, which is easier to alert on. The **Changefeed** dashboard in the Admin UI now contains a graph of this metric. [#35257][#35257] {% comment %}doc{% endcomment %} -- Added the `rocksdb.encryption.algorithm` per-store metric, which describes the encryption cipher in use. [#35506][#35506] -- In exchange for increased correctness confidence, [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) using `changefeed.push.enabled` (the default) now use slightly more resources on startup and during range rebalancing/splits. [#35470][#35470] - -

SQL language changes

- -- Changed the default set of column statistics created by [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) to include up to 100 regular table columns in addition to all indexed columns. [#35192][#35192] {% comment %}doc{% endcomment %} -- Added the ability to pause all automatic statistics jobs by pausing the currently running job. [#35243][#35243] {% comment %}doc{% endcomment %} -- Automatic statistics are now enabled by default. [#35291][#35291] {% comment %}doc{% endcomment %} -- CockroachDB now supports many more correlated subqueries. [#34546][#34546] -- Schema changes now trigger automatic statistics collection for the affected table. [#35252][#35252] {% comment %}doc{% endcomment %} -- The `pg_catalog.current_setting()` and `pg_catalog.set_config()` [built-in functions](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) are now supported for compatibility with PostgreSQL. Note that only session-scoped configuration changes remain supported (`set_config(_, _, false)`). [#35121][#35121] {% comment %}doc{% endcomment %} -- The [`RENAME COLUMN`](https://www.cockroachlabs.com/docs/v19.1/rename-column) command can now be used alongside other table commands in a single [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v19.1/alter-table) statement. This makes it possible to, for example, atomically add a computed column based on an existing column, and rename the columns so that the computed column "replaces" the original column (see the sketch below). [#35091][#35091] {% comment %}doc{% endcomment %} -- CockroachDB now supports `ALTER TABLE ... RENAME CONSTRAINT`. Only indexes that are not depended on by views can be renamed. [#35091][#35091] {% comment %}doc{% endcomment %} -- [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v19.1/show-jobs) now returns an extra `statement` column, which is populated when the description is not the statement. [#35439][#35439] {% comment %}doc{% endcomment %} -- [`SHOW QUERIES`](https://www.cockroachlabs.com/docs/v19.1/show-queries) and [`SHOW SESSIONS`](https://www.cockroachlabs.com/docs/v19.1/show-sessions) now omit internal queries and sessions by default. Use `SHOW ALL QUERIES` or `SHOW ALL SESSIONS` to include internal queries in the output. [#35504][#35504] {% comment %}doc{% endcomment %} -- CockroachDB now starts emitting changefeed events immediately for sinkless changefeeds (an experimental feature). The `results_buffer_size` connection string parameter is no longer needed for this purpose. [#35529][#35529] {% comment %}doc{% endcomment %} -- CockroachDB now provides usable comments with optional documentation URLs for the virtual tables in `pg_catalog`, `information_schema`, and `crdb_internal`. Use `SHOW TABLES [FROM ...] WITH COMMENT` to read them. Note that `crdb_internal` tables remain an experimental feature subject to change without notice. [#34764][#34764] {% comment %}doc{% endcomment %} -- CockroachDB now reports usage frequency of various SQL scalar operators in telemetry, when telemetry is enabled, so as to guide future optimizations of query performance. [#35616][#35616] {% comment %}doc{% endcomment %} -- CockroachDB now reports how the [`SERIAL`](https://www.cockroachlabs.com/docs/v19.1/serial) pseudo-type is expanded in table column definitions, when telemetry is enabled. [#35656][#35656] {% comment %}doc{% endcomment %} - -
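One possible shape for the combined `RENAME COLUMN`/`ADD COLUMN` pattern described above, sketched for a hypothetical table `t` with an `INT` column `v`; the names and expression are illustrative:

```sql
-- Atomically rename the original column and add a computed column
-- that "replaces" it:
ALTER TABLE t
    RENAME COLUMN v TO v_old,
    ADD COLUMN v INT AS (v_old * 2) STORED;

-- Read the new built-in comments on virtual tables:
SHOW TABLES FROM pg_catalog WITH COMMENT;
```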

Admin UI changes

- -- [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) jobs no longer generate events by default. [#35425][#35425] -- Queries issued internally by CockroachDB are now displayed under a single "(internal)" application name entry in the drop-down menu on the **Statements** page. [#35503][#35503] -- Additional types of transaction restart errors are now tracked on the **Distributed** dashboard. [#35438][#35438] -- Added "Statistics Creation" as a job type on the **Jobs** page. [#35651][#35651] - -

Bug fixes

- -- Fixed a planning bug that caused incorrect aggregation results on multi-node aggregations with implicit, partial orderings on the inputs to the aggregations. [#35221][#35221] -- Fixed a panic that occurred when evaluating certain binary expressions containing operands with different types. [#35247][#35247] -- Prevented a situation in which snapshots would be refused repeatedly over long periods of time, with error messages such as "aborting snapshot because raft log is too large" appearing in the logs, and often accompanied by under-replicated ranges in the UI. [#35136][#35136] -- [`BACKUP`](https://www.cockroachlabs.com/docs/v19.1/backup) to `nodelocal` now writes files atomically. [#34937][#34937] -- Fixed a crash on `SET TRANSACTION AS OF SYSTEM TIME` with invalid expressions. [#35316][#35316] -- The experimental [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) cloud storage sinks now strip secrets from the job description in the Admin UI and `SHOW JOBS` output. [#35257][#35257] -- Fixed a nil pointer dereference in [`debug zip`](https://www.cockroachlabs.com/docs/v19.1/debug-zip) when one or more nodes in the cluster are down. [#35366][#35366] -- [Window functions](https://www.cockroachlabs.com/docs/v19.1/window-functions) are now correctly planned when `UNION ALL` is present in the subquery. [#35430][#35430] -- Fixed panics that could occur in some cases involving joins of the results of mutations. [#35482][#35482] -- CockroachDB now correctly returns an error when window functions include window definitions that contain other window functions. [#35369][#35369] -- CockroachDB now properly applies column width and nullability constraints on the result of conflict resolution in [`UPSERT`](https://www.cockroachlabs.com/docs/v19.1/upsert) and [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v19.1/insert). [#35371][#35371] -- Improved telemetry for error codes. [#35431][#35431] -- CockroachDB now properly preserves the automatically generated name of a newly created index in `system.event_log` when the name is not specified in the `CREATE INDEX` statement. [#35534][#35534] -- CockroachDB now properly reports `bigint` in `information_schema.sequences.type`, for compatibility with PostgreSQL. [#35577][#35577] -- CockroachDB now properly reports the composite foreign key match type in `information_schema.referential_constraints`. [#35575][#35575] -- Subtracting 0 from a JSON array now correctly removes its first element. [#35617][#35617] -- Fixed a "column not in input" crash when `INSERT / UPDATE / UPSERT ... RETURNING` is used inside a clause that requires an ordering. [#35644][#35644] -- Fixed an error when executing some set operations containing only nulls in one of the input columns. [#35321][#35321] -- Fixed an on-disk inconsistency that could result from a crash during a range merge. [#35626][#35626] - -

Build changes

- -- [Go 1.11.5](https://golang.org/dl/) is now the minimum required version necessary to build CockroachDB. [#35536][#35536] {% comment %}doc{% endcomment %} -- CockroachDB will provisionally refuse to build with Go 1.12, as this is known to produce incorrect code inside CockroachDB. [#35638][#35638] -- Release Docker images are now built on Debian 9.8. [#35517][#35517] - -

Doc updates

- -- Updated the [`PARTITION BY RANGE`](https://www.cockroachlabs.com/docs/v19.1/partitioning#define-table-partitions-by-list) example for geo-partitioning. [#4503](https://github.com/cockroachdb/docs/pull/4503) -- The Docs landing page now provides quick links into various areas of the CockroachDB documentation. [#4476](https://github.com/cockroachdb/docs/pull/4476) -- Documented the [`BIT`](https://www.cockroachlabs.com/docs/v19.1/bit) data type. [#4454](https://github.com/cockroachdb/docs/pull/4454) -- Documented the `bytea_output` session variable, and fixed the documentation on bytes/string conversions. [#4452](https://github.com/cockroachdb/docs/pull/4452) -- Updated [Configure Replication Zones](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones) documentation to reflect that unset variables in a replication zone now inherit their values from the parent zone. [#4446](https://github.com/cockroachdb/docs/pull/4446) - -
- -

Contributors

- -This release includes 157 merged PRs by 29 authors. We would like to thank the following contributors from the CockroachDB community: - -- Jaewan Park - -
- -[#34248]: https://github.com/cockroachdb/cockroach/pull/34248 -[#34546]: https://github.com/cockroachdb/cockroach/pull/34546 -[#34764]: https://github.com/cockroachdb/cockroach/pull/34764 -[#34937]: https://github.com/cockroachdb/cockroach/pull/34937 -[#35091]: https://github.com/cockroachdb/cockroach/pull/35091 -[#35121]: https://github.com/cockroachdb/cockroach/pull/35121 -[#35136]: https://github.com/cockroachdb/cockroach/pull/35136 -[#35192]: https://github.com/cockroachdb/cockroach/pull/35192 -[#35221]: https://github.com/cockroachdb/cockroach/pull/35221 -[#35234]: https://github.com/cockroachdb/cockroach/pull/35234 -[#35243]: https://github.com/cockroachdb/cockroach/pull/35243 -[#35247]: https://github.com/cockroachdb/cockroach/pull/35247 -[#35252]: https://github.com/cockroachdb/cockroach/pull/35252 -[#35257]: https://github.com/cockroachdb/cockroach/pull/35257 -[#35291]: https://github.com/cockroachdb/cockroach/pull/35291 -[#35316]: https://github.com/cockroachdb/cockroach/pull/35316 -[#35321]: https://github.com/cockroachdb/cockroach/pull/35321 -[#35350]: https://github.com/cockroachdb/cockroach/pull/35350 -[#35366]: https://github.com/cockroachdb/cockroach/pull/35366 -[#35369]: https://github.com/cockroachdb/cockroach/pull/35369 -[#35371]: https://github.com/cockroachdb/cockroach/pull/35371 -[#35425]: https://github.com/cockroachdb/cockroach/pull/35425 -[#35430]: https://github.com/cockroachdb/cockroach/pull/35430 -[#35431]: https://github.com/cockroachdb/cockroach/pull/35431 -[#35438]: https://github.com/cockroachdb/cockroach/pull/35438 -[#35439]: https://github.com/cockroachdb/cockroach/pull/35439 -[#35470]: https://github.com/cockroachdb/cockroach/pull/35470 -[#35482]: https://github.com/cockroachdb/cockroach/pull/35482 -[#35503]: https://github.com/cockroachdb/cockroach/pull/35503 -[#35504]: https://github.com/cockroachdb/cockroach/pull/35504 -[#35506]: https://github.com/cockroachdb/cockroach/pull/35506 -[#35517]: https://github.com/cockroachdb/cockroach/pull/35517 -[#35529]: https://github.com/cockroachdb/cockroach/pull/35529 -[#35534]: https://github.com/cockroachdb/cockroach/pull/35534 -[#35536]: https://github.com/cockroachdb/cockroach/pull/35536 -[#35575]: https://github.com/cockroachdb/cockroach/pull/35575 -[#35577]: https://github.com/cockroachdb/cockroach/pull/35577 -[#35616]: https://github.com/cockroachdb/cockroach/pull/35616 -[#35617]: https://github.com/cockroachdb/cockroach/pull/35617 -[#35626]: https://github.com/cockroachdb/cockroach/pull/35626 -[#35638]: https://github.com/cockroachdb/cockroach/pull/35638 -[#35644]: https://github.com/cockroachdb/cockroach/pull/35644 -[#35651]: https://github.com/cockroachdb/cockroach/pull/35651 -[#35656]: https://github.com/cockroachdb/cockroach/pull/35656 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-rc.1.md b/src/current/_includes/releases/v19.1/v19.1.0-rc.1.md deleted file mode 100644 index c42d76e7481..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-rc.1.md +++ /dev/null @@ -1,125 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -In addition to bug fixes and various enterprise, SQL, and Admin UI enhancements, with this release, we also want to highlight the following features: - -- [**Prefer the nearest secondary index**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#preferring-the-nearest-index): Given multiple identical [indexes](https://www.cockroachlabs.com/docs/v19.1/indexes) that have different locality constraints using [replication zones](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones), the cost-based optimizer will now prefer the index that is closest to the gateway node that is planning the query. In a properly configured geo-distributed cluster, this can lead to performance improvements due to improved data locality and reduced network traffic. This feature enables scenarios where reference data such as a table of postal codes can be replicated to different regions, and queries will use the copy in the same region. -- [**Encryption at Rest**](https://www.cockroachlabs.com/docs/v19.1/encryption#encryption-at-rest-enterprise): This feature, which provides transparent encryption of a node's data on the local disk, was introduced as an experimental feature in CockroachDB v2.1. With this release, it is no longer considered experimental and is ready for production use. - -

Enterprise edition changes

- -- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) now support TLS connections to Kafka. [#35510][#35510] -- `CHANGEFEED`s now support SASL/PLAIN authentication when connecting to a Kafka sink (see the sketch below). [#35800][#35800] -- `CHANGEFEED`s using Kafka now log information to help debug connection issues. [#35661][#35661] - -
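A sketch of a Kafka sink using TLS plus SASL/PLAIN; the parameter names follow the changefeed sink-URI convention, and the host, certificate, and credentials are placeholders:

```sql
CREATE CHANGEFEED FOR TABLE orders
  INTO 'kafka://broker.internal:9093?tls_enabled=true&ca_cert=<base64-cert>&sasl_enabled=true&sasl_user=app&sasl_password=secret';
```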

SQL language changes

- -- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) now picks lookup joins less frequently. [#35561][#35561] -- CockroachDB now reports uses of [CTEs](https://www.cockroachlabs.com/docs/v19.1/common-table-expressions) (`WITH ...`) and [subqueries](https://www.cockroachlabs.com/docs/v19.1/subqueries) in [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting), to guide future product planning. [#35650][#35650] -- It is now possible to specify a `HASH` / `MERGE` / `LOOKUP` [join hint](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#join-hints) with cross joins (`CROSS JOIN`). [#35700][#35700] -- The output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v19.1/explain) now uses `hash-join` or `merge-join` instead of `join`. [#35688][#35688] -- Added an `EXPLAIN (opt, env)` option, which provides all relevant information for the planning of a query (see the example below). [#35802][#35802] -- `transaction deadline exceeded` errors are now returned to the client with a retryable code. [#35284][#35284] -- Regular SQL errors that indicate erroneous SQL and for which CockroachDB does not yet populate a well-defined PostgreSQL error code will now be reported with code `XXUUU` instead of code `XX000`. [#35896][#35896] {% comment %}doc{% endcomment %} -- Removed "experimental" from the names of the two existing automatic statistics [cluster settings](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), and added two new cluster settings to control the target number of stale rows per table that will trigger a statistics refresh. [#36085][#36085] {% comment %}doc{% endcomment %} -- Renamed the `experimental_reorder_joins_limit` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars) to `reorder_joins_limit`. [#36085][#36085] {% comment %}doc{% endcomment %} -- Changed [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v19.1/show-jobs) to no longer display automatic statistics jobs. `SHOW AUTOMATIC JOBS` can now be used instead to view automatic statistics jobs. [#36112][#36112] {% comment %}doc{% endcomment %} -- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) will now try to select the index whose leaseholder preference is closest to the gateway node. [#36123][#36123] {% comment %}doc{% endcomment %} -- [Computed columns](https://www.cockroachlabs.com/docs/v19.1/computed-columns) are now evaluated after rounding any decimal values in input columns. [#36128][#36128] -- Changed the generation algorithm for the OID column of tables in `pg_catalog`. As with previous CockroachDB releases, we guarantee that the OID values are consistent between `pg_catalog` tables (so that tables can be joined together), but we do not guarantee that they are stable across CockroachDB versions. Avoid storing them in client apps. [#33697][#33697] - -
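For example (the query itself is arbitrary and assumes a hypothetical `orders` table):

```sql
-- Dump the catalog, settings, and statistics relevant to planning:
EXPLAIN (OPT, ENV) SELECT * FROM orders WHERE id = 1;

-- Automatic statistics jobs, now separate from SHOW JOBS:
SHOW AUTOMATIC JOBS;
```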

Command-line changes

- -- The output of [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v19.1/debug-zip) now contains more information. [#36026][#36026] {% comment %}doc{% endcomment %} - -

Admin UI changes

- -- Improved progress reporting for `CREATE STATISTICS` jobs. [#35684][#35684] -- The **Queries per second** metric in the **Summary** bar now summarizes only the query types displayed in the **SQL Queries** graph and **Node Map**. [#35905][#35905] {% comment %}doc{% endcomment %} -- The **Jobs** screen no longer shows automatic statistics by default. To see these jobs, you must now select **Auto-Statistics Creation** from the **Type** menu. [#36112][#36112] - -

Bug fixes

- -- Increased speed of automatic statistics jobs on clusters with low load. [#35698][#35698] -- `CHANGEFEED`s connected to a slow sink now error instead of using unbounded amounts of memory. [#35745][#35745] -- Removed historical log entries from Raft snapshots that could lead to failed snapshots. [#35701][#35701] -- Fixed a bug in `RESTORE` where some unusual range boundaries in interleaved tables caused an error. [#36005][#36005] -- Fixed an error that occurred when creating statistics on tables with an inverted index. [#35982][#35982] -- Fixed a panic that occurred when comparing a value to the result of an `EXISTS` expression. [#36038][#36038] -- Fixed an error caused by incorrect calculation of null counts in `VALUES` clauses. [#35997][#35997] -- Reduced the occurrence of `ASYNC_WRITE_FAILURE` transaction retry errors, especially for the first insert into a newly-created table. [#36104][#36104] -- Fixed panics with the message "unexpected non-pending txn in augmentMetaLocked" caused by distributed queries encountering multiple errors. [#36041][#36041] -- `CHANGEFEED`s with `changefeed.push.enabled = true` (which is the default) no longer fail when a range split occurs after they have been running for longer than the garbage collection window of the source data. They also now emit dramatically fewer duplicates. [#35981][#35981] -- Fixed panics caused by certain window functions that operate on tuples. [#36124][#36124] -- Prevented deadlocks when cancelling distributed queries in some cases. [#36122][#36122] -- Fixed a planning error that occurred with some `IN` expressions containing a list of constant and non-constant items. [#36134][#36134] -- Reduced risk of data unavailability during AZ/region failure. [#36133][#36133] -- Fixed a planning error that occurred when using set operations with multiple columns and many null values. [#36169][#36169] - -

Performance improvements

- -- Improved performance of the TPC-C benchmark by pre-calculating statistics and injecting them during `IMPORT`. [#35940][#35940] -- Reduced the default frequency of automatic statistics refreshes. [#35992][#35992] -- Improved the selectivity estimation of range predicates during query optimization. [#36093][#36093] - -

Build changes

- -- [Go 1.11.6](https://golang.org/dl/) is now the minimum required version necessary to build CockroachDB. [#35909][#35909] {% comment %}doc{% endcomment %} - -

Doc updates

- -- Added a library of common [Cluster Topology Patterns](https://www.cockroachlabs.com/docs/v19.1/topology-patterns). [#4235](https://github.com/cockroachdb/docs/pull/4235) -- Documented how [reads and writes](https://www.cockroachlabs.com/docs/v19.1/architecture/reads-and-writes-overview) are affected by the replicated and distributed nature of data in CockroachDB. [#4543](https://github.com/cockroachdb/docs/pull/4543) -- Corrected the syntax for [per-replica replication zone constraints](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones#scope-of-constraints). [#4569](https://github.com/cockroachdb/docs/pull/4569) -- Added more thorough documentation on [CockroachDB dependencies](https://www.cockroachlabs.com/docs/v19.1/recommended-production-settings#dependencies). [#4567](https://github.com/cockroachdb/docs/pull/4567) - -
- -

Contributors

- -This release includes 104 merged PRs by 23 authors. We would like to thank the following contributors from the CockroachDB community: - -- Dong Liang (first-time contributor) - -
- -[#33697]: https://github.com/cockroachdb/cockroach/pull/33697 -[#35284]: https://github.com/cockroachdb/cockroach/pull/35284 -[#35440]: https://github.com/cockroachdb/cockroach/pull/35440 -[#35510]: https://github.com/cockroachdb/cockroach/pull/35510 -[#35561]: https://github.com/cockroachdb/cockroach/pull/35561 -[#35650]: https://github.com/cockroachdb/cockroach/pull/35650 -[#35661]: https://github.com/cockroachdb/cockroach/pull/35661 -[#35684]: https://github.com/cockroachdb/cockroach/pull/35684 -[#35688]: https://github.com/cockroachdb/cockroach/pull/35688 -[#35698]: https://github.com/cockroachdb/cockroach/pull/35698 -[#35700]: https://github.com/cockroachdb/cockroach/pull/35700 -[#35701]: https://github.com/cockroachdb/cockroach/pull/35701 -[#35728]: https://github.com/cockroachdb/cockroach/pull/35728 -[#35745]: https://github.com/cockroachdb/cockroach/pull/35745 -[#35800]: https://github.com/cockroachdb/cockroach/pull/35800 -[#35802]: https://github.com/cockroachdb/cockroach/pull/35802 -[#35896]: https://github.com/cockroachdb/cockroach/pull/35896 -[#35905]: https://github.com/cockroachdb/cockroach/pull/35905 -[#35909]: https://github.com/cockroachdb/cockroach/pull/35909 -[#35940]: https://github.com/cockroachdb/cockroach/pull/35940 -[#35981]: https://github.com/cockroachdb/cockroach/pull/35981 -[#35982]: https://github.com/cockroachdb/cockroach/pull/35982 -[#35992]: https://github.com/cockroachdb/cockroach/pull/35992 -[#35997]: https://github.com/cockroachdb/cockroach/pull/35997 -[#36005]: https://github.com/cockroachdb/cockroach/pull/36005 -[#36026]: https://github.com/cockroachdb/cockroach/pull/36026 -[#36038]: https://github.com/cockroachdb/cockroach/pull/36038 -[#36041]: https://github.com/cockroachdb/cockroach/pull/36041 -[#36085]: https://github.com/cockroachdb/cockroach/pull/36085 -[#36093]: https://github.com/cockroachdb/cockroach/pull/36093 -[#36104]: https://github.com/cockroachdb/cockroach/pull/36104 -[#36112]: https://github.com/cockroachdb/cockroach/pull/36112 -[#36122]: https://github.com/cockroachdb/cockroach/pull/36122 -[#36123]: https://github.com/cockroachdb/cockroach/pull/36123 -[#36124]: https://github.com/cockroachdb/cockroach/pull/36124 -[#36128]: https://github.com/cockroachdb/cockroach/pull/36128 -[#36133]: https://github.com/cockroachdb/cockroach/pull/36133 -[#36134]: https://github.com/cockroachdb/cockroach/pull/36134 -[#36169]: https://github.com/cockroachdb/cockroach/pull/36169 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-rc.2.md b/src/current/_includes/releases/v19.1/v19.1.0-rc.2.md deleted file mode 100644 index f818fbbfcd6..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-rc.2.md +++ /dev/null @@ -1,54 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

SQL language changes

- -- Added the `kv.bulk_io_write.concurrent_addsstable_requests` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), which limits the number of SSTables that can be added concurrently during bulk operations (see the example below). [#36444][#36444] -- Added the `schemachanger.backfiller.buffer_size`, `schemachanger.backfiller.max_sst_size`, and `schemachanger.bulk_index_backfill.batch_size` [cluster settings](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), which control buffering in index backfills. [#36377][#36377] -- Added the `sql.defaults.reorder_joins_limit` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), which defines the default value of the `reorder_joins_limit` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars). [#36382][#36382] - -
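For example (the values below are illustrative only, not tuning advice):

```sql
-- Limit concurrent AddSSTable requests during bulk operations:
SET CLUSTER SETTING kv.bulk_io_write.concurrent_addsstable_requests = 2;

-- Set the cluster-wide default for the reorder_joins_limit session variable:
SET CLUSTER SETTING sql.defaults.reorder_joins_limit = 4;
```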

Bug fixes

- -- Fixed a panic that could occur with certain patterns of using `UPDATE` and column families. [#36375][#36375] -- Prevented production server crashes on certain assertion errors. [#36434][#36434] -- Data that was previously omitted from `debug zip` is now included. [#36480][#36480] -- Fixed a planning error that occurred with some `GROUP BY` queries due to errors in null count estimation. [#36528][#36528] -- Fixed inappropriate column renaming in some cases involving single-column set-returning functions (SRFs). [#36535][#36535] -- Prevented a panic when running a render expression that produces an error at the very end of a `count_rows` aggregate. [#36538][#36538] -- Prevented a deadlock related to store queue processing. [#36542][#36542] - -

Performance improvements

- -- CockroachDB now allows oversized ranges to split sooner. [#36368][#36368] -- Reduced memory usage during bulk data ingestion (during `IMPORT`, `RESTORE`, and index creation). [#36420][#36420] -- Prevented RocksDB from slowing down write traffic during bulk data ingestion. [#36512][#36512] -- Sped up bulk data ingestion during index backfills and `IMPORT`. [#36525][#36525] - -

Doc updates

- -- Emphasized the experimental status of [CockroachDB's Windows binary](https://www.cockroachlabs.com/docs/v19.1/install-cockroachdb-windows). [#4628](https://github.com/cockroachdb/docs/pull/4628) -- Clarified the use of the `ApplicationName` connection string parameter for JDBC clients. [#4623](https://github.com/cockroachdb/docs/pull/4623) -- Documented the [`COMMENT ON`](https://www.cockroachlabs.com/docs/v19.1/comment-on) statement, for adding comments to databases, tables, and columns. [#4617](https://github.com/cockroachdb/docs/pull/4617) -- Documented the [`RENAME CONSTRAINT`](https://www.cockroachlabs.com/docs/v19.1/rename-constraint) subcommand of `ALTER TABLE`, and identified the [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v19.1/alter-table) subcommands that can be used in combination in a single `ALTER TABLE` statement. [#4615](https://github.com/cockroachdb/docs/pull/4615) -- Documented per-statement credential parameters for Google Cloud Storage. [#4606](https://github.com/cockroachdb/docs/pull/4606) -- Clarified the accepted values for the `--duration` flag of [`cockroach workload`](https://www.cockroachlabs.com/docs/v19.1/cockroach-workload). [#4610](https://github.com/cockroachdb/docs/pull/4610) - -

Contributors

- -This release includes 36 merged PRs by 16 authors. - -[#36368]: https://github.com/cockroachdb/cockroach/pull/36368 -[#36375]: https://github.com/cockroachdb/cockroach/pull/36375 -[#36377]: https://github.com/cockroachdb/cockroach/pull/36377 -[#36382]: https://github.com/cockroachdb/cockroach/pull/36382 -[#36420]: https://github.com/cockroachdb/cockroach/pull/36420 -[#36434]: https://github.com/cockroachdb/cockroach/pull/36434 -[#36444]: https://github.com/cockroachdb/cockroach/pull/36444 -[#36480]: https://github.com/cockroachdb/cockroach/pull/36480 -[#36512]: https://github.com/cockroachdb/cockroach/pull/36512 -[#36525]: https://github.com/cockroachdb/cockroach/pull/36525 -[#36528]: https://github.com/cockroachdb/cockroach/pull/36528 -[#36535]: https://github.com/cockroachdb/cockroach/pull/36535 -[#36538]: https://github.com/cockroachdb/cockroach/pull/36538 -[#36542]: https://github.com/cockroachdb/cockroach/pull/36542 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-rc.3.md b/src/current/_includes/releases/v19.1/v19.1.0-rc.3.md deleted file mode 100644 index ffe7b4e8df5..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-rc.3.md +++ /dev/null @@ -1,28 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Bug fixes

- -- Fixed a potential crash when constructing certain types of aggregations with post projections. [#36514][#36514] -- CockroachDB now correctly validates [computed columns](https://www.cockroachlabs.com/docs/v19.1/computed-columns) during [`ALTER TABLE ... ADD COLUMN`](https://www.cockroachlabs.com/docs/v19.1/add-column). [#36575][#36575] -- Fixed a bug when parsing dates with large years. [#36555][#36555] -- Fixed a bug when decoding single column family `JSONB` columns. [#36626][#36626] -- Fixed a potential crash in a mixed-version cluster with some nodes running `v19.1-beta.x` and others running `v19.1-rc.x`. [#36719][#36719] -- Made [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) more resilient to a class of bugs that manifest as stalls. [#36768][#36768] - -

Performance improvements

- -- CockroachDB now applies back-pressure to bulk operations before other traffic. [#36738][#36738] - -

Contributors

- -This release includes 19 merged PRs by 12 authors. - -[#36514]: https://github.com/cockroachdb/cockroach/pull/36514 -[#36555]: https://github.com/cockroachdb/cockroach/pull/36555 -[#36575]: https://github.com/cockroachdb/cockroach/pull/36575 -[#36626]: https://github.com/cockroachdb/cockroach/pull/36626 -[#36719]: https://github.com/cockroachdb/cockroach/pull/36719 -[#36738]: https://github.com/cockroachdb/cockroach/pull/36738 -[#36768]: https://github.com/cockroachdb/cockroach/pull/36768 diff --git a/src/current/_includes/releases/v19.1/v19.1.0-rc.4.md b/src/current/_includes/releases/v19.1/v19.1.0-rc.4.md deleted file mode 100644 index bfc3062638e..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0-rc.4.md +++ /dev/null @@ -1,23 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Bug fixes

- -- Fixed a crash caused by running [`COMMENT ON`](https://www.cockroachlabs.com/docs/v19.1/comment-on) with verbose logging turned on. [#36825][#36825] -- Fixed a panic that could occur while RangeFeeds are active. [#36870][#36870] -- The default value of the `kv.bulk_io_write.max_rate` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) is now 1 TB/s, to help prevent incorrect rate limiting behavior due to rounding. [#36912][#36912] -- Fixed a rare inconsistency that could occur on badly overloaded clusters. [#36959][#36959] -- Fixed a bug in write batch decoding that could cause "invalid batch" errors while using [`cockroach debug` commands](https://www.cockroachlabs.com/docs/v19.1/cockroach-commands) to analyze data. [#36965][#36965] -- Fixed an issue that could cause low-traffic clusters to get stuck after a network outage. [#37064][#37064] - -

Contributors

- -This release includes 11 merged PRs by 9 authors. - -[#36825]: https://github.com/cockroachdb/cockroach/pull/36825 -[#36870]: https://github.com/cockroachdb/cockroach/pull/36870 -[#36912]: https://github.com/cockroachdb/cockroach/pull/36912 -[#36959]: https://github.com/cockroachdb/cockroach/pull/36959 -[#36965]: https://github.com/cockroachdb/cockroach/pull/36965 -[#37064]: https://github.com/cockroachdb/cockroach/pull/37064 diff --git a/src/current/_includes/releases/v19.1/v19.1.0.md b/src/current/_includes/releases/v19.1/v19.1.0.md deleted file mode 100644 index 409c9ab247c..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.0.md +++ /dev/null @@ -1,98 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -With the release of CockroachDB v19.1, we’ve made a variety of security, performance, and usability improvements. A few highlights: - -- **Enhanced security**: CockroachDB now supports [transparent data encryption at rest](https://www.cockroachlabs.com/docs/v19.1/encryption#encryption-at-rest-enterprise) and [integrates with existing LDAP directory services](https://www.cockroachlabs.com/docs/v19.1/gssapi_authentication) within an organization to simplify user account management. - -- **Native Change Data Capture**: CockroachDB extends its streaming data capabilities by enabling data to flow more easily to backend warehouses, with support for [publishing distributed, row-level change feeds directly into cloud storage](https://www.cockroachlabs.com/docs/v19.1/change-data-capture) for downstream processing. - -- **High-performance multi-region deployments**: Our cost-based optimizer can now [use data locality to optimize queries](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#preferring-the-nearest-index) so you can get low-latency queries even when your data may be spread across regions. We’ve also added [follower reads](https://www.cockroachlabs.com/docs/v19.1/follower-reads) to improve read performance for certain geo-distributed workloads. - -Check out a comprehensive [summary of the most significant user-facing changes](#v19-1-0-summary) and then [upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version). You can also read more about these changes in the [v19.1 blog post](https://www.cockroachlabs.com/blog/cockroachdb-19dot1-release/). - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Summary

- -This section summarizes the most significant user-facing changes in v19.1.0. For a complete list of features and changes, including bug fixes and performance improvements, see the [release notes]({% link releases/index.md %}#testing-releases) for previous testing releases. - -- [Managed service offering](#v19-1-0-managed-service-offering) -- [Enterprise features](#v19-1-0-enterprise-features) -- [Core features](#v19-1-0-core-features) -- [Backward-incompatible changes](#v19-1-0-backward-incompatible-changes) -- [Known limitations](#v19-1-0-known-limitations) -- [Documentation](#v19-1-0-documentation) - - - -

Managed service offering

- -Feature | Description --------|------------ -[**Managed CockroachDB Console**](https://www.cockroachlabs.com/docs/cockroachcloud/quickstart) | Paid managed CockroachDB customers can now sign in to their organization's account, view connection string details, and add and edit their list of allowed IPs in the management console. - -

Enterprise features

- -These features require an [enterprise license](https://www.cockroachlabs.com/docs/v19.1/enterprise-licensing). Register for a 30-day trial license [here](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -Feature | Description --------|------------ -[**Encryption at Rest**](https://www.cockroachlabs.com/docs/v19.1/encryption#encryption-at-rest-enterprise) | Encryption at rest provides transparent encryption of a node's data on the local disk. This feature was introduced as experimental in v2.1 and is now ready for production use. -[**GSSAPI with Kerberos Authentication**](https://www.cockroachlabs.com/docs/v19.1/gssapi_authentication) | CockroachDB now supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory. -[**Follower Reads**](https://www.cockroachlabs.com/docs/v19.1/follower-reads) | This feature reduces read latencies by allowing queries to perform historical reads of the closest replica of a given piece of data rather than reading from the more distant "leaseholder" replica. To enable follower reads on a query, use the `experimental_follower_read_timestamp()` [built-in function](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) in conjunction with the `AS OF SYSTEM TIME` clause (see the example after this table). -[**Prefer Closest Secondary Index**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#preferring-the-nearest-index) | Given multiple identical [indexes](https://www.cockroachlabs.com/docs/v19.1/indexes) that have different locality constraints using [replication zones](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones), the cost-based optimizer will now prefer the index that is closest to the gateway node that is planning the query. In a properly configured geo-distributed cluster, this can lead to performance improvements due to improved data locality and reduced network traffic. This feature enables scenarios where reference data such as a table of postal codes can be replicated to different regions, and queries will use the copy in the same region. -[**Change Data Capture**](https://www.cockroachlabs.com/docs/v19.1/change-data-capture) | CDC in v19.1 includes many improvements to production-readiness. [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) delivering data to Apache Kafka/Confluent Platform are now fully supported, and a new cloud storage sink allows `CHANGEFEED`s to deliver table updates as JSON files to endpoints like Google Storage or AWS S3. A new push-based internal data delivery mechanism called _rangefeeds_ helps deliver data with increased reliability and lower latency. - -
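A sketch of the follower-reads row above, using a hypothetical `postal_codes` table:

```sql
-- Historical read that may be served by the nearest replica:
SELECT * FROM postal_codes
AS OF SYSTEM TIME experimental_follower_read_timestamp();
```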

Core features

- -These features are freely available in the core version and do not require an enterprise license. - -Feature | Description --------|------------ -[**Load-Based Splitting**](https://www.cockroachlabs.com/docs/v19.1/load-based-splitting) | CockroachDB now automatically splits frequently accessed keys into smaller ranges to optimize your cluster’s performance. -[**Query Optimizer Hints**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#join-hints) | The cost-based optimizer now supports hint syntax to force the use of merge, hash, or lookup joins. This lets you override the cost-based optimizer's join algorithm selection in cases where you have information about your data that the cost-based optimizer does not yet have. -[**Correlated Subqueries**](https://www.cockroachlabs.com/docs/v19.1/subqueries#correlated-subqueries) | Most correlated subqueries are now decorrelated and processed by the cost-based optimizer. For those that cannot be decorrelated, CockroachDB now emits an "apply" operator that executes a sub-plan for every row in its input. This allows CockroachDB to execute a large number of additional correlated subqueries that could not be executed in v2.1. -[**Core Changefeeds**](https://www.cockroachlabs.com/docs/v19.1/changefeed-for) | CockroachDB now offers a non-enterprise version of change data capture, via the `EXPERIMENTAL CHANGEFEED FOR` statement, to consume table updates over a streaming Postgres connection (see the example after this table). -[**Cost-Based Optimizer**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) | The cost-based optimizer now supports almost all read-only queries (except window functions) and almost all mutations (e.g., `CREATE TABLE AS`, `INSERT`, `UPDATE`, `UPSERT`, `DELETE`). In addition, the cost-based optimizer now reorders up to 4 joins in a query to attempt to find the most performant ordering (via the new `reorder_joins_limit` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars)) and takes advantage of [automatically generated table statistics](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#table-statistics) without impacting foreground traffic. Note that statistics are created by default on all indexed columns when a user upgrades to this version. Finally, a [query plan cache](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer#query-plan-cache) now saves a portion of the planning time for frequent queries planned by the cost-based optimizer. -[**Logical Plans in the Admin UI**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page#logical-plan) | The **Statement Details** page in the Admin UI now shows the ordered steps CockroachDB will take to execute a query (i.e., the [`EXPLAIN`](https://www.cockroachlabs.com/docs/v19.1/explain) output). This helps you identify bottlenecks caused by how queries are planned by our heuristic and cost-based optimizers. -[**Cascading Replication Zones**](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones) | Newly created replication zones will now inherit empty values from their parent. For example, if the replication zone for a table is not explicitly set with `num_replicas`, it will inherit that value from its direct parent, whether that's the `.default` replication zone for the entire cluster or the replication zone for the database containing the table. 
-[**Custom Savepoints**](https://www.cockroachlabs.com/docs/v19.1/savepoint#customizing-the-savepoint-name) | CockroachDB now supports custom naming of `SAVEPOINT`s for compatibility with ORMs and other third-party tools. - -
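The join-hint syntax and `reorder_joins_limit` session variable from the table above can be exercised as follows (a minimal sketch; `orders` and `customers` are hypothetical tables):

```sql
-- Force a specific join algorithm with the new hint syntax:
SELECT o.id, c.name
FROM orders AS o
INNER LOOKUP JOIN customers AS c ON o.customer_id = c.id;

-- Allow the optimizer to reorder up to 4 joins while planning:
SET reorder_joins_limit = 4;
```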

Backward-incompatible changes

- -Before [upgrading to CockroachDB v19.1.0](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version), be sure to review the following backward-incompatible changes and adjust your application as necessary. - -- CockroachDB no longer supports the `B'abcde'` notation to express [byte array literals](https://www.cockroachlabs.com/docs/v19.1/sql-constants#byte-array-literals). This notation now expresses **bit** array literals like in PostgreSQL. The `b'...'` notation remains for byte array literals. - -- The normalized results of certain timestamp + duration operations involving year or month durations have been adjusted to agree with the values returned by PostgreSQL. - -- The `CHANGEFEED` [`experimental-avro` option](https://www.cockroachlabs.com/docs/v19.1/create-changefeed#options) has been renamed `experimental_avro`. - -- Timezone abbreviations, such as `EST`, are no longer allowed when parsing or converting to a date/time type. Previously, an abbreviation would be accepted if it were an alias for the session's timezone. - -- The way composite [foreign key](https://www.cockroachlabs.com/docs/v19.1/foreign-key) matches are evaluated has changed to match the Postgres behavior. If your schema currently uses composite keys, it may require updates, since this change may affect your foreign key constraints and cascading behavior. For more details and guidance, see [this note](#v2-2-0-alpha-20190114-composite-foreign-key-matching). - -- Mutation statements like [`UPDATE`](https://www.cockroachlabs.com/docs/v19.1/update), [`INSERT`](https://www.cockroachlabs.com/docs/v19.1/insert), and [`DELETE`](https://www.cockroachlabs.com/docs/v19.1/delete) no longer attempt to guarantee mutation or output ordering when an `ORDER BY` clause is present. It is now an error to use `ORDER BY` without `LIMIT` with the `UPDATE` statement. - -
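To illustrate the first change above, the two literal notations now behave as follows (sketch):

```sql
SELECT B'0101';   -- now a bit array literal, as in PostgreSQL
SELECT b'abcde';  -- still a byte array literal
```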

Known limitations

- -For information about limitations we've identified in CockroachDB v19.1, with suggested workarounds where applicable, see [Known Limitations](https://www.cockroachlabs.com/docs/v19.1/known-limitations). - -

Documentation

- -Topic | Description -------|------------ -**Geo-Partitioning** | Added a [video and tutorial on using geo-partitioning](https://www.cockroachlabs.com/docs/v19.1/demo-geo-partitioning) to get very fast reads and writes in a broadly distributed cluster. -**Security** | Added an [overview of CockroachDB security](https://www.cockroachlabs.com/docs/v19.1/security-overview), with a dedicated page on [authentication](https://www.cockroachlabs.com/docs/v19.1/authentication), [encryption](https://www.cockroachlabs.com/docs/v19.1/encryption), [authorization](https://www.cockroachlabs.com/docs/v19.1/authorization), and [SQL audit logging](https://www.cockroachlabs.com/docs/v19.1/sql-audit-logging). -**Troubleshooting** | Added much more guidance on [troubleshooting cluster setup](https://www.cockroachlabs.com/docs/v19.1/cluster-setup-troubleshooting) and [troubleshooting SQL behavior](https://www.cockroachlabs.com/docs/v19.1/query-behavior-troubleshooting). -**Architecture** | Added the [Life of a Distributed Transaction](https://www.cockroachlabs.com/docs/v19.1/architecture/life-of-a-distributed-transaction), which details the path that a query takes through CockroachDB's architecture, starting with a SQL client and progressing all the way to RocksDB (and then back out again). Also added [Reads and Writes in CockroachDB](https://www.cockroachlabs.com/docs/v19.1/architecture/reads-and-writes-overview), which explains how reads and writes are affected by the replicated and distributed nature of data in CockroachDB. -**Production Guidance** | Expanded the [Production Checklist](https://www.cockroachlabs.com/docs/v19.1/recommended-production-settings) with more current hardware recommendations and additional guidance on storage, file systems, and clock synchronization. Also added a library of common [Cluster Topology Patterns](https://www.cockroachlabs.com/docs/v19.1/topology-patterns). -**ORMs** | Expanded the [SQLAlchemy tutorial](https://www.cockroachlabs.com/docs/v19.1/build-a-python-app-with-cockroachdb-sqlalchemy) to provide code for transaction retries and best practices for using SQLAlchemy with CockroachDB. diff --git a/src/current/_includes/releases/v19.1/v19.1.1.md b/src/current/_includes/releases/v19.1/v19.1.1.md deleted file mode 100644 index 01beef8072f..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.1.md +++ /dev/null @@ -1,67 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.1 since v19.1.0. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Enterprise edition changes

- -- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) now accept a `key_in_value` option; this is automatically used for cloud storage sinks, making the primary key of deleted rows recoverable. [#37328][#37328] {% comment %}doc{% endcomment %} - -
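A sketch of enabling the option explicitly (the Kafka sink URI and table name are hypothetical; cloud storage sinks apply this option automatically):

```sql
CREATE CHANGEFEED FOR TABLE orders
INTO 'kafka://broker:9092'
WITH key_in_value;
```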

SQL language changes

- -- `EXPLAIN (DISTSQL) CREATE STATISTICS` now shows the DistSQL plan used by the [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) job. [#37237][#37237] {% comment %}doc{% endcomment %} -- Added missing columns to [`information_schema.columns`](https://www.cockroachlabs.com/docs/v19.1/information-schema). [#37283][#37283] {% comment %}doc{% endcomment %} - -
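For example (a sketch; the statistic name, column, and table are hypothetical):

```sql
EXPLAIN (DISTSQL) CREATE STATISTICS orders_stats ON customer_id FROM orders;
```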

Bug fixes

- -- Prevented panics when adding [comments](https://www.cockroachlabs.com/docs/v19.1/comment-on) to database objects at high verbosity. [#37325][#37325] -- Fixed panics when trying to run certain `SHOW` commands via the pgwire prepare path. [#37325][#37325] -- Fixed a regression in 19.1 that prevented empty arrays from being accepted over pgwire. [#37398][#37398] -- While a cluster is unavailable (e.g., during a network partition), memory and goroutines used for authenticating connections no longer leak when the client closes those connections. [#36177][#36177] -- Fixed a possible panic while recovering from a WAL on which a sync operation failed. [#37109][#37109] -- Fixed an error that could occur when a zigzag join was performed against a table with dropped columns. [#37245][#37245] -- Fixed incorrect query plans/results when non-validated FK constraints are not satisfied by the data. [#37253][#37253] -- `CHANGEFEED`s now retry in more situations instead of failing with errors. [#37092][#37092] -- Fixed a bug where `CHANGEFEED` job progress would regress when the job was restarted. [#37091][#37091] -- The `changefeed.max_behind_nanos` metric now reports fewer false positives for changefeeds falling behind. [#37048][#37048] -- Corrected the names of some columns for tables created with [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v19.1/create-table-as). [#37238][#37238] -- Fixed a bug causing unvalidated check constraints to disappear from the output of [`SHOW CONSTRAINTS`](https://www.cockroachlabs.com/docs/v19.1/show-constraints) and to not be referenced in [`ALTER TABLE`](https://www.cockroachlabs.com/docs/v19.1/alter-table) after upgrading to 19.1. [#37462][#37462] - -

Performance improvements

- -- Improved the performance of some queries containing predicates with constant functions, since these functions are now evaluated earlier during query optimization. [#37234][#37234] -- Improved the performance of some queries by teaching the optimizer to always prefer constrained scans over unconstrained scans. [#37235][#37235] - -

Doc updates

- -- Updated the [Kubernetes tutorials](https://www.cockroachlabs.com/docs/v19.1/orchestrate-cockroachdb-with-kubernetes) for running CockroachDB on GKE to specify a reasonable machine type. Also updated the Helm-specific instructions for maintenance tasks (adding/removing nodes, upgrading a cluster). [#4813](https://github.com/cockroachdb/docs/pull/4813) [#4805](https://github.com/cockroachdb/docs/pull/4805) -- Fixed the code sample in [Build a PHP App with CockroachDB](https://www.cockroachlabs.com/docs/v19.1/build-a-php-app-with-cockroachdb) so that it does not create new connections for each query. [#4804](https://github.com/cockroachdb/docs/pull/4804) - -

Contributors

- -This release includes 28 merged PRs by 14 authors. - -[#36177]: https://github.com/cockroachdb/cockroach/pull/36177 -[#37048]: https://github.com/cockroachdb/cockroach/pull/37048 -[#37091]: https://github.com/cockroachdb/cockroach/pull/37091 -[#37092]: https://github.com/cockroachdb/cockroach/pull/37092 -[#37109]: https://github.com/cockroachdb/cockroach/pull/37109 -[#37234]: https://github.com/cockroachdb/cockroach/pull/37234 -[#37235]: https://github.com/cockroachdb/cockroach/pull/37235 -[#37237]: https://github.com/cockroachdb/cockroach/pull/37237 -[#37238]: https://github.com/cockroachdb/cockroach/pull/37238 -[#37245]: https://github.com/cockroachdb/cockroach/pull/37245 -[#37253]: https://github.com/cockroachdb/cockroach/pull/37253 -[#37283]: https://github.com/cockroachdb/cockroach/pull/37283 -[#37325]: https://github.com/cockroachdb/cockroach/pull/37325 -[#37328]: https://github.com/cockroachdb/cockroach/pull/37328 -[#37398]: https://github.com/cockroachdb/cockroach/pull/37398 -[#37462]: https://github.com/cockroachdb/cockroach/pull/37462 diff --git a/src/current/_includes/releases/v19.1/v19.1.10.md b/src/current/_includes/releases/v19.1/v19.1.10.md deleted file mode 100644 index c6051bebb56..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.10.md +++ /dev/null @@ -1,51 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Security updates

- -- HTTP endpoints beginning with `/debug/` now require a valid [`admin`](https://www.cockroachlabs.com/docs/v19.1/authorization) login session. [#50491][#50491] - -

Bug fixes

- -- Previously, HTTP requests would start to fail with error 503 "`transport: authentication handshake failed: io: read/write on closed pipe`" and would continue to fail until the node was restarted. This has been fixed. This bug had existed since v2.1 or earlier. [#48483][#48483] -- Previously, when the value passed to `--drain-wait` was very small but non-zero, [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) would in certain cases not proceed to perform a hard shutdown. This has been corrected. This bug was present in v19.1.9, v19.2.7, and v20.1.1. [#49365][#49365] -- Fixed a RocksDB bug that could result in inconsistencies in rare circumstances. [#50510][#50510] - -

Build changes

- -- Release Docker images are now built on Debian 9.12. [#50480][#50480] - -

Doc updates

- -- Updated guidance on [node decommissioning](https://www.cockroachlabs.com/docs/v19.1/remove-nodes). [#7304][#7304] -- Renamed "whitelist/blacklist" terminology to "allowlist/blocklist". [#7535][#7535] -- Updated the Releases navigation in the sidebar to expose the latest Production and Testing releases. [#7550][#7550] -- Fixed scrollbar visibility on Chrome. [#7487][#7487] - -
- -

Contributors

- -This release includes 10 merged PRs by 6 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Drew Kimball (first-time contributor, CockroachDB team member) -- Jackson Owens (first-time contributor, CockroachDB team member) -- James H. Linder (first-time contributor, CockroachDB team member) - -
- -[#48483]: https://github.com/cockroachdb/cockroach/pull/48483 -[#49365]: https://github.com/cockroachdb/cockroach/pull/49365 -[#50480]: https://github.com/cockroachdb/cockroach/pull/50480 -[#50491]: https://github.com/cockroachdb/cockroach/pull/50491 -[#50510]: https://github.com/cockroachdb/cockroach/pull/50510 -[#7304]: https://github.com/cockroachdb/docs/pull/7304 -[#7550]: https://github.com/cockroachdb/docs/pull/7550 -[#7535]: https://github.com/cockroachdb/docs/pull/7535 -[#7487]: https://github.com/cockroachdb/docs/pull/7487 diff --git a/src/current/_includes/releases/v19.1/v19.1.11.md b/src/current/_includes/releases/v19.1/v19.1.11.md deleted file mode 100644 index 643378e2091..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.11.md +++ /dev/null @@ -1,17 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Bug fixes

- -- Fixed a bug in [`TRUNCATE`](https://www.cockroachlabs.com/docs/v19.1/truncate) that could leave tables in a state where they could not be renamed. [#50766][#50766] - -

Contributors

- -This release includes 1 merged PR by 1 author. - -[#50766]: https://github.com/cockroachdb/cockroach/pull/50766 diff --git a/src/current/_includes/releases/v19.1/v19.1.2.md b/src/current/_includes/releases/v19.1/v19.1.2.md deleted file mode 100644 index 3b9e00802a2..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.2.md +++ /dev/null @@ -1,68 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.2 since v19.1.1. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Enterprise edition changes

- -- You can now alter the zone configuration for a secondary index partition using the syntax `ALTER PARTITION <partition> OF INDEX <table>@<index> CONFIGURE ZONE ...`. [#36883][#36883] {% comment %}doc{% endcomment %} - -
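A minimal sketch of the new syntax (all object names are hypothetical):

```sql
ALTER PARTITION us_west OF INDEX orders@orders_by_region
CONFIGURE ZONE USING num_replicas = 5;
```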

Bug fixes

- -- CockroachDB now correctly computes the result of shifting bit arrays to the right and avoids generating invalid bit arrays. [#36751][#36751] -- `SHOW ZONE CONFIGURATION` no longer emits invalid `ALTER` syntax in its output when displaying the zone configuration for a table or index partition that is inheriting from the database or the default configuration. [#36883][#36883] -- `SHOW ZONE CONFIGURATION FOR TABLE t PARTITION p` no longer ignores the clause `PARTITION p` and now properly displays the zone configuration for that partition instead. [#36883][#36883] -- Automated table statistics no longer encounter "batch timestamp must be after replica GC threshold" errors on configurations with low TTL. [#37588][#37588] -- Fixed type inference of columns in subqueries for some expressions of the form `scalar IN (subquery)`. [#37598][#37598] -- Fixed a panic when constructing the error message for an invalid partitioning. [#37703][#37703] -- Fixed a potential source of (faux) replica inconsistencies that can be reported while running a mixed v19.1 / v2.1 cluster. This error (in that situation only) is benign and can be resolved by upgrading to the latest v19.1 patch release. Every time this error occurs, a "checkpoint" is created which will occupy a large amount of disk space and which needs to be removed manually (see `/auxiliary/checkpoints`). [#37722][#37722] -- Fixed a case in which [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) would return successfully even though the server process was still running in a severely degraded state. [#37722][#37722] -- Fixed incorrect results or "incorrectly ordered stream" errors in response to some queries with aggregations, and improved the `EXPLAIN` output for aggregations. [#37792][#37792] -- A `NULL` right operand now causes the sub-operator expression to return `NULL`. [#37886][#37886] -- The `age()` function is now correctly marked as impure, causing it to be unavailable in certain contexts. [#37922][#37922] -- Certain binary encodings of numeric/decimal values no longer result in values that are an order of magnitude off. [#37921][#37921] -- Fixed a race condition that could cause a panic during query planning. [#37974][#37974] -- Fixed `GROUP BY` for empty arrays. [#37940][#37940] -- Fixed a bug when estimating result set sizes in the optimizer that caused queries involving very large integer ranges to have poor plans. [#38038][#38038] -- The [`cockroach` commands](https://www.cockroachlabs.com/docs/v19.1/cockroach-commands) that internally use an RPC connection (e.g., `cockroach quit` and `cockroach init`) once again properly support passing an IPv6 address literal via the `--host` argument. [#37982][#37982] -- The [`cockroach init`](https://www.cockroachlabs.com/docs/v19.1/initialize-a-cluster) command will now always properly report when a cluster is already initialized, even after the node that it's connecting to is restarted. [#37593][#37593] - -

Security improvements

- -- Stack memory used by CockroachDB is now marked as non-executable, improving security and compatibility with SELinux. [#38011][#38011] - -
- -

Contributors

- -This release includes 25 merged PRs by 18 authors. We would like to thank the following contributors from the CockroachDB community: - -- Simo Kinnunen (first-time contributor) - -
- -[#36751]: https://github.com/cockroachdb/cockroach/pull/36751 -[#36883]: https://github.com/cockroachdb/cockroach/pull/36883 -[#37573]: https://github.com/cockroachdb/cockroach/pull/37573 -[#37588]: https://github.com/cockroachdb/cockroach/pull/37588 -[#37598]: https://github.com/cockroachdb/cockroach/pull/37598 -[#37703]: https://github.com/cockroachdb/cockroach/pull/37703 -[#37722]: https://github.com/cockroachdb/cockroach/pull/37722 -[#37792]: https://github.com/cockroachdb/cockroach/pull/37792 -[#37886]: https://github.com/cockroachdb/cockroach/pull/37886 -[#37921]: https://github.com/cockroachdb/cockroach/pull/37921 -[#37922]: https://github.com/cockroachdb/cockroach/pull/37922 -[#37940]: https://github.com/cockroachdb/cockroach/pull/37940 -[#37974]: https://github.com/cockroachdb/cockroach/pull/37974 -[#38011]: https://github.com/cockroachdb/cockroach/pull/38011 -[#38038]: https://github.com/cockroachdb/cockroach/pull/38038 -[#37982]: https://github.com/cockroachdb/cockroach/pull/37982 -[#37593]: https://github.com/cockroachdb/cockroach/pull/37593 diff --git a/src/current/_includes/releases/v19.1/v19.1.3.md b/src/current/_includes/releases/v19.1/v19.1.3.md deleted file mode 100644 index a2cd843c681..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.3.md +++ /dev/null @@ -1,54 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.3 since v19.1.2. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Bug fixes

- -- Fixed help text that erroneously labeled [Encryption at Rest](https://www.cockroachlabs.com/docs/v19.1/encryption) as experimental. [#38237][#38237] -- Fixed an incorrect type mismatch error when empty [`ARRAY`](https://www.cockroachlabs.com/docs/v19.1/array) values are used as [`DEFAULT` values](https://www.cockroachlabs.com/docs/v19.1/default-value) (and potentially in other contexts). [#38300][#38300] -- Fixed a panic that could occur when decoding decimals as query parameters. [#38330][#38330] -- `NULL`s are now correctly handled by `MIN`, `SUM`, and `AVG` when used as [window functions](https://www.cockroachlabs.com/docs/v19.1/window-functions). [#38356][#38356] -- Fixed an issue that prevented [restoring](https://www.cockroachlabs.com/docs/v19.1/restore) some backups if they included tables that were partitioned by columns of certain types while also interleaved by child tables. [#38494][#38494] -- Fixed a possible deadlock when a storage engine write fails. [#38478][#38478] -- Fixed potential reappearance of deleted timeseries data, which could trip the consistency checker. [#38478][#38478] -- Removed dependency on `sync_file_range` on Linux platforms on which it returns ENOSYS, such as WSL (Windows Subsystem for Linux). [#38478][#38478] -- Nodes that have been down now recover more quickly when they rejoin, assuming they weren't down for much more than the value of the `server.time_until_store_dead` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) (which defaults to 5 minutes). [#38642][#38642] -- Checking the "skip should queue" checkbox in the Manual Enqueue Range advanced debugging page now works for the GC Queue. [#38296][#38296] -- The YCSB workload no longer ignores the `--db` option. [#38238][#38238] -- Fixed the auto-retry counter in stats, which is now also logged in the statement/audit logs. [#38035][#38035] - -

Security improvements

- -- `CommonName` is now checked only on the first certificate in a file. [#38165][#38165] - -

Doc updates

- -- Added the [Build a Python app with Kubernetes on CockroachCloud](https://www.cockroachlabs.com/docs/cockroachcloud/deploy-a-python-to-do-app-with-flask-kubernetes-and-cockroachcloud) tutorial for running a Python app on a CockroachCloud cluster using a local Kubernetes cluster. [#4918](https://github.com/cockroachdb/docs/pull/4918) -- Expanded the recommended [Topology Patterns](https://www.cockroachlabs.com/docs/v19.1/topology-patterns) for running CockroachDB in a cloud environment, each with required configurations and latency and resiliency characteristics. [#4820](https://github.com/cockroachdb/docs/pull/4820) -- Made the Java code samples in [Build a Java App with CockroachDB](https://www.cockroachlabs.com/docs/v19.1/build-a-java-app-with-cockroachdb) simpler and more idiomatic. [#4855](https://github.com/cockroachdb/docs/pull/4855) -- Documented [what happens when a node runs out of disk space](https://www.cockroachlabs.com/docs/v19.1/operational-faqs#what-happens-when-a-node-runs-out-of-disk-space) and how to [create a ballast file](https://www.cockroachlabs.com/docs/v19.1/debug-ballast) to prepare for this case. [#5000](https://github.com/cockroachdb/docs/pull/5000) - -

Contributors

- -This release includes 17 merged PRs by 14 authors. - -[#38035]: https://github.com/cockroachdb/cockroach/pull/38035 -[#38165]: https://github.com/cockroachdb/cockroach/pull/38165 -[#38237]: https://github.com/cockroachdb/cockroach/pull/38237 -[#38238]: https://github.com/cockroachdb/cockroach/pull/38238 -[#38296]: https://github.com/cockroachdb/cockroach/pull/38296 -[#38300]: https://github.com/cockroachdb/cockroach/pull/38300 -[#38330]: https://github.com/cockroachdb/cockroach/pull/38330 -[#38356]: https://github.com/cockroachdb/cockroach/pull/38356 -[#38478]: https://github.com/cockroachdb/cockroach/pull/38478 -[#38494]: https://github.com/cockroachdb/cockroach/pull/38494 -[#38642]: https://github.com/cockroachdb/cockroach/pull/38642 diff --git a/src/current/_includes/releases/v19.1/v19.1.4.md b/src/current/_includes/releases/v19.1/v19.1.4.md deleted file mode 100644 index 418dbaa76db..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.4.md +++ /dev/null @@ -1,48 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.4 since v19.1.3. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Enterprise edition changes

- -- The new `skip_missing_views` option for [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore) skips restoring views that cannot be restored because their dependencies are not being restored at the same time. [#38773][#38773] {% comment %}doc{% endcomment %} - -
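For example, a restore that tolerates missing view dependencies might look like this (a sketch; the database name and backup location are hypothetical):

```sql
RESTORE bank.* FROM 'gs://acme-backups/2019-08-12'
WITH skip_missing_views;
```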

SQL language changes

- -- CockroachDB now ignores non-fatal errors updating jobs during [`DROP TABLE`](https://www.cockroachlabs.com/docs/v19.1/drop-table). [#38821][#38821] -- The first statement of a transaction no longer returns a transaction retry error if it is an [`UPDATE`](https://www.cockroachlabs.com/docs/v19.1/update) or [`DELETE`](https://www.cockroachlabs.com/docs/v19.1/delete) (this was already true for [`INSERT`](https://www.cockroachlabs.com/docs/v19.1/insert)). [#39087][#39087] - -

Bug fixes

- -- Fixed a bug that prevented [inverted indexes](https://www.cockroachlabs.com/docs/v19.1/inverted-indexes) from being created on [`JSONB`](https://www.cockroachlabs.com/docs/v19.1/jsonb) columns containing `NULL` values. [#38747][#38747] -- Ranges consisting of only one row (and historical versions of that row) are now correctly up-replicated. [#38588][#38588] -- Fixed a planning error that caused valid queries to fail with the error "rowCount passed in was too small". [#38793][#38793] -- Fixed incorrect results, or "unordered span" errors, in some cases involving exclusive inequalities with non-numeric types. [#38896][#38896] -- Fixed a bug in the [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) causing a bad index for [lookup joins](https://www.cockroachlabs.com/docs/v19.1/joins#lookup-joins) in some cases. [#39028][#39028] -- Fixed a potential infinite loop in queries involving reverse scans. [#39101][#39101] -- [`UPSERT`s](https://www.cockroachlabs.com/docs/v19.1/upsert) planned by the [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) that use [lookup joins](https://www.cockroachlabs.com/docs/v19.1/joins#lookup-joins) and run during column mutations on the table being updated no longer cause crashes or other issues. [#38917][#38917] -- `crdb_internal.ranges` can now be used inside [views](https://www.cockroachlabs.com/docs/v19.1/views), as shown in the sketch below. Note that such views can become invalid in future releases if `crdb_internal.ranges` changes. [#39213][#39213] - -
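As an illustration of the last fix (a sketch; the view name is hypothetical, and the caveat above about future releases applies):

```sql
CREATE VIEW range_ids AS
SELECT range_id FROM crdb_internal.ranges;
```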

Contributors

- -This release includes 15 merged PRs by 11 authors. - -[#38588]: https://github.com/cockroachdb/cockroach/pull/38588 -[#38747]: https://github.com/cockroachdb/cockroach/pull/38747 -[#38773]: https://github.com/cockroachdb/cockroach/pull/38773 -[#38793]: https://github.com/cockroachdb/cockroach/pull/38793 -[#38821]: https://github.com/cockroachdb/cockroach/pull/38821 -[#38896]: https://github.com/cockroachdb/cockroach/pull/38896 -[#38917]: https://github.com/cockroachdb/cockroach/pull/38917 -[#39028]: https://github.com/cockroachdb/cockroach/pull/39028 -[#39087]: https://github.com/cockroachdb/cockroach/pull/39087 -[#39101]: https://github.com/cockroachdb/cockroach/pull/39101 -[#39213]: https://github.com/cockroachdb/cockroach/pull/39213 diff --git a/src/current/_includes/releases/v19.1/v19.1.5.md b/src/current/_includes/releases/v19.1/v19.1.5.md deleted file mode 100644 index 3a7cbc7f50c..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.5.md +++ /dev/null @@ -1,48 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.5 since v19.1.4. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

SQL language changes

- -- Added the `check_constraints` table to `information_schema`. [#39688][#39688] - -
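The new table can be inspected like any other `information_schema` relation (sketch):

```sql
SELECT constraint_name, check_clause
FROM information_schema.check_constraints;
```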

Bug fixes

- -- Unary negatives in constant arithmetic expressions are no longer ignored. [#39367][#39367] -- [Zone configurations](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones) are now propagated to non-gossiped system tables. [#39691][#39691] -- Prevented unlimited memory usage during SQL range deletions. [#39733][#39733] -- Fixed a crash caused by the presence of [window functions](https://www.cockroachlabs.com/docs/v19.1/window-functions) in the source of a [`CREATE TABLE AS`](https://www.cockroachlabs.com/docs/v19.1/create-table-as) statement. [#40430][#40430] -- Fixed a planning error that could occur when a [common table expression](https://www.cockroachlabs.com/docs/v19.1/common-table-expressions) with an [`ORDER BY`](https://www.cockroachlabs.com/docs/v19.1/query-order) was used inside a [subquery](https://www.cockroachlabs.com/docs/v19.1/subqueries). [#40490][#40490] -- Fixed an optimizer panic when building array access expressions. [#40513][#40513] -- Fixed a bug where an MVCC value at a future timestamp was returned after a transaction restart. [#40611][#40611] -- Intents in a read's uncertainty interval are now considered uncertain, just as if they were committed values. This removes the potential for stale reads when a causally dependent transaction runs into the not-yet-resolved intents from a causal ancestor. [#40611][#40611] -- Prevented problems on mixed-version 19.1 clusters that are also performing a lookup join on a table that has an ongoing index backfill. [#40739][#40739] -- The `cockroach` [CLI client commands](https://www.cockroachlabs.com/docs/v19.1/cockroach-commands) are now able to connect to a server via the environment variable `COCKROACH_URL`. [#40848][#40848] -- Fixed a crash in apply joins. [#40829][#40829] -- Detailed crash reports ("panic messages") could previously be reported in the wrong file if SQL audit reporting or statement logging had been activated. This has been corrected, and crash reports now always appear in the main log file. [#40942][#40942] - -

Contributors

- -This release includes 14 merged PRs by 10 authors. - -[#39367]: https://github.com/cockroachdb/cockroach/pull/39367 -[#39688]: https://github.com/cockroachdb/cockroach/pull/39688 -[#39691]: https://github.com/cockroachdb/cockroach/pull/39691 -[#39733]: https://github.com/cockroachdb/cockroach/pull/39733 -[#40430]: https://github.com/cockroachdb/cockroach/pull/40430 -[#40490]: https://github.com/cockroachdb/cockroach/pull/40490 -[#40513]: https://github.com/cockroachdb/cockroach/pull/40513 -[#40611]: https://github.com/cockroachdb/cockroach/pull/40611 -[#40739]: https://github.com/cockroachdb/cockroach/pull/40739 -[#40829]: https://github.com/cockroachdb/cockroach/pull/40829 -[#40848]: https://github.com/cockroachdb/cockroach/pull/40848 -[#40942]: https://github.com/cockroachdb/cockroach/pull/40942 diff --git a/src/current/_includes/releases/v19.1/v19.1.6.md b/src/current/_includes/releases/v19.1/v19.1.6.md deleted file mode 100644 index e2d5bbef6d6..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.6.md +++ /dev/null @@ -1,146 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.6 since v19.1.5. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Security updates

- -- CockroachDB previously allowed non-authenticated access to privileged HTTP endpoints like `/_admin/v1/events`, which operate using `root` user permissions and can thus access (and sometimes modify) any and all data in the cluster. This security vulnerability has been patched by disallowing non-authenticated access to these endpoints and restricting access to admin users only. - - {{site.data.alerts.callout_info}} - Users who have built monitoring automation using these HTTP endpoints must modify their automation to work using an HTTP session token for an admin user. - {{site.data.alerts.end}} - -- Some Admin UI screens (e.g., Jobs) were previously incorrectly displayed using `root` user permissions, regardless of the logged-in user's credentials. This enabled insufficiently privileged users to access privileged information. This security vulnerability has been patched by using the credentials of the logged-in user to display all Admin UI screens. - -- Privileged HTTP endpoints and certain Admin UI screens require an admin user. However, `root` is disallowed from logging in via HTTP and it is not possible to create additional admin accounts without an Enterprise license. This is further discussed [here](https://github.com/cockroachdb/cockroach/issues/43870) and will be addressed in an upcoming patch revision. - - {{site.data.alerts.callout_info}} - Users without an Enterprise license can create an additional admin user using a temporary evaluation license, until an alternative is available. A user created this way will persist beyond the license expiry. - {{site.data.alerts.end}} - -- Some Admin UI screens currently display an error or a blank page when viewed by a non-admin user (e.g., Table Details). This is a known limitation mistakenly introduced by the changes described above. This situation is discussed further [here](https://github.com/cockroachdb/cockroach/issues/44033) and will be addressed in an upcoming patch revision. The list of UI pages affected includes but is not limited to: - - - Job details - - Database details - - Table details - - Zone configurations - - {{site.data.alerts.callout_info}} - Users can access these Admin UI screens using an admin user until a fix is available. 
- {{site.data.alerts.end}} - -The list of HTTP endpoints affected by the first change above includes: - -| HTTP Endpoint | Description | Sensitive information revealed | Special (see below) | -|--------------------------------------------------------|-----------------------------------|----------------------------------------------------|---------------------| -| `/_admin/v1/data_distribution` | Database-table-node mapping | Database and table names | | -| `/_admin/v1/databases/{database}/tables/{table}/stats` | Table stats histograms | Stored table data via PK values | | -| `/_admin/v1/drain` | API to shut down a node | Can cause DoS on cluster | | -| `/_admin/v1/enqueue_range` | Force range rebalancing | Can cause DoS on cluster | | -| `/_admin/v1/events` | Event log | Usernames, stored object names, privilege mappings | | -| `/_admin/v1/nontablestats` | Non-table statistics | Stored table data via PK values | | -| `/_admin/v1/rangelog` | Range log | Stored table data via PK values | | -| `/_admin/v1/settings` | Cluster settings | Organization name | | -| `/_status/allocator/node/{node_id}` | Rebalance simulator | Can cause DoS on cluster | yes | -| `/_status/allocator/range/{range_id}` | Rebalance simulator | Can cause DoS on cluster | yes | -| `/_status/certificates/{node_id}` | Node and user certificates | Credentials | | -| `/_status/details/{node_id}` | Node details | Internal IP addresses | | -| `/_status/enginestats/{node_id}` | Storage statistics | Operational details | | -| `/_status/files/{node_id}` | Retrieve heap and goroutine dumps | Operational details | yes | -| `/_status/gossip/{node_id}` | Gossip details | Internal IP addresses | yes | -| `/_status/hotranges` | Ranges with active requests | Stored table data via PK values | | -| `/_status/local_sessions` | SQL sessions | Cleartext SQL queries | yes | -| `/_status/logfiles/{node_id}` | List of log files | Operational details | yes | -| `/_status/logfiles/{node_id}/{file}` | Server logs + entries | Many: names, application data, credentials, etc. | yes | -| `/_status/logs/{node_id}` | Log entries | Many: names, application data, credentials, etc. | yes | -| `/_status/profile/{node_id}` | Profiling data | Operational details | | -| `/_status/raft` | Raft details | Stored table data via PK values | | -| `/_status/range/{range_id}` | Range details | Stored table data via PK values | | -| `/_status/ranges/{node_id}` | Range details | Stored table data via PK values | | -| `/_status/sessions` | SQL sessions | Cleartext SQL queries | yes | -| `/_status/span` | Statistics per key span | Whether certain table rows exist | | -| `/_status/stacks/{node_id}` | Stack traces | Application data, stored table data | | -| `/_status/stores/{node_id}` | Store details | Operational details | | - -{{site.data.alerts.callout_info}} -"Special" endpoints are subject to the [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) `server.remote_debugging.mode`. Unless the setting was customized, clients are only able to connect from the same machine as the node. -{{site.data.alerts.end}} - -

SQL language changes

- -- [`EXPLAIN (OPT,ENV)`](https://www.cockroachlabs.com/docs/v19.1/explain) now returns a URL with the data encoded in the fragment portion. Opening the URL shows a page with the decoded data. Note that the data is processed in the local browser session and is never sent out. [#41092][#41092] -- `EXPLAIN ANALYSE` can now be used as an alias for [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v19.1/explain-analyze). [#41093][#41093] -- Mutations under `UNION` or `UNION ALL` are now disallowed. This restriction is temporary and will be lifted in a future release. [#41496][#41496] - -
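Sketches of both forms above (the `orders` table is hypothetical):

```sql
-- Returns a URL encoding the plan and environment in its fragment:
EXPLAIN (OPT, ENV) SELECT * FROM orders WHERE id = 1;

-- ANALYSE is accepted as an alias for ANALYZE:
EXPLAIN ANALYSE SELECT count(*) FROM orders;
```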

Admin UI changes

- -- Certain web UI pages (like the list of databases or tables) now restrict their content to match the privileges of the logged-in user. [#42727][#42727] -- The event log now presents all cluster settings changes, unredacted, when an admin user uses the page. [#42727][#42727] -- Customization of the UI by users is now only properly saved if the user has write privilege to `system.ui` (i.e., is an admin user). Also, all authenticated users share the same customizations. This is a known limitation and should be lifted in a future version. [#42727][#42727] -- Access to table statistics is temporarily blocked for non-admin users until further notice, for security reasons. [#42727][#42727] -- Certain debug pages have been blocked from non-admin users for security reasons. [#42727][#42727] - -

Bug fixes

- -- The experimental cloud storage changefeed sink previously violated some of the changefeed invariants under rare conditions. A number of these have been fixed, but some fixes require changes to the format of the filenames output by the cloud storage changefeed. This is unsuitable for inclusion in a patch release, so users of cloud storage changefeed sinks are highly encouraged to upgrade to CockroachDB 19.2 or later. [#42907][#42907] -- Fixed a crash that occurred when a suboperator with a `LIKE` comparison had a `NULL` left-hand side. [#41073][#41073] -- Reduced write amplification by avoiding forcing files through compactions unnecessarily. [#41301][#41301] -- Fixed a rare data corruption bug in RocksDB caused by newer Linux kernels' handling of `i_generation` on certain file systems. [#41393][#41393] -- Fixed a bug causing the `cluster_logical_timestamp()` function to sometimes return incorrect results. [#41441][#41441] -- Fixed internal errors generated during the execution of some complicated cases of correlated subqueries. [#41667][#41667] -- Fixed a bug causing zone configuration changes on tables with existing index zone configurations to not take effect unless the `num_replicas` field was also set. [#41682][#41682] -- Fixed a bug causing zone configuration application on indexes to leak into configurations on partitions. [#41679][#41679] -- Fixed multiple bugs relating to zone configurations and `COPY FROM PARENT`. Previously, `COPY FROM PARENT` was ignored when using it on partitions and indexes, and when using it on a zone that had an existing value for the field that was being changed. [#41699][#41699] -- Fixed a bug causing rapid network disconnections to lead to cluster unavailability because goroutines waited for a connection that would never be initialized to send its first heartbeat. [#42165][#42165] -- Fixed an internal error in a rare case involving `UNION` and tuples inside `VALUES` clauses. [#42266][#42266] -- CockroachDB now permits planning of window functions within mutation statements and other statements that cannot be distributed. [#42617][#42617] -- Fixed a bug that would produce a spurious failure with the error message "incompatible COALESCE expressions" when adding or validating `MATCH FULL` foreign key constraints involving composite keys with columns of differing types. [#42652][#42652] -- Fixed a case where CockroachDB incorrectly determined that a query (or part of a query) containing an `IS NULL` constraint on a unique index column returned at most one row, possibly ignoring a `LIMIT 1` clause. [#42792][#42792] -- It is now possible to transfer range leases to lagging replicas. [#42764][#42764] -- [`ALTER INDEX IF EXISTS`](https://www.cockroachlabs.com/docs/v19.1/alter-index) no longer fails when using an unqualified index name that does not match any existing index. Now it is a no-op. [#42839][#42839] -- CockroachDB now prevents a number of panics from the SQL layer caused by an invalid range split. These would usually manifest with messages mentioning encoding errors ("found null on not null column" but also possibly various others). [#42860][#42860] -- Other callers to `acquireNodeLease` no longer get erroneously cancelled just because the context of the first caller was cancelled. [#43028][#43028] -- Fixed a bug in the poller causing it to emit row updates at a timestamp less than or equal to an already forwarded resolved timestamp. [#43027][#43027] -- Fixed a bug in cloud storage sink file naming that violated ordering in the presence of schema changes. [#43027][#43027] -- Fixed a bug causing disk stalls to allow a node to continue heartbeating its liveness record and prevent other nodes from taking over its leases, despite the node being completely unresponsive. [#41765][#41765] -- CockroachDB now properly removes excess secondary log files (SQL audit logging, statement execution logging, and RocksDB events). [#41034][#41034] -- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v19.1/debug-zip), [`cockroach node`](https://www.cockroachlabs.com/docs/v19.1/view-node-details) and [`cockroach user`](https://www.cockroachlabs.com/docs/v19.1/create-and-manage-users) now work properly if the `defaultdb` database has been manually dropped and the connection URL does not specify a database. [#41131][#41131] - -

Contributors

- -This release includes 34 merged PRs by 19 authors. - -[#42727]: https://github.com/cockroachdb/cockroach/pull/42727 -[#41131]: https://github.com/cockroachdb/cockroach/pull/41131 -[#41034]: https://github.com/cockroachdb/cockroach/pull/41034 -[#41765]: https://github.com/cockroachdb/cockroach/pull/41765 -[#41073]: https://github.com/cockroachdb/cockroach/pull/41073 -[#41092]: https://github.com/cockroachdb/cockroach/pull/41092 -[#41093]: https://github.com/cockroachdb/cockroach/pull/41093 -[#41301]: https://github.com/cockroachdb/cockroach/pull/41301 -[#41393]: https://github.com/cockroachdb/cockroach/pull/41393 -[#41441]: https://github.com/cockroachdb/cockroach/pull/41441 -[#41496]: https://github.com/cockroachdb/cockroach/pull/41496 -[#41667]: https://github.com/cockroachdb/cockroach/pull/41667 -[#41679]: https://github.com/cockroachdb/cockroach/pull/41679 -[#41682]: https://github.com/cockroachdb/cockroach/pull/41682 -[#41699]: https://github.com/cockroachdb/cockroach/pull/41699 -[#42165]: https://github.com/cockroachdb/cockroach/pull/42165 -[#42266]: https://github.com/cockroachdb/cockroach/pull/42266 -[#42617]: https://github.com/cockroachdb/cockroach/pull/42617 -[#42652]: https://github.com/cockroachdb/cockroach/pull/42652 -[#42764]: https://github.com/cockroachdb/cockroach/pull/42764 -[#42792]: https://github.com/cockroachdb/cockroach/pull/42792 -[#42839]: https://github.com/cockroachdb/cockroach/pull/42839 -[#42860]: https://github.com/cockroachdb/cockroach/pull/42860 -[#42907]: https://github.com/cockroachdb/cockroach/pull/42907 -[#43027]: https://github.com/cockroachdb/cockroach/pull/43027 -[#43028]: https://github.com/cockroachdb/cockroach/pull/43028 diff --git a/src/current/_includes/releases/v19.1/v19.1.7.md b/src/current/_includes/releases/v19.1/v19.1.7.md deleted file mode 100644 index 95448b14a90..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.7.md +++ /dev/null @@ -1,45 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.7 since v19.1.6. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Bug fixes

- -- Some incorrect issue links referenced by error hints have been corrected. [#43234][#43234] -- Prevented rare cases of infinite looping on database files written with a CockroachDB version earlier than v2.1.9. [#43253][#43253] -- [Changefeeds](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) now emit backfill row updates for a dropped column when the table descriptor drops that column. [#43037][#43037] {% comment %}doc{% endcomment %} -- [`EXPLAIN`](https://www.cockroachlabs.com/docs/v19.1/explain) can now be used with statements that use [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v19.1/as-of-system-time), as shown in the example below. [#43305][#43305] {% comment %}doc{% endcomment %} -- Fixed a bug that caused some jobs to be left indefinitely in a pending state and never run. [#43416][#43416] -- Migrating the privileges on the `system.lease` table no longer creates a deadlock during a [cluster upgrade](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version). [#43508][#43508] -- Fixed a bug in the parsing logic for `server.host_based_authentication.configuration`, where both single-character strings and quoted strings containing spaces and separated by commas were not properly parsed. This would cause rules for usernames consisting of a single character or usernames containing spaces to apply improperly. [#43812][#43812] -- A SQL row write that is re-issued after already succeeding no longer throws a duplicate key error when the previous write in its transaction deleted the row. [#43942][#43942] -- Fixed a changefeed bug where a resolved timestamp might be published before all events that precede it have been published in the presence of a range merge. [#44082][#44082] -- Converted a panic when using collated strings to an error. [#44119][#44119] - -
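For instance (a sketch; the table name is hypothetical):

```sql
EXPLAIN SELECT * FROM orders AS OF SYSTEM TIME '-10s';
```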

Performance improvements

- -- A transaction running into multiple intents from an abandoned conflicting transaction now cleans them up more efficiently. [#43589][#43589] - -

Contributors

- -This release includes 11 merged PRs by 7 authors. - -[#43037]: https://github.com/cockroachdb/cockroach/pull/43037 -[#43234]: https://github.com/cockroachdb/cockroach/pull/43234 -[#43253]: https://github.com/cockroachdb/cockroach/pull/43253 -[#43305]: https://github.com/cockroachdb/cockroach/pull/43305 -[#43416]: https://github.com/cockroachdb/cockroach/pull/43416 -[#43508]: https://github.com/cockroachdb/cockroach/pull/43508 -[#43589]: https://github.com/cockroachdb/cockroach/pull/43589 -[#43812]: https://github.com/cockroachdb/cockroach/pull/43812 -[#43942]: https://github.com/cockroachdb/cockroach/pull/43942 -[#44082]: https://github.com/cockroachdb/cockroach/pull/44082 -[#44119]: https://github.com/cockroachdb/cockroach/pull/44119 diff --git a/src/current/_includes/releases/v19.1/v19.1.8.md b/src/current/_includes/releases/v19.1/v19.1.8.md deleted file mode 100644 index 3bbec30b5a7..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.8.md +++ /dev/null @@ -1,54 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.8 since v19.1.7. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Security updates

- -- Previous versions of CockroachDB incorrectly allowed non-admin SQL users to use the [statements details](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page) page in the Admin UI and the HTTP endpoint `/_status/statements`. This information is sensitive because the endpoint does not hide data that the requester does not have privileges over. This has been corrected by requiring a [SQL `admin` user](https://www.cockroachlabs.com/docs/v19.1/authorization) to access the statements details page and the HTTP endpoint. [#44355][#44355] - -

Admin UI changes

- -- A previous fix in the Admin UI that prevented non-admin SQL users from executing queries accidentally caused certain pages requiring table details not to display. This error has now been fixed. [#44193][#44193] - -

Bug fixes

- -- Fixed a bug where repeated use of [`COPY FROM PARENT`](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones#replication-zone-variables) on an index or partition could cause an unexpected validation error. [#44266][#44266] -- Fixed a planning bug related to [`FULL` joins](https://www.cockroachlabs.com/docs/v19.1/joins#full-outer-joins) between single-row relations. [#44242][#44242] -- Fixed incorrect plans in very rare cases involving filters that aren't constant folded in the optimizer but that can be evaluated statically when running a given query. [#44602][#44602] -- Fixed "no output column equivalent to.." and "column not in input" errors in some cases involving [`DISTINCT ON`](https://www.cockroachlabs.com/docs/v19.1/select-clause#eliminate-duplicate-rows) and [`ORDER BY`](https://www.cockroachlabs.com/docs/v19.1/query-order). [#44598][#44598] -- Fixed an "expected constant FD to be strict" internal error. [#44599][#44599] -- Fixed a bug where running a query with the [`LIKE`](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) operator using the custom `ESCAPE` symbol when the pattern contained Unicode characters could result in an internal error in CockroachDB. [#44649][#44649] -- Fixed possibly incorrect query results in various corner cases, especially when [`SELECT DISTINCT`](https://www.cockroachlabs.com/docs/v19.1/select-clause#eliminate-duplicate-rows) is used. [#44606][#44606] -- Fixed an internal error that could happen in the planner when table statistics were collected manually using [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) for different columns at different times. [#44443][#44443] -- When cleaning up schema changes, CockroachDB no longer repeatedly looks for non-existent jobs, which could cause high memory usage. [#44824][#44824] - -
- -

Contributors

- -This release includes 12 merged PRs by 9 authors. -We would like to thank the following contributors from the CockroachDB community: - -- Oliver Tan (first-time contributor, CockroachDB team member) - -
- -[#44193]: https://github.com/cockroachdb/cockroach/pull/44193 -[#44242]: https://github.com/cockroachdb/cockroach/pull/44242 -[#44266]: https://github.com/cockroachdb/cockroach/pull/44266 -[#44355]: https://github.com/cockroachdb/cockroach/pull/44355 -[#44443]: https://github.com/cockroachdb/cockroach/pull/44443 -[#44599]: https://github.com/cockroachdb/cockroach/pull/44599 -[#44602]: https://github.com/cockroachdb/cockroach/pull/44602 -[#44606]: https://github.com/cockroachdb/cockroach/pull/44606 -[#44649]: https://github.com/cockroachdb/cockroach/pull/44649 -[#44824]: https://github.com/cockroachdb/cockroach/pull/44824 diff --git a/src/current/_includes/releases/v19.1/v19.1.9.md b/src/current/_includes/releases/v19.1/v19.1.9.md deleted file mode 100644 index 006d7172ad1..00000000000 --- a/src/current/_includes/releases/v19.1/v19.1.9.md +++ /dev/null @@ -1,91 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -This page lists additions and changes in v19.1.9 since v19.1.8. - -- For a comprehensive summary of features in v19.1, see the [v19.1 GA release notes]({% link releases/v19.1.md %}#v19-1-0). -- To upgrade to v19.1, see [Upgrade to CockroachDB v19.1](https://www.cockroachlabs.com/docs/v19.1/upgrade-cockroach-version) - -{{site.data.alerts.callout_danger}} -{% include /v19.1/alerts/warning-a63162.md %} -{{site.data.alerts.end}} - -

Backward-incompatible changes

- -- Previously, the phase of server shutdown responsible for range lease transfers to other nodes would give up after 10,000 attempts of transferring replica leases away, regardless of the value of `server.shutdown.lease_transfer_wait`. The limit of 10,000 attempts has been removed, so that now only the maximum duration `server.shutdown.lease_transfer_wait` applies. [#47698][#47698] -- The textual error and warning messages displayed by [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) under various circumstances have been updated. Meanwhile, the message "`ok`" remains as an indicator that the operation has likely succeeded. [#47698][#47698] -- [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) now prints out progress details on its standard error stream, even when `--logtostderr` is not specified. Previously, nothing was printed on standard error. Scripts that wish to ignore this output can redirect the `stderr` stream. [#47698][#47698] - -

Security updates

- -- Non-licensed users are now able to [add more principals](https://www.cockroachlabs.com/docs/v19.1/grant-roles) to the special superuser role/group `admin`. Note: [Creation of additional roles](https://www.cockroachlabs.com/docs/v19.1/create-role) is still a licensed feature. [#45396][#45396] - -

General changes

- -- Previously, the phase of server shutdown responsible for range lease transfers to other nodes had a hard timeout of 5 seconds. This patch makes this timeout configurable via the new cluster setting `server.shutdown.lease_transfer_wait`. [#47698][#47698] - -
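The new setting can be adjusted like any other cluster setting (a sketch; the duration value is illustrative):

```sql
SET CLUSTER SETTING server.shutdown.lease_transfer_wait = '30s';
```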

SQL language changes

- -- It is now possible to use [`GRANT`](https://www.cockroachlabs.com/docs/v19.1/grant) and [`REVOKE`](https://www.cockroachlabs.com/docs/v19.1/revoke) to add users to the `admin` role without a valid license. This change aims to enable use of the Admin UI and other privileged features without a license. [#45396][#45396] -- The type checking code now prefers aggregate overloads with string inputs if there are multiple possible candidates due to arguments of unknown type. [#46902][#46902] -- Added a new "unimplemented" error when attempting to [`ADD CONSTRAINT`](https://www.cockroachlabs.com/docs/v19.1/add-constraint) with the `EXCLUDE USING` syntax. [#46912][#46912] -- Added support for using [`CREATE INDEX ... INCLUDE (col1, col2, ...)`](https://www.cockroachlabs.com/docs/v19.1/create-index), an alias used by PostgreSQL that is analogous to our `STORING (col1, col2, ...)` syntax. [#46912][#46912] -- Added support for parsing the `REINDEX` syntax, which results in an "unimplemented" error that explains that `REINDEX`ing is not required in CockroachDB. [#46912][#46912] -- CockroachDB now parses the `CREATE INDEX CONCURRENTLY` and `DROP INDEX CONCURRENTLY` syntaxes, which return errors when used. [#46808][#46808] - -
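Sketches of the role grant and the new `INCLUDE` alias (the user and table names are hypothetical):

```sql
-- Add an existing user to the admin role; no enterprise license needed:
GRANT admin TO maxroach;

-- INCLUDE is accepted as an alias for STORING:
CREATE INDEX ON orders (customer_id) INCLUDE (total);
```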

Command-line changes

- -- [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v19.1/debug-zip) now avoids creating invalid zip files if some of its requests encounter an error. [#46637][#46637] -- The time that [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) waits client-side for the node to drain (remove existing clients and push range leases away) is now configurable via the command-line flag `--drain-wait`. Note that server-side timeouts also apply separately; check the `server.shutdown.*` [cluster settings](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) for details. [#47698][#47698] -- It is now possible to drain a node without shutting down the process, using [`cockroach node drain`](https://www.cockroachlabs.com/docs/v19.1/view-node-details). This makes it easier to integrate with service managers and orchestration: it now becomes safe to issue `cockroach node drain` and then separately stop the service via a process manager or orchestrator. Without this new mode, there is a risk of misconfiguring the service manager to auto-restart the node after it shuts down via `quit`, in a way that's surprising or unwanted. The new command `node drain` also recognizes the new `--drain-wait` flag. [#47698][#47698] -- The default value of the parameter `--drain-wait` for [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) has been increased from 1 minute to 10 minutes, to give more time for nodes with thousands of ranges to migrate their leases away. [#47698][#47698] -- The commands [`cockroach quit`](https://www.cockroachlabs.com/docs/v19.1/stop-a-node) and [`cockroach node drain`](https://www.cockroachlabs.com/docs/v19.1/view-node-details) now report a "work remaining" metric on their standard error stream. The value reduces until it reaches `0`, to indicate that the graceful shutdown has completed server-side. An operator can now rely on `cockroach node drain` to obtain confidence of a graceful shutdown prior to terminating the server process. [#47698][#47698] - -

Admin UI changes

- -- Metrics relating to [SQL transaction](https://www.cockroachlabs.com/docs/v19.1/admin-ui-sql-dashboard) restarts and rollbacks are now properly captured and exported. [#46273][#46273] - -

Bug fixes

- -- Fixed a "cannot map variable" error in some rare cases involving [joins](https://www.cockroachlabs.com/docs/v19.1/joins). [#44860][#44860] -- Fixed incorrect de-duplication of impure expressions (like `gen_random_uuid`) in projections and default values. [#44916][#44916] -- Fixed an internal error that could occur when `NULLIF` was called with one null argument. [#45391][#45391] -- It is now possible to create [inverted indexes](https://www.cockroachlabs.com/docs/v19.1/inverted-indexes) on columns whose names are mixed-case. [#45678][#45678] -- Previously, drivers that did not truncate trailing zeroes for decimals in the binary format ended up having inaccuracies of up to 10^4 during the decode step. Fixed this error by truncating the trailing zeroes as appropriate. This fixes known incorrect decoding cases with Postgrex in Elixir. [#45671][#45671] -- Fixed a name resolution error that could occur when a [common table expression](https://www.cockroachlabs.com/docs/v19.1/common-table-expressions) (CTE) was referenced in the [`SELECT`](https://www.cockroachlabs.com/docs/v19.1/selection-queries) list of a query using the syntax `.`. [#45782][#45782] -- Previously, CockroachDB could crash when computing [window functions](https://www.cockroachlabs.com/docs/v19.1/window-functions) with `RANGE` mode of framing when one of the bounds was either of `offset PRECEDING` or `offset FOLLOWING` type when there were `NULL` values in the single column from `ORDER BY` clause. Additionally, in `RANGE` mode bounds `0 PRECEDING` and `0 FOLLOWING` could be handled incorrectly. Now this has been fixed. [#45806][#45806] -- Fixed an internal error that could occur in the [optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) when a `WHERE` filter contained at least one correlated subquery and one non-correlated subquery. [#46168][#46168] -- CockroachDB now properly supports using `--url` with query options (e.g., `application_name`) without specifying `sslmode`. The default of `sslmode=disable` is assumed in that case. [#46480][#46480] -- Fixed a bug where operations on an index that contained a collated string in descending order would fail. [#46579][#46579] -- Fixed an incorrect query result that could occur when a scalar aggregate was called with a null input. [#46902][#46902] -- Fixed a data race on AST nodes for [`SELECT`](https://www.cockroachlabs.com/docs/v19.1/selection-queries) statements that include a `WINDOW` clause. It is unclear whether this could have resulted in incorrect results being returned for these queries. [#47177][#47177] -- Fixed incorrect results that could occur when casting negative [intervals](https://www.cockroachlabs.com/docs/v19.1/interval) or [timestamps](https://www.cockroachlabs.com/docs/v19.1/timestamp) to type [decimal](https://www.cockroachlabs.com/docs/v19.1/decimal). [#47524][#47524] -- Previously, CockroachDB was incorrectly releasing memory used by hash aggregation. This could lead to a crash (which was more likely when hash aggregation had store on the order of 100k of groups) and is now fixed. [#47520][#47520] -- Fixed a bug that could lead to data corruption or data loss if a replica was both the source of a snapshot and was being concurrently removed from the range. This scenario is rare, but possible. [#48317][#48317] - -

Contributors

- -This release includes 27 merged PRs by 10 authors. - -[#44860]: https://github.com/cockroachdb/cockroach/pull/44860 -[#44916]: https://github.com/cockroachdb/cockroach/pull/44916 -[#45391]: https://github.com/cockroachdb/cockroach/pull/45391 -[#45396]: https://github.com/cockroachdb/cockroach/pull/45396 -[#45671]: https://github.com/cockroachdb/cockroach/pull/45671 -[#45678]: https://github.com/cockroachdb/cockroach/pull/45678 -[#45782]: https://github.com/cockroachdb/cockroach/pull/45782 -[#45806]: https://github.com/cockroachdb/cockroach/pull/45806 -[#46168]: https://github.com/cockroachdb/cockroach/pull/46168 -[#46273]: https://github.com/cockroachdb/cockroach/pull/46273 -[#46480]: https://github.com/cockroachdb/cockroach/pull/46480 -[#46579]: https://github.com/cockroachdb/cockroach/pull/46579 -[#46637]: https://github.com/cockroachdb/cockroach/pull/46637 -[#46808]: https://github.com/cockroachdb/cockroach/pull/46808 -[#46902]: https://github.com/cockroachdb/cockroach/pull/46902 -[#46912]: https://github.com/cockroachdb/cockroach/pull/46912 -[#47177]: https://github.com/cockroachdb/cockroach/pull/47177 -[#47520]: https://github.com/cockroachdb/cockroach/pull/47520 -[#47524]: https://github.com/cockroachdb/cockroach/pull/47524 -[#47698]: https://github.com/cockroachdb/cockroach/pull/47698 -[#48317]: https://github.com/cockroachdb/cockroach/pull/48317 diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md deleted file mode 100644 index c0a6f715b5e..00000000000 --- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181119.md +++ /dev/null @@ -1,176 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Backward-incompatible changes

- -- CockroachDB no longer supports the `B'abcde'` notation to express byte array literals. This notation now expresses **bit** array literals, as in PostgreSQL (see the example after this list). The `b'...'` notation remains for byte array literals. [#28807][#28807] -- The normalized results of certain timestamp + duration operations involving year or month durations have been adjusted to agree with the values returned by PostgreSQL. [#31146][#31146] -- The `CHANGEFEED` [`experimental-avro` option](https://www.cockroachlabs.com/docs/v19.1/create-changefeed#options) has been renamed `experimental_avro`. [#31838][#31838] -- Timezone abbreviations, such as `EST`, are no longer allowed when parsing or converting to a date/time type. Previously, an abbreviation would be accepted if it were an alias for the session's timezone. [#31758][#31758] - 
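To make the literal-notation change concrete (the values are illustrative only):

```sql
SELECT B'10101';   -- now a bit array literal, as in PostgreSQL
SELECT b'abcde';   -- still a byte array literal, as before
```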

General changes

- -- Load-based splitting is now enabled by default. In conjunction, the `range_min_bytes` setting in the `.default` replication zone is set to a higher value to prevent ranges from unnecessarily being considered for merging. [#31413][#31413] {% comment %}doc{% endcomment %} -- Added a [Kubernetes configuration](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/bring-your-own-certs) that shows how to use certificates generated outside of the Kubernetes-orchestrated CockroachDB cluster. [#27921][#27921] {% comment %}doc{% endcomment %} -- Added a [Fluentd configuration](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/fluentd-configmap.yml) for external logging of a Kubernetes-orchestrated CockroachDB cluster. [#26685](https://github.com/cockroachdb/cockroach/pull/26685) - -

SQL language changes

- -- The [`EXPERIMENTAL_RELOCATE`](https://www.cockroachlabs.com/docs/v19.1/experimental-features) statement no longer temporarily increases the number of replicas in a range more than one above the range's replication factor, preventing rare edge cases of unavailability. [#29684][#29684] {% comment %}doc{% endcomment %} -- The output of [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v19.1/show-jobs) now reports ongoing jobs first in start time order, then completed jobs in finished time order. The `running_status` column becomes `NULL` when the status cannot be determined. [#30917][#30917] {% comment %}doc{% endcomment %} -- The output of [`SHOW ZONE CONFIGURATIONS`](https://www.cockroachlabs.com/docs/v19.1/show-zone-configurations) now only shows the zone name and the SQL representation of the config. [#30985][#30985] {% comment %}doc{% endcomment %} -- The range log and system events logs now automatically purge records older than 30 and 90 days, respectively. This can be adjusted via the `server.rangelog.ttl` and `server.eventlog.ttl` [cluster settings](https://www.cockroachlabs.com/docs/v19.1/cluster-settings). [#30913][#30913] {% comment %}doc{% endcomment %} -- In cases such as `'2018-01-31'::TIMESTAMP + '1 month'`, where an intermediate result of February 31st needs to be normalized, previous versions of CockroachDB would advance to March 3. Instead, CockroachDB now "rounds down" to February 28th to agree with the values returned by PostgreSQL (see the example after this list). This change also affects the results of the `generate_series()` function when used with timestamps. [#31146][#31146] -- Updated the output of [`SHOW ZONE CONFIGURATIONS`](https://www.cockroachlabs.com/docs/v19.1/show-zone-configurations). Also, unset fields in [zone configurations](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones) now inherit parent values. [#30611][#30611] {% comment %}doc{% endcomment %} -- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, attempts to use `CREATE/DROP SCHEMA`, `DEFERRABLE`, `CREATE TABLE (LIKE ...)`, and `CREATE TABLE ... WITH` are now collected as telemetry to gauge demand for these currently unsupported features. [#31635][#31635] {% comment %}doc{% endcomment %} -- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, the names of SQL [built-in functions](https://www.cockroachlabs.com/docs/stable/functions-and-operators) are now collected upon evaluation errors. [#31677][#31677] {% comment %}doc{% endcomment %} -- If [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting) is enabled, attempts by client apps to use the unsupported "fetch limit" parameter (e.g., via JDBC) are now collected as telemetry to gauge support for this feature. [#31637][#31637] -- The [`IMPORT format (file)`](https://www.cockroachlabs.com/docs/v19.1/import) syntax is deprecated in favor of `IMPORT format file`. Similarly, `IMPORT TABLE ... FROM format (file)` is deprecated in favor of `IMPORT TABLE ... FROM format file`. [#31263][#31263] {% comment %}doc{% endcomment %} -- For compatibility with PostgreSQL, it is once again possible to use the keywords `FAMILY`, `MINVALUE`, `MAXVALUE`, `INDEX`, and `NOTHING` as table names, and the names "index" and "nothing" are once again accepted in the right-hand side of `SET` statement assignments. 
[#31731][#31731] {% comment %}doc{% endcomment %} -- Renamed the first column name returned by [`SHOW STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/show-statistics) to `statistics_name`. [#31927][#31927] {% comment %}doc{% endcomment %} -- CockroachDB now accepts a wider variety of date, time, and timestamp formats. [#31758][#31758] {% comment %}doc{% endcomment %} -- The new `experimental_vectorize` [session setting](https://www.cockroachlabs.com/docs/v19.1/set-vars), when enabled, causes columnar operators to be planned instead of row-by-row processors, when possible. [#31354][#31354] {% comment %}doc{% endcomment %} -- CockroachDB now supports the `BIT` and `VARBIT (BIT VARYING)` bit array data types like PostgreSQL. Currently, only the bit array literal notation with a capital B (e.g., `B'10001'`) is supported; the notation with a small `b` (e.g., `b'abcd'`) continues to denote **byte** arrays as in previous versions of CockroachDB. [#28807][#28807] {% comment %}doc{% endcomment %} -- Added the `array_to_json` [built-in function](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators). [#29818][#29818] -- Statements involving the dropping or truncating of tables, such as [`DROP DATABASE`](https://www.cockroachlabs.com/docs/v19.1/drop-database), [`DROP TABLE`](https://www.cockroachlabs.com/docs/v19.1/drop-table), and [`TRUNCATE`](https://www.cockroachlabs.com/docs/v19.1/truncate), are now considered jobs and, as such, can be tracked via [`SHOW JOBS`](https://www.cockroachlabs.com/docs/v19.1/show-jobs) and the [**Jobs** page](https://www.cockroachlabs.com/docs/v19.1/admin-ui-jobs-page) of the Admin UI. [#29993][#29993] - -
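The month-normalization change above can be seen directly; the expected outputs below are illustrative:

```sql
SELECT '2018-01-31'::TIMESTAMP + '1 month'::INTERVAL;
-- Previously: '2018-03-03 00:00:00'
-- Now:        '2018-02-28 00:00:00', matching PostgreSQL
```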

Command-line changes

- -- The [`cockroach cert create-client`](https://www.cockroachlabs.com/docs/v19.1/create-security-certificates) command now offers the `--also-generate-pkcs8-key` flag for writing a client key in PKCS#8 format. [#29008][#29008] {% comment %}doc{% endcomment %} -- The client-side option `smart_prompt` now controls whether [`cockroach sql`](https://www.cockroachlabs.com/docs/v19.1/use-the-built-in-sql-client) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v19.1/cockroach-demo) use the current transaction state to offer a multi-line entry at the start of new transactions. [#31630][#31630] {% comment %}doc{% endcomment %} -- The [`cockroach sql`](https://www.cockroachlabs.com/docs/v19.1/use-the-built-in-sql-client) and [`cockroach demo`](https://www.cockroachlabs.com/docs/v19.1/cockroach-demo) commands now recognize the commands `exit` and `quit` to terminate the shell. [#31915][#31915] {% comment %}doc{% endcomment %} -- The `cockroach debug estimate-gc` command now allows users to specify the TTL period, with a default of 24 hours. [#31402][#31402] - 

Admin UI changes

- -- Improved the layout of the [**Cluster Overview**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-cluster-overview-page) page for large clusters with many nodes and ranges. [#31512][#31512] {% comment %}doc{% endcomment %} -- Added the current node ID to the [**Advanced Debugging**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-debug-pages) page to help identify the current node when viewing the web UI through a load balancer. [#31835][#31835] {% comment %}doc{% endcomment %} -- The **Non-Table Cluster Data** section of the [**Databases**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-databases-page) page now includes all non-table data types. Previously, this section only showed Time Series data. [#31830][#31830] {% comment %}doc{% endcomment %} - -

Bug fixes

- -- Hash functions with `NULL` input now return `NULL`. [#29822][#29822] -- Generated sequences now respect the `statement_timeout` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars). [#31083][#31083] -- `IS OF (...)` expressions no longer report arrays with different element types as being the same. [#31393][#31393] -- Fixed a bug where Raft proposals could get stuck if forwarded to a leader who could not itself append a new entry to its log. [#31408][#31408] -- The `confkey` column of `pg_catalog.pg_constraint` no longer includes columns that were not involved in the foreign key reference. [#31610][#31610] -- Fixed a small memory leak when running distributed queries. [#31736][#31736] -- Fixed a bug in the [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) that sometimes prevented passing ordering requirements through aggregations. [#31754][#31754] -- Fixed a bug that caused transactions to unnecessarily return a "too large" error. [#31733][#31733] -- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) now escape Kafka topic names, when necessary. [#31596][#31596] -- Fixed a bug that would incorrectly cause JSON field access equality comparisons to be true when they should be false. [#31751][#31751] -- Fixed a bug that sometimes caused invalid results or an "incorrectly ordered stream" error with streaming aggregations. [#31825][#31825] -- Fixed a mismatch between lookup join planning and execution, which could cause queries to fail with the error "X lookup columns specified, expecting at most Y". [#31792][#31792] -- Prevented a performance degradation related to overly aggressive Raft log truncations that could occur during [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore) or [`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import) operations. [#31914][#31914] -- [`EXPLAIN ANALYZE`](https://www.cockroachlabs.com/docs/v19.1/explain-analyze) plans no longer show the processor with ID 0's stats in the Response box. [#31941][#31941] -- Fixed rare deadlocks during [`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import), [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore), or [`BACKUP`](https://www.cockroachlabs.com/docs/v19.1/backup). [#31963][#31963] -- Fixed a panic caused by incorrectly encoded Azure credentials. [#31984][#31984] -- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) no longer chooses the wrong index for a scan because of incorrect selectivity estimation. [#31937][#31937] -- Prepared statements that bind temporal values now respect the session's timezone setting. Previously, bound temporal values were always interpreted as though the session time zone were UTC. [#31758][#31758] -- Prevented a stall in the processing of Raft snapshots when many snapshots are requested at the same time. [#32053][#32053] -- [`CHANGEFEED`s](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) now spend dramatically less time flushing Kafka writes. [#32060][#32060] -- Fixed a bug that caused some queries with `DISTINCT ON` and `ORDER BY` with descending columns to return an error incorrectly. [#31976][#31976] -- Fixed a bug that caused queries with `GROUP BY` or `DISTINCT ON` to return incorrect results or an "incorrectly ordered stream" error. Also improved performance of some aggregations by utilizing streaming aggregation in more cases. [#31976][#31976] -- Fixed bit array wire encoding in binary format. 
[#32091][#32091] -- Fixed a bug that caused transactions to appear partially committed. CockroachDB was sometimes claiming to have failed to commit a transaction when some (or all) of its writes were actually persisted. [#32166][#32166] -- Prevented long stalls that can occur in contended transactions. [#32211][#32211] -- The graphite metrics sender now collects and sends only the latest data point instead of all data points since startup. [#31829][#31829] - -

Performance improvements

- -- Improved the performance of [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/stable/as-of-system-time) queries by letting them use the table descriptor cache. [#31716][#31716] -- Within a transaction, accessing a table descriptor that has already been modified by a schema change is now faster. [#30934][#30934] -- Improved the performance of index data deletion. [#31326][#31326] -- The [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer) can now determine more keys in certain cases involving unique indexes, potentially resulting in better plans. [#31662][#31662] -- Reduced the amount of allocated memory by pooling allocations of `rocksDBBatch` and `RocksDBBatchBuilder` objects. [#30523][#30523] -- Zone configuration values are now cached to avoid repeated deserialization. [#30143][#30143] - 
- -

Contributors

- -This release includes 998 merged PRs by 50 authors. We would like to thank the following contributors from the CockroachDB community: - -- Jan Owsiany (first-time contributor) -- M-srivatsa (first-time contributor) -- Mayank (first-time contributor) -- Mayank Oli (first-time contributor) -- Mo Firouz (first-time contributor) -- Sankt Petersbug (first-time contributor) -- Vijay Karthik -- changangela (first-time contributor) -- hueypark (first-time contributor) -- neeral - -
- -[#26685]: https://github.com/cockroachdb/cockroach/pull/26685 -[#27921]: https://github.com/cockroachdb/cockroach/pull/27921 -[#28807]: https://github.com/cockroachdb/cockroach/pull/28807 -[#28856]: https://github.com/cockroachdb/cockroach/pull/28856 -[#29008]: https://github.com/cockroachdb/cockroach/pull/29008 -[#29067]: https://github.com/cockroachdb/cockroach/pull/29067 -[#29236]: https://github.com/cockroachdb/cockroach/pull/29236 -[#29526]: https://github.com/cockroachdb/cockroach/pull/29526 -[#29684]: https://github.com/cockroachdb/cockroach/pull/29684 -[#29818]: https://github.com/cockroachdb/cockroach/pull/29818 -[#29822]: https://github.com/cockroachdb/cockroach/pull/29822 -[#29993]: https://github.com/cockroachdb/cockroach/pull/29993 -[#30019]: https://github.com/cockroachdb/cockroach/pull/30019 -[#30143]: https://github.com/cockroachdb/cockroach/pull/30143 -[#30339]: https://github.com/cockroachdb/cockroach/pull/30339 -[#30523]: https://github.com/cockroachdb/cockroach/pull/30523 -[#30611]: https://github.com/cockroachdb/cockroach/pull/30611 -[#30849]: https://github.com/cockroachdb/cockroach/pull/30849 -[#30913]: https://github.com/cockroachdb/cockroach/pull/30913 -[#30917]: https://github.com/cockroachdb/cockroach/pull/30917 -[#30926]: https://github.com/cockroachdb/cockroach/pull/30926 -[#30934]: https://github.com/cockroachdb/cockroach/pull/30934 -[#30985]: https://github.com/cockroachdb/cockroach/pull/30985 -[#31083]: https://github.com/cockroachdb/cockroach/pull/31083 -[#31146]: https://github.com/cockroachdb/cockroach/pull/31146 -[#31263]: https://github.com/cockroachdb/cockroach/pull/31263 -[#31326]: https://github.com/cockroachdb/cockroach/pull/31326 -[#31354]: https://github.com/cockroachdb/cockroach/pull/31354 -[#31393]: https://github.com/cockroachdb/cockroach/pull/31393 -[#31402]: https://github.com/cockroachdb/cockroach/pull/31402 -[#31408]: https://github.com/cockroachdb/cockroach/pull/31408 -[#31413]: https://github.com/cockroachdb/cockroach/pull/31413 -[#31512]: https://github.com/cockroachdb/cockroach/pull/31512 -[#31596]: https://github.com/cockroachdb/cockroach/pull/31596 -[#31610]: https://github.com/cockroachdb/cockroach/pull/31610 -[#31630]: https://github.com/cockroachdb/cockroach/pull/31630 -[#31635]: https://github.com/cockroachdb/cockroach/pull/31635 -[#31637]: https://github.com/cockroachdb/cockroach/pull/31637 -[#31662]: https://github.com/cockroachdb/cockroach/pull/31662 -[#31677]: https://github.com/cockroachdb/cockroach/pull/31677 -[#31716]: https://github.com/cockroachdb/cockroach/pull/31716 -[#31725]: https://github.com/cockroachdb/cockroach/pull/31725 -[#31730]: https://github.com/cockroachdb/cockroach/pull/31730 -[#31731]: https://github.com/cockroachdb/cockroach/pull/31731 -[#31733]: https://github.com/cockroachdb/cockroach/pull/31733 -[#31736]: https://github.com/cockroachdb/cockroach/pull/31736 -[#31751]: https://github.com/cockroachdb/cockroach/pull/31751 -[#31754]: https://github.com/cockroachdb/cockroach/pull/31754 -[#31758]: https://github.com/cockroachdb/cockroach/pull/31758 -[#31792]: https://github.com/cockroachdb/cockroach/pull/31792 -[#31825]: https://github.com/cockroachdb/cockroach/pull/31825 -[#31829]: https://github.com/cockroachdb/cockroach/pull/31829 -[#31830]: https://github.com/cockroachdb/cockroach/pull/31830 -[#31835]: https://github.com/cockroachdb/cockroach/pull/31835 -[#31838]: https://github.com/cockroachdb/cockroach/pull/31838 -[#31914]: https://github.com/cockroachdb/cockroach/pull/31914 -[#31915]: 
https://github.com/cockroachdb/cockroach/pull/31915 -[#31927]: https://github.com/cockroachdb/cockroach/pull/31927 -[#31937]: https://github.com/cockroachdb/cockroach/pull/31937 -[#31941]: https://github.com/cockroachdb/cockroach/pull/31941 -[#31963]: https://github.com/cockroachdb/cockroach/pull/31963 -[#31976]: https://github.com/cockroachdb/cockroach/pull/31976 -[#31984]: https://github.com/cockroachdb/cockroach/pull/31984 -[#32053]: https://github.com/cockroachdb/cockroach/pull/32053 -[#32060]: https://github.com/cockroachdb/cockroach/pull/32060 -[#32091]: https://github.com/cockroachdb/cockroach/pull/32091 -[#32145]: https://github.com/cockroachdb/cockroach/pull/32145 -[#32166]: https://github.com/cockroachdb/cockroach/pull/32166 -[#32211]: https://github.com/cockroachdb/cockroach/pull/32211 diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181217.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181217.md deleted file mode 100644 index 5596f196c29..00000000000 --- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20181217.md +++ /dev/null @@ -1,176 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

General changes

- -- The default disk size on Kubernetes has been changed from 1 GiB to 100 GiB. [#32428][#32428] {% comment %}doc{% endcomment %} -- A new cluster setting (`sql.defaults.conn_results_buffer_size`) can be used to control server-side buffering of results. [#32366][#32366] {% comment %}doc{% endcomment %} - -

Enterprise edition changes

- -- Disabled range merges on tables that are being restored or imported into. [#32538][#32538] -- Added timeseries metrics for debugging `CHANGEFEED` performance issues. [#32241][#32241] {% comment %}doc{% endcomment %} -- Added the option to supply Google Cloud Storage credentials on a per-statement basis with the query parameter `credentials`. [#32544][#32544] {% comment %}doc{% endcomment %} -- It is now possible to use AWS S3 temporary credentials for `BACKUP`/`RESTORE` and `IMPORT`/`EXPORT` using the `AWS_SESSION_TOKEN` parameter in the URL. [#32455][#32455] {% comment %}doc{% endcomment %} - -
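A sketch of the temporary-credentials item above; the database, bucket, path, and credential values are placeholders:

```sql
-- AWS_SESSION_TOKEN is passed alongside the usual key parameters.
BACKUP DATABASE bank
TO 's3://my-bucket/backups?AWS_ACCESS_KEY_ID=placeholder&AWS_SECRET_ACCESS_KEY=placeholder&AWS_SESSION_TOKEN=placeholder';
```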

SQL language changes

- -- Users can customize the auto-retry savepoint name used by the `SAVEPOINT` command by setting the `force_savepoint_restart` session variable. For example, `SET force_savepoint_restart=true; BEGIN; SAVEPOINT foo` will now function as desired. This session variable may also be supplied as part of a connection string to support existing code that assumes that arbitrary savepoint names may be used. [#31971][#31971] {% comment %}doc{% endcomment %} -- The names supplied to a `SAVEPOINT` command are now properly treated as SQL identifiers. For example, `SAVEPOINT foo` and `SAVEPOINT FOO` are now equivalent statements. [#31971][#31971] {% comment %}doc{% endcomment %} -- It is now an error to run `ALTER TABLE ... DROP STORED` on a column which is not actually a computed, stored column. Previously, this statement would be a successful no-op. [#32279][#32279] -- Many queries containing a correlated `EXISTS` subquery with a generator function can now be decorrelated and executed successfully. Previously, these queries caused a decorrelation error. [#31922][#31922] -- `IMPORT` now uses larger integer sizes when converting unsigned MySQL integer columns to their signed CockroachDB counterparts. [#32481][#32481] -- An `ALTER TABLE` statement to add a foreign key constraint now automatically creates the necessary index if the referencing table is empty and the index does not already exist. [#32234][#32234] {% comment %}doc{% endcomment %} -- Queries involving `COLLATE` expressions are now supported by the cost-based optimizer. [#32500][#32500] {% comment %}doc{% endcomment %} -- Some categories of `SELECT` queries that return 0 or 1 rows (namely, queries by a PK, a unique index, or `LIMIT 1` queries) are now guaranteed not to return retryable errors when running as implicit transactions (i.e., outside of a `BEGIN...COMMIT` block). [#32401][#32401] {% comment %}doc{% endcomment %} -- Added support for `AS OF SYSTEM TIME` with the `CREATE STATISTICS` statement. [#32643][#32643] {% comment %}doc{% endcomment %} -- CockroachDB now accepts ordinary string values for placeholders of type `BPCHAR`, for compatibility with PostgreSQL clients that use them. [#32654][#32654] -- CockroachDB now supports associating comments with SQL tables using PostgreSQL's `COMMENT ON TABLE` syntax. This also provides proper support for pg's `pg_catalog.pg_description` and the built-in function `obj_description()`. [#32442][#32442] {% comment %}doc{% endcomment %} -- The `SHOW TABLES` statement now supports printing out table comments using the optional phrase `WITH COMMENT`, e.g., `SHOW TABLES FROM mydb WITH COMMENT` (see the sketch after this list). [#32442][#32442] {% comment %}doc{% endcomment %} -- The `INT` type is now treated as an alias for `INT8`. [#32831][#32831] {% comment %}doc{% endcomment %} -- Added an `experimental_optimize_updates` flag, which uses the cost-based optimizer to plan `UPDATE` statements when set to true. [#32774][#32774] {% comment %}doc{% endcomment %} - 
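A short sketch of the table-comment items above; the table and database names are hypothetical:

```sql
COMMENT ON TABLE users IS 'application user accounts';

-- The comment is then visible via the new optional phrase:
SHOW TABLES FROM mydb WITH COMMENT;
```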

Command-line changes

- -- `cockroach sql` and other commands that print query results and query execution latency will now exclude the time required to prepare the display client-side from the latency measurement. [#32663][#32663] {% comment %}doc{% endcomment %} -- `cockroach workload` now includes the `kv` and `ycsb` generators. [#32719][#32719] {% comment %}doc{% endcomment %} -- Added the `cockroach debug merge-logs` command to combine logs from multiple nodes. [#32790][#32790] {% comment %}doc{% endcomment %} - -

Admin UI changes

- -- All existing uses of the Loading component now properly surface data errors. Previously, data errors weren't consistently surfaced. [#32464][#32464] -- The Node map now uses an `equirectangular` projection. [#32617][#32617] - -

Bug fixes

- -- Fixed a panic on `UPDATE RETURNING *` during a schema change. [#32188][#32188] -- Fixed a panic when an expression contained both a correlated and an uncorrelated subquery. [#32443][#32443] -- Fixed a panic on `UPSERT` in the middle of a schema change adding a non-nullable column. [#32585][#32585] -- Fixed an error when configuring `NOT NULL` computed columns. [#32585][#32585] -- Fixed a deadlock when using `ALTER TABLE VALIDATE CONSTRAINT` in a transaction with a schema change. [#32772][#32772] -- Prevented non-superusers from seeing other users' sessions and queries via the `ListSessions` and `ListLocalSessions` status server API methods. [#32253][#32253] -- Avoided occasional unnecessary Raft snapshots after range splits. [#31875][#31875] -- `CHANGEFEED`s emitting into Kafka now more quickly notice new partitions. [#32297][#32297] -- Ensured that space in the temporary storage directory is reclaimed more promptly. [#32385][#32385] -- `CHANGEFEED`s with the `experimental_avro` option now work with column `WIDTH`s and `PRECISION`s. [#32474][#32474] {% comment %}doc{% endcomment %} -- CockroachDB now properly rejects queries that use an invalid function (e.g., an aggregation) in the `SET` clause of an `UPDATE` statement. [#32505][#32505] -- Prevented `VALUES` clauses from returning incorrect results for certain special OID values. [#32494][#32494] -- CockroachDB now reports an unimplemented error when a `WHERE` clause is used after `INSERT ... ON CONFLICT`. [#32556][#32556] -- Fixed an issue where calling `CREATE STATISTICS` on a large table could cause the server to crash due to running out of memory. [#32614][#32614] -- CockroachDB now properly handles the foreign key cascading actions `SET DEFAULT` and `SET NULL` in `SHOW CREATE` and `cockroach dump`. [#32589][#32589] -- Fixed a node data loss bug that could occur when a disk became temporarily full. [#32605][#32605] -- Fixed a panic caused by `WITH ORDINALITY` in some cases. [#32596][#32596] -- Dates no longer have a time component in their text encoding over the wire. [#32144][#32144] -- Intervals now match Postgres in their text encoding over the wire. [#32144][#32144] -- Intervals no longer sometimes lose 1ns of precision. This only happened rarely, due to floating point inaccuracy. [#32144][#32144] -- Corrected the `pgwire` encoding for arrays and tuples. [#32144][#32144] -- Corrected the binary decimal encoding for `NaN`. [#32144][#32144] -- Prevented a panic when running certain subqueries that get planned in a distributed fashion. [#32652][#32652] -- Fixed a panic that could occur during or after a data import on Windows. [#32664][#32664] -- Lookup joins now properly preserve ordering for outer joins. Prior to this fix, `LEFT JOIN` queries under specific conditions could produce results which did not respect the `ORDER BY` clause. [#32317][#32317] -- CockroachDB now again enables admin users, including `root`, to list all user sessions, not just their own. [#32629][#32629] -- Fixed a panic involving `json_agg` and window functions. [#32716][#32716] -- CockroachDB no longer panics when encountering an internal error related to invalid entries in the output of `SHOW SESSION`s. [#32713][#32713] -- Resolved a cluster degradation scenario that could occur during `IMPORT`/`RESTORE` operations, manifested through a high number of pending Raft snapshots. [#32594][#32594] -- CockroachDB now properly evaluates `CHECK` constraints after a row conflict in `INSERT ... ON CONFLICT` when the `CHECK` constraint depends on a column not assigned by `DO UPDATE SET`. [#32779][#32779] -- CockroachDB now properly records statistics for sessions where the value of `application_name` is given by the client during initialization instead of via `SET`. [#32754][#32754] -- `cockroach workload run` no longer includes data-only generators. [#32720][#32720] -- Fixed a bug where metadata about contended keys was inadvertently ignored, allowing for a failure in transaction cycle detection and transaction deadlocks in rare cases. [#32773][#32773] -- Fixed a bug where `SCRUB` would erroneously report that index keys were out of order. [#32908][#32908] - 

Performance improvements

- -- Removed locking when reading physical time. [#32225][#32225] -- CockroachDB now uses a faster randomness source to generate transaction IDs. [#32238][#32238] -- Implemented more efficient execution for some queries with `GROUP BY` or `DISTINCT ON` and an `ORDER BY` clause where an index with a suitable ordering is not available. [#32307][#32307] -- The Raft entry cache has been rewritten to optimize for access patterns, reduce lock contention, and reduce memory footprint. [#32618][#32618] -- Re-enabled usage of RocksDB FlushWAL, which is a minor performance improvement for synchronous RocksDB write operations. [#32674][#32674] -- Replaced the Replica latching mechanism with a new optimized data structure that improves throughput, especially under heavy contention. [#32865][#32865] - 

Build changes

- -- `ncurses` is now linked statically so that the cockroach binary no longer requires a particular version of the `ncurses` shared library to be available on deployment machines. [#32959][#32959] - -

Doc updates

- -- Updated the [Performance Tuning](https://www.cockroachlabs.com/docs/v19.1/performance-tuning) and [TPC-C Benchmarking](https://www.cockroachlabs.com/docs/v19.1/performance-benchmarking-with-tpc-c) tutorials to clarify that the `--advertise-addr` flag must be set uniquely for each node. [#4164](https://github.com/cockroachdb/docs/pull/4164) -- Fixed a method in the [Build a C# (.NET) App with CockroachDB](https://www.cockroachlabs.com/docs/v19.1/build-a-csharp-app-with-cockroachdb) code samples. [#4161](https://github.com/cockroachdb/docs/pull/4161) -- Expanded the [Build a Rust App with CockroachDB](https://www.cockroachlabs.com/docs/v19.1/build-a-rust-app-with-cockroachdb) tutorial to cover secure clusters. [#4127](https://github.com/cockroachdb/docs/pull/4127) - -
- -

Contributors

- -This release includes 265 merged PRs by 38 authors. We would like to thank the following contributors from the CockroachDB community: - -- Jaewan Park (first-time contributor) -- Joe Harlow (first-time contributor) -- Mayank Oli -- shakeelrao (first-time contributor) - -
- -[#31875]: https://github.com/cockroachdb/cockroach/pull/31875 -[#31922]: https://github.com/cockroachdb/cockroach/pull/31922 -[#31928]: https://github.com/cockroachdb/cockroach/pull/31928 -[#31971]: https://github.com/cockroachdb/cockroach/pull/31971 -[#32144]: https://github.com/cockroachdb/cockroach/pull/32144 -[#32188]: https://github.com/cockroachdb/cockroach/pull/32188 -[#32225]: https://github.com/cockroachdb/cockroach/pull/32225 -[#32234]: https://github.com/cockroachdb/cockroach/pull/32234 -[#32238]: https://github.com/cockroachdb/cockroach/pull/32238 -[#32241]: https://github.com/cockroachdb/cockroach/pull/32241 -[#32253]: https://github.com/cockroachdb/cockroach/pull/32253 -[#32279]: https://github.com/cockroachdb/cockroach/pull/32279 -[#32297]: https://github.com/cockroachdb/cockroach/pull/32297 -[#32307]: https://github.com/cockroachdb/cockroach/pull/32307 -[#32317]: https://github.com/cockroachdb/cockroach/pull/32317 -[#32366]: https://github.com/cockroachdb/cockroach/pull/32366 -[#32385]: https://github.com/cockroachdb/cockroach/pull/32385 -[#32388]: https://github.com/cockroachdb/cockroach/pull/32388 -[#32401]: https://github.com/cockroachdb/cockroach/pull/32401 -[#32428]: https://github.com/cockroachdb/cockroach/pull/32428 -[#32442]: https://github.com/cockroachdb/cockroach/pull/32442 -[#32443]: https://github.com/cockroachdb/cockroach/pull/32443 -[#32455]: https://github.com/cockroachdb/cockroach/pull/32455 -[#32464]: https://github.com/cockroachdb/cockroach/pull/32464 -[#32467]: https://github.com/cockroachdb/cockroach/pull/32467 -[#32474]: https://github.com/cockroachdb/cockroach/pull/32474 -[#32481]: https://github.com/cockroachdb/cockroach/pull/32481 -[#32494]: https://github.com/cockroachdb/cockroach/pull/32494 -[#32500]: https://github.com/cockroachdb/cockroach/pull/32500 -[#32505]: https://github.com/cockroachdb/cockroach/pull/32505 -[#32538]: https://github.com/cockroachdb/cockroach/pull/32538 -[#32544]: https://github.com/cockroachdb/cockroach/pull/32544 -[#32556]: https://github.com/cockroachdb/cockroach/pull/32556 -[#32585]: https://github.com/cockroachdb/cockroach/pull/32585 -[#32589]: https://github.com/cockroachdb/cockroach/pull/32589 -[#32594]: https://github.com/cockroachdb/cockroach/pull/32594 -[#32596]: https://github.com/cockroachdb/cockroach/pull/32596 -[#32605]: https://github.com/cockroachdb/cockroach/pull/32605 -[#32614]: https://github.com/cockroachdb/cockroach/pull/32614 -[#32617]: https://github.com/cockroachdb/cockroach/pull/32617 -[#32618]: https://github.com/cockroachdb/cockroach/pull/32618 -[#32629]: https://github.com/cockroachdb/cockroach/pull/32629 -[#32643]: https://github.com/cockroachdb/cockroach/pull/32643 -[#32652]: https://github.com/cockroachdb/cockroach/pull/32652 -[#32654]: https://github.com/cockroachdb/cockroach/pull/32654 -[#32663]: https://github.com/cockroachdb/cockroach/pull/32663 -[#32664]: https://github.com/cockroachdb/cockroach/pull/32664 -[#32674]: https://github.com/cockroachdb/cockroach/pull/32674 -[#32713]: https://github.com/cockroachdb/cockroach/pull/32713 -[#32716]: https://github.com/cockroachdb/cockroach/pull/32716 -[#32719]: https://github.com/cockroachdb/cockroach/pull/32719 -[#32720]: https://github.com/cockroachdb/cockroach/pull/32720 -[#32745]: https://github.com/cockroachdb/cockroach/pull/32745 -[#32754]: https://github.com/cockroachdb/cockroach/pull/32754 -[#32772]: https://github.com/cockroachdb/cockroach/pull/32772 -[#32773]: https://github.com/cockroachdb/cockroach/pull/32773 -[#32774]: 
https://github.com/cockroachdb/cockroach/pull/32774 -[#32779]: https://github.com/cockroachdb/cockroach/pull/32779 -[#32790]: https://github.com/cockroachdb/cockroach/pull/32790 -[#32831]: https://github.com/cockroachdb/cockroach/pull/32831 -[#32865]: https://github.com/cockroachdb/cockroach/pull/32865 -[#32908]: https://github.com/cockroachdb/cockroach/pull/32908 -[#32959]: https://github.com/cockroachdb/cockroach/pull/32959 diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190114.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190114.md deleted file mode 100644 index 2c255e89a59..00000000000 --- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190114.md +++ /dev/null @@ -1,149 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -

Backward-incompatible changes

- -- [Composite foreign key matching](#v2-2-0-alpha-20190114-composite-foreign-key-matching) -- [Mutation statements](#v2-2-0-alpha-20190114-mutation-statements) - 

Composite foreign key matching

- -We are changing the way composite [foreign key](https://www.cockroachlabs.com/docs/v19.1/foreign-key) matches are evaluated to match the default Postgres behavior. If your schema currently uses composite keys, it may require updates, since this change may affect your foreign key constraints and cascading behavior. - -Prior to this change, we were matching composite keys with an incorrect implementation of the `MATCH FULL` method, and we are resolving this by moving all existing composite foreign key matches to a correct implementation of the `MATCH SIMPLE` method. Note that prior to this, there was no option for `MATCH FULL` or `MATCH SIMPLE`, and all foreign key matching used the incorrect implementation of `MATCH FULL`. - -For a more detailed explanation of the changes, see below. - -For matching purposes, composite foreign keys can be in one of three states: - -- _Valid_: Keys that can be used for matching foreign key relationships. -- _Invalid_: Keys that will not be used for matching. -- _Unacceptable_: Keys that cannot be inserted at all. - -The `MATCH FULL` implementation we were using prior to this change allowed composite keys with a combination of `NULL` and non-`NULL` values. This meant that we matched on `NULL`s if a `NULL` existed in the referencing column, essentially treating `NULL` as a valid value. This was incorrect, since `MATCH FULL` requires that if any column of a composite key is `NULL`, then all columns of the key must be `NULL`. In other words, either all must be `NULL`, or none may be. - -To resolve this issue, all matches going forward will use the `MATCH SIMPLE` method (this matches the Postgres default). `MATCH SIMPLE` stipulates that: - -- Valid composite keys may contain no `NULL` values, and will be used for matching. -- Invalid keys are keys with one or more `NULL` values, and will not be used for matching, including cascading operations. - -For more information, see [#32693][#32693]. - -
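A minimal sketch of the `MATCH SIMPLE` behavior described above, using hypothetical tables:

```sql
CREATE TABLE parent (a INT, b INT, PRIMARY KEY (a, b));
CREATE TABLE child (
  a INT,
  b INT,
  FOREIGN KEY (a, b) REFERENCES parent (a, b),
  INDEX (a, b)
);

-- Under MATCH SIMPLE, a composite key containing any NULL is not
-- used for matching, so this row is accepted without a parent row:
INSERT INTO child VALUES (1, NULL);
```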

Mutation statements

- -Mutation statements like [`UPDATE`](https://www.cockroachlabs.com/docs/v19.1/update) and [`INSERT`](https://www.cockroachlabs.com/docs/v19.1/insert) no longer attempt to guarantee mutation or output ordering when an `ORDER BY` clause is present. It is now an error to use `ORDER BY` without `LIMIT` with the `UPDATE` statement. [#33087][#33087] {% comment %}doc{% endcomment %} - -
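As an illustration, with a hypothetical table `t (k INT PRIMARY KEY, v INT)`:

```sql
UPDATE t SET v = v + 1 ORDER BY k LIMIT 10;  -- allowed: ORDER BY selects which rows are updated
UPDATE t SET v = v + 1 ORDER BY k;           -- now an error: ORDER BY requires LIMIT
```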

SQL language changes

- -- Added support for configuring authentication via an `hba.conf` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings). [#32892][#32892] {% comment %}doc{% endcomment %} -- Added support for collecting table statistics on a default set of columns by calling `CREATE STATISTICS` with no columns specified. [#32981][#32981] {% comment %}doc{% endcomment %} -- Added the `default_int_size` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars) and `sql.defaults.default_int_size` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) to control how the `INT` and `SERIAL` types are interpreted. The default value, `8`, causes these types to be interpreted as aliases for `INT8` and `SERIAL8`, which have been the historical defaults for CockroachDB. PostgreSQL clients that expect `INT` and `SERIAL` to be 32-bit values can set `default_int_size` to `4`, which will cause `INT` and `SERIAL` to be aliases for `INT4` and `SERIAL4`. Please note that due to issue [#32846](https://github.com/cockroachdb/cockroach/issues/32846), `SET default_int_size` does not take effect until the next statement batch is executed (see the example after this list). [#32848][#32848] {% comment %}doc{% endcomment %} -- When creating a [replication zone](https://www.cockroachlabs.com/docs/v19.1/configure-replication-zones), if a field is set to `COPY FROM PARENT`, the field now inherits its value from its parent zone, but any change to the field in the parent zone no longer affects the child zone. [#32861][#32861] {% comment %}doc{% endcomment %} -- CockroachDB now supports specifying the matching method for composite [foreign keys](https://www.cockroachlabs.com/docs/v19.1/foreign-key) (a foreign key that includes more than one column) as either `MATCH SIMPLE` or `MATCH FULL`. `MATCH SIMPLE` remains the default. `MATCH FULL` differs from `MATCH SIMPLE` by not allowing the mixing of `NULL` and non-`NULL` values; only a key whose values are all `NULL` is exempt from foreign key constraint checks and cascading actions. `MATCH PARTIAL` is still not supported. For more details see issue [#20305](https://github.com/cockroachdb/cockroach/issues/20305) or https://www.postgresql.org/docs/11/sql-createtable.html. [#32998][#32998] {% comment %}doc{% endcomment %} -- The `string_agg()` aggregate function is now supported by the [cost-based optimizer](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer). [#33172][#33172] -- Added support for the `pg_catalog` introspection table `pg_am` for both PostgreSQL versions 9.5 and 9.6, which changed the table significantly. [#33252][#33252] -- Added foreign key checks to kv traces. [#33328][#33328] -- CockroachDB now defines the columns `domain_catalog`, `domain_schema`, and `domain_name` in `information_schema.columns` (using `NULL` values, since domain types are not yet supported) for compatibility with PostgreSQL clients. [#33267][#33267] -- Attempts to use some PostgreSQL built-in functions that are not yet supported in CockroachDB now cause a clearer error message, and are also reported in [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting), if diagnostics reporting is enabled, so as to gauge demand. [#33390][#33390] -- CockroachDB now reports the name (not the value) of unsupported client parameters passed when setting up new SQL sessions in [diagnostics reporting](https://www.cockroachlabs.com/docs/v19.1/diagnostics-reporting), if diagnostics reporting is enabled, to gauge demand for additional support. 
[#33264][#33264] -- CockroachDB now collects statistics for statements executed "internally" (for system purposes). This is meant to facilitate performance troubleshooting. [#32215][#32215] -- CockroachDB now supports associating comments to SQL databases using PostgreSQL's `COMMENT ON DATABASE` syntax. This also provides proper support for pg's `pg_catalog.pg_description` and the `obj_description()` built-in function. [#33057][#33057] {% comment %}doc{% endcomment %} -- CockroachDB now supports associating comments to SQL table columns using PostgreSQL's `COMMENT ON COLUMN` syntax. This also provides proper support for pg's `pg_catalog.pg_description` and the `col_description()` built-in function. [#33355][#33355] {% comment %}doc{% endcomment %} -- Logical plans are now sampled and stored in [statement statistics](https://www.cockroachlabs.com/docs/v19.1/show-statistics). [#33020][#33020] {% comment %}doc{% endcomment %} -- [`SHOW EXPERIMENTAL_RANGES`](https://www.cockroachlabs.com/docs/v19.1/show-experimental-ranges) is faster if no columns are requested from it, like in `SELECT COUNT(*) FROM [SHOW EXPERIMENTAL_RANGES...]`. [#33463][#33463] -- The new `experimental_optimizer_updates` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings) controls whether `UPDATE` and `UPSERT` statements are planned by the cost-based optimizer rather than the heuristic planner. Also note that when the setting is set, check constraints are not checked for rows skipped by the `INSERT ... DO NOTHING` clause. [#33339][#33339] {% comment %}doc{% endcomment %} - -
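A sketch of the `default_int_size` item above; note the caveat about statement batches:

```sql
-- Per-session; due to issue #32846, this takes effect starting
-- with the next statement batch.
SET default_int_size = 4;

-- Or as a cluster-wide default:
SET CLUSTER SETTING sql.defaults.default_int_size = 4;

-- INT and SERIAL are then interpreted as INT4 and SERIAL4.
CREATE TABLE t (id SERIAL, n INT);
```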

Admin UI changes

- -- The [**Statement Details**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page) page now shows sample logical plans for each unique fingerprint. [#33483][#33483] -- SQL queries issued internally by CockroachDB are now visible on the [**Statements**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page) page. They can be filtered using the application name. [#32215][#32215] - -

Bug fixes

- -- Fixed a bug where schema changes could get stuck for 5 minutes when executed immediately after a server restart. [#32988][#32988] -- Fixed a bug with returning dropped unique columns in `DELETE` statements with `RETURNING`. [#33438][#33438] -- Fixed a bug that could cause under-replication or unavailability in 5-node clusters and those using high replication factors. [#32949][#32949] -- Fixed an infinite loop in a low-level scanning routine that could be hit in unusual circumstances. [#33063][#33063] -- CockroachDB no longer reports under-replicated ranges corresponding to replicas that are waiting to be deleted. [#32845][#32845] -- Fixed a possible goroutine leak when canceling queries. [#33130][#33130] -- `CHANGEFEED`s and incremental `BACKUP`s no longer indefinitely hang under an infrequent condition. [#32909][#32909] -- `cockroach node status --ranges` previously listed the count of under-replicated ranges in the `ranges_unavailable` column and the number of unavailable ranges in the `ranges_underreplicated` column. This mix-up has been fixed. [#32950][#32950] -- Fixed a panic in the cost-based optimizer during the planning of some queries. [#33183][#33183] -- Cancel requests (via the pgwire protocol) now close quickly with an EOF instead of hanging, but still do not cancel the request. [#33202][#33202] -- CockroachDB no longer crashes when running `SHOW SESSIONS` or `SHOW QUERIES`, or when inspecting some `crdb_internal` tables, while certain SQL sessions are issuing internal SQL queries. [#33138][#33138] -- Updated the Zipkin library to avoid deadlock when stopping Zipkin tracing. [#33287][#33287] -- Fixed a panic that could result from not supplying a nullable column in an `INSERT ON CONFLICT ... DO UPDATE` statement. [#33245][#33245] -- Fixed pgwire binary decoding of decimal `NaN` and `NULL` in arrays. [#33295][#33295] -- The `UPSERT` and `INSERT ON CONFLICT` statements now properly check that the user has the `SELECT` privilege on the target table. [#33358][#33358] -- CockroachDB now exits with a fatal error when data or logging partitions become unresponsive. Previously, the process would remain running, though in an unresponsive state. [#32978][#32978] -- Updated the contextual help for `\h EXPORT` in `cockroach sql` to reflect the actual syntax of the statement. [#33460][#33460] -- `INSERT ON CONFLICT ... DO NOTHING` no longer ignores rows that appear to be duplicates within the `INSERT` operands but are not yet present in the table. These are now properly inserted. [#33320][#33320] -- Prevented a panic with certain queries that use the statement source (square bracket) syntax. [#33537][#33537] -- Previously, CockroachDB did not consider the value of the right operand for the `<<` and `>>` operators, resulting in potentially very large results and excessive RAM consumption. These values are now restricted to the range supported by the left operand. [#33221][#33221] - 

Performance improvements

- -- Cross-range disjunctive scans where the result size can be deduced are now automatically parallelized. [#31616][#31616] -- Limited the concurrency of `BACKUP` on nodes with fewer cores to reduce performance impact. [#33277][#33277] -- Index joins, lookup joins, foreign key checks, cascade scans, zig zag joins, and `UPSERT`s no longer needlessly scan over child interleaved tables when searching for keys. [#33350][#33350] - -

Doc updates

- -- Updated the [Production Checklist](https://www.cockroachlabs.com/docs/v19.1/recommended-production-settings) with more current hardware recommendations and additional guidance on storage, file systems, and clock synchronization. [#4153](https://github.com/cockroachdb/docs/pull/4153) -- Expanded the [SQLAlchemy tutorial](https://www.cockroachlabs.com/docs/v19.1/build-a-python-app-with-cockroachdb-sqlalchemy) to provide code for transaction retries and best practices for using SQLAlchemy with CockroachDB. [#4142](https://github.com/cockroachdb/docs/pull/4142) - 
- -

Contributors

- -This release includes 212 merged PRs by 34 authors. We would like to thank the following contributors from the CockroachDB community: - -- Jaewan Park -- Jingguo Yao - -
- -[#31616]: https://github.com/cockroachdb/cockroach/pull/31616 -[#32215]: https://github.com/cockroachdb/cockroach/pull/32215 -[#32693]: https://github.com/cockroachdb/cockroach/pull/32693 -[#32845]: https://github.com/cockroachdb/cockroach/pull/32845 -[#32848]: https://github.com/cockroachdb/cockroach/pull/32848 -[#32861]: https://github.com/cockroachdb/cockroach/pull/32861 -[#32892]: https://github.com/cockroachdb/cockroach/pull/32892 -[#32909]: https://github.com/cockroachdb/cockroach/pull/32909 -[#32949]: https://github.com/cockroachdb/cockroach/pull/32949 -[#32950]: https://github.com/cockroachdb/cockroach/pull/32950 -[#32978]: https://github.com/cockroachdb/cockroach/pull/32978 -[#32981]: https://github.com/cockroachdb/cockroach/pull/32981 -[#32988]: https://github.com/cockroachdb/cockroach/pull/32988 -[#32998]: https://github.com/cockroachdb/cockroach/pull/32998 -[#33020]: https://github.com/cockroachdb/cockroach/pull/33020 -[#33057]: https://github.com/cockroachdb/cockroach/pull/33057 -[#33063]: https://github.com/cockroachdb/cockroach/pull/33063 -[#33087]: https://github.com/cockroachdb/cockroach/pull/33087 -[#33130]: https://github.com/cockroachdb/cockroach/pull/33130 -[#33138]: https://github.com/cockroachdb/cockroach/pull/33138 -[#33172]: https://github.com/cockroachdb/cockroach/pull/33172 -[#33183]: https://github.com/cockroachdb/cockroach/pull/33183 -[#33202]: https://github.com/cockroachdb/cockroach/pull/33202 -[#33221]: https://github.com/cockroachdb/cockroach/pull/33221 -[#33245]: https://github.com/cockroachdb/cockroach/pull/33245 -[#33252]: https://github.com/cockroachdb/cockroach/pull/33252 -[#33264]: https://github.com/cockroachdb/cockroach/pull/33264 -[#33267]: https://github.com/cockroachdb/cockroach/pull/33267 -[#33277]: https://github.com/cockroachdb/cockroach/pull/33277 -[#33287]: https://github.com/cockroachdb/cockroach/pull/33287 -[#33295]: https://github.com/cockroachdb/cockroach/pull/33295 -[#33320]: https://github.com/cockroachdb/cockroach/pull/33320 -[#33328]: https://github.com/cockroachdb/cockroach/pull/33328 -[#33339]: https://github.com/cockroachdb/cockroach/pull/33339 -[#33350]: https://github.com/cockroachdb/cockroach/pull/33350 -[#33355]: https://github.com/cockroachdb/cockroach/pull/33355 -[#33358]: https://github.com/cockroachdb/cockroach/pull/33358 -[#33390]: https://github.com/cockroachdb/cockroach/pull/33390 -[#33438]: https://github.com/cockroachdb/cockroach/pull/33438 -[#33460]: https://github.com/cockroachdb/cockroach/pull/33460 -[#33463]: https://github.com/cockroachdb/cockroach/pull/33463 -[#33483]: https://github.com/cockroachdb/cockroach/pull/33483 -[#33537]: https://github.com/cockroachdb/cockroach/pull/33537 diff --git a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md b/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md deleted file mode 100644 index 8b338c52638..00000000000 --- a/src/current/_includes/releases/v19.1/v2.2.0-alpha.20190211.md +++ /dev/null @@ -1,186 +0,0 @@ -

{{ include.release }}

- -Release Date: {{ include.release_date | date: "%B %-d, %Y" }} - -In addition to SQL language enhancements, general usability improvements, performance improvements, and bug fixes, this release includes several major highlights: - -- [**Follower Reads**](https://www.cockroachlabs.com/docs/v19.1/follower-reads): Enterprise users can now reduce read latencies by allowing queries to perform historical reads of the closest replica of a given piece of data rather than reading from the more distant "leaseholder" replica. To enable follower reads on a query, use the `experimental_follower_read_timestamp()` [built-in function](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) in conjunction with the `AS OF SYSTEM TIME` clause (see the example after this list). -- [**Cost-Based Optimizer**](https://www.cockroachlabs.com/docs/v19.1/cost-based-optimizer): The cost-based optimizer now supports almost all read-only queries (except window functions) and almost all mutations (e.g., `CREATE TABLE AS`, `INSERT`, `UPDATE`, `UPSERT`, `DELETE`). In addition, the cost-based optimizer now reorders up to 4 joins in a query to attempt to find the most performant ordering (via the new `experimental_reorder_joins_limit` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars)) and takes advantage of automatic statistics without impacting foreground traffic. Note that statistics are created by default on all indexed columns when a user upgrades to this version. Finally, a new query plan cache saves a portion of the planning time for frequent queries planned by the cost-based optimizer. -- [**Change Data Capture**](https://www.cockroachlabs.com/docs/v19.1/change-data-capture): Enterprise users can now create `CHANGEFEED`s that deliver table updates as JSON files to cloud storage endpoints like Google Storage or AWS S3. In addition, all CockroachDB users can now use the core implementation of change data capture, via the new `EXPERIMENTAL CHANGEFEED FOR` statement, to consume table updates over a streaming Postgres connection. Finally, all `CHANGEFEED`s now use a new "push" mechanism called _rangefeeds_ to deliver data with increased reliability and lower latency. - 
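For example, a follower read combines the new built-in function with `AS OF SYSTEM TIME`; the table name here is hypothetical:

```sql
SELECT * FROM accounts
  AS OF SYSTEM TIME experimental_follower_read_timestamp();
```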

-<h3 id="v2-2-0-alpha-20190211-backward-incompatible-changes">Backward-incompatible changes</h3>

- -- The [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) `experimental_avro` format is now backward- and forward-compatible with adjacent schemas for the same table. [#34095][#34095] {% comment %}doc{% endcomment %} - -

-<h3 id="v2-2-0-alpha-20190211-general-changes">General changes</h3>

-
-- [Go 1.11.4](https://golang.org/dl/) is now the minimum version required to build CockroachDB. [#33668][#33668] {% comment %}doc{% endcomment %}
-- Increased the maximum length of queries included in crash reports to make debugging easier. [#34479][#34479]
-

-<h3 id="v2-2-0-alpha-20190211-enterprise-edition-changes">Enterprise edition changes</h3>

-
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed)s now experimentally support writing to cloud storage, for easy use with analytics databases (see the sketch after this list). [#33647][#33647] [#34193][#34193]
-- The `CHANGEFEED` `envelope=row` option is now deprecated and will be removed in the Fall 2019 release. The default envelope for new changefeeds is now `wrapped`. [#34309][#34309] {% comment %}doc{% endcomment %}
-- `CHANGEFEED`s now operate on an end-to-end "push" model, reducing the latency of row changes. Some workloads will also see fewer transaction restarts on tables being watched by `CHANGEFEED`s. [#34457][#34457] {% comment %}doc{% endcomment %}
-- Added support for standard HTTP proxy environment variables in HTTP and S3 storage. [#34067][#34067]
-- Added support for performing sufficiently old historical reads against the closest replicas rather than leaseholders, as well as a new `experimental_follower_read_timestamp()` [built-in function](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators), which can be used with [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v19.1/as-of-system-time) clauses to generate a timestamp that is likely to be safe for reads from a follower. [#33478][#33478] {% comment %}doc{% endcomment %}
-
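-For example, a changefeed pointed at a cloud storage sink might look like the following (a sketch; the bucket path is illustrative, and cloud sinks use an `experimental-` URI scheme prefix while this feature is experimental):
-
-~~~ sql
-CREATE CHANGEFEED FOR TABLE accounts
-  INTO 'experimental-gs://acme-co/changefeed'
-  WITH updated;
-~~~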

-<h3 id="v2-2-0-alpha-20190211-sql-language-changes">SQL language changes</h3>

-
-- [`VALIDATE CONSTRAINT`](https://www.cockroachlabs.com/docs/v19.1/validate-constraint) is now compatible with the new `MATCH FULL` and `MATCH SIMPLE` foreign key semantics and is more performant. [#34365][#34365] {% comment %}doc{% endcomment %}
-- Table data is now validated against a newly added [`CHECK`](https://www.cockroachlabs.com/docs/v19.1/check) constraint asynchronously after the transaction commits. [#32504][#32504] {% comment %}doc{% endcomment %}
-- `NULL` values are now supported in int and text arrays in the driver protocol. [#33675][#33675]
-- CockroachDB now supports transmitting bit array values using the decimal encoding in the low-level client protocol. [#34050][#34050]
-- It is now possible to force a reverse scan of a specific index using `table@{FORCE_INDEX=index,DESC}`. [#34075][#34075] {% comment %}doc{% endcomment %}
-- Improved the output of [`EXPLAIN`](https://www.cockroachlabs.com/docs/v19.1/explain) for `index-join` and `lookup-join`. [#34138][#34138] {% comment %}doc{% endcomment %}
-- `FILTER` expressions are now supported by the cost-based optimizer. [#34077][#34077] {% comment %}doc{% endcomment %}
-- [`EXPLAIN (OPT)`](https://www.cockroachlabs.com/docs/v19.1/explain) now has a much shorter output. `EXPLAIN (OPT,VERBOSE)` and `EXPLAIN (OPT,TYPES)` can be used for more verbose output. [#34128][#34128] {% comment %}doc{% endcomment %}
-- Using a sequence as a [`SELECT`](https://www.cockroachlabs.com/docs/v19.1/select-clause) target is now supported by the cost-based optimizer. [#33196][#33196] {% comment %}doc{% endcomment %}
-- Removed the `2.0-off` and `2.0-auto` modes for the `sql.defaults.distsql` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings). All queries are now run via the newer, distributed SQL engine; queries are still only distributed if appropriate. [#34163][#34163] {% comment %}doc{% endcomment %}
-- The `experimental_force_lookup_join` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars) has been removed. [#34142][#34142] {% comment %}doc{% endcomment %}
-- Added the `experimental_reorder_joins_limit` [session variable](https://www.cockroachlabs.com/docs/v19.1/set-vars), which defaults to `4` and causes the cost-based optimizer to reorder up to 4 joins in a query to attempt to find the most performant ordering. This behavior can be disabled per session by setting the variable to `0`. [#34549][#34549] {% comment %}doc{% endcomment %}
-- Formatting of timestamps as JSON strings has been changed to always use the RFC3339 format instead of CockroachDB's customary format. Users can now expect to see a `T` separator instead of a space between the date and time components. [#34412][#34412] {% comment %}doc{% endcomment %}
-- Introduced a new top-level statement for an experimental version of [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) that doesn't require an enterprise license and that returns results as a stream over the SQL connection. [#34386][#34386] {% comment %}doc{% endcomment %}
-- The result buffer size can now be controlled on a per-connection basis with the `results_buffer_size` connection string parameter. [#34385][#34385] {% comment %}doc{% endcomment %}
-- [`CREATE STATISTICS`](https://www.cockroachlabs.com/docs/v19.1/create-statistics) now runs as a job instead of as a regular SQL statement. [#34279][#34279] {% comment %}doc{% endcomment %}
-- [`INTERVAL`](https://www.cockroachlabs.com/docs/v19.1/interval) values are now stored with microsecond precision instead of nanoseconds. Existing intervals with nanoseconds are no longer able to return their nanosecond part. An existing table `t` with nanoseconds in intervals of column `s` can round them to the nearest microsecond with `UPDATE t SET s = s + '0s'`. Note that this could potentially cause uniqueness problems if the interval is a primary key. [#34202][#34202] {% comment %}doc{% endcomment %}
-- Added support for [`AS OF SYSTEM TIME`](https://www.cockroachlabs.com/docs/v19.1/as-of-system-time) clauses in [`BEGIN TRANSACTION`](https://www.cockroachlabs.com/docs/v19.1/begin-transaction) and [`SET TRANSACTION`](https://www.cockroachlabs.com/docs/v19.1/set-transaction) statements, which enables entire read-only transactions to be run against a historical timestamp (see the examples after this list). This functionality simplifies performing complex analytics against a consistent snapshot of historical data, and it eases the use of historical reads with ORMs, which generally make it difficult to modify the syntax of generated `SELECT` statements. [#34305][#34305] {% comment %}doc{% endcomment %}
-- The behavior of the `now()` [built-in function](https://www.cockroachlabs.com/docs/v19.1/functions-and-operators) inside of historical `SELECT ... AS OF SYSTEM TIME` queries now reflects the historical timestamp at which the query is being run rather than the current clock time when the statement is executed. [#34305][#34305] {% comment %}doc{% endcomment %}
-- The `ORDER BY` clause can no longer be used with a `DELETE` statement when there is no `LIMIT` clause present. Sorting the output should instead be done using `SELECT ... FROM [DELETE ...] ORDER BY ...`. [#34303][#34303] {% comment %}doc{% endcomment %}
-- Enabled automatic statistics collection. [#34529][#34529] {% comment %}doc{% endcomment %}
-- [`DELETE`](https://www.cockroachlabs.com/docs/v19.1/delete), [`UPDATE`](https://www.cockroachlabs.com/docs/v19.1/update), and [`UPSERT`](https://www.cockroachlabs.com/docs/v19.1/upsert) statements are now planned by the cost-based optimizer. [#34522][#34522] {% comment %}doc{% endcomment %}
-- The value of `information_schema.columns.character_maximum_length` is set to `NULL` for all integer types, for compatibility with PostgreSQL. [#34182][#34182] {% comment %}doc{% endcomment %}
-
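-A few illustrative statements for the features above (a sketch; the table `orders` and the index `orders_ts_idx` are hypothetical):
-
-~~~ sql
--- Run an entire read-only transaction against historical data.
-BEGIN AS OF SYSTEM TIME '-30s';
-SELECT count(*) FROM orders;
-COMMIT;
-
--- Force a reverse scan of a specific index.
-SELECT * FROM orders@{FORCE_INDEX=orders_ts_idx,DESC};
-
--- Stream row changes over the SQL connection (core changefeed, no enterprise license required).
-EXPERIMENTAL CHANGEFEED FOR orders;
-~~~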

-<h3 id="v2-2-0-alpha-20190211-command-line-changes">Command-line changes</h3>

- -- The modified time is now set for entries in [`cockroach debug zip`](https://www.cockroachlabs.com/docs/v19.1/debug-zip) output. [#33714][#33714] {% comment %}doc{% endcomment %} -- Clarified the informational message printed upon running `cockroach start --join`. [#33435][#33435] - -

-<h3 id="v2-2-0-alpha-20190211-admin-ui-changes">Admin UI changes</h3>

-
-- Added a debug endpoint listing the hottest ranges by QPS on each node/store. [#33336][#33336] {% comment %}doc{% endcomment %}
-- Improved performance of graph detail tooltips when viewing long timespans (e.g., 1 month). [#34032][#34032]
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed) metrics are now exposed in the UI. [#34427][#34427] {% comment %}doc{% endcomment %}
-

-<h3 id="v2-2-0-alpha-20190211-bug-fixes">Bug fixes</h3>

-
-- Fixed a bug in [`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore) that prevented restoring some [`BACKUP`](https://www.cockroachlabs.com/docs/v19.1/backup)s containing previously dropped or truncated interleaved tables. [#34413][#34413]
-- Fixed a bug in [`cockroach node status`](https://www.cockroachlabs.com/docs/v19.1/view-node-details) that prevented it from displaying down nodes in the cluster in some circumstances. [#34448][#34448]
-- Fixed several related panics in the optimizer during plan exploration. [#34667][#34667]
-- Resolved a cluster degradation scenario that could occur during [`IMPORT`](https://www.cockroachlabs.com/docs/v19.1/import)/[`RESTORE`](https://www.cockroachlabs.com/docs/v19.1/restore) operations, manifested through a high number of pending Raft snapshots. [#33582][#33582]
-- Fixed a bug where some comparison operations with constant inputs were not getting folded during query optimization, causing the optimizer to produce sub-optimal plans. [#33597][#33597]
-- [Window functions](https://www.cockroachlabs.com/docs/v19.1/window-functions) with non-empty `PARTITION BY` and `ORDER BY` clauses are now handled correctly when invoked via the low-level client protocol. [#33591][#33591]
-- Fixed a memory leak around `DEALLOCATE` and `DISCARD` statements that could result in panics with the `unexpected leftover bytes` message. [#33423][#33423]
-- Lookup joins now properly preserve their input order even if more than one row of the input corresponds to the same row of the lookup table. [#33536][#33536]
-- Fixed a panic that occurred when performing an [`INSERT ON CONFLICT`](https://www.cockroachlabs.com/docs/v19.1/insert) with a `DO UPDATE SET` clause that uses values from a subquery. [#33553][#33553]
-- Preparing queries with missing placeholders (e.g., `SELECT $2::int`) now results in an error. [#33716][#33716]
-- Fixed a goroutine leak that would occur while a cluster was unavailable (or while a subset of nodes was partitioned away from the cluster) and that caused a spike in resource usage once the cluster recovered. [#33282][#33282]
-- Fixed panics or incorrect results in some cases when grouping on constant columns (either with `GROUP BY` or `DISTINCT ON`). [#34123][#34123]
-- Prevented down-replicating widely replicated ranges when nodes in the cluster are temporarily down. [#34126][#34126]
-- Fixed a panic when an internal implementation error prevents proper handling of placeholders (query parameters). [#34134][#34134]
-- CockroachDB now allows restarting a node at an address previously allocated for another node. [#34155][#34155]
-- The values reported in `information_schema.columns` for integer columns created prior to CockroachDB v2.1 as `BIT` are now fixed and consistent with other integer types. [#34182][#34182]
-- CockroachDB 2.2-alpha releases can once again be built from source on FreeBSD (unsupported platform). [#34244][#34244]
-- Fixed a backup in flow creation, observed as "no inbound stream connection" errors, caused by not releasing a lock before attempting a possibly blocking operation. [#34218][#34218]
-- [`CHANGEFEED`](https://www.cockroachlabs.com/docs/v19.1/create-changefeed)s can now be started on tables that have been backfilled by schema changes. [#34317][#34317]
-- Fixed a possible panic in `crdb_internal.pretty_key()`. [#34480][#34480]
-- `CHANGEFEED`s with `changefeed.push.enabled` set to `true` now resolve timestamps in the presence of inactive ranges. [#34550][#34550]
-- Fixed a panic when updating a job that doesn't exist. [#34574][#34574]
-- Fixed a panic due to incorrect statistics calculations when all values of a column are `NULL`. [#34578][#34578]
-- Fixed a bug where lease transfers applied via Raft snapshots could fail to update in-memory state on the new leaseholder, allowing write skew between read-modify-write operations. [#34548][#34548]
-

-<h3 id="v2-2-0-alpha-20190211-performance-improvements">Performance improvements</h3>

-
-- Reduced the network and storage overhead of multi-range transactions. [#33566][#33566]
-- A query plan cache now saves a portion of the planning time for frequent queries. [#34454][#34454]
-- Transaction record garbage collection requests are now batched on a per-range basis to reduce the number of Raft entries in high-throughput, write-heavy transactional workloads. [#34242][#34242]
-

-<h3 id="v2-2-0-alpha-20190211-doc-updates">Doc updates</h3>

-
-- The new [Life of a Distributed Transaction](https://www.cockroachlabs.com/docs/stable/architecture/life-of-a-distributed-transaction) details the path that a query takes through CockroachDB's architecture, starting with a SQL client and progressing all the way to RocksDB (and then back out again). [#4281](https://github.com/cockroachdb/docs/pull/4281)
-- Added a [warning about cross-store rebalancing](https://www.cockroachlabs.com/docs/v19.1/start-a-node#store) not working as expected in 3-node clusters with multiple stores per node. [#4320](https://github.com/cockroachdb/docs/pull/4320)
-- Updated the [`INT`](https://www.cockroachlabs.com/docs/v19.1/int) documentation to include examples of the actual min/max integers supported by each type for easier reference. Also added a description of the compatibility issues that 64-bit integers can cause with, for example, JavaScript runtimes. [#4317](https://github.com/cockroachdb/docs/pull/4317)
-- Documented the `sql.metrics.statement_details.plan_collection.period` [cluster setting](https://www.cockroachlabs.com/docs/v19.1/cluster-settings), which controls how often the logical plan for a fingerprint is sampled (5 minutes by default) on the [**Statements**](https://www.cockroachlabs.com/docs/v19.1/admin-ui-statements-page) page of the Admin UI. [#4316](https://github.com/cockroachdb/docs/pull/4316)
-- Added guidance on [removing `UNIQUE` constraints](https://www.cockroachlabs.com/docs/v19.1/constraints#remove-constraints). [#4276](https://github.com/cockroachdb/docs/pull/4276)
-- Added a note that when a table that was previously [split](https://www.cockroachlabs.com/docs/v19.1/split-at) is truncated, the table must be pre-split again. [#4274](https://github.com/cockroachdb/docs/pull/4274)
-- Updated the [SQL Performance Best Practices](https://www.cockroachlabs.com/docs/v19.1/performance-best-practices-overview#interleave-tables) with caveats around interleaving tables. [#4273](https://github.com/cockroachdb/docs/pull/4273)
-
- -

-<h3 id="v2-2-0-alpha-20190211-contributors">Contributors</h3>

- -This release includes 258 merged PRs by 29 authors. We would like to thank the following contributors from the CockroachDB community: - -- George Utsin (first-time contributor) -- Txiaozhe (first-time contributor) -- Vijay Karthik - -
- -[#32504]: https://github.com/cockroachdb/cockroach/pull/32504 -[#33196]: https://github.com/cockroachdb/cockroach/pull/33196 -[#33282]: https://github.com/cockroachdb/cockroach/pull/33282 -[#33336]: https://github.com/cockroachdb/cockroach/pull/33336 -[#33423]: https://github.com/cockroachdb/cockroach/pull/33423 -[#33435]: https://github.com/cockroachdb/cockroach/pull/33435 -[#33478]: https://github.com/cockroachdb/cockroach/pull/33478 -[#33536]: https://github.com/cockroachdb/cockroach/pull/33536 -[#33553]: https://github.com/cockroachdb/cockroach/pull/33553 -[#33566]: https://github.com/cockroachdb/cockroach/pull/33566 -[#33571]: https://github.com/cockroachdb/cockroach/pull/33571 -[#33582]: https://github.com/cockroachdb/cockroach/pull/33582 -[#33591]: https://github.com/cockroachdb/cockroach/pull/33591 -[#33597]: https://github.com/cockroachdb/cockroach/pull/33597 -[#33647]: https://github.com/cockroachdb/cockroach/pull/33647 -[#33668]: https://github.com/cockroachdb/cockroach/pull/33668 -[#33675]: https://github.com/cockroachdb/cockroach/pull/33675 -[#33714]: https://github.com/cockroachdb/cockroach/pull/33714 -[#33716]: https://github.com/cockroachdb/cockroach/pull/33716 -[#34006]: https://github.com/cockroachdb/cockroach/pull/34006 -[#34032]: https://github.com/cockroachdb/cockroach/pull/34032 -[#34050]: https://github.com/cockroachdb/cockroach/pull/34050 -[#34067]: https://github.com/cockroachdb/cockroach/pull/34067 -[#34075]: https://github.com/cockroachdb/cockroach/pull/34075 -[#34077]: https://github.com/cockroachdb/cockroach/pull/34077 -[#34095]: https://github.com/cockroachdb/cockroach/pull/34095 -[#34123]: https://github.com/cockroachdb/cockroach/pull/34123 -[#34126]: https://github.com/cockroachdb/cockroach/pull/34126 -[#34128]: https://github.com/cockroachdb/cockroach/pull/34128 -[#34134]: https://github.com/cockroachdb/cockroach/pull/34134 -[#34138]: https://github.com/cockroachdb/cockroach/pull/34138 -[#34142]: https://github.com/cockroachdb/cockroach/pull/34142 -[#34155]: https://github.com/cockroachdb/cockroach/pull/34155 -[#34163]: https://github.com/cockroachdb/cockroach/pull/34163 -[#34170]: https://github.com/cockroachdb/cockroach/pull/34170 -[#34182]: https://github.com/cockroachdb/cockroach/pull/34182 -[#34193]: https://github.com/cockroachdb/cockroach/pull/34193 -[#34202]: https://github.com/cockroachdb/cockroach/pull/34202 -[#34218]: https://github.com/cockroachdb/cockroach/pull/34218 -[#34242]: https://github.com/cockroachdb/cockroach/pull/34242 -[#34244]: https://github.com/cockroachdb/cockroach/pull/34244 -[#34247]: https://github.com/cockroachdb/cockroach/pull/34247 -[#34279]: https://github.com/cockroachdb/cockroach/pull/34279 -[#34303]: https://github.com/cockroachdb/cockroach/pull/34303 -[#34305]: https://github.com/cockroachdb/cockroach/pull/34305 -[#34309]: https://github.com/cockroachdb/cockroach/pull/34309 -[#34317]: https://github.com/cockroachdb/cockroach/pull/34317 -[#34375]: https://github.com/cockroachdb/cockroach/pull/34375 -[#34385]: https://github.com/cockroachdb/cockroach/pull/34385 -[#34386]: https://github.com/cockroachdb/cockroach/pull/34386 -[#34412]: https://github.com/cockroachdb/cockroach/pull/34412 -[#34427]: https://github.com/cockroachdb/cockroach/pull/34427 -[#34448]: https://github.com/cockroachdb/cockroach/pull/34448 -[#34454]: https://github.com/cockroachdb/cockroach/pull/34454 -[#34457]: https://github.com/cockroachdb/cockroach/pull/34457 -[#34479]: https://github.com/cockroachdb/cockroach/pull/34479 -[#34480]: 
https://github.com/cockroachdb/cockroach/pull/34480 -[#34522]: https://github.com/cockroachdb/cockroach/pull/34522 -[#34529]: https://github.com/cockroachdb/cockroach/pull/34529 -[#34548]: https://github.com/cockroachdb/cockroach/pull/34548 -[#34549]: https://github.com/cockroachdb/cockroach/pull/34549 -[#34550]: https://github.com/cockroachdb/cockroach/pull/34550 -[#34574]: https://github.com/cockroachdb/cockroach/pull/34574 -[#34578]: https://github.com/cockroachdb/cockroach/pull/34578 -[#34365]: https://github.com/cockroachdb/cockroach/pull/34365 -[#34667]: https://github.com/cockroachdb/cockroach/pull/34667 -[#34413]: https://github.com/cockroachdb/cockroach/pull/34413 diff --git a/src/current/_includes/sidebar-data-v19.1.json b/src/current/_includes/sidebar-data-v19.1.json deleted file mode 100644 index 551dc95660d..00000000000 --- a/src/current/_includes/sidebar-data-v19.1.json +++ /dev/null @@ -1,1927 +0,0 @@ -[ - { - "title": "Docs Home", - "is_top_level": true, - "urls": [ - "/" - ] - }, - { - "title": "Quickstart", - "is_top_level": true, - "urls": [ - "/cockroachcloud/quickstart.html" - ] - }, - {% include sidebar-data-cockroachcloud.json %}, - { - "title": "CockroachDB", - "is_top_level": true, - "items": [ - { - "title": "Guides", - "items": [ - { - "title": "Get Started", - "items": [ - { - "title": "Install CockroachDB", - "urls": [ - "/${VERSION}/install-cockroachdb.html", - "/${VERSION}/install-cockroachdb-mac.html", - "/${VERSION}/install-cockroachdb-linux.html", - "/${VERSION}/install-cockroachdb-windows.html" - ] - }, - { - "title": "Start a Local Cluster", - "items": [ - { - "title": "From Binary", - "urls": [ - "/${VERSION}/start-a-local-cluster.html", - "/${VERSION}/secure-a-cluster.html" - ] - }, - { - "title": "In Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html", - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes-insecure.html" - ] - }, - { - "title": "In Docker", - "urls": [ - "/${VERSION}/start-a-local-cluster-in-docker.html" - ] - } - ] - }, - { - "title": "Learn CockroachDB SQL", - "urls": [ - "/${VERSION}/learn-cockroachdb-sql.html" - ] - }, - { - "title": "Build an App", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/build-an-app-with-cockroachdb.html" - ] - }, - { - "title": "Go", - "urls": [ - "/${VERSION}/build-a-go-app-with-cockroachdb.html", - "/${VERSION}/build-a-go-app-with-cockroachdb-gorm.html" - ] - }, - { - "title": "Python", - "urls": [ - "/${VERSION}/build-a-python-app-with-cockroachdb.html", - "/${VERSION}/build-a-python-app-with-cockroachdb-sqlalchemy.html" - ] - }, - { - "title": "Ruby", - "urls": [ - "/${VERSION}/build-a-ruby-app-with-cockroachdb.html", - "/${VERSION}/build-a-ruby-app-with-cockroachdb-activerecord.html" - ] - }, - { - "title": "Java", - "urls": [ - "/${VERSION}/build-a-java-app-with-cockroachdb.html", - "/${VERSION}/build-a-java-app-with-cockroachdb-hibernate.html" - ] - }, - { - "title": "Node.js", - "urls": [ - "/${VERSION}/build-a-nodejs-app-with-cockroachdb.html", - "/${VERSION}/build-a-nodejs-app-with-cockroachdb-sequelize.html" - ] - }, - { - "title": "C++", - "urls": [ - "/${VERSION}/build-a-c++-app-with-cockroachdb.html" - ] - }, - { - "title": "C# (.NET)", - "urls": [ - "/${VERSION}/build-a-csharp-app-with-cockroachdb.html" - ] - }, - { - "title": "Clojure", - "urls": [ - "/${VERSION}/build-a-clojure-app-with-cockroachdb.html" - ] - }, - { - "title": "PHP", - "urls": [ - "/${VERSION}/build-a-php-app-with-cockroachdb.html" - ] - }, - { - "title": 
"Rust", - "urls": [ - "/${VERSION}/build-a-rust-app-with-cockroachdb.html" - ] - } - ] - } - ] - }, - { - "title": "Deploy", - "items": [ - { - "title": "Production Checklist", - "urls": [ - "/${VERSION}/recommended-production-settings.html" - ] - }, - { - "title": "Topology Patterns", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/topology-patterns.html" - ] - }, - { - "title": "Development", - "urls": [ - "/${VERSION}/topology-development.html" - ] - }, - { - "title": "Basic Production", - "urls": [ - "/${VERSION}/topology-basic-production.html" - ] - }, - { - "title": "Geo-Partitioned Replicas", - "urls": [ - "/${VERSION}/topology-geo-partitioned-replicas.html" - ] - }, - { - "title": "Geo-Partitioned Leaseholders", - "urls": [ - "/${VERSION}/topology-geo-partitioned-leaseholders.html" - ] - }, - { - "title": "Duplicate Indexes", - "urls": [ - "/${VERSION}/topology-duplicate-indexes.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/topology-follower-reads.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/topology-follow-the-workload.html" - ] - } - ] - }, - { - "title": "Manual Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/manual-deployment.html" - ] - }, - { - "title": "On-Premises", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-premises.html", - "/${VERSION}/deploy-cockroachdb-on-premises-insecure.html" - ] - }, - { - "title": "AWS", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-aws.html", - "/${VERSION}/deploy-cockroachdb-on-aws-insecure.html" - ] - }, - { - "title": "Azure", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure.html", - "/${VERSION}/deploy-cockroachdb-on-microsoft-azure-insecure.html" - ] - }, - { - "title": "Digital Ocean", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-digital-ocean.html", - "/${VERSION}/deploy-cockroachdb-on-digital-ocean-insecure.html" - ] - }, - { - "title": "Google Cloud Platform GCE", - "urls": [ - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform.html", - "/${VERSION}/deploy-cockroachdb-on-google-cloud-platform-insecure.html" - ] - } - ] - }, - { - "title": "Orchestrated Deployment", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/orchestration.html" - ] - }, - { - "title": "Kubernetes Single-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes.html", - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-insecure.html" - ] - }, - { - "title": "Kubernetes Multi-Cluster Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html" - ] - }, - { - "title": "Kubernetes Performance Optimization", - "urls": [ - "/${VERSION}/kubernetes-performance.html" - ] - }, - { - "title": "Docker Swarm Deployment", - "urls": [ - "/${VERSION}/orchestrate-cockroachdb-with-docker-swarm.html", - "/${VERSION}/orchestrate-cockroachdb-with-docker-swarm-insecure.html" - ] - } - ] - }, - { - "title": "Security", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/security-overview.html" - ] - }, - { - "title": "Authentication", - "urls": [ - "/${VERSION}/authentication.html" - ] - }, - { - "title": "Encryption", - "urls": [ - "/${VERSION}/encryption.html" - ] - }, - { - "title": "Authorization", - "urls": [ - "/${VERSION}/authorization.html" - ] - }, - { - "title": "SQL Audit Logging", - "urls": [ - "/${VERSION}/sql-audit-logging.html" - ] - }, - { - "title": "GSSAPI Authentication", - "urls": [ - 
"/${VERSION}/gssapi_authentication.html" - ] - } - ] - }, - { - "title": "Monitoring and Alerting", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/monitoring-and-alerting.html" - ] - }, - { - "title": "Use the Admin UI", - "urls": [ - "/${VERSION}/admin-ui-access-and-navigate.html" - ] - }, - { - "title": "Enable the Node Map", - "urls": [ - "/${VERSION}/enable-node-map.html" - ] - }, - { - "title": "Use Prometheus and Alertmanager", - "urls": [ - "/${VERSION}/monitor-cockroachdb-with-prometheus.html" - ] - } - ] - }, - { - "title": "Configure Replication Zones", - "urls": [ - "/${VERSION}/configure-replication-zones.html" - ] - }, - { - "title": "Performance Benchmarking", - "urls": [ - "/${VERSION}/performance-benchmarking-with-tpc-c.html" - ] - }, - { - "title": "Performance Tuning", - "urls": [ - "/${VERSION}/performance-tuning.html", - "/${VERSION}/performance-tuning-insecure.html" - ] - }, - { - "title": "Table Partitioning", - "urls": [ - "/${VERSION}/partitioning.html" - ] - }, - { - "title": "Change Data Capture", - "urls": [ - "/${VERSION}/change-data-capture.html" - ] - }, - { - "title": "Enterprise Features", - "urls": [ - "/${VERSION}/enterprise-licensing.html" - ] - } - ] - }, - { - "title": "Migrate", - "items": [ - { - "title": "Migration Overview", - "urls": [ - "/${VERSION}/migration-overview.html" - ] - }, - { - "title": "Migrate from Oracle", - "urls": [ - "/${VERSION}/migrate-from-oracle.html" - ] - }, - { - "title": "Migrate from Postgres", - "urls": [ - "/${VERSION}/migrate-from-postgres.html" - ] - }, - { - "title": "Migrate from MySQL", - "urls": [ - "/${VERSION}/migrate-from-mysql.html" - ] - }, - { - "title": "Migrate from CSV", - "urls": [ - "/${VERSION}/migrate-from-csv.html" - ] - } - ] - }, - { - "title": "Maintain", - "items": [ - { - "title": "Upgrade to CockroachDB v19.1", - "urls": [ - "/${VERSION}/upgrade-cockroach-version.html" - ] - }, - { - "title": "Online Schema Changes", - "urls": [ - "/${VERSION}/online-schema-changes.html" - ] - }, - { - "title": "Manage Long-Running Queries", - "urls": [ - "/${VERSION}/manage-long-running-queries.html" - ] - }, - { - "title": "Decommission Nodes", - "urls": [ - "/${VERSION}/remove-nodes.html" - ] - }, - { - "title": "Back up and Restore Data", - "urls": [ - "/${VERSION}/backup-and-restore.html" - ] - }, - { - "title": "Create a File Server for IMPORT/BACKUP", - "urls": [ - "/${VERSION}/create-a-file-server.html" - ] - }, - { - "title": "Rotate Security Certificates", - "urls": [ - "/${VERSION}/rotate-certificates.html" - ] - } - ] - }, - { - "title": "Troubleshoot", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/troubleshooting-overview.html" - ] - }, - { - "title": "Common Errors", - "urls": [ - "/${VERSION}/common-errors.html" - ] - }, - { - "title": "Troubleshoot Cluster Setup", - "urls": [ - "/${VERSION}/cluster-setup-troubleshooting.html" - ] - }, - { - "title": "Troubleshoot Query Behavior", - "urls": [ - "/${VERSION}/query-behavior-troubleshooting.html" - ] - }, - { - "title": "Understand Debug Logs", - "urls": [ - "/${VERSION}/debug-and-error-logs.html" - ] - }, - { - "title": "Support Resources", - "urls": [ - "/${VERSION}/support-resources.html" - ] - }, - { - "title": "File an Issue", - "urls": [ - "/${VERSION}/file-an-issue.html" - ] - } - ] - } - ] - }, - { - "title": "Tutorials", - "items": [ - { - "title": "Geo-Partitioning", - "urls": [ - "/${VERSION}/demo-geo-partitioning.html" - ] - }, - { - "title": "Data Replication", - "urls": [ - 
"/${VERSION}/demo-data-replication.html" - ] - }, - { - "title": "Fault Tolerance & Recovery", - "urls": [ - "/${VERSION}/demo-fault-tolerance-and-recovery.html" - ] - }, - { - "title": "Automatic Rebalancing", - "urls": [ - "/${VERSION}/demo-automatic-rebalancing.html" - ] - }, - { - "title": "Serializable Transactions", - "urls": [ - "/${VERSION}/demo-serializable.html" - ] - }, - { - "title": "Cross-Cloud Migration", - "urls": [ - "/${VERSION}/demo-automatic-cloud-migration.html" - ] - }, - { - "title": "Follow-the-Workload", - "urls": [ - "/${VERSION}/demo-follow-the-workload.html" - ] - }, - { - "title": "Orchestration with Kubernetes", - "urls": [ - "/${VERSION}/orchestrate-a-local-cluster-with-kubernetes.html" - ] - }, - { - "title": "JSON Support", - "urls": [ - "/${VERSION}/demo-json-support.html" - ] - } - ] - }, - { - "title": "Reference", - "items": [ - { - "title": "Architecture", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/architecture/overview.html" - ] - }, - { - "title": "SQL Layer", - "urls": [ - "/${VERSION}/architecture/sql-layer.html" - ] - }, - { - "title": "Transaction Layer", - "urls": [ - "/${VERSION}/architecture/transaction-layer.html" - ] - }, - { - "title": "Distribution Layer", - "urls": [ - "/${VERSION}/architecture/distribution-layer.html" - ] - }, - { - "title": "Replication Layer", - "urls": [ - "/${VERSION}/architecture/replication-layer.html" - ] - }, - { - "title": "Storage Layer", - "urls": [ - "/${VERSION}/architecture/storage-layer.html" - ] - }, - { - "title": "Life of a Distributed Transaction", - "urls": [ - "/${VERSION}/architecture/life-of-a-distributed-transaction.html" - ] - }, - { - "title": "Reads and Writes Overview", - "urls": [ - "/${VERSION}/architecture/reads-and-writes-overview.html" - ] - } - ] - }, - { - "title": "SQL", - "items": [ - { - "title": "Client Drivers", - "urls": [ - "/${VERSION}/install-client-drivers.html" - ] - }, - { - "title": "Client Connection Parameters", - "urls": [ - "/${VERSION}/connection-parameters.html" - ] - }, - { - "title": "SQL Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "SQL Feature Support", - "urls": [ - "/${VERSION}/sql-feature-support.html" - ] - }, - { - "title": "SQL Statements", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/sql-statements.html" - ] - }, - { - "title": "ADD COLUMN", - "urls": [ - "/${VERSION}/add-column.html" - ] - }, - { - "title": "ADD CONSTRAINT", - "urls": [ - "/${VERSION}/add-constraint.html" - ] - }, - { - "title": "ALTER COLUMN", - "urls": [ - "/${VERSION}/alter-column.html" - ] - }, - { - "title": "ALTER DATABASE", - "urls": [ - "/${VERSION}/alter-database.html" - ] - }, - { - "title": "ALTER INDEX", - "urls": [ - "/${VERSION}/alter-index.html" - ] - }, - { - "title": "ALTER RANGE", - "urls": [ - "/${VERSION}/alter-range.html" - ] - }, - { - "title": "ALTER SEQUENCE", - "urls": [ - "/${VERSION}/alter-sequence.html" - ] - }, - { - "title": "ALTER TABLE", - "urls": [ - "/${VERSION}/alter-table.html" - ] - }, - { - "title": "ALTER TYPE", - "urls": [ - "/${VERSION}/alter-type.html" - ] - }, - { - "title": "ALTER USER", - "urls": [ - "/${VERSION}/alter-user.html" - ] - }, - { - "title": "EXPERIMENTAL_AUDIT", - "urls": [ - "/${VERSION}/experimental-audit.html" - ] - }, - { - "title": "ALTER VIEW", - "urls": [ - "/${VERSION}/alter-view.html" - ] - }, - { - "title": "BACKUP (Enterprise)", - "urls": [ - "/${VERSION}/backup.html" - ] - }, - { - "title": "BEGIN", - "urls": [ 
- "/${VERSION}/begin-transaction.html" - ] - }, - { - "title": "CANCEL JOB", - "urls": [ - "/${VERSION}/cancel-job.html" - ] - }, - { - "title": "CANCEL QUERY", - "urls": [ - "/${VERSION}/cancel-query.html" - ] - }, - { - "title": "CANCEL SESSION", - "urls": [ - "/${VERSION}/cancel-session.html" - ] - }, - { - "title": "COMMENT ON", - "urls": [ - "/${VERSION}/comment-on.html" - ] - }, - { - "title": "COMMIT", - "urls": [ - "/${VERSION}/commit-transaction.html" - ] - }, - { - "title": "CONFIGURE ZONE", - "urls": [ - "/${VERSION}/configure-zone.html" - ] - }, - { - "title": "CREATE CHANGEFEED (Enterprise)", - "urls": [ - "/${VERSION}/create-changefeed.html" - ] - }, - { - "title": "CREATE DATABASE", - "urls": [ - "/${VERSION}/create-database.html" - ] - }, - { - "title": "CREATE INDEX", - "urls": [ - "/${VERSION}/create-index.html" - ] - }, - { - "title": "CREATE ROLE (Enterprise)", - "urls": [ - "/${VERSION}/create-role.html" - ] - }, - { - "title": "CREATE SEQUENCE", - "urls": [ - "/${VERSION}/create-sequence.html" - ] - }, - { - "title": "CREATE STATISTICS", - "urls": [ - "/${VERSION}/create-statistics.html" - ] - }, - { - "title": "CREATE TABLE", - "urls": [ - "/${VERSION}/create-table.html" - ] - }, - { - "title": "CREATE TABLE AS", - "urls": [ - "/${VERSION}/create-table-as.html" - ] - }, - { - "title": "CREATE USER", - "urls": [ - "/${VERSION}/create-user.html" - ] - }, - { - "title": "CREATE VIEW", - "urls": [ - "/${VERSION}/create-view.html" - ] - }, - { - "title": "DELETE", - "urls": [ - "/${VERSION}/delete.html" - ] - }, - { - "title": "DROP COLUMN", - "urls": [ - "/${VERSION}/drop-column.html" - ] - }, - { - "title": "DROP CONSTRAINT", - "urls": [ - "/${VERSION}/drop-constraint.html" - ] - }, - { - "title": "DROP DATABASE", - "urls": [ - "/${VERSION}/drop-database.html" - ] - }, - { - "title": "DROP INDEX", - "urls": [ - "/${VERSION}/drop-index.html" - ] - }, - { - "title": "DROP ROLE (Enterprise)", - "urls": [ - "/${VERSION}/drop-role.html" - ] - }, - { - "title": "DROP SEQUENCE", - "urls": [ - "/${VERSION}/drop-sequence.html" - ] - }, - { - "title": "DROP TABLE", - "urls": [ - "/${VERSION}/drop-table.html" - ] - }, - { - "title": "DROP USER", - "urls": [ - "/${VERSION}/drop-user.html" - ] - }, - { - "title": "DROP VIEW", - "urls": [ - "/${VERSION}/drop-view.html" - ] - }, - { - "title": "EXPERIMENTAL CHANGEFEED FOR", - "urls": [ - "/${VERSION}/changefeed-for.html" - ] - }, - { - "title": "EXPLAIN", - "urls": [ - "/${VERSION}/explain.html" - ] - }, - { - "title": "EXPLAIN ANALYZE", - "urls": [ - "/${VERSION}/explain-analyze.html" - ] - }, - { - "title": "EXPORT (Enterprise)", - "urls": [ - "/${VERSION}/export.html" - ] - }, - { - "title": "GRANT <privileges>", - "urls": [ - "/${VERSION}/grant.html" - ] - }, - { - "title": "GRANT <roles>", - "urls": [ - "/${VERSION}/grant-roles.html" - ] - }, - { - "title": "IMPORT", - "urls": [ - "/${VERSION}/import.html" - ] - }, - { - "title": "INSERT", - "urls": [ - "/${VERSION}/insert.html" - ] - }, - { - "title": "PARTITION BY (Enterprise)", - "urls": [ - "/${VERSION}/partition-by.html" - ] - }, - { - "title": "PAUSE JOB", - "urls": [ - "/${VERSION}/pause-job.html" - ] - }, - { - "title": "RENAME COLUMN", - "urls": [ - "/${VERSION}/rename-column.html" - ] - }, - { - "title": "RENAME CONSTRAINT", - "urls": [ - "/${VERSION}/rename-constraint.html" - ] - }, - { - "title": "RENAME DATABASE", - "urls": [ - "/${VERSION}/rename-database.html" - ] - }, - { - "title": "RENAME INDEX", - "urls": [ - "/${VERSION}/rename-index.html" - ] - }, - { - 
"title": "RENAME TABLE", - "urls": [ - "/${VERSION}/rename-table.html" - ] - }, - { - "title": "RENAME SEQUENCE", - "urls": [ - "/${VERSION}/rename-sequence.html" - ] - }, - { - "title": "RELEASE SAVEPOINT", - "urls": [ - "/${VERSION}/release-savepoint.html" - ] - }, - { - "title": "RESET <session variable>", - "urls": [ - "/${VERSION}/reset-vars.html" - ] - }, - { - "title": "RESET CLUSTER SETTING", - "urls": [ - "/${VERSION}/reset-cluster-setting.html" - ] - }, - { - "title": "RESTORE (Enterprise)", - "urls": [ - "/${VERSION}/restore.html" - ] - }, - { - "title": "RESUME JOB", - "urls": [ - "/${VERSION}/resume-job.html" - ] - }, - { - "title": "REVOKE <privileges>", - "urls": [ - "/${VERSION}/revoke.html" - ] - }, - { - "title": "REVOKE <roles>", - "urls": [ - "/${VERSION}/revoke-roles.html" - ] - }, - { - "title": "ROLLBACK", - "urls": [ - "/${VERSION}/rollback-transaction.html" - ] - }, - { - "title": "SAVEPOINT", - "urls": [ - "/${VERSION}/savepoint.html" - ] - }, - { - "title": "SELECT", - "urls": [ - "/${VERSION}/select-clause.html" - ] - }, - { - "title": "SET <session variable>", - "urls": [ - "/${VERSION}/set-vars.html" - ] - }, - { - "title": "SET CLUSTER SETTING", - "urls": [ - "/${VERSION}/set-cluster-setting.html" - ] - }, - { - "title": "SET TRANSACTION", - "urls": [ - "/${VERSION}/set-transaction.html" - ] - }, - { - "title": "SHOW <session variables>", - "urls": [ - "/${VERSION}/show-vars.html" - ] - }, - { - "title": "SHOW BACKUP", - "urls": [ - "/${VERSION}/show-backup.html" - ] - }, - { - "title": "SHOW CLUSTER SETTING", - "urls": [ - "/${VERSION}/show-cluster-setting.html" - ] - }, - { - "title": "SHOW COLUMNS", - "urls": [ - "/${VERSION}/show-columns.html" - ] - }, - { - "title": "SHOW CONSTRAINTS", - "urls": [ - "/${VERSION}/show-constraints.html" - ] - }, - { - "title": "SHOW CREATE", - "urls": [ - "/${VERSION}/show-create.html" - ] - }, - { - "title": "SHOW DATABASES", - "urls": [ - "/${VERSION}/show-databases.html" - ] - }, - { - "title": "SHOW EXPERIMENTAL_RANGES", - "urls": [ - "/${VERSION}/show-experimental-ranges.html" - ] - }, - { - "title": "SHOW GRANTS", - "urls": [ - "/${VERSION}/show-grants.html" - ] - }, - { - "title": "SHOW INDEX", - "urls": [ - "/${VERSION}/show-index.html" - ] - }, - { - "title": "SHOW JOBS", - "urls": [ - "/${VERSION}/show-jobs.html" - ] - }, - { - "title": "SHOW QUERIES", - "urls": [ - "/${VERSION}/show-queries.html" - ] - }, - { - "title": "SHOW ROLES", - "urls": [ - "/${VERSION}/show-roles.html" - ] - }, - { - "title": "SHOW SCHEMAS", - "urls": [ - "/${VERSION}/show-schemas.html" - ] - }, - { - "title": "SHOW SEQUENCES", - "urls": [ - "/${VERSION}/show-sequences.html" - ] - }, - { - "title": "SHOW SESSIONS", - "urls": [ - "/${VERSION}/show-sessions.html" - ] - }, - { - "title": "SHOW STATISTICS", - "urls": [ - "/${VERSION}/show-statistics.html" - ] - }, - { - "title": "SHOW TABLES", - "urls": [ - "/${VERSION}/show-tables.html" - ] - }, - { - "title": "SHOW TRACE FOR SESSION", - "urls": [ - "/${VERSION}/show-trace.html" - ] - }, - { - "title": "SHOW USERS", - "urls": [ - "/${VERSION}/show-users.html" - ] - }, - { - "title": "SHOW ZONE CONFIGURATIONS", - "urls": [ - "/${VERSION}/show-zone-configurations.html" - ] - }, - { - "title": "SPLIT AT", - "urls": [ - "/${VERSION}/split-at.html" - ] - }, - { - "title": "TRUNCATE", - "urls": [ - "/${VERSION}/truncate.html" - ] - }, - { - "title": "UPDATE", - "urls": [ - "/${VERSION}/update.html" - ] - }, - { - "title": "UPSERT", - "urls": [ - "/${VERSION}/upsert.html" - ] - }, - { - "title": 
"VALIDATE CONSTRAINT", - "urls": [ - "/${VERSION}/validate-constraint.html" - ] - } - ] - }, - { - "title": "Data Types", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/data-types.html" - ] - }, - { - "title": "ARRAY", - "urls": [ - "/${VERSION}/array.html" - ] - }, - { - "title": "BIT", - "urls": [ - "/${VERSION}/bit.html" - ] - }, - { - "title": "BOOL", - "urls": [ - "/${VERSION}/bool.html" - ] - }, - { - "title": "BYTES", - "urls": [ - "/${VERSION}/bytes.html" - ] - }, - { - "title": "COLLATE", - "urls": [ - "/${VERSION}/collate.html" - ] - }, - { - "title": "DATE", - "urls": [ - "/${VERSION}/date.html" - ] - }, - { - "title": "DECIMAL", - "urls": [ - "/${VERSION}/decimal.html" - ] - }, - { - "title": "FLOAT", - "urls": [ - "/${VERSION}/float.html" - ] - }, - { - "title": "INET", - "urls": [ - "/${VERSION}/inet.html" - ] - }, - { - "title": "INT", - "urls": [ - "/${VERSION}/int.html" - ] - }, - { - "title": "INTERVAL", - "urls": [ - "/${VERSION}/interval.html" - ] - }, - { - "title": "JSONB", - "urls": [ - "/${VERSION}/jsonb.html" - ] - }, - { - "title": "SERIAL", - "urls": [ - "/${VERSION}/serial.html" - ] - }, - { - "title": "STRING", - "urls": [ - "/${VERSION}/string.html" - ] - }, - { - "title": "TIME", - "urls": [ - "/${VERSION}/time.html" - ] - }, - { - "title": "TIMESTAMP", - "urls": [ - "/${VERSION}/timestamp.html" - ] - }, - { - "title": "UUID", - "urls": [ - "/${VERSION}/uuid.html" - ] - } - ] - }, - { - "title": "Constraints", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/constraints.html" - ] - }, - { - "title": "Check", - "urls": [ - "/${VERSION}/check.html" - ] - }, - { - "title": "Default Value", - "urls": [ - "/${VERSION}/default-value.html" - ] - }, - { - "title": "Foreign Key", - "urls": [ - "/${VERSION}/foreign-key.html" - ] - }, - { - "title": "Not Null", - "urls": [ - "/${VERSION}/not-null.html" - ] - }, - { - "title": "Primary Key", - "urls": [ - "/${VERSION}/primary-key.html" - ] - }, - { - "title": "Unique", - "urls": [ - "/${VERSION}/unique.html" - ] - } - ] - }, - { - "title": "Functions and Operators", - "urls": [ - "/${VERSION}/functions-and-operators.html" - ] - }, - { - "title": "SQL Syntax", - "items": [ - { - "title": "Full SQL Grammar", - "urls": [ - "/${VERSION}/sql-grammar.html" - ] - }, - { - "title": "Keywords & Identifiers", - "urls": [ - "/${VERSION}/keywords-and-identifiers.html" - ] - }, - { - "title": "Constants", - "urls": [ - "/${VERSION}/sql-constants.html" - ] - }, - { - "title": "Selection Queries", - "urls": [ - "/${VERSION}/selection-queries.html" - ] - }, - { - "title": "Ordering Query Results", - "urls": [ - "/${VERSION}/query-order.html" - ] - }, - { - "title": "Limiting Query Results", - "urls": [ - "/${VERSION}/limit-offset.html" - ] - }, - { - "title": "Table Expressions", - "urls": [ - "/${VERSION}/table-expressions.html" - ] - }, - { - "title": "Common Table Expressions", - "urls": [ - "/${VERSION}/common-table-expressions.html" - ] - }, - { - "title": "Join Expressions", - "urls": [ - "/${VERSION}/joins.html" - ] - }, - { - "title": "Computed Columns", - "urls": [ - "/${VERSION}/computed-columns.html" - ] - }, - { - "title": "Scalar Expressions", - "urls": [ - "/${VERSION}/scalar-expressions.html" - ] - }, - { - "title": "Subqueries", - "urls": [ - "/${VERSION}/subqueries.html" - ] - }, - { - "title": "Name Resolution", - "urls": [ - "/${VERSION}/sql-name-resolution.html" - ] - }, - { - "title": "AS OF SYSTEM TIME", - "urls": [ - "/${VERSION}/as-of-system-time.html" - ] - }, - { - "title": 
"NULL Handling", - "urls": [ - "/${VERSION}/null-handling.html" - ] - } - ] - }, - { - "title": "Transactions", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/transactions.html" - ] - }, - { - "title": "Advanced Client-side Transaction Retries", - "urls": [ - "/${VERSION}/advanced-client-side-transaction-retries.html" - ] - } - ] - }, - { - "title": "Views", - "urls": [ - "/${VERSION}/views.html" - ] - }, - { - "title": "Window Functions", - "urls": [ - "/${VERSION}/window-functions.html" - ] - }, - { - "title": "Performance Optimization", - "items": [ - { - "title": "SQL Best Practices", - "urls": [ - "/${VERSION}/performance-best-practices-overview.html" - ] - }, - { - "title": "Indexes", - "urls": [ - "/${VERSION}/indexes.html" - ] - }, - { - "title": "Inverted Indexes", - "urls": [ - "/${VERSION}/inverted-indexes.html" - ] - }, - { - "title": "Column Families", - "urls": [ - "/${VERSION}/column-families.html" - ] - }, - { - "title": "Interleaved Tables", - "urls": [ - "/${VERSION}/interleave-in-parent.html" - ] - }, - { - "title": "Parallel Statement Execution", - "urls": [ - "/${VERSION}/parallel-statement-execution.html" - ] - }, - { - "title": "Cost-Based Optimizer", - "urls": [ - "/${VERSION}/cost-based-optimizer.html" - ] - } - ] - }, - { - "title": "Information Schema", - "urls": [ - "/${VERSION}/information-schema.html" - ] - }, - { - "title": "Porting Applications", - "items": [ - { - "title": "From PostgreSQL", - "urls": [ - "/${VERSION}/porting-postgres.html" - ] - } - ] - }, - { - "title": "Experimental Features", - "urls": [ - "/${VERSION}/experimental-features.html" - ] - } - ] - }, - { - "title": "CLI", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/cockroach-commands.html" - ] - }, - { - "title": "cockroach start", - "urls": [ - "/${VERSION}/start-a-node.html" - ] - }, - { - "title": "cockroach init", - "urls": [ - "/${VERSION}/initialize-a-cluster.html" - ] - }, - { - "title": "cockroach cert", - "urls": [ - "/${VERSION}/create-security-certificates.html" - ] - }, - { - "title": "cockroach quit", - "urls": [ - "/${VERSION}/stop-a-node.html" - ] - }, - { - "title": "cockroach sql", - "urls": [ - "/${VERSION}/use-the-built-in-sql-client.html" - ] - }, - { - "title": "cockroach sqlfmt", - "urls": [ - "/${VERSION}/use-the-query-formatter.html" - ] - }, - { - "title": "cockroach user", - "urls": [ - "/${VERSION}/create-and-manage-users.html" - ] - }, - { - "title": "cockroach node", - "urls": [ - "/${VERSION}/view-node-details.html" - ] - }, - { - "title": "cockroach dump", - "urls": [ - "/${VERSION}/sql-dump.html" - ] - }, - { - "title": "cockroach demo", - "urls": [ - "/${VERSION}/cockroach-demo.html" - ] - }, - { - "title": "cockroach debug ballast", - "urls": [ - "/${VERSION}/debug-ballast.html" - ] - }, - { - "title": "cockroach debug encryption-active-key", - "urls": [ - "/${VERSION}/debug-encryption-active-key.html" - ] - }, - { - "title": "cockroach debug merge-logs", - "urls": [ - "/${VERSION}/debug-merge-logs.html" - ] - }, - { - "title": "cockroach debug zip", - "urls": [ - "/${VERSION}/debug-zip.html" - ] - }, - { - "title": "cockroach gen", - "urls": [ - "/${VERSION}/generate-cockroachdb-resources.html" - ] - }, - { - "title": "cockroach version", - "urls": [ - "/${VERSION}/view-version-details.html" - ] - }, - { - "title": "cockroach workload", - "urls": [ - "/${VERSION}/cockroach-workload.html" - ] - } - ] - }, - { - "title": "Cluster Settings", - "items": [ - { - "title": "Cluster Settings Overview", - "urls": [ - 
"/${VERSION}/cluster-settings.html" - ] - }, - { - "title": "Cost-Based Optimizer", - "urls": [ - "/${VERSION}/cost-based-optimizer.html" - ] - }, - { - "title": "Follower Reads", - "urls": [ - "/${VERSION}/follower-reads.html" - ] - }, - { - "title": "Load-Based Splitting", - "urls": [ - "/${VERSION}/load-based-splitting.html" - ] - }, - { - "title": "Range Merges", - "urls": [ - "/${VERSION}/range-merges.html" - ] - } - ] - }, - { - "title": "Admin UI", - "items": [ - { - "title": "Admin UI Overview", - "urls": [ - "/${VERSION}/admin-ui-overview.html" - ] - }, - { - "title": "Cluster Overview Page", - "urls": [ - "/${VERSION}/admin-ui-cluster-overview-page.html" - ] - }, - { - "title": "Overview Dashboard", - "urls": [ - "/${VERSION}/admin-ui-overview-dashboard.html" - ] - }, - { - "title": "Hardware Dashboard", - "urls": [ - "/${VERSION}/admin-ui-hardware-dashboard.html" - ] - }, - { - "title": "Runtime Dashboard", - "urls": [ - "/${VERSION}/admin-ui-runtime-dashboard.html" - ] - }, - { - "title": "SQL Dashboard", - "urls": [ - "/${VERSION}/admin-ui-sql-dashboard.html" - ] - }, - { - "title": "Storage Dashboard", - "urls": [ - "/${VERSION}/admin-ui-storage-dashboard.html" - ] - }, - { - "title": "Replication Dashboard", - "urls": [ - "/${VERSION}/admin-ui-replication-dashboard.html" - ] - }, - { - "title": "Changefeeds Dashboard", - "urls": [ - "/${VERSION}/admin-ui-cdc-dashboard.html" - ] - }, - { - "title": "Databases Page", - "urls": [ - "/${VERSION}/admin-ui-databases-page.html" - ] - }, - { - "title": "Statements Page", - "urls": [ - "/${VERSION}/admin-ui-statements-page.html" - ] - }, - { - "title": "Jobs Page", - "urls": [ - "/${VERSION}/admin-ui-jobs-page.html" - ] - }, - { - "title": "Advanced Debugging Page", - "urls": [ - "/${VERSION}/admin-ui-debug-pages.html" - ] - }, - { - "title": "Custom Chart Debug Page", - "urls": [ - "/${VERSION}/admin-ui-custom-chart-debug-page.html" - ] - } - ] - }, - { - "title": "Third-Party Database Tools", - "items": [ - { - "title": "Overview", - "urls": [ - "/${VERSION}/third-party-database-tools.html" - ] - }, - { - "title": "DBeaver", - "urls": [ - "/${VERSION}/dbeaver.html" - ] - }, - { - "title": "IntelliJ IDEA", - "urls": [ - "/${VERSION}/intellij-idea.html" - ] - } - ] - }, - { - "title": "Diagnostics Reporting", - "urls": [ - "/${VERSION}/diagnostics-reporting.html" - ] - } - ] - }, - { - "title": "FAQs", - "items": [ - { - "title": "Product FAQs", - "urls": [ - "/${VERSION}/frequently-asked-questions.html" - ] - }, - { - "title": "SQL FAQs", - "urls": [ - "/${VERSION}/sql-faqs.html" - ] - }, - { - "title": "Operational FAQs", - "urls": [ - "/${VERSION}/operational-faqs.html" - ] - }, - { - "title": "Availability FAQs", - "urls": [ - "/${VERSION}/multi-active-availability.html" - ] - }, - { - "title": "Licensing FAQs", - "urls": [ - "/${VERSION}/licensing-faqs.html" - ] - }, - { - "title": "CockroachDB in Comparison", - "urls": [ - "/${VERSION}/cockroachdb-in-comparison.html" - ] - } - ] - }, - {% include sidebar-releases.json %} - ] - } -] diff --git a/src/current/_includes/v19.1/admin-ui-custom-chart-debug-page-00.html b/src/current/_includes/v19.1/admin-ui-custom-chart-debug-page-00.html deleted file mode 100644 index 36e0764df99..00000000000 --- a/src/current/_includes/v19.1/admin-ui-custom-chart-debug-page-00.html +++ /dev/null @@ -1,109 +0,0 @@ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-<table>
-  <tr>
-    <th>Column</th>
-    <th>Description</th>
-  </tr>
-  <tr>
-    <td>Metric Name</td>
-    <td>How the system refers to this metric, e.g., <code>sql.bytesin</code>.</td>
-  </tr>
-  <tr>
-    <td>Downsampler</td>
-    <td>
-      <p>The "Downsampler" operation is used to combine the individual datapoints over the longer period into a single datapoint. We store one data point every ten seconds, but for queries over long time spans the backend lowers the resolution of the returned data, perhaps only returning one data point for every minute, five minutes, or even an entire hour in the case of the 30 day view.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>AVG</strong>: Returns the average value over the time period.</li>
-        <li><strong>MIN</strong>: Returns the lowest value seen.</li>
-        <li><strong>MAX</strong>: Returns the highest value seen.</li>
-        <li><strong>SUM</strong>: Returns the sum of all values seen.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Aggregator</td>
-    <td>
-      <p>Used to combine data points from different nodes. It has the same operations available as the Downsampler.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>AVG</strong>: Returns the average value over the time period.</li>
-        <li><strong>MIN</strong>: Returns the lowest value seen.</li>
-        <li><strong>MAX</strong>: Returns the highest value seen.</li>
-        <li><strong>SUM</strong>: Returns the sum of all values seen.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Rate</td>
-    <td>
-      <p>Determines how to display the rate of change during the selected time period.</p>
-      <p>Options:</p>
-      <ul>
-        <li><strong>Normal</strong>: Returns the actual recorded value.</li>
-        <li><strong>Rate</strong>: Returns the rate of change of the value per second.</li>
-        <li><strong>Non-negative Rate</strong>: Returns the rate of change, but returns 0 instead of negative values. A large number of the stats we track are actually tracked as monotonically increasing counters, so each sample is just the total value of that counter. The rate of change of that counter represents the rate of events being counted, which is usually what you want to graph. "Non-negative Rate" is needed because the counters are stored in memory, and thus if a node resets it goes back to zero (whereas normally they only increase).</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Source</td>
-    <td>
-      The set of nodes being queried, which is either:
-      <ul>
-        <li>The entire cluster.</li>
-        <li>A single, named node.</li>
-      </ul>
-    </td>
-  </tr>
-  <tr>
-    <td>Per Node</td>
-    <td>If checked, the chart will show a line for each node's value of this metric.</td>
-  </tr>
-</table>
diff --git a/src/current/_includes/v19.1/alerts/warning-a63162.md b/src/current/_includes/v19.1/alerts/warning-a63162.md
deleted file mode 100644
index d57ce0331fa..00000000000
--- a/src/current/_includes/v19.1/alerts/warning-a63162.md
+++ /dev/null
@@ -1,3 +0,0 @@
-Cockroach Labs has discovered a bug relating to incremental backups, for CockroachDB v19.1.0 - v19.1.11. If a backup coincides with an in-progress index creation (backfill), `RESTORE`, or `IMPORT`, it is possible that a subsequent incremental backup will not include all of the indexed, restored, or imported data.
-
-For more information, including other affected versions, see [Technical Advisory 63162](../advisories/a63162.html).
\ No newline at end of file
diff --git a/src/current/_includes/v19.1/app/BasicExample.java b/src/current/_includes/v19.1/app/BasicExample.java
deleted file mode 100644
index 6647252710e..00000000000
--- a/src/current/_includes/v19.1/app/BasicExample.java
+++ /dev/null
@@ -1,457 +0,0 @@
-import java.util.*;
-import java.time.*;
-import java.sql.*;
-import javax.sql.DataSource;
-
-import org.postgresql.ds.PGSimpleDataSource;
-
-/*
-  Download the Postgres JDBC driver jar from https://jdbc.postgresql.org.
-
-  Then, compile and run this example like so:
-
-  $ export CLASSPATH=.:/path/to/postgresql.jar
-  $ javac BasicExample.java && java BasicExample
-
-  To build the javadoc:
-
-  $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java
-
-  At a high level, this code consists of two classes:
-
-  1. BasicExample, which is where the application logic lives.
-
-  2. BasicExampleDAO, which is used by the application to access the
-     data store.
-
-*/
-
-public class BasicExample {
-
-    public static void main(String[] args) {
-
-        // Configure the database connection.
-        PGSimpleDataSource ds = new PGSimpleDataSource();
-        ds.setServerName("localhost");
-        ds.setPortNumber(26257);
-        ds.setDatabaseName("bank");
-        ds.setUser("maxroach");
-        ds.setPassword(null);
-        ds.setSsl(true);
-        ds.setSslMode("require");
-        ds.setSslCert("certs/client.maxroach.crt");
-        ds.setSslKey("certs/client.maxroach.key.pk8");
-        ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string
-        ds.setApplicationName("BasicExample");
-
-        // Create DAO.
-        BasicExampleDAO dao = new BasicExampleDAO(ds);
-
-        // Test our retry handling logic if FORCE_RETRY is true. This
-        // method is only used to test the retry logic. It is not
-        // necessary in production code.
-        dao.testRetryHandling();
-
-        // Set up the 'accounts' table.
-        dao.createAccounts();
-
-        // Insert a few accounts "by hand", using INSERTs on the backend.
-        Map<String, String> balances = new HashMap<>();
-        balances.put("1", "1000");
-        balances.put("2", "250");
-        int updatedAccounts = dao.updateAccounts(balances);
-        System.out.printf("BasicExampleDAO.updateAccounts:\n    => %s total updated accounts\n", updatedAccounts);
-
-        // How much money is in these accounts?
-        int balance1 = dao.getAccountBalance(1);
-        int balance2 = dao.getAccountBalance(2);
-        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
-        // Transfer $100 from account 1 to account 2
-        int fromAccount = 1;
-        int toAccount = 2;
-        int transferAmount = 100;
-        int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount);
-        if (transferredAccounts != -1) {
-            System.out.printf("BasicExampleDAO.transferFunds:\n    => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts);
-        }
-
-        balance1 = dao.getAccountBalance(1);
-        balance2 = dao.getAccountBalance(2);
-        System.out.printf("main:\n    => Account balances at time '%s':\n    ID %s => $%s\n    ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2);
-
-        // Bulk insertion example using JDBC's batching support.
-        int totalRowsInserted = dao.bulkInsertRandomAccountData();
-        System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n    => finished, %s total rows inserted\n", totalRowsInserted);
-
-        // Print out 10 account values.
-        int accountsRead = dao.readAccounts(10);
-
-        // Drop the 'accounts' table so this code can be run again.
-        dao.tearDown();
-    }
-}
-
-/**
- * Data access object used by 'BasicExample'. Abstraction over some
- * common CockroachDB operations, including:
- *
- * - Auto-handling transaction retries in the 'runSQL' method
- *
- * - Example of bulk inserts in the 'bulkInsertRandomAccountData'
- *   method
- */
-
-class BasicExampleDAO {
-
-    private static final int MAX_RETRY_COUNT = 3;
-    private static final String RETRY_SQL_STATE = "40001";
-    private static final boolean FORCE_RETRY = false;
-
-    private final DataSource ds;
-
-    private final Random rand = new Random();
-
-    BasicExampleDAO(DataSource ds) {
-        this.ds = ds;
-    }
-
-    /**
-      Used to test the retry logic in 'runSQL'. It is not necessary
-      in production code. Note that this calls an internal
-      CockroachDB function that can only be run by the 'root' user,
-      and will fail with an insufficient privileges error if you try
-      to run it as user 'maxroach'.
-    */
-    void testRetryHandling() {
-        if (this.FORCE_RETRY) {
-            runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)");
-        }
-    }
-
-    /**
-     * Run SQL code in a way that automatically handles the
-     * transaction retry logic so we do not have to duplicate it in
-     * various places.
-     *
-     * @param sqlCode a String containing the SQL code you want to
-     * execute. Can have placeholders, e.g., "INSERT INTO accounts
-     * (id, balance) VALUES (?, ?)".
-     *
-     * @param args String Varargs to fill in the SQL code's
-     * placeholders.
-     * @return Integer Number of rows updated, or -1 if an error is thrown.
-     */
-    public Integer runSQL(String sqlCode, String... args) {
-
-        // This block is only used to emit class and method names in
-        // the program output. It is not necessary in production
-        // code.
-        StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace();
-        StackTraceElement elem = stacktrace[2];
-        String callerClass = elem.getClassName();
-        String callerMethod = elem.getMethodName();
-
-        int rv = 0;
-
-        try (Connection connection = ds.getConnection()) {
-
-            // We're managing the commit lifecycle ourselves so we can
-            // automatically issue transaction retries.
- connection.setAutoCommit(false); - - int retryCount = 0; - - while (retryCount <= MAX_RETRY_COUNT) { - - if (retryCount == MAX_RETRY_COUNT) { - String err = String.format("hit max of %s retries, aborting", MAX_RETRY_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryHandling()'. - if (FORCE_RETRY) { - forceRetry(connection); // SELECT 1 - } - - try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) { - - // Loop over the args and insert them into the - // prepared statement based on their types. In - // this simple example we classify the argument - // types as "integers" and "everything else" - // (a.k.a. strings). - for (int i=0; i %10s\n", name, val); - } - } - } - } else { - int updateCount = pstmt.getUpdateCount(); - rv += updateCount; - - // This printed output is for debugging and/or demonstration - // purposes only. It would not be necessary in production code. - System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt); - } - - connection.commit(); - break; - - } catch (SQLException e) { - - if (RETRY_SQL_STATE.equals(e.getSQLState())) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a - // little before trying again. Each time - // through the loop we sleep for a little - // longer than the last time - // (A.K.A. exponential backoff). - System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), retryCount); - connection.rollback(); - retryCount++; - int sleepMillis = (int)(Math.pow(2, retryCount) * 100) + rand.nextInt(100); - System.out.printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // Necessary to allow the Thread.sleep() - // above so the retry loop can continue. - } - - rv = -1; - } else { - rv = -1; - throw e; - } - } - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - rv = -1; - } - - return rv; - } - - /** - * Helper method called by 'testRetryHandling'. It simply issues - * a "SELECT 1" inside the transaction to force a retry. This is - * necessary to take the connection's session out of the AutoRetry - * state, since otherwise the other statements in the session will - * be retried automatically, and the client (us) will not see a - * retry error. Note that this information is taken from the - * following test: - * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry - * - * @param connection Connection - */ - private void forceRetry(Connection connection) throws SQLException { - try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){ - statement.executeQuery(); - } - } - - /** - * Creates a fresh, empty accounts table in the database. - */ - public void createAccounts() { - runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))"); - }; - - /** - * Update accounts by passing in a Map of (ID, Balance) pairs. 
- * - * @param accounts (Map) - * @return The number of updated accounts (int) - */ - public int updateAccounts(Map accounts) { - int rows = 0; - for (Map.Entry account : accounts.entrySet()) { - - String k = account.getKey(); - String v = account.getValue(); - - String[] args = {k, v}; - rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args); - } - return rows; - } - - /** - * Transfer funds between one account and another. Handles - * transaction retries in case of conflict automatically on the - * backend. - * @param fromId (int) - * @param toId (int) - * @param amount (int) - * @return The number of updated accounts (int) - */ - public int transferFunds(int fromId, int toId, int amount) { - String sFromId = Integer.toString(fromId); - String sToId = Integer.toString(toId); - String sAmount = Integer.toString(amount); - - // We have omitted explicit BEGIN/COMMIT statements for - // brevity. Individual statements are treated as implicit - // transactions by CockroachDB (see - // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements). - - String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" + - "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," + - "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))"; - - return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount); - } - - /** - * Get the account balance for one account. - * - * We skip using the retry logic in 'runSQL()' here for the - * following reasons: - * - * 1. Since this is a single read ("SELECT"), we do not expect any - * transaction conflicts to handle - * - * 2. We need to return the balance as an integer - * - * @param id (int) - * @return balance (int) - */ - public int getAccountBalance(int id) { - int balance = 0; - - try (Connection connection = ds.getConnection()) { - - // Check the current balance. - ResultSet res = connection.createStatement() - .executeQuery("SELECT balance FROM accounts WHERE id = " - + id); - if(!res.next()) { - System.out.printf("No users in the table with id %i", id); - } else { - balance = res.getInt("balance"); - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - - return balance; - } - - /** - * Insert randomized account data (ID, balance) using the JDBC - * fast path for bulk inserts. The fastest way to get data into - * CockroachDB is the IMPORT statement. However, if you must bulk - * ingest from the application using INSERT statements, the best - * option is the method shown here. It will require the following: - * - * 1. Add `rewriteBatchedInserts=true` to your JDBC connection - * settings (see the connection info in 'BasicExample.main'). - * - * 2. Inserting in batches of 128 rows, as used inside this method - * (see BATCH_SIZE), since the PGJDBC driver's logic works best - * with powers of two, such that a batch of size 128 can be 6x - * faster than a batch of size 250. - * @return The number of new accounts inserted (int) - */ - public int bulkInsertRandomAccountData() { - - Random random = new Random(); - int BATCH_SIZE = 128; - int totalNewAccounts = 0; - - try (Connection connection = ds.getConnection()) { - - // We're managing the commit lifecycle ourselves so we can - // control the size of our batch inserts. - connection.setAutoCommit(false); - - // In this example we are adding 500 rows to the database, - // but it could be any number. 
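Stepping back to `transferFunds` above: because the whole transfer is a single UPSERT, CockroachDB executes it as one implicit, atomic transaction. Here is a hedged Go sketch of the same idea, assuming the `lib/pq` stack used elsewhere in this diff:

~~~ go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

// transferFunds runs the whole transfer as one UPSERT, so CockroachDB
// treats it as a single implicit, atomic transaction and no explicit
// BEGIN/COMMIT or client-side retry loop is needed.
func transferFunds(db *sql.DB, from, to, amount int) error {
	const q = `UPSERT INTO accounts (id, balance) VALUES
	    ($1, ((SELECT balance FROM accounts WHERE id = $1) - $2)),
	    ($3, ((SELECT balance FROM accounts WHERE id = $3) + $2))`
	_, err := db.Exec(q, from, amount, to)
	return err
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := transferFunds(db, 1, 2, 100); err != nil {
		log.Fatal(err)
	}
}
~~~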
What's important is that - // the batch size is 128. - try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) { - for (int i=0; i<=(500/BATCH_SIZE);i++) { - for (int j=0; j %s row(s) updated in this batch\n", count.length); - } - connection.commit(); - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - return totalNewAccounts; - } - - /** - * Read out a subset of accounts from the data store. - * - * @param limit (int) - * @return Number of accounts read (int) - */ - public int readAccounts(int limit) { - return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit)); - } - - /** - * Perform any necessary cleanup of the data store so it can be - * used again. - */ - public void tearDown() { - runSQL("DROP TABLE accounts;"); - } -} diff --git a/src/current/_includes/v19.1/app/activerecord-basic-sample.rb b/src/current/_includes/v19.1/app/activerecord-basic-sample.rb deleted file mode 100644 index f1d35e1de3a..00000000000 --- a/src/current/_includes/v19.1/app/activerecord-basic-sample.rb +++ /dev/null @@ -1,48 +0,0 @@ -require 'active_record' -require 'activerecord-cockroachdb-adapter' -require 'pg' - -# Connect to CockroachDB through ActiveRecord. -# In Rails, this configuration would go in config/database.yml as usual. -ActiveRecord::Base.establish_connection( - adapter: 'cockroachdb', - username: 'maxroach', - database: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'require', - sslrootcert: 'certs/ca.crt', - sslkey: 'certs/client.maxroach.key', - sslcert: 'certs/client.maxroach.crt' -) - - -# Define the Account model. -# In Rails, this would go in app/models/ as usual. -class Account < ActiveRecord::Base - validates :id, presence: true - validates :balance, presence: true -end - -# Define a migration for the accounts table. -# In Rails, this would go in db/migrate/ as usual. -class Schema < ActiveRecord::Migration[5.0] - def change - create_table :accounts, force: true do |t| - t.integer :balance - end - end -end - -# Run the schema migration by hand. -# In Rails, this would be done via rake db:migrate as usual. -Schema.new.change() - -# Create two accounts, inserting two rows into the accounts table. -Account.create(id: 1, balance: 1000) -Account.create(id: 2, balance: 250) - -# Retrieve accounts and print out the balances -Account.all.each do |acct| - puts "#{acct.id} #{acct.balance}" -end diff --git a/src/current/_includes/v19.1/app/basic-sample.c b/src/current/_includes/v19.1/app/basic-sample.c deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/src/current/_includes/v19.1/app/basic-sample.clj b/src/current/_includes/v19.1/app/basic-sample.clj deleted file mode 100644 index 10c98fff2ba..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.clj +++ /dev/null @@ -1,35 +0,0 @@ -(ns test.test - (:require [clojure.java.jdbc :as j] - [test.util :as util])) - -;; Define the connection parameters to the cluster. 
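Returning to the bulk-insert discussion in the Java example above: `rewriteBatchedInserts` and the 128-row batch size are JDBC-specific tuning. With `lib/pq`, a common in-application bulk path is COPY via `pq.CopyIn`; the sketch below assumes CockroachDB's COPY FROM support and, like the Java version, generates random IDs that could in principle collide:

~~~ go
package main

import (
	"database/sql"
	"log"
	"math/rand"

	"github.com/lib/pq"
)

// bulkInsertRandomAccountData streams rows through COPY inside one
// transaction; the final empty Exec flushes the COPY buffer.
func bulkInsertRandomAccountData(db *sql.DB, rows int) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := txn.Prepare(pq.CopyIn("accounts", "id", "balance"))
	if err != nil {
		txn.Rollback()
		return err
	}
	for i := 0; i < rows; i++ {
		// Random IDs, as in the Java version; collisions are possible.
		if _, err := stmt.Exec(rand.Int31(), rand.Int31n(10000)); err != nil {
			txn.Rollback()
			return err
		}
	}
	if _, err := stmt.Exec(); err != nil { // flush
		txn.Rollback()
		return err
	}
	if err := stmt.Close(); err != nil {
		txn.Rollback()
		return err
	}
	return txn.Commit()
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := bulkInsertRandomAccountData(db, 500); err != nil {
		log.Fatal(err)
	}
}
~~~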
-(def db-spec {:dbtype "postgresql" - :dbname "bank" - :host "localhost" - :port "26257" - :ssl true - :sslmode "require" - :sslcert "certs/client.maxroach.crt" - :sslkey "certs/client.maxroach.key.pk8" - :user "maxroach"}) - -(defn test-basic [] - ;; Connect to the cluster and run the code below with - ;; the connection object bound to 'conn'. - (j/with-db-connection [conn db-spec] - - ;; Insert two rows into the "accounts" table. - (j/insert! conn :accounts {:id 1 :balance 1000}) - (j/insert! conn :accounts {:id 2 :balance 250}) - - ;; Print out the balances. - (println "Initial balances:") - (->> (j/query conn ["SELECT id, balance FROM accounts"]) - (map println) - doall) - - )) - - -(defn -main [& args] - (test-basic)) diff --git a/src/current/_includes/v19.1/app/basic-sample.cpp b/src/current/_includes/v19.1/app/basic-sample.cpp deleted file mode 100644 index 67b6c1d1062..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.cpp +++ /dev/null @@ -1,39 +0,0 @@ -#include -#include -#include -#include -#include -#include - -using namespace std; - -int main() { - try { - // Connect to the "bank" database. - pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost"); - - pqxx::nontransaction w(c); - - // Create the "accounts" table. - w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); - - // Insert two rows into the "accounts" table. - w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); - - // Print out the balances. - cout << "Initial balances:" << endl; - pqxx::result r = w.exec("SELECT id, balance FROM accounts"); - for (auto row : r) { - cout << row[0].as() << ' ' << row[1].as() << endl; - } - - w.commit(); // Note this doesn't doesn't do anything - // for a nontransaction, but is still required. - } - catch (const exception &e) { - cerr << e.what() << endl; - return 1; - } - cout << "Success" << endl; - return 0; -} diff --git a/src/current/_includes/v19.1/app/basic-sample.cs b/src/current/_includes/v19.1/app/basic-sample.cs deleted file mode 100644 index ffedb0cd210..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.cs +++ /dev/null @@ -1,101 +0,0 @@ -using System; -using System.Data; -using System.Security.Cryptography.X509Certificates; -using System.Net.Security; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "localhost"; - connStringBuilder.Port = 26257; - connStringBuilder.SslMode = SslMode.Require; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Database = "bank"; - Simple(connStringBuilder.ConnectionString); - } - - static void Simple(string connString) - { - using (var conn = new NpgsqlConnection(connString)) - { - conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback; - conn.UserCertificateValidationCallback += UserCertificateValidationCallback; - conn.Open(); - - // Create the "accounts" table. - new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery(); - - // Insert two rows into the "accounts" table. 
- using (var cmd = new NpgsqlCommand()) - { - cmd.Connection = conn; - cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)"; - cmd.Parameters.AddWithValue("id1", 1); - cmd.Parameters.AddWithValue("val1", 1000); - cmd.Parameters.AddWithValue("id2", 2); - cmd.Parameters.AddWithValue("val2", 250); - cmd.ExecuteNonQuery(); - } - - // Print out the balances. - System.Console.WriteLine("Initial balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - } - } - - static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts) - { - // To be able to add a certificate with a private key included, we must convert it to - // a PKCS #12 format. The following openssl command does this: - // openssl pkcs12 -password pass: -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx - // As of 2018-12-10, you need to provide a password for this to work on macOS. - // See https://github.com/dotnet/corefx/issues/24225 - - // Note that the password used during X509 cert creation below - // must match the password used in the openssl command above. - clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass")); - } - - // By default, .Net does all of its certificate verification using the system certificate store. - // This callback is necessary to validate the server certificate against a CA certificate file. - static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors) - { - X509Certificate2 caCert = new X509Certificate2("ca.crt"); - X509Chain caCertChain = new X509Chain(); - caCertChain.ChainPolicy = new X509ChainPolicy() - { - RevocationMode = X509RevocationMode.NoCheck, - RevocationFlag = X509RevocationFlag.EntireChain - }; - caCertChain.ChainPolicy.ExtraStore.Add(caCert); - - X509Certificate2 serverCert = new X509Certificate2(certificate); - - caCertChain.Build(serverCert); - if (caCertChain.ChainStatus.Length == 0) - { - // No errors - return true; - } - - foreach (X509ChainStatus status in caCertChain.ChainStatus) - { - // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store) - if (status.Status != X509ChainStatusFlags.UntrustedRoot) - { - return false; - } - } - return true; - } - - } -} diff --git a/src/current/_includes/v19.1/app/basic-sample.go b/src/current/_includes/v19.1/app/basic-sample.go deleted file mode 100644 index 6e22c858dbb..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.go +++ /dev/null @@ -1,46 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - // Connect to the "bank" database. - db, err := sql.Open("postgres", - "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - defer db.Close() - - // Create the "accounts" table. - if _, err := db.Exec( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil { - log.Fatal(err) - } - - // Insert two rows into the "accounts" table. 
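The two Npgsql callbacks above exist because the client certificate and the CA check are wired up by hand. For comparison, here is a sketch of how the Go samples in this diff could get equivalent checks from `lib/pq` connection-string parameters alone; `sslmode=verify-full` is an assumption here (the samples themselves use `sslmode=require`, which skips hostname verification):

~~~ go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	// verify-full asks the driver to check the server certificate
	// against ca.crt and to verify the hostname, roughly what the C#
	// callbacks above do by hand.
	db, err := sql.Open("postgres",
		"postgresql://maxroach@localhost:26257/bank?sslmode=verify-full&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open is lazy; Ping forces a real, TLS-verified connection.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected")
}
~~~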
- if _, err := db.Exec( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil { - log.Fatal(err) - } - - // Print out the balances. - rows, err := db.Query("SELECT id, balance FROM accounts") - if err != nil { - log.Fatal(err) - } - defer rows.Close() - fmt.Println("Initial balances:") - for rows.Next() { - var id, balance int - if err := rows.Scan(&id, &balance); err != nil { - log.Fatal(err) - } - fmt.Printf("%d %d\n", id, balance) - } -} diff --git a/src/current/_includes/v19.1/app/basic-sample.js b/src/current/_includes/v19.1/app/basic-sample.js deleted file mode 100644 index 4e86cb2cbca..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.js +++ /dev/null @@ -1,63 +0,0 @@ -var async = require('async'); -var fs = require('fs'); -var pg = require('pg'); - -// Connect to the "bank" database. -var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257, - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } -}; - -// Create a pool. -var pool = new pg.Pool(config); - -pool.connect(function (err, client, done) { - - // Close communication with the database and exit. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - async.waterfall([ - function (next) { - // Create the 'accounts' table. - client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next); - }, - function (results, next) { - // Insert two rows into the 'accounts' table. - client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next); - }, - function (results, next) { - // Print out account balances. - client.query('SELECT id, balance FROM accounts;', next); - }, - ], - function (err, results) { - if (err) { - console.error('Error inserting into and selecting from accounts: ', err); - finish(); - } - - console.log('Initial balances:'); - results.rows.forEach(function (row) { - console.log(row); - }); - - finish(); - }); -}); diff --git a/src/current/_includes/v19.1/app/basic-sample.php b/src/current/_includes/v19.1/app/basic-sample.php deleted file mode 100644 index 4edae09b12a..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.php +++ /dev/null @@ -1,20 +0,0 @@ - PDO::ERRMODE_EXCEPTION, - PDO::ATTR_EMULATE_PREPARES => true, - PDO::ATTR_PERSISTENT => true - )); - - $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)'); - - print "Account balances:\r\n"; - foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) { - print $row['id'] . ': ' . $row['balance'] . "\r\n"; - } -} catch (Exception $e) { - print $e->getMessage() . 
"\r\n"; - exit(1); -} -?> diff --git a/src/current/_includes/v19.1/app/basic-sample.py b/src/current/_includes/v19.1/app/basic-sample.py deleted file mode 100644 index 6d3314baf4b..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.py +++ /dev/null @@ -1,154 +0,0 @@ -#!/usr/bin/env python3 - -import psycopg2 -import psycopg2.errorcodes -import time -import logging -import random - - -def create_accounts(conn): - with conn.cursor() as cur: - cur.execute('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)') - cur.execute('UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)') - logging.debug("create_accounts(): status message: {}".format(cur.statusmessage)) - conn.commit() - - -def print_balances(conn): - with conn.cursor() as cur: - cur.execute("SELECT id, balance FROM accounts") - logging.debug("print_balances(): status message: {}".format(cur.statusmessage)) - rows = cur.fetchall() - conn.commit() - print("Balances at {}".format(time.asctime())) - for row in rows: - print([str(cell) for cell in row]) - - -def delete_accounts(conn): - with conn.cursor() as cur: - cur.execute("DELETE FROM bank.accounts") - logging.debug("delete_accounts(): status message: {}".format(cur.statusmessage)) - conn.commit() - - -# Wrapper for a transaction. -# This automatically re-calls "op" with the open transaction as an argument -# as long as the database server asks for the transaction to be retried. -def run_transaction(conn, op): - retries = 0 - max_retries = 3 - with conn: - while True: - retries +=1 - if retries == max_retries: - err_msg = "Transaction did not succeed after {} retries".format(max_retries) - raise ValueError(err_msg) - - try: - op(conn) - - # If we reach this point, we were able to commit, so we break - # from the retry loop. - break - except psycopg2.Error as e: - logging.debug("e.pgcode: {}".format(e.pgcode)) - if e.pgcode == '40001': - # This is a retry error, so we roll back the current - # transaction and sleep for a bit before retrying. The - # sleep time increases for each failed transaction. - conn.rollback() - logging.debug("EXECUTE SERIALIZATION_FAILURE BRANCH") - sleep_ms = (2**retries) * 0.1 * (random.random() + 0.5) - logging.debug("Sleeping {} seconds".format(sleep_ms)) - time.sleep(sleep_ms) - continue - else: - logging.debug("EXECUTE NON-SERIALIZATION_FAILURE BRANCH") - raise e - - -# This function is used to test the transaction retry logic. It can be deleted -# from production code. -def test_retry_loop(conn): - with conn.cursor() as cur: - # The first statement in a transaction can be retried transparently on - # the server, so we need to add a placeholder statement so that our - # force_retry() statement isn't the first one. - cur.execute('SELECT now()') - # The function below can only be run by the root user. Trying to run - # it as user 'maxroach' will fail with an error. - cur.execute("SELECT crdb_internal.force_retry('1s'::INTERVAL)") - logging.debug("test_retry_loop(): status message: {}".format(cur.statusmessage)) - - -def transfer_funds(conn, frm, to, amount): - with conn.cursor() as cur: - - # Check the current balance. - cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm)) - from_balance = cur.fetchone()[0] - if from_balance < amount: - err_msg = "Insufficient funds in account {}: have {}, need {}".format(frm, from_balance, amount) - raise RuntimeError(err_msg) - - # Perform the transfer. 
- cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s", - (amount, frm)) - cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s", - (amount, to)) - conn.commit() - logging.debug("transfer_funds(): status message: {}".format(cur.statusmessage)) - - -def main(): - - conn = psycopg2.connect( - database='bank', - user='maxroach', - sslmode='require', - sslrootcert='certs/ca.crt', - sslkey='certs/client.maxroach.key', - sslcert='certs/client.maxroach.crt', - port=26257, - host='localhost' - ) - - # Uncomment the below to turn on logging to the console. This was useful - # when testing transaction retry handling. It is not necessary for - # production code. - # log_level = getattr(logging, 'DEBUG', None) - # logging.basicConfig(level=log_level) - - create_accounts(conn) - - print_balances(conn) - - amount = 100 - fromId = 1 - toId = 2 - - try: - run_transaction(conn, lambda conn: transfer_funds(conn, fromId, toId, amount)) - - # The function below is used to test the transaction retry logic. It - # can be deleted from production code. - # run_transaction(conn, lambda conn: test_retry_loop(conn)) - except ValueError as ve: - # Below, we print the error and continue on so this example is easy to - # run (and run, and run...). In real code you should handle this error - # and any others thrown by the database interaction. - logging.debug("run_transaction(conn, op) failed: {}".format(ve)) - pass - - print_balances(conn) - - delete_accounts(conn) - - # Close communication with the database. - conn.close() - - -if __name__ == '__main__': - main() diff --git a/src/current/_includes/v19.1/app/basic-sample.rb b/src/current/_includes/v19.1/app/basic-sample.rb deleted file mode 100644 index 93f0dc3d20c..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.rb +++ /dev/null @@ -1,31 +0,0 @@ -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'require', - sslrootcert: 'certs/ca.crt', - sslkey:'certs/client.maxroach.key', - sslcert:'certs/client.maxroach.crt' -) - -# Create the "accounts" table. -conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)') - -# Insert two rows into the "accounts" table. -conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)') - -# Print out the balances. -puts 'Initial balances:' -conn.exec('SELECT id, balance FROM accounts') do |res| - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() diff --git a/src/current/_includes/v19.1/app/basic-sample.rs b/src/current/_includes/v19.1/app/basic-sample.rs deleted file mode 100644 index 4a078991cd8..00000000000 --- a/src/current/_includes/v19.1/app/basic-sample.rs +++ /dev/null @@ -1,45 +0,0 @@ -use openssl::error::ErrorStack; -use openssl::ssl::{SslConnector, SslFiletype, SslMethod}; -use postgres::Client; -use postgres_openssl::MakeTlsConnector; - -fn ssl_config() -> Result { - let mut builder = SslConnector::builder(SslMethod::tls())?; - builder.set_ca_file("certs/ca.crt")?; - builder.set_certificate_chain_file("certs/client.maxroach.crt")?; - builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?; - Ok(MakeTlsConnector::new(builder.build())) -} - -fn main() { - let connector = ssl_config().unwrap(); - let mut client = - Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap(); - - // Create the "accounts" table. 
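One more note on the Python sample above: its `test_retry_loop` relies on the fact that the first statement in a transaction can be retried transparently on the server, so it issues a placeholder statement before `crdb_internal.force_retry`. Below is a hedged Go sketch of the same testing trick; the root connection string is illustrative, and `force_retry` can only be run by the `root` user:

~~~ go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

// testRetryLoop forces a retryable error; for testing only.
func testRetryLoop(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if the transaction already finished

	// Placeholder statement, so that force_retry is not the first
	// (transparently retryable) statement in the transaction.
	if _, err := tx.Exec("SELECT now()"); err != nil {
		return err
	}
	// Expected to fail with SQLSTATE 40001.
	if _, err := tx.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)"); err != nil {
		return err
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.root.key&sslcert=certs/client.root.crt")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := testRetryLoop(db); err != nil {
		log.Println("got expected retry error:", err)
	}
}
~~~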
- client - .execute( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", - &[], - ) - .unwrap(); - - // Insert two rows into the "accounts" table. - client - .execute( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)", - &[], - ) - .unwrap(); - - // Print out the balances. - println!("Initial balances:"); - for row in &client - .query("SELECT id, balance FROM accounts", &[]) - .unwrap() - { - let id: i64 = row.get(0); - let balance: i64 = row.get(1); - println!("{} {}", id, balance); - } -} diff --git a/src/current/_includes/v19.1/app/before-you-begin.md b/src/current/_includes/v19.1/app/before-you-begin.md deleted file mode 100644 index dfb97226414..00000000000 --- a/src/current/_includes/v19.1/app/before-you-begin.md +++ /dev/null @@ -1,8 +0,0 @@ -1. [Install CockroachDB](install-cockroachdb.html). -2. Start up a [secure](secure-a-cluster.html) or [insecure](start-a-local-cluster.html) local cluster. -3. Choose the instructions that correspond to whether your cluster is secure or insecure: - -
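For reference, the secure and insecure connection strings that the Go samples in this diff use for the two cluster types are collected below; the `-insecure` flag is an illustrative convenience, not part of any sample:

~~~ go
package main

import (
	"database/sql"
	"flag"
	"log"

	_ "github.com/lib/pq"
)

// The two connection strings used by the Go samples in this diff,
// one per cluster type from the checklist above.
const (
	secureDSN   = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt"
	insecureDSN = "postgresql://maxroach@localhost:26257/bank?sslmode=disable"
)

func main() {
	insecure := flag.Bool("insecure", false, "connect to an insecure local cluster")
	flag.Parse()

	dsn := secureDSN
	if *insecure {
		dsn = insecureDSN
	}
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// sql.Open is lazy; Ping forces a real connection.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected")
}
~~~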
diff --git a/src/current/_includes/v19.1/app/create-maxroach-user-and-bank-database.md b/src/current/_includes/v19.1/app/create-maxroach-user-and-bank-database.md deleted file mode 100644 index e887162f380..00000000000 --- a/src/current/_includes/v19.1/app/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v19.1/app/gorm-sample.go b/src/current/_includes/v19.1/app/gorm-sample.go deleted file mode 100644 index 2fa4f082cc1..00000000000 --- a/src/current/_includes/v19.1/app/gorm-sample.go +++ /dev/null @@ -1,207 +0,0 @@ -package main - -import ( - "fmt" - "log" - "math" - "math/rand" - "time" - - // Import GORM-related packages. - "github.com/jinzhu/gorm" - _ "github.com/jinzhu/gorm/dialects/postgres" - - // Necessary in order to check for transaction retry error codes. - "github.com/lib/pq" -) - -// Account is our model, which corresponds to the "accounts" database -// table. -type Account struct { - ID int `gorm:"primary_key"` - Balance int -} - -// Functions of type `txnFunc` are passed as arguments to our -// `runTransaction` wrapper that handles transaction retries for us -// (see implementation below). -type txnFunc func(*gorm.DB) error - -// This function is used for testing the transaction retry loop. It -// can be deleted from production code. -var forceRetryLoop txnFunc = func(db *gorm.DB) error { - - // The first statement in a transaction can be retried transparently - // on the server, so we need to add a placeholder statement so that our - // force_retry statement isn't the first one. - if err := db.Exec("SELECT now()").Error; err != nil { - return err - } - // Used to force a transaction retry. Can only be run as the - // 'root' user. - if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil { - return err - } - return nil -} - -func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error { - var fromAccount Account - var toAccount Account - - db.First(&fromAccount, fromID) - db.First(&toAccount, toID) - - if fromAccount.Balance < amount { - return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount) - } - - fromAccount.Balance -= amount - toAccount.Balance += amount - - if err := db.Save(&fromAccount).Error; err != nil { - return err - } - if err := db.Save(&toAccount).Error; err != nil { - return err - } - return nil -} - -func main() { - // Connect to the "bank" database as the "maxroach" user. - const addr = "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt" - db, err := gorm.Open("postgres", addr) - if err != nil { - log.Fatal(err) - } - defer db.Close() - - // Set to `true` and GORM will print out all DB queries. 
- db.LogMode(false) - - // Automatically create the "accounts" table based on the Account - // model. - db.AutoMigrate(&Account{}) - - // Insert two rows into the "accounts" table. - var fromID = 1 - var toID = 2 - db.Create(&Account{ID: fromID, Balance: 1000}) - db.Create(&Account{ID: toID, Balance: 250}) - - // The sequence of steps in this section is: - // 1. Print account balances. - // 2. Set up some Accounts and transfer funds between them inside - // a transaction. - // 3. Print account balances again to verify the transfer occurred. - - // Print balances before transfer. - printBalances(db) - - // The amount to be transferred between the accounts. - var amount = 100 - - // Transfer funds between accounts. To handle potential - // transaction retry errors, we wrap the call to `transferFunds` - // in `runTransaction`, a wrapper which implements a retry loop - // with exponential backoff around our access to the database (see - // the implementation for details). - if err := runTransaction(db, - func(*gorm.DB) error { - return transferFunds(db, fromID, toID, amount) - }, - ); err != nil { - // If the error is returned, it's either: - // 1. Not a transaction retry error, i.e., some other kind - // of database error that you should handle here. - // 2. A transaction retry error that has occurred more than - // N times (defined by the `maxRetries` variable inside - // `runTransaction`), in which case you will need to figure - // out why your database access is resulting in so much - // contention (see 'Understanding and avoiding transaction - // contention': - // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) - fmt.Println(err) - } - - // Print balances after transfer to ensure that it worked. - printBalances(db) - - // Delete accounts so we can start fresh when we want to run this - // program again. - deleteAccounts(db) -} - -// Wrapper for a transaction. This automatically re-calls `fn` with -// the open transaction as an argument as long as the database server -// asks for the transaction to be retried. -func runTransaction(db *gorm.DB, fn txnFunc) error { - var maxRetries = 3 - for retries := 0; retries <= maxRetries; retries++ { - if retries == maxRetries { - return fmt.Errorf("hit max of %d retries, aborting", retries) - } - txn := db.Begin() - if err := fn(txn); err != nil { - // We need to cast GORM's db.Error to *pq.Error so we can - // detect the Postgres transaction retry error code and - // handle retries appropriately. - pqErr := err.(*pq.Error) - if pqErr.Code == "40001" { - // Since this is a transaction retry error, we - // ROLLBACK the transaction and sleep a little before - // trying again. Each time through the loop we sleep - // for a little longer than the last time - // (A.K.A. exponential backoff). - txn.Rollback() - var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5) - fmt.Printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMs) - time.Sleep(time.Millisecond * time.Duration(sleepMs)) - } else { - // If it's not a retry error, it's some other sort of - // DB interaction error that needs to be handled by - // the caller. - return err - } - } else { - // All went well, so we try to commit and break out of the - // retry loop if possible. 
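One caveat about the error handling in `runTransaction`: both of its type assertions use the single-value form `err.(*pq.Error)`, which panics if the error is not a `*pq.Error` (for example, a dropped connection). Here is a defensive variant using the comma-ok form:

~~~ go
package main

import (
	"fmt"

	"github.com/lib/pq"
)

// isRetryableErr uses the comma-ok form of the assertion, so a
// non-pq error simply reports false instead of panicking.
func isRetryableErr(err error) bool {
	pqErr, ok := err.(*pq.Error)
	return ok && pqErr.Code == "40001"
}

func main() {
	var err error = &pq.Error{Code: "40001"}
	fmt.Println(isRetryableErr(err))                          // true
	fmt.Println(isRetryableErr(fmt.Errorf("not a pq error"))) // false
}
~~~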
- if err := txn.Commit().Error; err != nil { - pqErr := err.(*pq.Error) - if pqErr.Code == "40001" { - // However, our attempt to COMMIT could also - // result in a retry error, in which case we - // continue back through the loop and try again. - continue - } else { - // If it's not a retry error, it's some other sort - // of DB interaction error that needs to be - // handled by the caller. - return err - } - } - break - } - } - return nil -} - -func printBalances(db *gorm.DB) { - var accounts []Account - db.Find(&accounts) - fmt.Printf("Balance at '%s':\n", time.Now()) - for _, account := range accounts { - fmt.Printf("%d %d\n", account.ID, account.Balance) - } -} - -func deleteAccounts(db *gorm.DB) error { - // Used to tear down the accounts table so we can re-run this - // program. - err := db.Exec("DELETE from accounts where ID > 0").Error - if err != nil { - return err - } - return nil -} diff --git a/src/current/_includes/v19.1/app/hibernate-basic-sample/Sample.java b/src/current/_includes/v19.1/app/hibernate-basic-sample/Sample.java deleted file mode 100644 index 60a6b54f984..00000000000 --- a/src/current/_includes/v19.1/app/hibernate-basic-sample/Sample.java +++ /dev/null @@ -1,236 +0,0 @@ -package com.cockroachlabs; - -import org.hibernate.Session; -import org.hibernate.SessionFactory; -import org.hibernate.Transaction; -import org.hibernate.JDBCException; -import org.hibernate.cfg.Configuration; - -import java.util.*; -import java.util.function.Function; - -import javax.persistence.Column; -import javax.persistence.Entity; -import javax.persistence.Id; -import javax.persistence.Table; - -public class Sample { - - private static final Random RAND = new Random(); - private static final boolean FORCE_RETRY = false; - private static final String RETRY_SQL_STATE = "40001"; - private static final int MAX_ATTEMPT_COUNT = 6; - - // Account is our model, which corresponds to the "accounts" database table. - @Entity - @Table(name="accounts") - public static class Account { - @Id - @Column(name="id") - public long id; - - public long getId() { - return id; - } - - @Column(name="balance") - public long balance; - public long getBalance() { - return balance; - } - public void setBalance(long newBalance) { - this.balance = newBalance; - } - - // Convenience constructor. - public Account(int id, int balance) { - this.id = id; - this.balance = balance; - } - - // Hibernate needs a default (no-arg) constructor to create model objects. - public Account() {} - } - - private static Function addAccounts() throws JDBCException{ - Function f = s -> { - long rv = 0; - try { - s.save(new Account(1, 1000)); - s.save(new Account(2, 250)); - s.save(new Account(3, 314159)); - rv = 1; - System.out.printf("APP: addAccounts() --> %d\n", rv); - } catch (JDBCException e) { - throw e; - } - return rv; - }; - return f; - } - - private static Function transferFunds(long fromId, long toId, long amount) throws JDBCException{ - Function f = s -> { - long rv = 0; - try { - Account fromAccount = (Account) s.get(Account.class, fromId); - Account toAccount = (Account) s.get(Account.class, toId); - if (!(amount > fromAccount.getBalance())) { - fromAccount.balance -= amount; - toAccount.balance += amount; - s.save(fromAccount); - s.save(toAccount); - rv = amount; - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv); - } - } catch (JDBCException e) { - throw e; - } - return rv; - }; - return f; - } - - // Test our retry handling logic if FORCE_RETRY is true. 
This - // method is only used to test the retry logic. It is not - // intended for production code. - private static Function forceRetryLogic() throws JDBCException { - Function f = s -> { - long rv = -1; - try { - System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n"); - s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate(); - } catch (JDBCException e) { - System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n"); - throw e; - } - return rv; - }; - return f; - } - - private static Function getAccountBalance(long id) throws JDBCException{ - Function f = s -> { - long balance; - try { - Account account = s.get(Account.class, id); - balance = account.getBalance(); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance); - } catch (JDBCException e) { - throw e; - } - return balance; - }; - return f; - } - - // Run SQL code in a way that automatically handles the - // transaction retry logic so we do not have to duplicate it in - // various places. - private static long runTransaction(Session session, Function fn) { - long rv = 0; - int attemptCount = 0; - - while (attemptCount < MAX_ATTEMPT_COUNT) { - attemptCount++; - - if (attemptCount > 1) { - System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount); - } - - Transaction txn = session.beginTransaction(); - System.out.printf("APP: BEGIN;\n"); - - if (attemptCount == MAX_ATTEMPT_COUNT) { - String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryLogic()'. - if (FORCE_RETRY) { - session.createNativeQuery("SELECT now()").list(); - } - - try { - rv = fn.apply(session); - if (rv != -1) { - txn.commit(); - System.out.printf("APP: COMMIT;\n"); - break; - } - } catch (JDBCException e) { - if (RETRY_SQL_STATE.equals(e.getSQLState())) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a little - // before trying again. Each time through the - // loop we sleep for a little longer than the last - // time (A.K.A. exponential backoff). - System.out.printf("APP: retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount); - System.out.printf("APP: ROLLBACK;\n"); - txn.rollback(); - int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100); - System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // no-op - } - rv = -1; - } else { - throw e; - } - } - } - return rv; - } - - public static void main(String[] args) { - // Create a SessionFactory based on our hibernate.cfg.xml configuration - // file, which defines how to connect to the database. 
- SessionFactory sessionFactory = - new Configuration() - .configure("hibernate.cfg.xml") - .addAnnotatedClass(Account.class) - .buildSessionFactory(); - - try (Session session = sessionFactory.openSession()) { - long fromAccountId = 1; - long toAccountId = 2; - long transferAmount = 100; - - if (FORCE_RETRY) { - System.out.printf("APP: About to test retry logic in 'runTransaction'\n"); - runTransaction(session, forceRetryLogic()); - } else { - - runTransaction(session, addAccounts()); - long fromBalance = runTransaction(session, getAccountBalance(fromAccountId)); - long toBalance = runTransaction(session, getAccountBalance(toAccountId)); - if (fromBalance != -1 && toBalance != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance); - } - - // Transfer $100 from account 1 to account 2 - long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount)); - if (transferResult != -1) { - // Success! - System.out.printf("APP: transferFunds(%d, %d, %d) --> %d \n", fromAccountId, toAccountId, transferAmount, transferResult); - - long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId)); - long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId)); - if (fromBalanceAfter != -1 && toBalanceAfter != -1) { - // Success! - System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter); - System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter); - } - } - } - } finally { - sessionFactory.close(); - } - } -} diff --git a/src/current/_includes/v19.1/app/hibernate-basic-sample/build.gradle b/src/current/_includes/v19.1/app/hibernate-basic-sample/build.gradle deleted file mode 100644 index 36f33d73fe6..00000000000 --- a/src/current/_includes/v19.1/app/hibernate-basic-sample/build.gradle +++ /dev/null @@ -1,16 +0,0 @@ -group 'com.cockroachlabs' -version '1.0' - -apply plugin: 'java' -apply plugin: 'application' - -mainClassName = 'com.cockroachlabs.Sample' - -repositories { - mavenCentral() -} - -dependencies { - compile 'org.hibernate:hibernate-core:5.2.4.Final' - compile 'org.postgresql:postgresql:42.2.2.jre7' -} diff --git a/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz deleted file mode 100644 index 3e729bf439e..00000000000 Binary files a/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ diff --git a/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate.cfg.xml deleted file mode 100644 index 454a4950ad0..00000000000 --- a/src/current/_includes/v19.1/app/hibernate-basic-sample/hibernate.cfg.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - - org.postgresql.Driver - org.hibernate.dialect.PostgreSQL95Dialect - - maxroach - - - create - - - true - true - - diff --git a/src/current/_includes/v19.1/app/insecure/BasicExample.java b/src/current/_includes/v19.1/app/insecure/BasicExample.java deleted file mode 100644 index ba3dda26640..00000000000 --- a/src/current/_includes/v19.1/app/insecure/BasicExample.java +++ /dev/null @@ -1,453 +0,0 @@ -import java.util.*; -import java.time.*; -import java.sql.*; -import javax.sql.DataSource; - -import org.postgresql.ds.PGSimpleDataSource; - -/* - 
Download the Postgres JDBC driver jar from https://jdbc.postgresql.org. - - Then, compile and run this example like so: - - $ export CLASSPATH=.:/path/to/postgresql.jar - $ javac BasicExample.java && java BasicExample - - To build the javadoc: - - $ javadoc -package -cp .:./path/to/postgresql.jar BasicExample.java - - At a high level, this code consists of two classes: - - 1. BasicExample, which is where the application logic lives. - - 2. BasicExampleDAO, which is used by the application to access the - data store. - -*/ - -public class BasicExample { - - public static void main(String[] args) { - - // Configure the database connection. - PGSimpleDataSource ds = new PGSimpleDataSource(); - ds.setServerName("localhost"); - ds.setPortNumber(26257); - ds.setDatabaseName("bank"); - ds.setUser("maxroach"); - ds.setPassword(null); - ds.setReWriteBatchedInserts(true); // add `rewriteBatchedInserts=true` to pg connection string - ds.setApplicationName("BasicExample"); - - // Create DAO. - BasicExampleDAO dao = new BasicExampleDAO(ds); - - // Test our retry handling logic if FORCE_RETRY is true. This - // method is only used to test the retry logic. It is not - // necessary in production code. - dao.testRetryHandling(); - - // Set up the 'accounts' table. - dao.createAccounts(); - - // Insert a few accounts "by hand", using INSERTs on the backend. - Map balances = new HashMap(); - balances.put("1", "1000"); - balances.put("2", "250"); - int updatedAccounts = dao.updateAccounts(balances); - System.out.printf("BasicExampleDAO.updateAccounts:\n => %s total updated accounts\n", updatedAccounts); - - // How much money is in these accounts? - int balance1 = dao.getAccountBalance(1); - int balance2 = dao.getAccountBalance(2); - System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2); - - // Transfer $100 from account 1 to account 2 - int fromAccount = 1; - int toAccount = 2; - int transferAmount = 100; - int transferredAccounts = dao.transferFunds(fromAccount, toAccount, transferAmount); - if (transferredAccounts != -1) { - System.out.printf("BasicExampleDAO.transferFunds:\n => $%s transferred between accounts %s and %s, %s rows updated\n", transferAmount, fromAccount, toAccount, transferredAccounts); - } - - balance1 = dao.getAccountBalance(1); - balance2 = dao.getAccountBalance(2); - System.out.printf("main:\n => Account balances at time '%s':\n ID %s => $%s\n ID %s => $%s\n", LocalTime.now(), 1, balance1, 2, balance2); - - // Bulk insertion example using JDBC's batching support. - int totalRowsInserted = dao.bulkInsertRandomAccountData(); - System.out.printf("\nBasicExampleDAO.bulkInsertRandomAccountData:\n => finished, %s total rows inserted\n", totalRowsInserted); - - // Print out 10 account values. - int accountsRead = dao.readAccounts(10); - - // Drop the 'accounts' table so this code can be run again. - dao.tearDown(); - } -} - -/** - * Data access object used by 'BasicExample'. 
Abstraction over some - * common CockroachDB operations, including: - * - * - Auto-handling transaction retries in the 'runSQL' method - * - * - Example of bulk inserts in the 'bulkInsertRandomAccountData' - * method - */ - -class BasicExampleDAO { - - private static final int MAX_RETRY_COUNT = 3; - private static final String RETRY_SQL_STATE = "40001"; - private static final boolean FORCE_RETRY = false; - - private final DataSource ds; - - private final Random rand = new Random(); - - BasicExampleDAO(DataSource ds) { - this.ds = ds; - } - - /** - Used to test the retry logic in 'runSQL'. It is not necessary - in production code. Note that this calls an internal - CockroachDB function that can only be run by the 'root' user, - and will fail with an insufficient privileges error if you try - to run it as user 'maxroach'. - */ - void testRetryHandling() { - if (this.FORCE_RETRY) { - runSQL("SELECT crdb_internal.force_retry('1s':::INTERVAL)"); - } - } - - /** - * Run SQL code in a way that automatically handles the - * transaction retry logic so we do not have to duplicate it in - * various places. - * - * @param sqlCode a String containing the SQL code you want to - * execute. Can have placeholders, e.g., "INSERT INTO accounts - * (id, balance) VALUES (?, ?)". - * - * @param args String Varargs to fill in the SQL code's - * placeholders. - * @return Integer Number of rows updated, or -1 if an error is thrown. - */ - public Integer runSQL(String sqlCode, String... args) { - - // This block is only used to emit class and method names in - // the program output. It is not necessary in production - // code. - StackTraceElement[] stacktrace = Thread.currentThread().getStackTrace(); - StackTraceElement elem = stacktrace[2]; - String callerClass = elem.getClassName(); - String callerMethod = elem.getMethodName(); - - int rv = 0; - - try (Connection connection = ds.getConnection()) { - - // We're managing the commit lifecycle ourselves so we can - // automatically issue transaction retries. - connection.setAutoCommit(false); - - int retryCount = 0; - - while (retryCount <= MAX_RETRY_COUNT) { - - if (retryCount == MAX_RETRY_COUNT) { - String err = String.format("hit max of %s retries, aborting", MAX_RETRY_COUNT); - throw new RuntimeException(err); - } - - // This block is only used to test the retry logic. - // It is not necessary in production code. See also - // the method 'testRetryHandling()'. - if (FORCE_RETRY) { - forceRetry(connection); // SELECT 1 - } - - try (PreparedStatement pstmt = connection.prepareStatement(sqlCode)) { - - // Loop over the args and insert them into the - // prepared statement based on their types. In - // this simple example we classify the argument - // types as "integers" and "everything else" - // (a.k.a. strings). - for (int i=0; i %10s\n", name, val); - } - } - } - } else { - int updateCount = pstmt.getUpdateCount(); - rv += updateCount; - - // This printed output is for debugging and/or demonstration - // purposes only. It would not be necessary in production code. - System.out.printf("\n%s.%s:\n '%s'\n", callerClass, callerMethod, pstmt); - } - - connection.commit(); - break; - - } catch (SQLException e) { - - if (RETRY_SQL_STATE.equals(e.getSQLState())) { - // Since this is a transaction retry error, we - // roll back the transaction and sleep a - // little before trying again. Each time - // through the loop we sleep for a little - // longer than the last time - // (A.K.A. exponential backoff). 
- System.out.printf("retryable exception occurred:\n sql state = [%s]\n message = [%s]\n retry counter = %s\n", e.getSQLState(), e.getMessage(), retryCount); - connection.rollback(); - retryCount++; - int sleepMillis = (int)(Math.pow(2, retryCount) * 100) + rand.nextInt(100); - System.out.printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis); - try { - Thread.sleep(sleepMillis); - } catch (InterruptedException ignored) { - // Necessary to allow the Thread.sleep() - // above so the retry loop can continue. - } - - rv = -1; - } else { - rv = -1; - throw e; - } - } - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.runSQL ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - rv = -1; - } - - return rv; - } - - /** - * Helper method called by 'testRetryHandling'. It simply issues - * a "SELECT 1" inside the transaction to force a retry. This is - * necessary to take the connection's session out of the AutoRetry - * state, since otherwise the other statements in the session will - * be retried automatically, and the client (us) will not see a - * retry error. Note that this information is taken from the - * following test: - * https://github.com/cockroachdb/cockroach/blob/master/pkg/sql/logictest/testdata/logic_test/manual_retry - * - * @param connection Connection - */ - private void forceRetry(Connection connection) throws SQLException { - try (PreparedStatement statement = connection.prepareStatement("SELECT 1")){ - statement.executeQuery(); - } - } - - /** - * Creates a fresh, empty accounts table in the database. - */ - public void createAccounts() { - runSQL("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))"); - }; - - /** - * Update accounts by passing in a Map of (ID, Balance) pairs. - * - * @param accounts (Map) - * @return The number of updated accounts (int) - */ - public int updateAccounts(Map accounts) { - int rows = 0; - for (Map.Entry account : accounts.entrySet()) { - - String k = account.getKey(); - String v = account.getValue(); - - String[] args = {k, v}; - rows += runSQL("INSERT INTO accounts (id, balance) VALUES (?, ?)", args); - } - return rows; - } - - /** - * Transfer funds between one account and another. Handles - * transaction retries in case of conflict automatically on the - * backend. - * @param fromId (int) - * @param toId (int) - * @param amount (int) - * @return The number of updated accounts (int) - */ - public int transferFunds(int fromId, int toId, int amount) { - String sFromId = Integer.toString(fromId); - String sToId = Integer.toString(toId); - String sAmount = Integer.toString(amount); - - // We have omitted explicit BEGIN/COMMIT statements for - // brevity. Individual statements are treated as implicit - // transactions by CockroachDB (see - // https://www.cockroachlabs.com/docs/stable/transactions.html#individual-statements). - - String sqlCode = "UPSERT INTO accounts (id, balance) VALUES" + - "(?, ((SELECT balance FROM accounts WHERE id = ?) - ?))," + - "(?, ((SELECT balance FROM accounts WHERE id = ?) + ?))"; - - return runSQL(sqlCode, sFromId, sFromId, sAmount, sToId, sToId, sAmount); - } - - /** - * Get the account balance for one account. - * - * We skip using the retry logic in 'runSQL()' here for the - * following reasons: - * - * 1. Since this is a single read ("SELECT"), we do not expect any - * transaction conflicts to handle - * - * 2. 
We need to return the balance as an integer - * - * @param id (int) - * @return balance (int) - */ - public int getAccountBalance(int id) { - int balance = 0; - - try (Connection connection = ds.getConnection()) { - - // Check the current balance. - ResultSet res = connection.createStatement() - .executeQuery("SELECT balance FROM accounts WHERE id = " - + id); - if(!res.next()) { - System.out.printf("No users in the table with id %i", id); - } else { - balance = res.getInt("balance"); - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.getAccountBalance ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - - return balance; - } - - /** - * Insert randomized account data (ID, balance) using the JDBC - * fast path for bulk inserts. The fastest way to get data into - * CockroachDB is the IMPORT statement. However, if you must bulk - * ingest from the application using INSERT statements, the best - * option is the method shown here. It will require the following: - * - * 1. Add `rewriteBatchedInserts=true` to your JDBC connection - * settings (see the connection info in 'BasicExample.main'). - * - * 2. Inserting in batches of 128 rows, as used inside this method - * (see BATCH_SIZE), since the PGJDBC driver's logic works best - * with powers of two, such that a batch of size 128 can be 6x - * faster than a batch of size 250. - * @return The number of new accounts inserted (int) - */ - public int bulkInsertRandomAccountData() { - - Random random = new Random(); - int BATCH_SIZE = 128; - int totalNewAccounts = 0; - - try (Connection connection = ds.getConnection()) { - - // We're managing the commit lifecycle ourselves so we can - // control the size of our batch inserts. - connection.setAutoCommit(false); - - // In this example we are adding 500 rows to the database, - // but it could be any number. What's important is that - // the batch size is 128. - try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) { - for (int i=0; i<=(500/BATCH_SIZE);i++) { - for (int j=0; j %s row(s) updated in this batch\n", count.length); - } - connection.commit(); - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - } catch (SQLException e) { - System.out.printf("BasicExampleDAO.bulkInsertRandomAccountData ERROR: { state => %s, cause => %s, message => %s }\n", - e.getSQLState(), e.getCause(), e.getMessage()); - } - return totalNewAccounts; - } - - /** - * Read out a subset of accounts from the data store. - * - * @param limit (int) - * @return Number of accounts read (int) - */ - public int readAccounts(int limit) { - return runSQL("SELECT id, balance FROM accounts LIMIT ?", Integer.toString(limit)); - } - - /** - * Perform any necessary cleanup of the data store so it can be - * used again. - */ - public void tearDown() { - runSQL("DROP TABLE accounts;"); - } -} diff --git a/src/current/_includes/v19.1/app/insecure/activerecord-basic-sample.rb b/src/current/_includes/v19.1/app/insecure/activerecord-basic-sample.rb deleted file mode 100644 index 601838ee789..00000000000 --- a/src/current/_includes/v19.1/app/insecure/activerecord-basic-sample.rb +++ /dev/null @@ -1,44 +0,0 @@ -require 'active_record' -require 'activerecord-cockroachdb-adapter' -require 'pg' - -# Connect to CockroachDB through ActiveRecord. 
-# In Rails, this configuration would go in config/database.yml as usual.
-ActiveRecord::Base.establish_connection(
-  adapter:  'cockroachdb',
-  username: 'maxroach',
-  database: 'bank',
-  host:     'localhost',
-  port:     26257,
-  sslmode:  'disable'
-)
-
-# Define the Account model.
-# In Rails, this would go in app/models/ as usual.
-class Account < ActiveRecord::Base
-  validates :id, presence: true
-  validates :balance, presence: true
-end
-
-# Define a migration for the accounts table.
-# In Rails, this would go in db/migrate/ as usual.
-class Schema < ActiveRecord::Migration[5.0]
-  def change
-    create_table :accounts, force: true do |t|
-      t.integer :balance
-    end
-  end
-end
-
-# Run the schema migration by hand.
-# In Rails, this would be done via rake db:migrate as usual.
-Schema.new.change()
-
-# Create two accounts, inserting two rows into the accounts table.
-Account.create(id: 1, balance: 1000)
-Account.create(id: 2, balance: 250)
-
-# Retrieve the accounts and print out their balances.
-Account.all.each do |acct|
-  puts "#{acct.id} #{acct.balance}"
-end
diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.clj b/src/current/_includes/v19.1/app/insecure/basic-sample.clj
deleted file mode 100644
index 182b78b675e..00000000000
--- a/src/current/_includes/v19.1/app/insecure/basic-sample.clj
+++ /dev/null
@@ -1,31 +0,0 @@
-(ns test.test
-  (:require [clojure.java.jdbc :as j]
-            [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
-              :dbname "bank"
-              :host "localhost"
-              :port "26257"
-              :user "maxroach"})
-
-(defn test-basic []
-  ;; Connect to the cluster and run the code below with
-  ;; the connection object bound to 'conn'.
-  (j/with-db-connection [conn db-spec]
-
-    ;; Insert two rows into the "accounts" table.
-    (j/insert! conn :accounts {:id 1 :balance 1000})
-    (j/insert! conn :accounts {:id 2 :balance 250})
-
-    ;; Print out the balances.
-    (println "Initial balances:")
-    (->> (j/query conn ["SELECT id, balance FROM accounts"])
-         (map println)
-         doall)
-
-    ))
-
-
-(defn -main [& args]
-  (test-basic))
diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.cpp b/src/current/_includes/v19.1/app/insecure/basic-sample.cpp
deleted file mode 100644
index a06d84d1a25..00000000000
--- a/src/current/_includes/v19.1/app/insecure/basic-sample.cpp
+++ /dev/null
@@ -1,39 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-int main() {
-  try {
-    // Connect to the "bank" database.
-    pqxx::connection c("postgresql://maxroach@localhost:26257/bank");
-
-    pqxx::nontransaction w(c);
-
-    // Create the "accounts" table.
-    w.exec("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)");
-
-    // Insert two rows into the "accounts" table.
-    w.exec("INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)");
-
-    // Print out the balances.
-    cout << "Initial balances:" << endl;
-    pqxx::result r = w.exec("SELECT id, balance FROM accounts");
-    for (auto row : r) {
-      cout << row[0].as<int>() << ' ' << row[1].as<int>() << endl;
-    }
-
-    w.commit(); // Note this doesn't do anything
-                // for a nontransaction, but is still required.
- } - catch (const exception &e) { - cerr << e.what() << endl; - return 1; - } - cout << "Success" << endl; - return 0; -} diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.cs b/src/current/_includes/v19.1/app/insecure/basic-sample.cs deleted file mode 100644 index b7cf8e1ff3f..00000000000 --- a/src/current/_includes/v19.1/app/insecure/basic-sample.cs +++ /dev/null @@ -1,50 +0,0 @@ -using System; -using System.Data; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "localhost"; - connStringBuilder.Port = 26257; - connStringBuilder.SslMode = SslMode.Disable; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Database = "bank"; - Simple(connStringBuilder.ConnectionString); - } - - static void Simple(string connString) - { - using (var conn = new NpgsqlConnection(connString)) - { - conn.Open(); - - // Create the "accounts" table. - new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery(); - - // Insert two rows into the "accounts" table. - using (var cmd = new NpgsqlCommand()) - { - cmd.Connection = conn; - cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)"; - cmd.Parameters.AddWithValue("id1", 1); - cmd.Parameters.AddWithValue("val1", 1000); - cmd.Parameters.AddWithValue("id2", 2); - cmd.Parameters.AddWithValue("val2", 250); - cmd.ExecuteNonQuery(); - } - - // Print out the balances. - System.Console.WriteLine("Initial balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - } - } - } -} diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.go b/src/current/_includes/v19.1/app/insecure/basic-sample.go deleted file mode 100644 index 6a647f51641..00000000000 --- a/src/current/_includes/v19.1/app/insecure/basic-sample.go +++ /dev/null @@ -1,44 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - // Connect to the "bank" database. - db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Create the "accounts" table. - if _, err := db.Exec( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)"); err != nil { - log.Fatal(err) - } - - // Insert two rows into the "accounts" table. - if _, err := db.Exec( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)"); err != nil { - log.Fatal(err) - } - - // Print out the balances. - rows, err := db.Query("SELECT id, balance FROM accounts") - if err != nil { - log.Fatal(err) - } - defer rows.Close() - fmt.Println("Initial balances:") - for rows.Next() { - var id, balance int - if err := rows.Scan(&id, &balance); err != nil { - log.Fatal(err) - } - fmt.Printf("%d %d\n", id, balance) - } -} diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.js b/src/current/_includes/v19.1/app/insecure/basic-sample.js deleted file mode 100644 index f89ea020a74..00000000000 --- a/src/current/_includes/v19.1/app/insecure/basic-sample.js +++ /dev/null @@ -1,55 +0,0 @@ -var async = require('async'); -var fs = require('fs'); -var pg = require('pg'); - -// Connect to the "bank" database. 
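-// (This config object is the node-postgres equivalent of the connection
-// string used by the other samples, i.e.
-// postgresql://maxroach@localhost:26257/bank.)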
-var config = {
-  user: 'maxroach',
-  host: 'localhost',
-  database: 'bank',
-  port: 26257
-};
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-
-  // Close communication with the database and exit.
-  var finish = function () {
-    done();
-    process.exit();
-  };
-
-  if (err) {
-    console.error('could not connect to cockroachdb', err);
-    finish();
-  }
-  async.waterfall([
-    function (next) {
-      // Create the 'accounts' table.
-      client.query('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT);', next);
-    },
-    function (results, next) {
-      // Insert two rows into the 'accounts' table.
-      client.query('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250);', next);
-    },
-    function (results, next) {
-      // Print out account balances.
-      client.query('SELECT id, balance FROM accounts;', next);
-    },
-  ],
-  function (err, results) {
-    if (err) {
-      console.error('Error inserting into and selecting from accounts: ', err);
-      finish();
-    }
-
-    console.log('Initial balances:');
-    results.rows.forEach(function (row) {
-      console.log(row);
-    });
-
-    finish();
-  });
-});
diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.php b/src/current/_includes/v19.1/app/insecure/basic-sample.php
deleted file mode 100644
index db5a26e3111..00000000000
--- a/src/current/_includes/v19.1/app/insecure/basic-sample.php
+++ /dev/null
@@ -1,20 +0,0 @@
-<?php
-try {
-  $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable',
-    'maxroach', null, array(
-      PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
-      PDO::ATTR_EMULATE_PREPARES => true,
-      PDO::ATTR_PERSISTENT => true
-  ));
-
-  $dbh->exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)');
-
-  print "Account balances:\r\n";
-  foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) {
-    print $row['id'] . ': ' . $row['balance'] . "\r\n";
-  }
-} catch (Exception $e) {
-  print $e->getMessage() . "\r\n";
-  exit(1);
-}
-?>
diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.py b/src/current/_includes/v19.1/app/insecure/basic-sample.py
deleted file mode 100644
index 7a77e689ccc..00000000000
--- a/src/current/_includes/v19.1/app/insecure/basic-sample.py
+++ /dev/null
@@ -1,146 +0,0 @@
-#!/usr/bin/env python3
-
-import psycopg2
-import psycopg2.errorcodes
-import time
-import logging
-import random
-
-
-def create_accounts(conn):
-    with conn.cursor() as cur:
-        cur.execute('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)')
-        cur.execute('UPSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)')
-        logging.debug("create_accounts(): status message: {}".format(cur.statusmessage))
-        conn.commit()
-
-
-def print_balances(conn):
-    with conn.cursor() as cur:
-        cur.execute("SELECT id, balance FROM accounts")
-        logging.debug("print_balances(): status message: {}".format(cur.statusmessage))
-        rows = cur.fetchall()
-        conn.commit()
-        print("Balances at {}".format(time.asctime()))
-        for row in rows:
-            print([str(cell) for cell in row])
-
-
-def delete_accounts(conn):
-    with conn.cursor() as cur:
-        cur.execute("DELETE FROM bank.accounts")
-        logging.debug("delete_accounts(): status message: {}".format(cur.statusmessage))
-        conn.commit()
-
-
-# Wrapper for a transaction.
-# This automatically re-calls "op" with the open transaction as an argument
-# as long as the database server asks for the transaction to be retried.
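-# For example, main() below invokes this wrapper as:
-#   run_transaction(conn, lambda conn: transfer_funds(conn, fromId, toId, amount))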
-def run_transaction(conn, op):
-    retries = 0
-    max_retries = 3
-    with conn:
-        while True:
-            retries += 1
-            if retries == max_retries:
-                err_msg = "Transaction did not succeed after {} retries".format(max_retries)
-                raise ValueError(err_msg)
-
-            try:
-                op(conn)
-
-                # If we reach this point, we were able to commit, so we break
-                # from the retry loop.
-                break
-            except psycopg2.Error as e:
-                logging.debug("e.pgcode: {}".format(e.pgcode))
-                if e.pgcode == '40001':
-                    # This is a retry error, so we roll back the current
-                    # transaction and sleep for a bit before retrying. The
-                    # sleep time increases for each failed transaction.
-                    conn.rollback()
-                    logging.debug("EXECUTE SERIALIZATION_FAILURE BRANCH")
-                    sleep_seconds = (2**retries) * 0.1 * (random.random() + 0.5)
-                    logging.debug("Sleeping {} seconds".format(sleep_seconds))
-                    time.sleep(sleep_seconds)
-                    continue
-                else:
-                    logging.debug("EXECUTE NON-SERIALIZATION_FAILURE BRANCH")
-                    raise e
-
-
-# This function is used to test the transaction retry logic. It can be deleted
-# from production code.
-def test_retry_loop(conn):
-    with conn.cursor() as cur:
-        # The first statement in a transaction can be retried transparently on
-        # the server, so we need to add a placeholder statement so that our
-        # force_retry() statement isn't the first one.
-        cur.execute('SELECT now()')
-        # The function below can only be run by the root user. Trying to run
-        # it as user 'maxroach' will fail with an error.
-        cur.execute("SELECT crdb_internal.force_retry('1s'::INTERVAL)")
-        logging.debug("test_retry_loop(): status message: {}".format(cur.statusmessage))
-
-
-def transfer_funds(conn, frm, to, amount):
-    with conn.cursor() as cur:
-
-        # Check the current balance.
-        cur.execute("SELECT balance FROM accounts WHERE id = " + str(frm))
-        from_balance = cur.fetchone()[0]
-        if from_balance < amount:
-            err_msg = "Insufficient funds in account {}: have {}, need {}".format(frm, from_balance, amount)
-            raise RuntimeError(err_msg)
-
-        # Perform the transfer.
-        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
-                    (amount, frm))
-        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
-                    (amount, to))
-        conn.commit()
-        logging.debug("transfer_funds(): status message: {}".format(cur.statusmessage))
-
-
-def main():
-
-    dsn = 'postgresql://maxroach@localhost:26257/bank?sslmode=disable'
-    conn = psycopg2.connect(dsn)
-
-    # Uncomment the below to turn on logging to the console. This was useful
-    # when testing transaction retry handling. It is not necessary for
-    # production code.
-    # log_level = getattr(logging, 'DEBUG', None)
-    # logging.basicConfig(level=log_level)
-
-    create_accounts(conn)
-
-    print_balances(conn)
-
-    amount = 100
-    fromId = 1
-    toId = 2
-
-    try:
-        run_transaction(conn, lambda conn: transfer_funds(conn, fromId, toId, amount))
-
-        # The function below is used to test the transaction retry logic. It
-        # can be deleted from production code.
-        # run_transaction(conn, lambda conn: test_retry_loop(conn))
-    except ValueError as ve:
-        # Below, we print the error and continue on so this example is easy to
-        # run (and run, and run...). In real code you should handle this error
-        # and any others thrown by the database interaction.
-        logging.debug("run_transaction(conn, op) failed: {}".format(ve))
-        pass
-
-    print_balances(conn)
-
-    delete_accounts(conn)
-
-    # Close communication with the database.
- conn.close() - - -if __name__ == '__main__': - main() diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.rb b/src/current/_includes/v19.1/app/insecure/basic-sample.rb deleted file mode 100644 index 904460381f6..00000000000 --- a/src/current/_includes/v19.1/app/insecure/basic-sample.rb +++ /dev/null @@ -1,28 +0,0 @@ -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'disable' -) - -# Create the "accounts" table. -conn.exec('CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)') - -# Insert two rows into the "accounts" table. -conn.exec('INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)') - -# Print out the balances. -puts 'Initial balances:' -conn.exec('SELECT id, balance FROM accounts') do |res| - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() diff --git a/src/current/_includes/v19.1/app/insecure/basic-sample.rs b/src/current/_includes/v19.1/app/insecure/basic-sample.rs deleted file mode 100644 index 8b7c3b115a9..00000000000 --- a/src/current/_includes/v19.1/app/insecure/basic-sample.rs +++ /dev/null @@ -1,32 +0,0 @@ -use postgres::{Client, NoTls}; - -fn main() { - let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap(); - - // Create the "accounts" table. - client - .execute( - "CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", - &[], - ) - .unwrap(); - - // Insert two rows into the "accounts" table. - client - .execute( - "INSERT INTO accounts (id, balance) VALUES (1, 1000), (2, 250)", - &[], - ) - .unwrap(); - - // Print out the balances. - println!("Initial balances:"); - for row in &client - .query("SELECT id, balance FROM accounts", &[]) - .unwrap() - { - let id: i64 = row.get(0); - let balance: i64 = row.get(1); - println!("{} {}", id, balance); - } -} diff --git a/src/current/_includes/v19.1/app/insecure/create-maxroach-user-and-bank-database.md b/src/current/_includes/v19.1/app/insecure/create-maxroach-user-and-bank-database.md deleted file mode 100644 index 3c7859f0d8d..00000000000 --- a/src/current/_includes/v19.1/app/insecure/create-maxroach-user-and-bank-database.md +++ /dev/null @@ -1,32 +0,0 @@ -Start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -~~~ - -In the SQL shell, issue the following statements to create the `maxroach` user and `bank` database: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER IF NOT EXISTS maxroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -Give the `maxroach` user the necessary permissions: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE bank TO maxroach; -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v19.1/app/insecure/gorm-sample.go b/src/current/_includes/v19.1/app/insecure/gorm-sample.go deleted file mode 100644 index 95df79a96fb..00000000000 --- a/src/current/_includes/v19.1/app/insecure/gorm-sample.go +++ /dev/null @@ -1,207 +0,0 @@ -package main - -import ( - "fmt" - "log" - "math" - "math/rand" - "time" - - // Import GORM-related packages. - "github.com/jinzhu/gorm" - _ "github.com/jinzhu/gorm/dialects/postgres" - - // Necessary in order to check for transaction retry error codes. 
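-    // (pq.Error exposes the five-character SQLSTATE in its Code field;
-    // CockroachDB reports retryable transactions as code "40001", which
-    // runTransaction below checks for.)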
- "github.com/lib/pq" -) - -// Account is our model, which corresponds to the "accounts" database -// table. -type Account struct { - ID int `gorm:"primary_key"` - Balance int -} - -// Functions of type `txnFunc` are passed as arguments to our -// `runTransaction` wrapper that handles transaction retries for us -// (see implementation below). -type txnFunc func(*gorm.DB) error - -// This function is used for testing the transaction retry loop. It -// can be deleted from production code. -var forceRetryLoop txnFunc = func(db *gorm.DB) error { - - // The first statement in a transaction can be retried transparently - // on the server, so we need to add a placeholder statement so that our - // force_retry statement isn't the first one. - if err := db.Exec("SELECT now()").Error; err != nil { - return err - } - // Used to force a transaction retry. Can only be run as the - // 'root' user. - if err := db.Exec("SELECT crdb_internal.force_retry('1s'::INTERVAL)").Error; err != nil { - return err - } - return nil -} - -func transferFunds(db *gorm.DB, fromID int, toID int, amount int) error { - var fromAccount Account - var toAccount Account - - db.First(&fromAccount, fromID) - db.First(&toAccount, toID) - - if fromAccount.Balance < amount { - return fmt.Errorf("account %d balance %d is lower than transfer amount %d", fromAccount.ID, fromAccount.Balance, amount) - } - - fromAccount.Balance -= amount - toAccount.Balance += amount - - if err := db.Save(&fromAccount).Error; err != nil { - return err - } - if err := db.Save(&toAccount).Error; err != nil { - return err - } - return nil -} - -func main() { - // Connect to the "bank" database as the "maxroach" user. - const addr = "postgresql://maxroach@localhost:26257/bank?sslmode=disable" - db, err := gorm.Open("postgres", addr) - if err != nil { - log.Fatal(err) - } - defer db.Close() - - // Set to `true` and GORM will print out all DB queries. - db.LogMode(false) - - // Automatically create the "accounts" table based on the Account - // model. - db.AutoMigrate(&Account{}) - - // Insert two rows into the "accounts" table. - var fromID = 1 - var toID = 2 - db.Create(&Account{ID: fromID, Balance: 1000}) - db.Create(&Account{ID: toID, Balance: 250}) - - // The sequence of steps in this section is: - // 1. Print account balances. - // 2. Set up some Accounts and transfer funds between them inside - // a transaction. - // 3. Print account balances again to verify the transfer occurred. - - // Print balances before transfer. - printBalances(db) - - // The amount to be transferred between the accounts. - var amount = 100 - - // Transfer funds between accounts. To handle potential - // transaction retry errors, we wrap the call to `transferFunds` - // in `runTransaction`, a wrapper which implements a retry loop - // with exponential backoff around our access to the database (see - // the implementation for details). - if err := runTransaction(db, - func(*gorm.DB) error { - return transferFunds(db, fromID, toID, amount) - }, - ); err != nil { - // If the error is returned, it's either: - // 1. Not a transaction retry error, i.e., some other kind - // of database error that you should handle here. - // 2. 
A transaction retry error that has occurred more than - // N times (defined by the `maxRetries` variable inside - // `runTransaction`), in which case you will need to figure - // out why your database access is resulting in so much - // contention (see 'Understanding and avoiding transaction - // contention': - // https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) - fmt.Println(err) - } - - // Print balances after transfer to ensure that it worked. - printBalances(db) - - // Delete accounts so we can start fresh when we want to run this - // program again. - deleteAccounts(db) -} - -// Wrapper for a transaction. This automatically re-calls `fn` with -// the open transaction as an argument as long as the database server -// asks for the transaction to be retried. -func runTransaction(db *gorm.DB, fn txnFunc) error { - var maxRetries = 3 - for retries := 0; retries <= maxRetries; retries++ { - if retries == maxRetries { - return fmt.Errorf("hit max of %d retries, aborting", retries) - } - txn := db.Begin() - if err := fn(txn); err != nil { - // We need to cast GORM's db.Error to *pq.Error so we can - // detect the Postgres transaction retry error code and - // handle retries appropriately. - pqErr := err.(*pq.Error) - if pqErr.Code == "40001" { - // Since this is a transaction retry error, we - // ROLLBACK the transaction and sleep a little before - // trying again. Each time through the loop we sleep - // for a little longer than the last time - // (A.K.A. exponential backoff). - txn.Rollback() - var sleepMs = math.Pow(2, float64(retries)) * 100 * (rand.Float64() + 0.5) - fmt.Printf("Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMs) - time.Sleep(time.Millisecond * time.Duration(sleepMs)) - } else { - // If it's not a retry error, it's some other sort of - // DB interaction error that needs to be handled by - // the caller. - return err - } - } else { - // All went well, so we try to commit and break out of the - // retry loop if possible. - if err := txn.Commit().Error; err != nil { - pqErr := err.(*pq.Error) - if pqErr.Code == "40001" { - // However, our attempt to COMMIT could also - // result in a retry error, in which case we - // continue back through the loop and try again. - continue - } else { - // If it's not a retry error, it's some other sort - // of DB interaction error that needs to be - // handled by the caller. - return err - } - } - break - } - } - return nil -} - -func printBalances(db *gorm.DB) { - var accounts []Account - db.Find(&accounts) - fmt.Printf("Balance at '%s':\n", time.Now()) - for _, account := range accounts { - fmt.Printf("%d %d\n", account.ID, account.Balance) - } -} - -func deleteAccounts(db *gorm.DB) error { - // Used to tear down the accounts table so we can re-run this - // program. 
-    err := db.Exec("DELETE from accounts where ID > 0").Error
-    if err != nil {
-        return err
-    }
-    return nil
-}
diff --git a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/Sample.java b/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/Sample.java
deleted file mode 100644
index 60a6b54f984..00000000000
--- a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/Sample.java
+++ /dev/null
@@ -1,236 +0,0 @@
-package com.cockroachlabs;
-
-import org.hibernate.Session;
-import org.hibernate.SessionFactory;
-import org.hibernate.Transaction;
-import org.hibernate.JDBCException;
-import org.hibernate.cfg.Configuration;
-
-import java.util.*;
-import java.util.function.Function;
-
-import javax.persistence.Column;
-import javax.persistence.Entity;
-import javax.persistence.Id;
-import javax.persistence.Table;
-
-public class Sample {
-
-    private static final Random RAND = new Random();
-    private static final boolean FORCE_RETRY = false;
-    private static final String RETRY_SQL_STATE = "40001";
-    private static final int MAX_ATTEMPT_COUNT = 6;
-
-    // Account is our model, which corresponds to the "accounts" database table.
-    @Entity
-    @Table(name="accounts")
-    public static class Account {
-        @Id
-        @Column(name="id")
-        public long id;
-
-        public long getId() {
-            return id;
-        }
-
-        @Column(name="balance")
-        public long balance;
-        public long getBalance() {
-            return balance;
-        }
-        public void setBalance(long newBalance) {
-            this.balance = newBalance;
-        }
-
-        // Convenience constructor.
-        public Account(int id, int balance) {
-            this.id = id;
-            this.balance = balance;
-        }
-
-        // Hibernate needs a default (no-arg) constructor to create model objects.
-        public Account() {}
-    }
-
-    private static Function<Session, Long> addAccounts() throws JDBCException {
-        Function<Session, Long> f = s -> {
-            long rv = 0;
-            try {
-                s.save(new Account(1, 1000));
-                s.save(new Account(2, 250));
-                s.save(new Account(3, 314159));
-                rv = 1;
-                System.out.printf("APP: addAccounts() --> %d\n", rv);
-            } catch (JDBCException e) {
-                throw e;
-            }
-            return rv;
-        };
-        return f;
-    }
-
-    private static Function<Session, Long> transferFunds(long fromId, long toId, long amount) throws JDBCException {
-        Function<Session, Long> f = s -> {
-            long rv = 0;
-            try {
-                Account fromAccount = (Account) s.get(Account.class, fromId);
-                Account toAccount = (Account) s.get(Account.class, toId);
-                if (!(amount > fromAccount.getBalance())) {
-                    fromAccount.balance -= amount;
-                    toAccount.balance += amount;
-                    s.save(fromAccount);
-                    s.save(toAccount);
-                    rv = amount;
-                    System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromId, toId, amount, rv);
-                }
-            } catch (JDBCException e) {
-                throw e;
-            }
-            return rv;
-        };
-        return f;
-    }
-
-    // Test our retry handling logic if FORCE_RETRY is true. This
-    // method is only used to test the retry logic. It is not
-    // intended for production code.
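-    // (Under the hood this issues SELECT crdb_internal.force_retry('1s'),
-    // which makes the server return SQLSTATE 40001 so the retry loop in
-    // runTransaction can be exercised without real contention.)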
-    private static Function<Session, Long> forceRetryLogic() throws JDBCException {
-        Function<Session, Long> f = s -> {
-            long rv = -1;
-            try {
-                System.out.printf("APP: testRetryLogic: BEFORE EXCEPTION\n");
-                s.createNativeQuery("SELECT crdb_internal.force_retry('1s')").executeUpdate();
-            } catch (JDBCException e) {
-                System.out.printf("APP: testRetryLogic: AFTER EXCEPTION\n");
-                throw e;
-            }
-            return rv;
-        };
-        return f;
-    }
-
-    private static Function<Session, Long> getAccountBalance(long id) throws JDBCException {
-        Function<Session, Long> f = s -> {
-            long balance;
-            try {
-                Account account = s.get(Account.class, id);
-                balance = account.getBalance();
-                System.out.printf("APP: getAccountBalance(%d) --> %d\n", id, balance);
-            } catch (JDBCException e) {
-                throw e;
-            }
-            return balance;
-        };
-        return f;
-    }
-
-    // Run SQL code in a way that automatically handles the
-    // transaction retry logic so we do not have to duplicate it in
-    // various places.
-    private static long runTransaction(Session session, Function<Session, Long> fn) {
-        long rv = 0;
-        int attemptCount = 0;
-
-        while (attemptCount < MAX_ATTEMPT_COUNT) {
-            attemptCount++;
-
-            if (attemptCount > 1) {
-                System.out.printf("APP: Entering retry loop again, iteration %d\n", attemptCount);
-            }
-
-            Transaction txn = session.beginTransaction();
-            System.out.printf("APP: BEGIN;\n");
-
-            if (attemptCount == MAX_ATTEMPT_COUNT) {
-                String err = String.format("hit max of %s attempts, aborting", MAX_ATTEMPT_COUNT);
-                throw new RuntimeException(err);
-            }
-
-            // This block is only used to test the retry logic.
-            // It is not necessary in production code. See also
-            // the method 'forceRetryLogic()'.
-            if (FORCE_RETRY) {
-                session.createNativeQuery("SELECT now()").list();
-            }
-
-            try {
-                rv = fn.apply(session);
-                if (rv != -1) {
-                    txn.commit();
-                    System.out.printf("APP: COMMIT;\n");
-                    break;
-                }
-            } catch (JDBCException e) {
-                if (RETRY_SQL_STATE.equals(e.getSQLState())) {
-                    // Since this is a transaction retry error, we
-                    // roll back the transaction and sleep a little
-                    // before trying again. Each time through the
-                    // loop we sleep for a little longer than the last
-                    // time (A.K.A. exponential backoff).
-                    System.out.printf("APP: retryable exception occurred:\n    sql state = [%s]\n    message = [%s]\n    retry counter = %s\n", e.getSQLState(), e.getMessage(), attemptCount);
-                    System.out.printf("APP: ROLLBACK;\n");
-                    txn.rollback();
-                    int sleepMillis = (int)(Math.pow(2, attemptCount) * 100) + RAND.nextInt(100);
-                    System.out.printf("APP: Hit 40001 transaction retry error, sleeping %s milliseconds\n", sleepMillis);
-                    try {
-                        Thread.sleep(sleepMillis);
-                    } catch (InterruptedException ignored) {
-                        // no-op
-                    }
-                    rv = -1;
-                } else {
-                    throw e;
-                }
-            }
-        }
-        return rv;
-    }
-
-    public static void main(String[] args) {
-        // Create a SessionFactory based on our hibernate.cfg.xml configuration
-        // file, which defines how to connect to the database.
-        SessionFactory sessionFactory =
-            new Configuration()
-                .configure("hibernate.cfg.xml")
-                .addAnnotatedClass(Account.class)
-                .buildSessionFactory();
-
-        try (Session session = sessionFactory.openSession()) {
-            long fromAccountId = 1;
-            long toAccountId = 2;
-            long transferAmount = 100;
-
-            if (FORCE_RETRY) {
-                System.out.printf("APP: About to test retry logic in 'runTransaction'\n");
-                runTransaction(session, forceRetryLogic());
-            } else {
-
-                runTransaction(session, addAccounts());
-                long fromBalance = runTransaction(session, getAccountBalance(fromAccountId));
-                long toBalance = runTransaction(session, getAccountBalance(toAccountId));
-                if (fromBalance != -1 && toBalance != -1) {
-                    // Success!
-                    System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalance);
-                    System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalance);
-                }
-
-                // Transfer $100 from account 1 to account 2.
-                long transferResult = runTransaction(session, transferFunds(fromAccountId, toAccountId, transferAmount));
-                if (transferResult != -1) {
-                    // Success!
-                    System.out.printf("APP: transferFunds(%d, %d, %d) --> %d\n", fromAccountId, toAccountId, transferAmount, transferResult);
-
-                    long fromBalanceAfter = runTransaction(session, getAccountBalance(fromAccountId));
-                    long toBalanceAfter = runTransaction(session, getAccountBalance(toAccountId));
-                    if (fromBalanceAfter != -1 && toBalanceAfter != -1) {
-                        // Success!
-                        System.out.printf("APP: getAccountBalance(%d) --> %d\n", fromAccountId, fromBalanceAfter);
-                        System.out.printf("APP: getAccountBalance(%d) --> %d\n", toAccountId, toBalanceAfter);
-                    }
-                }
-            }
-        } finally {
-            sessionFactory.close();
-        }
-    }
-}
diff --git a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/build.gradle b/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/build.gradle
deleted file mode 100644
index 36f33d73fe6..00000000000
--- a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/build.gradle
+++ /dev/null
@@ -1,16 +0,0 @@
-group 'com.cockroachlabs'
-version '1.0'
-
-apply plugin: 'java'
-apply plugin: 'application'
-
-mainClassName = 'com.cockroachlabs.Sample'
-
-repositories {
-    mavenCentral()
-}
-
-dependencies {
-    compile 'org.hibernate:hibernate-core:5.2.4.Final'
-    compile 'org.postgresql:postgresql:42.2.2.jre7'
-}
diff --git a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz b/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz
deleted file mode 100644
index 8205b379229..00000000000
Binary files a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate-basic-sample.tgz and /dev/null differ
diff --git a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml b/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
deleted file mode 100644
index ad27c7d746c..00000000000
--- a/src/current/_includes/v19.1/app/insecure/hibernate-basic-sample/hibernate.cfg.xml
+++ /dev/null
@@ -1,20 +0,0 @@
-<?xml version='1.0' encoding='utf-8'?>
-<!DOCTYPE hibernate-configuration PUBLIC
-        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
-        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
-<hibernate-configuration>
-    <session-factory>
-        <!-- Database connection settings -->
-        <property name="hibernate.connection.driver_class">org.postgresql.Driver</property>
-        <property name="hibernate.dialect">org.hibernate.dialect.PostgreSQL95Dialect</property>
-        <property name="hibernate.connection.url">jdbc:postgresql://127.0.0.1:26257/bank?sslmode=disable</property>
-        <property name="hibernate.connection.username">maxroach</property>
-
-        <!-- Create the database schema on startup -->
-        <property name="hbm2ddl.auto">create</property>
-
-        <!-- Echo all executed SQL to stdout and pretty-print it -->
-        <property name="show_sql">true</property>
-        <property name="format_sql">true</property>
-    </session-factory>
-</hibernate-configuration>
diff --git a/src/current/_includes/v19.1/app/insecure/sequelize-basic-sample.js b/src/current/_includes/v19.1/app/insecure/sequelize-basic-sample.js
deleted file mode 100644
index ca92b98e375..00000000000
--- a/src/current/_includes/v19.1/app/insecure/sequelize-basic-sample.js
+++ /dev/null
@@ -1,35 +0,0 @@
-var Sequelize = require('sequelize-cockroachdb');
-
-// Connect to CockroachDB through Sequelize.
-var sequelize = new Sequelize('bank', 'maxroach', '', {
-  dialect: 'postgres',
-  port: 26257,
-  logging: false
-});
-
-// Define the Account model for the "accounts" table.
-var Account = sequelize.define('accounts', {
-  id: { type: Sequelize.INTEGER, primaryKey: true },
-  balance: { type: Sequelize.INTEGER }
-});
-
-// Create the "accounts" table.
-Account.sync({force: true}).then(function() {
-  // Insert two rows into the "accounts" table.
-  return Account.bulkCreate([
-    {id: 1, balance: 1000},
-    {id: 2, balance: 250}
-  ]);
-}).then(function() {
-  // Retrieve accounts.
-  return Account.findAll();
-}).then(function(accounts) {
-  // Print out the balances.
-  accounts.forEach(function(account) {
-    console.log(account.id + ' ' + account.balance);
-  });
-  process.exit(0);
-}).catch(function(err) {
-  console.error('error: ' + err.message);
-  process.exit(1);
-});
diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.clj b/src/current/_includes/v19.1/app/insecure/txn-sample.clj
deleted file mode 100644
index 0e2d9df55e3..00000000000
--- a/src/current/_includes/v19.1/app/insecure/txn-sample.clj
+++ /dev/null
@@ -1,44 +0,0 @@
-(ns test.test
-  (:require [clojure.java.jdbc :as j]
-            [test.util :as util]))
-
-;; Define the connection parameters to the cluster.
-(def db-spec {:dbtype "postgresql"
-              :dbname "bank"
-              :host "localhost"
-              :port "26257"
-              :user "maxroach"})
-
-;; The transaction we want to run.
-(defn transferFunds
-  [txn from to amount]
-
-  ;; Check the current balance.
-  (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from])
-                         (mapv :balance)
-                         (first))]
-    (when (< fromBalance amount)
-      (throw (Exception. "Insufficient funds"))))
-
-  ;; Perform the transfer.
-  (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)])
-  (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)]))
-
-(defn test-txn []
-  ;; Connect to the cluster and run the code below with
-  ;; the connection object bound to 'conn'.
-  (j/with-db-connection [conn db-spec]
-
-    ;; Execute the transaction within an automatic retry block;
-    ;; the transaction object is bound to 'txn'.
-    (util/with-txn-retry [txn conn]
-      (transferFunds txn 1 2 100))
-
-    ;; Execute a query outside of an automatic retry block.
-    (println "Balances after transfer:")
-    (->> (j/query conn ["SELECT id, balance FROM accounts"])
-         (map println)
-         (doall))))
-
-(defn -main [& args]
-  (test-txn))
diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.cpp b/src/current/_includes/v19.1/app/insecure/txn-sample.cpp
deleted file mode 100644
index 0f65137be22..00000000000
--- a/src/current/_includes/v19.1/app/insecure/txn-sample.cpp
+++ /dev/null
@@ -1,74 +0,0 @@
-#include <cassert>
-#include <functional>
-#include <iostream>
-#include <stdexcept>
-#include <string>
-#include <pqxx/pqxx>
-
-using namespace std;
-
-void transferFunds(
-    pqxx::dbtransaction *tx, int from, int to, int amount) {
-  // Read the balance.
-  pqxx::result r = tx->exec(
-      "SELECT balance FROM accounts WHERE id = " + to_string(from));
-  assert(r.size() == 1);
-  int fromBalance = r[0][0].as<int>();
-
-  if (fromBalance < amount) {
-    throw domain_error("insufficient funds");
-  }
-
-  // Perform the transfer.
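-  // (Both UPDATE statements run inside the same subtransaction created by
-  // executeTx below, so the debit and credit either commit together or are
-  // retried together.)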
- tx->exec("UPDATE accounts SET balance = balance - " - + to_string(amount) + " WHERE id = " + to_string(from)); - tx->exec("UPDATE accounts SET balance = balance + " - + to_string(amount) + " WHERE id = " + to_string(to)); -} - - -// ExecuteTx runs fn inside a transaction and retries it as needed. -// On non-retryable failures, the transaction is aborted and rolled -// back; on success, the transaction is committed. -// -// For more information about CockroachDB's transaction model see -// https://cockroachlabs.com/docs/transactions.html. -// -// NOTE: the supplied exec closure should not have external side -// effects beyond changes to the database. -void executeTx( - pqxx::connection *c, function fn) { - pqxx::work tx(*c); - while (true) { - try { - pqxx::subtransaction s(tx, "cockroach_restart"); - fn(&s); - s.commit(); - break; - } catch (const pqxx::pqxx_exception& e) { - // Swallow "transaction restart" errors; the transaction will be retried. - // Unfortunately libpqxx doesn't give us access to the error code, so we - // do string matching to identify retryable errors. - if (string(e.base().what()).find("restart transaction:") == string::npos) { - throw; - } - } - } - tx.commit(); -} - -int main() { - try { - pqxx::connection c("postgresql://maxroach@localhost:26257/bank"); - - executeTx(&c, [](pqxx::dbtransaction *tx) { - transferFunds(tx, 1, 2, 100); - }); - } - catch (const exception &e) { - cerr << e.what() << endl; - return 1; - } - cout << "Success" << endl; - return 0; -} diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.cs b/src/current/_includes/v19.1/app/insecure/txn-sample.cs deleted file mode 100644 index f64a664ccff..00000000000 --- a/src/current/_includes/v19.1/app/insecure/txn-sample.cs +++ /dev/null @@ -1,120 +0,0 @@ -using System; -using System.Data; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "localhost"; - connStringBuilder.Port = 26257; - connStringBuilder.SslMode = SslMode.Disable; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Database = "bank"; - TxnSample(connStringBuilder.ConnectionString); - } - - static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount) - { - int balance = 0; - using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran)) - using (var reader = cmd.ExecuteReader()) - { - if (reader.Read()) - { - balance = reader.GetInt32(0); - } - else - { - throw new DataException(String.Format("Account id={0} not found", from)); - } - } - if (balance < amount) - { - throw new DataException(String.Format("Insufficient balance in account id={0}", from)); - } - using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran)) - { - cmd.ExecuteNonQuery(); - } - using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran)) - { - cmd.ExecuteNonQuery(); - } - } - - static void TxnSample(string connString) - { - using (var conn = new NpgsqlConnection(connString)) - { - conn.Open(); - - // Create the "accounts" table. - new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery(); - - // Insert two rows into the "accounts" table. 
- using (var cmd = new NpgsqlCommand()) - { - cmd.Connection = conn; - cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)"; - cmd.Parameters.AddWithValue("id1", 1); - cmd.Parameters.AddWithValue("val1", 1000); - cmd.Parameters.AddWithValue("id2", 2); - cmd.Parameters.AddWithValue("val2", 250); - cmd.ExecuteNonQuery(); - } - - // Print out the balances. - System.Console.WriteLine("Initial balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - - try - { - using (var tran = conn.BeginTransaction()) - { - tran.Save("cockroach_restart"); - while (true) - { - try - { - TransferFunds(conn, tran, 1, 2, 100); - tran.Commit(); - break; - } - catch (NpgsqlException e) - { - // Check if the error code indicates a SERIALIZATION_FAILURE. - if (e.ErrorCode == 40001) - { - // Signal the database that we will attempt a retry. - tran.Rollback("cockroach_restart"); - } - else - { - throw; - } - } - } - } - } - catch (DataException e) - { - Console.WriteLine(e.Message); - } - - // Now printout the results. - Console.WriteLine("Final balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - } - } - } -} diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.go b/src/current/_includes/v19.1/app/insecure/txn-sample.go deleted file mode 100644 index 2c0cd1b6da6..00000000000 --- a/src/current/_includes/v19.1/app/insecure/txn-sample.go +++ /dev/null @@ -1,51 +0,0 @@ -package main - -import ( - "context" - "database/sql" - "fmt" - "log" - - "github.com/cockroachdb/cockroach-go/crdb" -) - -func transferFunds(tx *sql.Tx, from int, to int, amount int) error { - // Read the balance. - var fromBalance int - if err := tx.QueryRow( - "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil { - return err - } - - if fromBalance < amount { - return fmt.Errorf("insufficient funds") - } - - // Perform the transfer. - if _, err := tx.Exec( - "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil { - return err - } - if _, err := tx.Exec( - "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil { - return err - } - return nil -} - -func main() { - db, err := sql.Open("postgres", "postgresql://maxroach@localhost:26257/bank?sslmode=disable") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Run a transfer in a transaction. - err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error { - return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */) - }) - if err == nil { - fmt.Println("Success") - } else { - log.Fatal("error: ", err) - } -} diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.js b/src/current/_includes/v19.1/app/insecure/txn-sample.js deleted file mode 100644 index c44309b01a2..00000000000 --- a/src/current/_includes/v19.1/app/insecure/txn-sample.js +++ /dev/null @@ -1,146 +0,0 @@ -var async = require('async'); -var fs = require('fs'); -var pg = require('pg'); - -// Connect to the bank database. - -var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257 -}; - -// Wrapper for a transaction. 
-// This automatically re-calls "op" with the client as an argument as
-// long as the database server asks for the transaction to be retried.
-
-function txnWrapper(client, op, next) {
-  client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) {
-    if (err) {
-      return next(err);
-    }
-
-    var released = false;
-    async.doWhilst(function (done) {
-      var handleError = function (err) {
-        // If we got an error, see if it's a retryable one
-        // and, if so, restart.
-        if (err.code === '40001') {
-          // Signal the database that we'll retry.
-          return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done);
-        }
-        // A non-retryable error; break out of the
-        // doWhilst with an error.
-        return done(err);
-      };
-
-      // Attempt the work.
-      op(client, function (err) {
-        if (err) {
-          return handleError(err);
-        }
-        var opResults = arguments;
-
-        // If we reach this point, release and commit.
-        client.query('RELEASE SAVEPOINT cockroach_restart', function (err) {
-          if (err) {
-            return handleError(err);
-          }
-          released = true;
-          return done.apply(null, opResults);
-        });
-      });
-    },
-    function () {
-      return !released;
-    },
-    function (err) {
-      if (err) {
-        client.query('ROLLBACK', function () {
-          next(err);
-        });
-      } else {
-        var txnResults = arguments;
-        client.query('COMMIT', function (err) {
-          if (err) {
-            return next(err);
-          } else {
-            return next.apply(null, txnResults);
-          }
-        });
-      }
-    });
-  });
-}
-
-// The transaction we want to run.
-
-function transferFunds(client, from, to, amount, next) {
-  // Check the current balance.
-  client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) {
-    if (err) {
-      return next(err);
-    } else if (results.rows.length === 0) {
-      return next(new Error('account not found in table'));
-    }
-
-    var acctBal = results.rows[0].balance;
-    if (acctBal >= amount) {
-      // Perform the transfer.
-      async.waterfall([
-        function (next) {
-          // Subtract amount from account 1.
-          client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next);
-        },
-        function (updateResult, next) {
-          // Add amount to account 2.
-          client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next);
-        },
-        function (updateResult, next) {
-          // Fetch account balances after updates.
-          client.query('SELECT id, balance FROM accounts', function (err, selectResult) {
-            next(err, selectResult ? selectResult.rows : null);
-          });
-        }
-      ], next);
-    } else {
-      next(new Error('insufficient funds'));
-    }
-  });
-}
-
-// Create a pool.
-var pool = new pg.Pool(config);
-
-pool.connect(function (err, client, done) {
-  // Closes communication with the database and exits.
-  var finish = function () {
-    done();
-    process.exit();
-  };
-
-  if (err) {
-    console.error('could not connect to cockroachdb', err);
-    finish();
-  }
-
-  // Execute the transaction.
-  txnWrapper(client,
-    function (client, next) {
-      transferFunds(client, 1, 2, 100, next);
-    },
-    function (err, results) {
-      if (err) {
-        console.error('error performing transaction', err);
-        finish();
-      }
-
-      console.log('Balances after transfer:');
-      results.forEach(function (result) {
-        console.log(result);
-      });
-
-      finish();
-    });
-});
diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.php b/src/current/_includes/v19.1/app/insecure/txn-sample.php
deleted file mode 100644
index e060d311cc3..00000000000
--- a/src/current/_includes/v19.1/app/insecure/txn-sample.php
+++ /dev/null
@@ -1,71 +0,0 @@
-<?php
-
-function transferMoney($dbh, $from, $to, $amount) {
-  try {
-    $dbh->beginTransaction();
-    // This savepoint allows us to retry our transaction.
- $dbh->exec("SAVEPOINT cockroach_restart"); - } catch (Exception $e) { - throw $e; - } - - while (true) { - try { - $stmt = $dbh->prepare( - 'UPDATE accounts SET balance = balance + :deposit ' . - 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)'); - - // First, withdraw the money from the old account (if possible). - $stmt->bindValue(':account', $from, PDO::PARAM_INT); - $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT); - $stmt->execute(); - if ($stmt->rowCount() == 0) { - print "source account does not exist or is underfunded\r\n"; - return; - } - - // Next, deposit into the new account (if it exists). - $stmt->bindValue(':account', $to, PDO::PARAM_INT); - $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT); - $stmt->execute(); - if ($stmt->rowCount() == 0) { - print "destination account does not exist\r\n"; - return; - } - - // Attempt to release the savepoint (which is really the commit). - $dbh->exec('RELEASE SAVEPOINT cockroach_restart'); - $dbh->commit(); - return; - } catch (PDOException $e) { - if ($e->getCode() != '40001') { - // Non-recoverable error. Rollback and bubble error up the chain. - $dbh->rollBack(); - throw $e; - } else { - // Cockroach transaction retry code. Rollback to the savepoint and - // restart. - $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart'); - } - } - } -} - -try { - $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=disable', - 'maxroach', null, array( - PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, - PDO::ATTR_EMULATE_PREPARES => true, - )); - - transferMoney($dbh, 1, 2, 10); - - print "Account balances after transfer:\r\n"; - foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) { - print $row['id'] . ': ' . $row['balance'] . "\r\n"; - } -} catch (Exception $e) { - print $e->getMessage() . "\r\n"; - exit(1); -} -?> diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.rb b/src/current/_includes/v19.1/app/insecure/txn-sample.rb deleted file mode 100644 index 416efb9e24d..00000000000 --- a/src/current/_includes/v19.1/app/insecure/txn-sample.rb +++ /dev/null @@ -1,49 +0,0 @@ -# Import the driver. -require 'pg' - -# Wrapper for a transaction. -# This automatically re-calls "op" with the open transaction as an argument -# as long as the database server asks for the transaction to be retried. -def run_transaction(conn) - conn.transaction do |txn| - txn.exec('SAVEPOINT cockroach_restart') - while - begin - # Attempt the work. - yield txn - - # If we reach this point, commit. - txn.exec('RELEASE SAVEPOINT cockroach_restart') - break - rescue PG::TRSerializationFailure - txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart') - end - end - end -end - -def transfer_funds(txn, from, to, amount) - txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res| - res.each do |row| - raise 'insufficient funds' if Integer(row['balance']) < amount - end - end - txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from]) - txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to]) -end - -# Connect to the "bank" database. -conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'disable' -) - -run_transaction(conn) do |txn| - transfer_funds(txn, 1, 2, 100) -end - -# Close communication with the database. 
-conn.close()
diff --git a/src/current/_includes/v19.1/app/insecure/txn-sample.rs b/src/current/_includes/v19.1/app/insecure/txn-sample.rs
deleted file mode 100644
index d1dd0e021c9..00000000000
--- a/src/current/_includes/v19.1/app/insecure/txn-sample.rs
+++ /dev/null
@@ -1,60 +0,0 @@
-use postgres::{error::SqlState, Client, Error, NoTls, Transaction};
-
-/// Runs op inside a transaction and retries it as needed.
-/// On non-retryable failures, the transaction is aborted and
-/// rolled back; on success, the transaction is committed.
-fn execute_txn<T, F>(client: &mut Client, op: F) -> Result<T, Error>
-where
-    F: Fn(&mut Transaction) -> Result<T, Error>,
-{
-    let mut txn = client.transaction()?;
-    loop {
-        let mut sp = txn.savepoint("cockroach_restart")?;
-        match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) {
-            Err(ref err)
-                if err
-                    .code()
-                    .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE)
-                    .unwrap_or(false) => {}
-            r => break r,
-        }
-    }
-    .and_then(|t| txn.commit().map(|_| t))
-}
-
-fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> {
-    // Read the balance.
-    let from_balance: i64 = txn
-        .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])?
-        .get(0);
-
-    assert!(from_balance >= amount);
-
-    // Perform the transfer.
-    txn.execute(
-        "UPDATE accounts SET balance = balance - $1 WHERE id = $2",
-        &[&amount, &from],
-    )?;
-    txn.execute(
-        "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
-        &[&amount, &to],
-    )?;
-    Ok(())
-}
-
-fn main() {
-    let mut client = Client::connect("postgresql://maxroach@localhost:26257/bank", NoTls).unwrap();
-
-    // Run a transfer in a transaction.
-    execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap();
-
-    // Check account balances after the transaction.
- for row in &client - .query("SELECT id, balance FROM accounts", &[]) - .unwrap() - { - let id: i64 = row.get(0); - let balance: i64 = row.get(1); - println!("{} {}", id, balance); - } -} diff --git a/src/current/_includes/v19.1/app/project.clj b/src/current/_includes/v19.1/app/project.clj deleted file mode 100644 index 41efc324b59..00000000000 --- a/src/current/_includes/v19.1/app/project.clj +++ /dev/null @@ -1,7 +0,0 @@ -(defproject test "0.1" - :description "CockroachDB test" - :url "http://cockroachlabs.com/" - :dependencies [[org.clojure/clojure "1.8.0"] - [org.clojure/java.jdbc "0.6.1"] - [org.postgresql/postgresql "9.4.1211"]] - :main test.test) diff --git a/src/current/_includes/v19.1/app/see-also-links.md b/src/current/_includes/v19.1/app/see-also-links.md deleted file mode 100644 index e5dd6173c99..00000000000 --- a/src/current/_includes/v19.1/app/see-also-links.md +++ /dev/null @@ -1,9 +0,0 @@ -You might also be interested in the following pages: - -- [Client Connection Parameters](connection-parameters.html) -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Automated Operations](orchestrate-a-local-cluster-with-kubernetes-insecure.html) diff --git a/src/current/_includes/v19.1/app/sequelize-basic-sample.js b/src/current/_includes/v19.1/app/sequelize-basic-sample.js deleted file mode 100644 index d87ff2ca5a5..00000000000 --- a/src/current/_includes/v19.1/app/sequelize-basic-sample.js +++ /dev/null @@ -1,62 +0,0 @@ -var Sequelize = require('sequelize-cockroachdb'); -var fs = require('fs'); - -// Connect to CockroachDB through Sequelize. -var sequelize = new Sequelize('bank', 'maxroach', '', { - dialect: 'postgres', - port: 26257, - logging: false, - dialectOptions: { - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } - } -}); - -// Define the Account model for the "accounts" table. -var Account = sequelize.define('accounts', { - id: { - type: Sequelize.INTEGER, - primaryKey: true - }, - balance: { - type: Sequelize.INTEGER - } -}); - -// Create the "accounts" table. -Account.sync({ - force: true - }) - .then(function () { - // Insert two rows into the "accounts" table. - return Account.bulkCreate([{ - id: 1, - balance: 1000 - }, - { - id: 2, - balance: 250 - } - ]); - }) - .then(function () { - // Retrieve accounts. - return Account.findAll(); - }) - .then(function (accounts) { - // Print out the balances. 
-    accounts.forEach(function (account) {
-      console.log(account.id + ' ' + account.balance);
-    });
-    process.exit(0);
-  })
-  .catch(function (err) {
-    console.error('error: ' + err.message);
-    process.exit(1);
-  });
diff --git a/src/current/_includes/v19.1/app/sqlalchemy-basic-sample.py b/src/current/_includes/v19.1/app/sqlalchemy-basic-sample.py
deleted file mode 100644
index 1b8801c5173..00000000000
--- a/src/current/_includes/v19.1/app/sqlalchemy-basic-sample.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import random
-from math import floor
-from sqlalchemy import create_engine, Column, Integer
-from sqlalchemy.ext.declarative import declarative_base
-from sqlalchemy.orm import sessionmaker
-from cockroachdb.sqlalchemy import run_transaction
-
-Base = declarative_base()
-
-
-# The Account class corresponds to the "accounts" database table.
-class Account(Base):
-    __tablename__ = 'accounts'
-    id = Column(Integer, primary_key=True)
-    balance = Column(Integer)
-
-
-# Create an engine to communicate with the database. The
-# "cockroachdb://" prefix for the engine URL indicates that we are
-# connecting to CockroachDB using the 'cockroachdb' dialect.
-# For more information, see
-# https://github.com/cockroachdb/sqlalchemy-cockroachdb.
-
-secure_cluster = True  # Set to False for insecure clusters
-connect_args = {}
-
-if secure_cluster:
-    connect_args = {
-        'sslmode': 'require',
-        'sslrootcert': 'certs/ca.crt',
-        'sslkey': 'certs/client.maxroach.key',
-        'sslcert': 'certs/client.maxroach.crt'
-    }
-else:
-    connect_args = {'sslmode': 'disable'}
-
-engine = create_engine(
-    'cockroachdb://maxroach@localhost:26257/bank',
-    connect_args=connect_args,
-    echo=True  # Log SQL queries to stdout
-)
-
-# Automatically create the "accounts" table based on the Account class.
-Base.metadata.create_all(engine)
-
-
-# Store the account IDs we create for later use.
-
-seen_account_ids = set()
-
-
-# The code below generates random IDs for new accounts.
-
-def create_random_accounts(sess, n):
-    """Create N new accounts with random IDs and random account balances.
-
-    Note that since this is a demo, we do not do any work to ensure the
-    new IDs do not collide with existing IDs.
-    """
-    new_accounts = []
-    elems = iter(range(n))
-    for i in elems:
-        billion = 1000000000
-        new_id = floor(random.random()*billion)
-        seen_account_ids.add(new_id)
-        new_accounts.append(
-            Account(
-                id=new_id,
-                balance=floor(random.random()*1000000)
-            )
-        )
-    sess.add_all(new_accounts)
-
-
-run_transaction(sessionmaker(bind=engine),
-                lambda s: create_random_accounts(s, 100))
-
-
-# Helper for getting random existing account IDs.
-
-def get_random_account_id():
-    id = random.choice(tuple(seen_account_ids))
-    return id
-
-
-def transfer_funds_randomly(session):
-    """Transfer money randomly between accounts (during SESSION).
-
-    Cuts a randomly selected account's balance in half, and gives the
-    other half to some other randomly selected account.
-    """
-    source_id = get_random_account_id()
-    sink_id = get_random_account_id()
-
-    source = session.query(Account).filter_by(id=source_id).one()
-    amount = floor(source.balance/2)
-
-    # Check the balance of the source account.
-    if source.balance < amount:
-        raise RuntimeError("Insufficient funds")
-
-    source.balance -= amount
-    session.query(Account).filter_by(id=sink_id).update(
-        {"balance": (Account.balance + amount)}
-    )
-
-
-# Run the transfer inside a transaction.
- -run_transaction(sessionmaker(bind=engine), transfer_funds_randomly) diff --git a/src/current/_includes/v19.1/app/sqlalchemy-large-txns.py b/src/current/_includes/v19.1/app/sqlalchemy-large-txns.py deleted file mode 100644 index bc7399b663c..00000000000 --- a/src/current/_includes/v19.1/app/sqlalchemy-large-txns.py +++ /dev/null @@ -1,60 +0,0 @@ -from sqlalchemy import create_engine, Column, Float, Integer -from sqlalchemy.ext.declarative import declarative_base -from sqlalchemy.orm import sessionmaker -from cockroachdb.sqlalchemy import run_transaction -from random import random - -Base = declarative_base() - -# The code below assumes you are running as 'root' and have run -# the following SQL statements against an insecure cluster. - -# CREATE DATABASE pointstore; - -# USE pointstore; - -# CREATE TABLE points ( -# id INT PRIMARY KEY DEFAULT unique_rowid(), -# x FLOAT NOT NULL, -# y FLOAT NOT NULL, -# z FLOAT NOT NULL -# ); - -engine = create_engine( - 'cockroachdb://root@localhost:26257/pointstore', - connect_args={ - 'sslmode': 'disable', - }, - echo=True -) - - -class Point(Base): - __tablename__ = 'points' - id = Column(Integer, primary_key=True) - x = Column(Float) - y = Column(Float) - z = Column(Float) - - -def add_points(num_points): - chunk_size = 1000 # Tune this based on object sizes. - - def add_points_helper(sess, chunk, num_points): - points = [] - for i in range(chunk, min(chunk + chunk_size, num_points)): - points.append( - Point(x=random()*1024, y=random()*1024, z=random()*1024) - ) - sess.bulk_save_objects(points) - - for chunk in range(0, num_points, chunk_size): - run_transaction( - sessionmaker(bind=engine), - lambda s: add_points_helper( - s, chunk, min(chunk + chunk_size, num_points) - ) - ) - - -add_points(10000) diff --git a/src/current/_includes/v19.1/app/txn-sample.clj b/src/current/_includes/v19.1/app/txn-sample.clj deleted file mode 100644 index c093078ebc4..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.clj +++ /dev/null @@ -1,48 +0,0 @@ -(ns test.test - (:require [clojure.java.jdbc :as j] - [test.util :as util])) - -;; Define the connection parameters to the cluster. -(def db-spec {:dbtype "postgresql" - :dbname "bank" - :host "localhost" - :port "26257" - :ssl true - :sslmode "require" - :sslcert "certs/client.maxroach.crt" - :sslkey "certs/client.maxroach.key.pk8" - :user "maxroach"}) - -;; The transaction we want to run. -(defn transferFunds - [txn from to amount] - - ;; Check the current balance. - (let [fromBalance (->> (j/query txn ["SELECT balance FROM accounts WHERE id = ?" from]) - (mapv :balance) - (first))] - (when (< fromBalance amount) - (throw (Exception. "Insufficient funds")))) - - ;; Perform the transfer. - (j/execute! txn [(str "UPDATE accounts SET balance = balance - " amount " WHERE id = " from)]) - (j/execute! txn [(str "UPDATE accounts SET balance = balance + " amount " WHERE id = " to)])) - -(defn test-txn [] - ;; Connect to the cluster and run the code below with - ;; the connection object bound to 'conn'. - (j/with-db-connection [conn db-spec] - - ;; Execute the transaction within an automatic retry block; - ;; the transaction object is bound to 'txn'. - (util/with-txn-retry [txn conn] - (transferFunds txn 1 2 100)) - - ;; Execute a query outside of an automatic retry block. 
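-    ;; A single statement executes as its own implicit transaction,
-    ;; so it does not need the retry wrapper.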
- (println "Balances after transfer:") - (->> (j/query conn ["SELECT id, balance FROM accounts"]) - (map println) - (doall)))) - -(defn -main [& args] - (test-txn)) diff --git a/src/current/_includes/v19.1/app/txn-sample.cpp b/src/current/_includes/v19.1/app/txn-sample.cpp deleted file mode 100644 index 728e4a2e5cc..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.cpp +++ /dev/null @@ -1,74 +0,0 @@ -#include -#include -#include -#include -#include -#include - -using namespace std; - -void transferFunds( - pqxx::dbtransaction *tx, int from, int to, int amount) { - // Read the balance. - pqxx::result r = tx->exec( - "SELECT balance FROM accounts WHERE id = " + to_string(from)); - assert(r.size() == 1); - int fromBalance = r[0][0].as(); - - if (fromBalance < amount) { - throw domain_error("insufficient funds"); - } - - // Perform the transfer. - tx->exec("UPDATE accounts SET balance = balance - " - + to_string(amount) + " WHERE id = " + to_string(from)); - tx->exec("UPDATE accounts SET balance = balance + " - + to_string(amount) + " WHERE id = " + to_string(to)); -} - - -// ExecuteTx runs fn inside a transaction and retries it as needed. -// On non-retryable failures, the transaction is aborted and rolled -// back; on success, the transaction is committed. -// -// For more information about CockroachDB's transaction model see -// https://cockroachlabs.com/docs/transactions.html. -// -// NOTE: the supplied exec closure should not have external side -// effects beyond changes to the database. -void executeTx( - pqxx::connection *c, function fn) { - pqxx::work tx(*c); - while (true) { - try { - pqxx::subtransaction s(tx, "cockroach_restart"); - fn(&s); - s.commit(); - break; - } catch (const pqxx::pqxx_exception& e) { - // Swallow "transaction restart" errors; the transaction will be retried. - // Unfortunately libpqxx doesn't give us access to the error code, so we - // do string matching to identify retryable errors. 
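-        // A message that does not match is not retryable, so it is
-        // rethrown below and aborts the transaction.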
- if (string(e.base().what()).find("restart transaction:") == string::npos) { - throw; - } - } - } - tx.commit(); -} - -int main() { - try { - pqxx::connection c("dbname=bank user=maxroach sslmode=require sslkey=certs/client.maxroach.key sslcert=certs/client.maxroach.crt port=26257 host=localhost"); - - executeTx(&c, [](pqxx::dbtransaction *tx) { - transferFunds(tx, 1, 2, 100); - }); - } - catch (const exception &e) { - cerr << e.what() << endl; - return 1; - } - cout << "Success" << endl; - return 0; -} diff --git a/src/current/_includes/v19.1/app/txn-sample.cs b/src/current/_includes/v19.1/app/txn-sample.cs deleted file mode 100644 index 4815bf7e61b..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.cs +++ /dev/null @@ -1,168 +0,0 @@ -using System; -using System.Data; -using System.Security.Cryptography.X509Certificates; -using System.Net.Security; -using Npgsql; - -namespace Cockroach -{ - class MainClass - { - static void Main(string[] args) - { - var connStringBuilder = new NpgsqlConnectionStringBuilder(); - connStringBuilder.Host = "localhost"; - connStringBuilder.Port = 26257; - connStringBuilder.SslMode = SslMode.Require; - connStringBuilder.Username = "maxroach"; - connStringBuilder.Database = "bank"; - TxnSample(connStringBuilder.ConnectionString); - } - - static void TransferFunds(NpgsqlConnection conn, NpgsqlTransaction tran, int from, int to, int amount) - { - int balance = 0; - using (var cmd = new NpgsqlCommand(String.Format("SELECT balance FROM accounts WHERE id = {0}", from), conn, tran)) - using (var reader = cmd.ExecuteReader()) - { - if (reader.Read()) - { - balance = reader.GetInt32(0); - } - else - { - throw new DataException(String.Format("Account id={0} not found", from)); - } - } - if (balance < amount) - { - throw new DataException(String.Format("Insufficient balance in account id={0}", from)); - } - using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance - {0} where id = {1}", amount, from), conn, tran)) - { - cmd.ExecuteNonQuery(); - } - using (var cmd = new NpgsqlCommand(String.Format("UPDATE accounts SET balance = balance + {0} where id = {1}", amount, to), conn, tran)) - { - cmd.ExecuteNonQuery(); - } - } - - static void TxnSample(string connString) - { - using (var conn = new NpgsqlConnection(connString)) - { - conn.ProvideClientCertificatesCallback += ProvideClientCertificatesCallback; - conn.UserCertificateValidationCallback += UserCertificateValidationCallback; - - conn.Open(); - - // Create the "accounts" table. - new NpgsqlCommand("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)", conn).ExecuteNonQuery(); - - // Insert two rows into the "accounts" table. - using (var cmd = new NpgsqlCommand()) - { - cmd.Connection = conn; - cmd.CommandText = "UPSERT INTO accounts(id, balance) VALUES(@id1, @val1), (@id2, @val2)"; - cmd.Parameters.AddWithValue("id1", 1); - cmd.Parameters.AddWithValue("val1", 1000); - cmd.Parameters.AddWithValue("id2", 2); - cmd.Parameters.AddWithValue("val2", 250); - cmd.ExecuteNonQuery(); - } - - // Print out the balances. 
- System.Console.WriteLine("Initial balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - - try - { - using (var tran = conn.BeginTransaction()) - { - tran.Save("cockroach_restart"); - while (true) - { - try - { - TransferFunds(conn, tran, 1, 2, 100); - tran.Commit(); - break; - } - catch (NpgsqlException e) - { - // Check if the error code indicates a SERIALIZATION_FAILURE. - if (e.ErrorCode == 40001) - { - // Signal the database that we will attempt a retry. - tran.Rollback("cockroach_restart"); - } - else - { - throw; - } - } - } - } - } - catch (DataException e) - { - Console.WriteLine(e.Message); - } - - // Now printout the results. - Console.WriteLine("Final balances:"); - using (var cmd = new NpgsqlCommand("SELECT id, balance FROM accounts", conn)) - using (var reader = cmd.ExecuteReader()) - while (reader.Read()) - Console.Write("\taccount {0}: {1}\n", reader.GetValue(0), reader.GetValue(1)); - } - } - - static void ProvideClientCertificatesCallback(X509CertificateCollection clientCerts) - { - // To be able to add a certificate with a private key included, we must convert it to - // a PKCS #12 format. The following openssl command does this: - // openssl pkcs12 -inkey client.maxroach.key -in client.maxroach.crt -export -out client.maxroach.pfx - // As of 2018-12-10, you need to provide a password for this to work on macOS. - // See https://github.com/dotnet/corefx/issues/24225 - clientCerts.Add(new X509Certificate2("client.maxroach.pfx", "pass")); - } - - // By default, .Net does all of its certificate verification using the system certificate store. - // This callback is necessary to validate the server certificate against a CA certificate file. - static bool UserCertificateValidationCallback(object sender, X509Certificate certificate, X509Chain defaultChain, SslPolicyErrors defaultErrors) - { - X509Certificate2 caCert = new X509Certificate2("ca.crt"); - X509Chain caCertChain = new X509Chain(); - caCertChain.ChainPolicy = new X509ChainPolicy() - { - RevocationMode = X509RevocationMode.NoCheck, - RevocationFlag = X509RevocationFlag.EntireChain - }; - caCertChain.ChainPolicy.ExtraStore.Add(caCert); - - X509Certificate2 serverCert = new X509Certificate2(certificate); - - caCertChain.Build(serverCert); - if (caCertChain.ChainStatus.Length == 0) - { - // No errors - return true; - } - - foreach (X509ChainStatus status in caCertChain.ChainStatus) - { - // Check if we got any errors other than UntrustedRoot (which we will always get if we do not install the CA cert to the system store) - if (status.Status != X509ChainStatusFlags.UntrustedRoot) - { - return false; - } - } - return true; - } - } -} diff --git a/src/current/_includes/v19.1/app/txn-sample.go b/src/current/_includes/v19.1/app/txn-sample.go deleted file mode 100644 index fc15275abca..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.go +++ /dev/null @@ -1,53 +0,0 @@ -package main - -import ( - "context" - "database/sql" - "fmt" - "log" - - "github.com/cockroachdb/cockroach-go/crdb" -) - -func transferFunds(tx *sql.Tx, from int, to int, amount int) error { - // Read the balance. - var fromBalance int - if err := tx.QueryRow( - "SELECT balance FROM accounts WHERE id = $1", from).Scan(&fromBalance); err != nil { - return err - } - - if fromBalance < amount { - return fmt.Errorf("insufficient funds") - } - - // Perform the transfer. 
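-	// Both UPDATEs run in the same transaction; if crdb.ExecuteTx needs to
-	// retry, the entire closure re-executes, including the balance check above.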
- if _, err := tx.Exec( - "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil { - return err - } - if _, err := tx.Exec( - "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil { - return err - } - return nil -} - -func main() { - db, err := sql.Open("postgres", - "postgresql://maxroach@localhost:26257/bank?ssl=true&sslmode=require&sslrootcert=certs/ca.crt&sslkey=certs/client.maxroach.key&sslcert=certs/client.maxroach.crt") - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - defer db.Close() - - // Run a transfer in a transaction. - err = crdb.ExecuteTx(context.Background(), db, nil, func(tx *sql.Tx) error { - return transferFunds(tx, 1 /* from acct# */, 2 /* to acct# */, 100 /* amount */) - }) - if err == nil { - fmt.Println("Success") - } else { - log.Fatal("error: ", err) - } -} diff --git a/src/current/_includes/v19.1/app/txn-sample.js b/src/current/_includes/v19.1/app/txn-sample.js deleted file mode 100644 index 1eebaacad30..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.js +++ /dev/null @@ -1,154 +0,0 @@ -var async = require('async'); -var fs = require('fs'); -var pg = require('pg'); - -// Connect to the bank database. - -var config = { - user: 'maxroach', - host: 'localhost', - database: 'bank', - port: 26257, - ssl: { - ca: fs.readFileSync('certs/ca.crt') - .toString(), - key: fs.readFileSync('certs/client.maxroach.key') - .toString(), - cert: fs.readFileSync('certs/client.maxroach.crt') - .toString() - } -}; - -// Wrapper for a transaction. This automatically re-calls "op" with -// the client as an argument as long as the database server asks for -// the transaction to be retried. - -function txnWrapper(client, op, next) { - client.query('BEGIN; SAVEPOINT cockroach_restart', function (err) { - if (err) { - return next(err); - } - - var released = false; - async.doWhilst(function (done) { - var handleError = function (err) { - // If we got an error, see if it's a retryable one - // and, if so, restart. - if (err.code === '40001') { - // Signal the database that we'll retry. - return client.query('ROLLBACK TO SAVEPOINT cockroach_restart', done); - } - // A non-retryable error; break out of the - // doWhilst with an error. - return done(err); - }; - - // Attempt the work. - op(client, function (err) { - if (err) { - return handleError(err); - } - var opResults = arguments; - - // If we reach this point, release and commit. - client.query('RELEASE SAVEPOINT cockroach_restart', function (err) { - if (err) { - return handleError(err); - } - released = true; - return done.apply(null, opResults); - }); - }); - }, - function () { - return !released; - }, - function (err) { - if (err) { - client.query('ROLLBACK', function () { - next(err); - }); - } else { - var txnResults = arguments; - client.query('COMMIT', function (err) { - if (err) { - return next(err); - } else { - return next.apply(null, txnResults); - } - }); - } - }); - }); -} - -// The transaction we want to run. - -function transferFunds(client, from, to, amount, next) { - // Check the current balance. - client.query('SELECT balance FROM accounts WHERE id = $1', [from], function (err, results) { - if (err) { - return next(err); - } else if (results.rows.length === 0) { - return next(new Error('account not found in table')); - } - - var acctBal = results.rows[0].balance; - if (acctBal >= amount) { - // Perform the transfer. - async.waterfall([ - function (next) { - // Subtract amount from account 1. 
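-                    // If any step passes an error to its callback, async.waterfall
-                    // skips to the final callback; txnWrapper then rolls back to
-                    // the savepoint and retries when the error code is 40001.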
- client.query('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from], next); - }, - function (updateResult, next) { - // Add amount to account 2. - client.query('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to], next); - }, - function (updateResult, next) { - // Fetch account balances after updates. - client.query('SELECT id, balance FROM accounts', function (err, selectResult) { - next(err, selectResult ? selectResult.rows : null); - }); - } - ], next); - } else { - next(new Error('insufficient funds')); - } - }); -} - -// Create a pool. -var pool = new pg.Pool(config); - -pool.connect(function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - - // Execute the transaction. - txnWrapper(client, - function (client, next) { - transferFunds(client, 1, 2, 100, next); - }, - function (err, results) { - if (err) { - console.error('error performing transaction', err); - finish(); - } - - console.log('Balances after transfer:'); - results.forEach(function (result) { - console.log(result); - }); - - finish(); - }); -}); diff --git a/src/current/_includes/v19.1/app/txn-sample.php b/src/current/_includes/v19.1/app/txn-sample.php deleted file mode 100644 index 363dbcd73cd..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.php +++ /dev/null @@ -1,71 +0,0 @@ -beginTransaction(); - // This savepoint allows us to retry our transaction. - $dbh->exec("SAVEPOINT cockroach_restart"); - } catch (Exception $e) { - throw $e; - } - - while (true) { - try { - $stmt = $dbh->prepare( - 'UPDATE accounts SET balance = balance + :deposit ' . - 'WHERE id = :account AND (:deposit > 0 OR balance + :deposit >= 0)'); - - // First, withdraw the money from the old account (if possible). - $stmt->bindValue(':account', $from, PDO::PARAM_INT); - $stmt->bindValue(':deposit', -$amount, PDO::PARAM_INT); - $stmt->execute(); - if ($stmt->rowCount() == 0) { - print "source account does not exist or is underfunded\r\n"; - return; - } - - // Next, deposit into the new account (if it exists). - $stmt->bindValue(':account', $to, PDO::PARAM_INT); - $stmt->bindValue(':deposit', $amount, PDO::PARAM_INT); - $stmt->execute(); - if ($stmt->rowCount() == 0) { - print "destination account does not exist\r\n"; - return; - } - - // Attempt to release the savepoint (which is really the commit). - $dbh->exec('RELEASE SAVEPOINT cockroach_restart'); - $dbh->commit(); - return; - } catch (PDOException $e) { - if ($e->getCode() != '40001') { - // Non-recoverable error. Rollback and bubble error up the chain. - $dbh->rollBack(); - throw $e; - } else { - // Cockroach transaction retry code. Rollback to the savepoint and - // restart. - $dbh->exec('ROLLBACK TO SAVEPOINT cockroach_restart'); - } - } - } -} - -try { - $dbh = new PDO('pgsql:host=localhost;port=26257;dbname=bank;sslmode=require;sslrootcert=certs/ca.crt;sslkey=certs/client.maxroach.key;sslcert=certs/client.maxroach.crt', - 'maxroach', null, array( - PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, - PDO::ATTR_EMULATE_PREPARES => true, - )); - - transferMoney($dbh, 1, 2, 10); - - print "Account balances after transfer:\r\n"; - foreach ($dbh->query('SELECT id, balance FROM accounts') as $row) { - print $row['id'] . ': ' . $row['balance'] . "\r\n"; - } -} catch (Exception $e) { - print $e->getMessage() . 
"\r\n"; - exit(1); -} -?> diff --git a/src/current/_includes/v19.1/app/txn-sample.rb b/src/current/_includes/v19.1/app/txn-sample.rb deleted file mode 100644 index 1c3e028fdf7..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.rb +++ /dev/null @@ -1,52 +0,0 @@ -# Import the driver. -require 'pg' - -# Wrapper for a transaction. -# This automatically re-calls "op" with the open transaction as an argument -# as long as the database server asks for the transaction to be retried. -def run_transaction(conn) - conn.transaction do |txn| - txn.exec('SAVEPOINT cockroach_restart') - while - begin - # Attempt the work. - yield txn - - # If we reach this point, commit. - txn.exec('RELEASE SAVEPOINT cockroach_restart') - break - rescue PG::TRSerializationFailure - txn.exec('ROLLBACK TO SAVEPOINT cockroach_restart') - end - end - end -end - -def transfer_funds(txn, from, to, amount) - txn.exec_params('SELECT balance FROM accounts WHERE id = $1', [from]) do |res| - res.each do |row| - raise 'insufficient funds' if Integer(row['balance']) < amount - end - end - txn.exec_params('UPDATE accounts SET balance = balance - $1 WHERE id = $2', [amount, from]) - txn.exec_params('UPDATE accounts SET balance = balance + $1 WHERE id = $2', [amount, to]) -end - -# Connect to the "bank" database. -conn = PG.connect( - user: 'maxroach', - dbname: 'bank', - host: 'localhost', - port: 26257, - sslmode: 'require', - sslrootcert: 'certs/ca.crt', - sslkey:'certs/client.maxroach.key', - sslcert:'certs/client.maxroach.crt' -) - -run_transaction(conn) do |txn| - transfer_funds(txn, 1, 2, 100) -end - -# Close communication with the database. -conn.close() diff --git a/src/current/_includes/v19.1/app/txn-sample.rs b/src/current/_includes/v19.1/app/txn-sample.rs deleted file mode 100644 index c8e099b89e6..00000000000 --- a/src/current/_includes/v19.1/app/txn-sample.rs +++ /dev/null @@ -1,73 +0,0 @@ -use openssl::error::ErrorStack; -use openssl::ssl::{SslConnector, SslFiletype, SslMethod}; -use postgres::{error::SqlState, Client, Error, Transaction}; -use postgres_openssl::MakeTlsConnector; - -/// Runs op inside a transaction and retries it as needed. -/// On non-retryable failures, the transaction is aborted and -/// rolled back; on success, the transaction is committed. -fn execute_txn(client: &mut Client, op: F) -> Result -where - F: Fn(&mut Transaction) -> Result, -{ - let mut txn = client.transaction()?; - loop { - let mut sp = txn.savepoint("cockroach_restart")?; - match op(&mut sp).and_then(|t| sp.commit().map(|_| t)) { - Err(ref err) - if err - .code() - .map(|e| *e == SqlState::T_R_SERIALIZATION_FAILURE) - .unwrap_or(false) => {} - r => break r, - } - } - .and_then(|t| txn.commit().map(|_| t)) -} - -fn transfer_funds(txn: &mut Transaction, from: i64, to: i64, amount: i64) -> Result<(), Error> { - // Read the balance. - let from_balance: i64 = txn - .query_one("SELECT balance FROM accounts WHERE id = $1", &[&from])? - .get(0); - - assert!(from_balance >= amount); - - // Perform the transfer. 
- txn.execute( - "UPDATE accounts SET balance = balance - $1 WHERE id = $2", - &[&amount, &from], - )?; - txn.execute( - "UPDATE accounts SET balance = balance + $1 WHERE id = $2", - &[&amount, &to], - )?; - Ok(()) -} - -fn ssl_config() -> Result { - let mut builder = SslConnector::builder(SslMethod::tls())?; - builder.set_ca_file("certs/ca.crt")?; - builder.set_certificate_chain_file("certs/client.maxroach.crt")?; - builder.set_private_key_file("certs/client.maxroach.key", SslFiletype::PEM)?; - Ok(MakeTlsConnector::new(builder.build())) -} - -fn main() { - let connector = ssl_config().unwrap(); - let mut client = - Client::connect("postgresql://maxroach@localhost:26257/bank", connector).unwrap(); - - // Run a transfer in a transaction. - execute_txn(&mut client, |txn| transfer_funds(txn, 1, 2, 100)).unwrap(); - - // Check account balances after the transaction. - for row in &client - .query("SELECT id, balance FROM accounts", &[]) - .unwrap() - { - let id: i64 = row.get(0); - let balance: i64 = row.get(1); - println!("{} {}", id, balance); - } -} diff --git a/src/current/_includes/v19.1/app/util.clj b/src/current/_includes/v19.1/app/util.clj deleted file mode 100644 index d040affe794..00000000000 --- a/src/current/_includes/v19.1/app/util.clj +++ /dev/null @@ -1,38 +0,0 @@ -(ns test.util - (:require [clojure.java.jdbc :as j] - [clojure.walk :as walk])) - -(defn txn-restart-err? - "Takes an exception and returns true if it is a CockroachDB retry error." - [e] - (when-let [m (.getMessage e)] - (condp instance? e - java.sql.BatchUpdateException - (and (re-find #"getNextExc" m) - (txn-restart-err? (.getNextException e))) - - org.postgresql.util.PSQLException - (= (.getSQLState e) "40001") ; 40001 is the code returned by CockroachDB retry errors. - - false))) - -;; Wrapper for a transaction. -;; This automatically invokes the body again as long as the database server -;; asks the transaction to be retried. - -(defmacro with-txn-retry - "Wrap an evaluation within a CockroachDB retry block." - [[txn c] & body] - `(j/with-db-transaction [~txn ~c] - (loop [] - (j/execute! ~txn ["savepoint cockroach_restart"]) - (let [res# (try (let [r# (do ~@body)] - {:ok r#}) - (catch java.sql.SQLException e# - (if (txn-restart-err? e#) - {:retry true} - (throw e#))))] - (if (:retry res#) - (do (j/execute! ~txn ["rollback to savepoint cockroach_restart"]) - (recur)) - (:ok res#)))))) diff --git a/src/current/_includes/v19.1/cdc/core-csv.md b/src/current/_includes/v19.1/cdc/core-csv.md deleted file mode 100644 index 678c78860ee..00000000000 --- a/src/current/_includes/v19.1/cdc/core-csv.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -To determine how wide the columns need to be, the default `table` display format in `cockroach sql` buffers the results it receives from the server before printing them to the console. When consuming core changefeed data using `cockroach sql`, it's important to use a display format like `csv` that does not buffer its results. To set the display format, use the [`--format=csv` flag](use-the-built-in-sql-client.html#sql-flag-format) when starting the [built-in SQL client](use-the-built-in-sql-client.html), or set the [`\set display_format=csv` option](use-the-built-in-sql-client.html#client-side-options) once the SQL client is open. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/cdc/correctness-warning.md b/src/current/_includes/v19.1/cdc/correctness-warning.md deleted file mode 100644 index c5e30948f28..00000000000 --- a/src/current/_includes/v19.1/cdc/correctness-warning.md +++ /dev/null @@ -1,7 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is an experimental feature.** The interface and output are subject to change. - -There is an open correctness issue with changefeeds using resolved timestamps connected to cloud storage sinks. While this issue is unlikely, new row information could display with a lower timestamp than what has already been emitted, which violates our [ordering guarantees](change-data-capture.html#ordering-guarantees). - -This issue is fixed in v19.2 and beyond. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/cdc/create-core-changefeed-avro.md b/src/current/_includes/v19.1/cdc/create-core-changefeed-avro.md deleted file mode 100644 index 046f878b32c..00000000000 --- a/src/current/_includes/v19.1/cdc/create-core-changefeed-avro.md +++ /dev/null @@ -1,99 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster that emits Avro records. CockroachDB's Avro binary encoding convention uses the [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/docs/serializer-formatter.html) to store Avro schemas. - -1. In a terminal window, start `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure --listen-addr=localhost --background - ~~~ - -2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/). - -3. Move into the extracted `confluent-` directory and start Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives). - -4. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --url="postgresql://root@127.0.0.1:26257?sslmode=disable" --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -6. Create table `bar`: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bar (a INT PRIMARY KEY); - ~~~ - -7. Insert a row into the table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO bar VALUES (0); - ~~~ - -8. Start the core changefeed: - - {% include copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR bar WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - ~~~ - table,key,value - bar,\000\000\000\000\001\002\000,\000\000\000\000\002\002\002\000 - ~~~ - -9. In a new terminal, add another row: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO bar VALUES (1)" - ~~~ - -10. Back in the terminal where the core changefeed is streaming, the output will appear: - - ~~~ - bar,\000\000\000\000\001\002\002,\000\000\000\000\002\002\002\002 - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -11. 
To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -12. To stop `cockroach`, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ - -13. To stop Confluent, move into the extracted `confluent-` directory and stop Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent stop - ~~~ - - To terminate all Confluent processes, use: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent destroy - ~~~ diff --git a/src/current/_includes/v19.1/cdc/create-core-changefeed.md b/src/current/_includes/v19.1/cdc/create-core-changefeed.md deleted file mode 100644 index 2337d5f6546..00000000000 --- a/src/current/_includes/v19.1/cdc/create-core-changefeed.md +++ /dev/null @@ -1,78 +0,0 @@ -In this example, you'll set up a core changefeed for a single-node cluster. - -1. In a terminal window, start `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --listen-addr=localhost \ - --background - ~~~ - -2. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - --url="postgresql://root@127.0.0.1:26257?sslmode=disable" \ - --format=csv - ~~~ - - {% include {{ page.version.version }}/cdc/core-csv.md %} - -3. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -4. Create table `foo`: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE foo (a INT PRIMARY KEY); - ~~~ - -5. Insert a row into the table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO foo VALUES (0); - ~~~ - -6. Start the core changefeed: - - {% include copy-clipboard.html %} - ~~~ sql - > EXPERIMENTAL CHANGEFEED FOR foo; - ~~~ - ~~~ - table,key,value - foo,[0],"{""after"": {""a"": 0}}" - ~~~ - -7. In a new terminal, add another row: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure -e "INSERT INTO foo VALUES (1)" - ~~~ - -8. Back in the terminal where the core changefeed is streaming, the following output has appeared: - - ~~~ - foo,[1],"{""after"": {""a"": 1}}" - ~~~ - - Note that records may take a couple of seconds to display in the core changefeed. - -9. To stop streaming the changefeed, enter **CTRL+C** into the terminal where the changefeed is running. - -10. To stop `cockroach`, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ diff --git a/src/current/_includes/v19.1/client-transaction-retry.md b/src/current/_includes/v19.1/client-transaction-retry.md deleted file mode 100644 index 6a54534169e..00000000000 --- a/src/current/_includes/v19.1/client-transaction-retry.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -With the default `SERIALIZABLE` [isolation level](transactions.html#isolation-levels), CockroachDB may require the client to [retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a [generic retry function](transactions.html#client-side-intervention) that runs inside a transaction and retries it as needed. The code sample below shows how it is used. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/computed-columns/add-computed-column.md b/src/current/_includes/v19.1/computed-columns/add-computed-column.md deleted file mode 100644 index c670b1c7285..00000000000 --- a/src/current/_includes/v19.1/computed-columns/add-computed-column.md +++ /dev/null @@ -1,55 +0,0 @@ -In this example, create a table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE x ( - a INT NULL, - b INT NULL AS (a * 2) STORED, - c INT NULL AS (a + 4) STORED, - FAMILY "primary" (a, b, rowid, c) - ); -~~~ - -Then, insert a row of data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO x VALUES (6); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+ -| a | b | c | -+---+----+----+ -| 6 | 12 | 10 | -+---+----+----+ -(1 row) -~~~ - -Now add another computed column to the table: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE x ADD COLUMN d INT AS (a // 2) STORED; -~~~ - -The `d` column is added to the table and computed from the `a` column divided by 2. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM x; -~~~ - -~~~ -+---+----+----+---+ -| a | b | c | d | -+---+----+----+---+ -| 6 | 12 | 10 | 3 | -+---+----+----+---+ -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/computed-columns/convert-computed-column.md b/src/current/_includes/v19.1/computed-columns/convert-computed-column.md deleted file mode 100644 index 12fd6e7d418..00000000000 --- a/src/current/_includes/v19.1/computed-columns/convert-computed-column.md +++ /dev/null @@ -1,108 +0,0 @@ -You can convert a stored, computed column into a regular column by using `ALTER TABLE`. - -In this example, create a simple table with a computed column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED - ); -~~~ - -Then, insert a few rows of data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name) VALUES - (1, 'Petee', 'Hirata'), - (2, 'Carl', 'Kimball'), - (3, 'Ernie', 'Narayan'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+---------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+---------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -+----+------------+-----------+---------------+ -(3 rows) -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). 
You can view the column details with the [`SHOW COLUMNS`](show-columns.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | concat(first_name, ' ', last_name) | {} | -+-------------+-----------+-------------+----------------+------------------------------------+-------------+ -(4 rows) -~~~ - -Now, convert the computed column (`full_name`) to a regular column: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ALTER COLUMN full_name DROP STORED; -~~~ - -Check that the computed column was converted: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM office_dogs; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| id | INT | false | NULL | | {"primary"} | -| first_name | STRING | true | NULL | | {} | -| last_name | STRING | true | NULL | | {} | -| full_name | STRING | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -(4 rows) -~~~ - -The computed column is now a regular column and can be updated as such: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO office_dogs (id, first_name, last_name, full_name) VALUES (4, 'Lola', 'McDog', 'This is not computed'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM office_dogs; -~~~ - -~~~ -+----+------------+-----------+----------------------+ -| id | first_name | last_name | full_name | -+----+------------+-----------+----------------------+ -| 1 | Petee | Hirata | Petee Hirata | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -| 4 | Lola | McDog | This is not computed | -+----+------------+-----------+----------------------+ -(4 rows) -~~~ diff --git a/src/current/_includes/v19.1/computed-columns/jsonb.md b/src/current/_includes/v19.1/computed-columns/jsonb.md deleted file mode 100644 index 76a5b08ad8a..00000000000 --- a/src/current/_includes/v19.1/computed-columns/jsonb.md +++ /dev/null @@ -1,35 +0,0 @@ -In this example, create a table with a `JSONB` column and a computed column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE student_profiles ( - id STRING PRIMARY KEY AS (profile->>'id') STORED, - profile JSONB -); -~~~ - -Then, insert a few rows of data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO student_profiles (profile) VALUES - ('{"id": "d78236", "name": "Arthur Read", "age": "16", "school": "PVPHS", "credits": 120, "sports": "none"}'), - ('{"name": "Buster Bunny", "age": "15", "id": "f98112", "school": "THS", "credits": 67, "clubs": "MUN"}'), - ('{"name": "Ernie Narayan", "school" : "Brooklyn Tech", "id": "t63512", "sports": "Track and Field", "clubs": "Chess"}'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM student_profiles; -~~~ -~~~ 
-+--------+---------------------------------------------------------------------------------------------------------------------+
-|   id   |                                                       profile                                                       |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-| d78236 | {"age": "16", "credits": 120, "id": "d78236", "name": "Arthur Read", "school": "PVPHS", "sports": "none"}          |
-| f98112 | {"age": "15", "clubs": "MUN", "credits": 67, "id": "f98112", "name": "Buster Bunny", "school": "THS"}              |
-| t63512 | {"clubs": "Chess", "id": "t63512", "name": "Ernie Narayan", "school": "Brooklyn Tech", "sports": "Track and Field"} |
-+--------+---------------------------------------------------------------------------------------------------------------------+
-~~~
-
-The primary key `id` is computed as a field from the `profile` column.
diff --git a/src/current/_includes/v19.1/computed-columns/partitioning.md b/src/current/_includes/v19.1/computed-columns/partitioning.md
deleted file mode 100644
index 926c45793b4..00000000000
--- a/src/current/_includes/v19.1/computed-columns/partitioning.md
+++ /dev/null
@@ -1,53 +0,0 @@
-{{site.data.alerts.callout_info}}Partitioning is an enterprise feature. To request and enable a trial or full enterprise license, see Enterprise Licensing.{{site.data.alerts.end}}
-
-In this example, create a table with geo-partitioning and a computed column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE user_locations (
-    locality STRING AS (CASE
-        WHEN country IN ('ca', 'mx', 'us') THEN 'north_america'
-        WHEN country IN ('au', 'nz') THEN 'australia'
-    END) STORED,
-    id SERIAL,
-    name STRING,
-    country STRING,
-    PRIMARY KEY (locality, id))
-    PARTITION BY LIST (locality)
-      (PARTITION north_america VALUES IN ('north_america'),
-      PARTITION australia VALUES IN ('australia'));
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO user_locations (name, country) VALUES
-    ('Leonard McCoy', 'us'),
-    ('Uhura', 'nz'),
-    ('Spock', 'ca'),
-    ('James Kirk', 'us'),
-    ('Scotty', 'mx'),
-    ('Hikaru Sulu', 'us'),
-    ('Pavel Chekov', 'au');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM user_locations;
-~~~
-~~~
-+---------------+--------------------+---------------+---------+
-|   locality    |         id         |     name      | country |
-+---------------+--------------------+---------------+---------+
-| australia     | 333153890100609025 | Uhura         | nz      |
-| australia     | 333153890100772865 | Pavel Chekov  | au      |
-| north_america | 333153890100576257 | Leonard McCoy | us      |
-| north_america | 333153890100641793 | Spock         | ca      |
-| north_america | 333153890100674561 | James Kirk    | us      |
-| north_america | 333153890100707329 | Scotty        | mx      |
-| north_america | 333153890100740097 | Hikaru Sulu   | us      |
-+---------------+--------------------+---------------+---------+
-~~~
-
-The `locality` column is computed from the `country` column.
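-
-Because `locality` is part of the primary key, a query that pins it with an equality filter can be served from a single partition. A minimal sketch (the filter value is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name, country FROM user_locations WHERE locality = 'australia';
-~~~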
diff --git a/src/current/_includes/v19.1/computed-columns/secondary-index.md b/src/current/_includes/v19.1/computed-columns/secondary-index.md
deleted file mode 100644
index e274db59d7e..00000000000
--- a/src/current/_includes/v19.1/computed-columns/secondary-index.md
+++ /dev/null
@@ -1,63 +0,0 @@
-In this example, create a table with a computed column and an index on that column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE gymnastics (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    athlete STRING,
-    vault DECIMAL,
-    bars DECIMAL,
-    beam DECIMAL,
-    floor DECIMAL,
-    combined_score DECIMAL AS (vault + bars + beam + floor) STORED,
-    INDEX total (combined_score DESC)
-  );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO gymnastics (athlete, vault, bars, beam, floor) VALUES
-    ('Simone Biles', 15.933, 14.800, 15.300, 15.800),
-    ('Gabby Douglas', 0, 15.766, 0, 0),
-    ('Laurie Hernandez', 15.100, 0, 15.233, 14.833),
-    ('Madison Kocian', 0, 15.933, 0, 0),
-    ('Aly Raisman', 15.833, 0, 15.000, 15.366);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM gymnastics;
-~~~
-~~~
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-|                  id                  |     athlete      | vault  |  bars  |  beam  | floor  | combined_score |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-| 3fe11371-6a6a-49de-bbef-a8dd16560fac | Aly Raisman      | 15.833 |      0 | 15.000 | 15.366 |         46.199 |
-| 56055a70-b4c7-4522-909b-8f3674b705e5 | Madison Kocian   |      0 | 15.933 |      0 |      0 |         15.933 |
-| 69f73fd1-da34-48bf-aff8-71296ce4c2c7 | Gabby Douglas    |      0 | 15.766 |      0 |      0 |         15.766 |
-| 8a7b730b-668d-4845-8d25-48bda25114d6 | Laurie Hernandez | 15.100 |      0 | 15.233 | 14.833 |         45.166 |
-| b2c5ca80-21c2-4853-9178-b96ce220ea4d | Simone Biles     | 15.933 | 14.800 | 15.300 | 15.800 |         61.833 |
-+--------------------------------------+------------------+--------+--------+--------+--------+----------------+
-~~~
-
-Now, run a query using the secondary index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~
-~~~
-+------------------+----------------+
-|     athlete      | combined_score |
-+------------------+----------------+
-| Simone Biles     |         61.833 |
-| Aly Raisman      |         46.199 |
-| Laurie Hernandez |         45.166 |
-| Madison Kocian   |         15.933 |
-| Gabby Douglas    |         15.766 |
-+------------------+----------------+
-~~~
-
-The athlete with the highest combined score of 61.833 is Simone Biles.
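-
-To check that the descending sort is served by the `total` index rather than a full table scan, you can inspect the query plan (a sketch; the exact plan output varies by version):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT athlete, combined_score FROM gymnastics ORDER BY combined_score DESC;
-~~~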
diff --git a/src/current/_includes/v19.1/computed-columns/simple.md b/src/current/_includes/v19.1/computed-columns/simple.md deleted file mode 100644 index d2bf9c16969..00000000000 --- a/src/current/_includes/v19.1/computed-columns/simple.md +++ /dev/null @@ -1,37 +0,0 @@ -In this example, let's create a simple table with a computed column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE names ( - id INT PRIMARY KEY, - first_name STRING, - last_name STRING, - full_name STRING AS (CONCAT(first_name, ' ', last_name)) STORED - ); -~~~ - -Then, insert a few rows of data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO names (id, first_name, last_name) VALUES - (1, 'Lola', 'McDog'), - (2, 'Carl', 'Kimball'), - (3, 'Ernie', 'Narayan'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM names; -~~~ -~~~ -+----+------------+-------------+----------------+ -| id | first_name | last_name | full_name | -+----+------------+-------------+----------------+ -| 1 | Lola | McDog | Lola McDog | -| 2 | Carl | Kimball | Carl Kimball | -| 3 | Ernie | Narayan | Ernie Narayan | -+----+------------+-------------+----------------+ -~~~ - -The `full_name` column is computed from the `first_name` and `last_name` columns without the need to define a [view](views.html). diff --git a/src/current/_includes/v19.1/faq/auto-generate-unique-ids.html b/src/current/_includes/v19.1/faq/auto-generate-unique-ids.html deleted file mode 100644 index 29d42f46c75..00000000000 --- a/src/current/_includes/v19.1/faq/auto-generate-unique-ids.html +++ /dev/null @@ -1,89 +0,0 @@ -To auto-generate unique row IDs, use the [`UUID`](uuid.html) column with the `gen_random_uuid()` [function](functions-and-operators.html#id-generation-functions) as the [default value](default-value.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t1 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t1; -~~~ - -~~~ -+--------------------------------------+------+ -| id | name | -+--------------------------------------+------+ -| 60853a85-681d-4620-9677-946bbfdc8fbc | c | -| 77c9bc2e-76a5-4ebc-80c3-7ad3159466a1 | b | -| bd3a56e1-c75e-476c-b221-0da9d74d66eb | a | -+--------------------------------------+------+ -(3 rows) -~~~ - -Alternatively, you can use the [`BYTES`](bytes.html) column with the `uuid_v4()` function as the default value instead: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t2 (id BYTES PRIMARY KEY DEFAULT uuid_v4(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t2 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t2; -~~~ - -~~~ -+---------------------------------------------------+------+ -| id | name | -+---------------------------------------------------+------+ -| "\x9b\x10\xdc\x11\x9a\x9cGB\xbd\x8d\t\x8c\xf6@vP" | a | -| "\xd9s\xd7\x13\n_L*\xb0\x87c\xb6d\xe1\xd8@" | c | -| "\uac74\x1dd@B\x97\xac\x04N&\x9eBg\x86" | b | -+---------------------------------------------------+------+ -(3 rows) -~~~ - -In either case, generated IDs will be 128-bit, large enough for there to be virtually no chance of generating non-unique values. 
Also, once the table grows beyond a single key-value range (more than 64MB by default), new IDs will be scattered across all of the table's ranges and, therefore, likely across different nodes. This means that multiple nodes will share in the load. - -This approach has the disadvantage of creating a primary key that may not be useful in a query directly, which can require a join with another table or a secondary index. - -If it is important for generated IDs to be stored in the same key-value range, you can use an [integer type](int.html) with the `unique_rowid()` [function](functions-and-operators.html#id-generation-functions) as the default value, either explicitly or via the [`SERIAL` pseudo-type](serial.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE t3 (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t3 (name) VALUES ('a'), ('b'), ('c'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t3; -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 293807573840855041 | a | -| 293807573840887809 | b | -| 293807573840920577 | c | -+--------------------+------+ -(3 rows) -~~~ - -Upon insert or upsert, the `unique_rowid()` function generates a default value from the timestamp and ID of the node executing the insert. Such time-ordered values are likely to be globally unique except in cases where a very large number of IDs (100,000+) are generated per node per second. Also, there can be gaps and the order is not completely guaranteed. diff --git a/src/current/_includes/v19.1/faq/clock-synchronization-effects.md b/src/current/_includes/v19.1/faq/clock-synchronization-effects.md deleted file mode 100644 index b61c45679f9..00000000000 --- a/src/current/_includes/v19.1/faq/clock-synchronization-effects.md +++ /dev/null @@ -1,24 +0,0 @@ -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -The one rare case to note is when a node's clock suddenly jumps beyond the maximum offset before the node detects it. Although extremely unlikely, this could occur, for example, when running CockroachDB inside a VM and the VM hypervisor decides to migrate the VM to different hardware with a different time. In this case, there can be a small window of time between when the node's clock becomes unsynchronized and when the node spontaneously shuts down. During this window, it would be possible for a client to read stale data and write data derived from stale reads. To protect against this, we recommend using the `server.clock.forward_jump_check_enabled` and `server.clock.persist_upper_bound_interval` [cluster settings](cluster-settings.html). 
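-
-A minimal sketch of enabling both protections (the interval value is illustrative; choose one appropriate for your environment):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING server.clock.forward_jump_check_enabled = true;
-> SET CLUSTER SETTING server.clock.persist_upper_bound_interval = '10s';
-~~~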
-
-### Considerations
-
-When setting up clock synchronization:
-
-- All nodes in the cluster must be synced to the same time source, or to different sources that implement leap second smearing in the same way. For example, Google and Amazon have time sources that are compatible with each other (they implement [leap second smearing](https://developers.google.com/time/smear) in the same way), but are incompatible with the default NTP pool (which does not implement leap second smearing).
-- For nodes running in AWS, we recommend [Amazon Time Sync Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service). For nodes running in GCP, we recommend [Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances). For nodes running elsewhere, we recommend [Google Public NTP](https://developers.google.com/time/). Note that the Google and Amazon time services can be mixed with each other, but they cannot be mixed with other time services (unless you have verified leap second behavior). Either all of your nodes should use the Google and Amazon services, or none of them should.
-- If you do not want to use the Google or Amazon time sources, you can use [`chrony`](https://chrony.tuxfamily.org/index.html) and enable client-side leap smearing, unless the time source you're using already does server-side smearing. In most cases, we recommend the Google Public NTP time source because it handles smearing the leap second. If you use a different NTP time source that doesn't smear the leap second, you must configure client-side smearing manually and do so in the same way on each machine.
-- Do not run more than one clock sync service on VMs where `cockroach` is running.
-
-### Tutorials
-
-For guidance on synchronizing clocks, see the tutorial for your deployment environment:
-
-Environment | Featured Approach
-------------|---------------------
-[On-Premises](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
-[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
-[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
-[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
-[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.
diff --git a/src/current/_includes/v19.1/faq/clock-synchronization-monitoring.html b/src/current/_includes/v19.1/faq/clock-synchronization-monitoring.html
deleted file mode 100644
index 7fb82e4d188..00000000000
--- a/src/current/_includes/v19.1/faq/clock-synchronization-monitoring.html
+++ /dev/null
@@ -1,8 +0,0 @@
-As explained in more detail [in our monitoring documentation](monitoring-and-alerting.html#prometheus-endpoint), each CockroachDB node exports a wide variety of metrics at `http://<host>:<http-port>/_status/vars` in the format used by the popular Prometheus timeseries database.
Two of these metrics export how close each node's clock is to the clock of all other nodes: - -Metric | Definition --------|----------- -`clock_offset_meannanos` | The mean difference between the node's clock and other nodes' clocks in nanoseconds -`clock_offset_stddevnanos` | The standard deviation of the difference between the node's clock and other nodes' clocks in nanoseconds - -As described in [the above answer](#what-happens-when-node-clocks-are-not-properly-synchronized), a node will shut down if the mean offset of its clock from the other nodes' clocks exceeds 80% of the maximum offset allowed. It's recommended to monitor the `clock_offset_meannanos` metric and alert if it's approaching the 80% threshold of your cluster's configured max offset. diff --git a/src/current/_includes/v19.1/faq/differences-between-numberings.md b/src/current/_includes/v19.1/faq/differences-between-numberings.md deleted file mode 100644 index 741ec4f8066..00000000000 --- a/src/current/_includes/v19.1/faq/differences-between-numberings.md +++ /dev/null @@ -1,11 +0,0 @@ - -| Property | UUID generated with `uuid_v4()` | INT generated with `unique_rowid()` | Sequences | -|--------------------------------------|-----------------------------------------|-----------------------------------------------|--------------------------------| -| Size | 16 bytes | 8 bytes | 1 to 8 bytes | -| Ordering properties | Unordered | Highly time-ordered | Highly time-ordered | -| Performance cost at generation | Small, scalable | Small, scalable | Variable, can cause contention | -| Value distribution | Uniformly distributed (128 bits) | Contains time and space (node ID) components | Dense, small values | -| Data locality | Maximally distributed | Values generated close in time are co-located | Highly local | -| `INSERT` latency when used as key | Small, insensitive to concurrency | Small, but increases with concurrent INSERTs | Higher | -| `INSERT` throughput when used as key | Highest | Limited by max throughput on 1 node | Limited by max throughput on 1 node | -| Read throughput when used as key | Highest (maximal parallelism) | Limited | Limited | diff --git a/src/current/_includes/v19.1/faq/planned-maintenance.md b/src/current/_includes/v19.1/faq/planned-maintenance.md deleted file mode 100644 index 2d22000fc21..00000000000 --- a/src/current/_includes/v19.1/faq/planned-maintenance.md +++ /dev/null @@ -1,22 +0,0 @@ -By default, if a node stays offline for more than 5 minutes, the cluster will consider it dead and will rebalance its data to other nodes. Before temporarily stopping nodes for planned maintenance (e.g., upgrading system software), if you expect any nodes to be offline for longer than 5 minutes, you can prevent the cluster from unnecessarily rebalancing data off the nodes by increasing the `server.time_until_store_dead` [cluster setting](cluster-settings.html) to match the estimated maintenance window. - -For example, let's say you want to maintain a group of servers, and the nodes running on the servers may be offline for up to 15 minutes as a result. 
Before shutting down the nodes, you would change the `server.time_until_store_dead` cluster setting as follows: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.time_until_store_dead = '15m0s'; -~~~ - -After completing the maintenance work and [restarting the nodes](start-a-node.html), you would then change the setting back to its default: - -{% include copy-clipboard.html %} -~~~ sql -> RESET CLUSTER SETTING server.time_until_store_dead; -~~~ - -It's also important to ensure that load balancers do not send client traffic to a node about to be shut down, even if it will only be down for a few seconds. If you find that your load balancer's health check is not always recognizing a node as unready before the node shuts down, you can increase the `server.shutdown.drain_wait` setting, which tells the node to wait in an unready state for the specified duration. For example: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING server.shutdown.drain_wait = '10s'; -~~~ diff --git a/src/current/_includes/v19.1/faq/sequential-numbers.md b/src/current/_includes/v19.1/faq/sequential-numbers.md deleted file mode 100644 index ee5bd96d9c4..00000000000 --- a/src/current/_includes/v19.1/faq/sequential-numbers.md +++ /dev/null @@ -1,7 +0,0 @@ -Sequential numbers can be generated in CockroachDB using the `unique_rowid()` built-in function or using [SQL sequences](create-sequence.html). However, note the following considerations: - -- Unless you need roughly-ordered numbers, we recommend using [`UUID`](uuid.html) values instead. See the [previous FAQ](#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) for details. -- [Sequences](create-sequence.html) produce **unique** values. However, not all values are guaranteed to be produced (e.g., when a transaction is canceled after it consumes a value) and the values may be slightly reordered (e.g., when a transaction that consumes a lower sequence number commits after a transaction that consumes a higher number). -- For maximum performance, avoid using sequences or `unique_rowid()` to generate row IDs or indexed columns. Values generated in these ways are logically close to each other and can cause contention on a few data ranges during inserts. Instead, prefer [`UUID`](uuid.html) identifiers. diff --git a/src/current/_includes/v19.1/faq/sequential-transactions.md b/src/current/_includes/v19.1/faq/sequential-transactions.md deleted file mode 100644 index 684f2ce5d2a..00000000000 --- a/src/current/_includes/v19.1/faq/sequential-transactions.md +++ /dev/null @@ -1,19 +0,0 @@ -Most use cases that ask for a strong time-based write ordering can be solved with other, more distribution-friendly solutions instead. For example, CockroachDB's [time travel queries (`AS OF SYSTEM TIME`)](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) support the following: - -- Paginating through all the changes to a table or dataset -- Determining the order of changes to data over time -- Determining the state of data at some point in the past -- Determining the changes to data between two points in time - -Note also that the values generated by `unique_rowid()`, described in the previous FAQ entries, provide an approximate time ordering.
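As a minimal sketch of the time travel approach (using a hypothetical `orders` table that is not part of the original example), a query like the following reads the state of the data as it was ten seconds ago:

{% include copy-clipboard.html %}
~~~ sql
> SELECT id, status FROM orders AS OF SYSTEM TIME '-10s';
~~~

Because such a read returns historical data, it does not block on, or conflict with, concurrent writes, which is what makes this approach more distribution-friendly than enforcing a strict write order.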
- -However, if your application absolutely requires strong time-based write ordering, it is possible to create a strictly monotonic counter in CockroachDB that increases over time as follows: - -- Initially: `CREATE TABLE cnt(val INT PRIMARY KEY); INSERT INTO cnt(val) VALUES(1);` -- In each transaction: `INSERT INTO cnt(val) SELECT max(val)+1 FROM cnt RETURNING val;` - -This will cause [`INSERT`](insert.html) transactions to conflict with each other and effectively force the transactions to commit one at a time throughout the cluster, which in turn guarantees the values generated in this way are strictly increasing over time without gaps. The caveat is that performance is severely limited as a result. - -If you find yourself interested in this problem, please [contact us](support-resources.html) and describe your situation. We would be glad to help you find alternative solutions and possibly extend CockroachDB to better match your needs. diff --git a/src/current/_includes/v19.1/faq/simulate-key-value-store.html b/src/current/_includes/v19.1/faq/simulate-key-value-store.html deleted file mode 100644 index 4772fa5358c..00000000000 --- a/src/current/_includes/v19.1/faq/simulate-key-value-store.html +++ /dev/null @@ -1,13 +0,0 @@ -CockroachDB is a distributed SQL database built on a transactional and strongly-consistent key-value store. Although it is not possible to access the key-value store directly, you can mirror direct access using a "simple" table of two columns, with one set as the primary key: - -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v BYTES); -~~~ - -When such a "simple" table has no indexes or foreign keys, [`INSERT`](insert.html)/[`UPSERT`](upsert.html)/[`UPDATE`](update.html)/[`DELETE`](delete.html) statements translate to key-value operations with minimal overhead (single digit percent slowdowns). For example, the following `UPSERT` to add or replace a row in the table would translate into a single key-value Put operation: - -~~~ sql -> UPSERT INTO kv VALUES (1, b'hello') -~~~ - -This SQL table approach also offers you a well-defined query language, a known transaction model, and the flexibility to add more columns to the table if the need arises. diff --git a/src/current/_includes/v19.1/faq/sql-query-logging.md b/src/current/_includes/v19.1/faq/sql-query-logging.md deleted file mode 100644 index b84937f9300..00000000000 --- a/src/current/_includes/v19.1/faq/sql-query-logging.md +++ /dev/null @@ -1,63 +0,0 @@ -There are several ways to log SQL queries. The type of logging you use will depend on your requirements. - -- For per-table audit logs, turn on [SQL audit logs](#sql-audit-logs). -- For system troubleshooting and performance optimization, turn on [cluster-wide execution logs](#cluster-wide-execution-logs). -- For local testing, turn on [per-node execution logs](#per-node-execution-logs). - -### SQL audit logs - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - -SQL audit logging is useful if you want to log all queries that are run against specific tables. - -- For a tutorial, see [SQL Audit Logging](sql-audit-logging.html). - -- For SQL reference documentation, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html). 
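For example, assuming a hypothetical `customers` table (not part of the original text), enabling audit logging of both reads and writes would look like this:

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE;
~~~

Audited statements are then recorded in a dedicated audit log file on each node; see the tutorial linked above for details.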
- -### Cluster-wide execution logs - -For production clusters, the best way to log all queries is to turn on the [cluster-wide setting](cluster-settings.html) `sql.trace.log_statement_execute`: - -~~~ sql -> SET CLUSTER SETTING sql.trace.log_statement_execute = true; -~~~ - -With this setting on, each node of the cluster writes all SQL queries it executes to a separate log file `cockroach-sql-exec.log`. When you no longer need to log queries, you can turn the setting back off: - -~~~ sql -> SET CLUSTER SETTING sql.trace.log_statement_execute = false; -~~~ - -### Per-node execution logs - -Alternatively, if you are testing CockroachDB locally and want to log queries executed just by a specific node, you can either pass a CLI flag at node startup, or execute a SQL function on a running node. - -Using the CLI to start a new node, pass the `--vmodule` flag to the [`cockroach start`](start-a-node.html) command. For example, to start a single node locally and log all SQL queries it executes, you'd run: - -~~~ shell -$ cockroach start --insecure --listen-addr=localhost --vmodule=exec_log=2 -~~~ - -From the SQL prompt on a running node, execute the `crdb_internal.set_vmodule()` [function](functions-and-operators.html): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT crdb_internal.set_vmodule('exec_log=2'); -~~~ - -This will result in the following output: - -~~~ -+---------------------------+ -| crdb_internal.set_vmodule | -+---------------------------+ -| 0 | -+---------------------------+ -(1 row) -~~~ - -Once the logging is enabled, all of the node's queries will be written to the [CockroachDB log file](debug-and-error-logs.html) as follows: - -~~~ -I180402 19:12:28.112957 394661 sql/exec_log.go:173 [n1,client=127.0.0.1:50155,user=root] exec "psql" {} "SELECT version()" {} 0.795 1 "" -~~~ diff --git a/src/current/_includes/v19.1/faq/when-to-interleave-tables.html b/src/current/_includes/v19.1/faq/when-to-interleave-tables.html deleted file mode 100644 index a65196ad693..00000000000 --- a/src/current/_includes/v19.1/faq/when-to-interleave-tables.html +++ /dev/null @@ -1,5 +0,0 @@ -You're most likely to benefit from interleaved tables when: - - - Your tables form a [hierarchy](interleave-in-parent.html#interleaved-hierarchy) - - Queries maximize the [benefits of interleaving](interleave-in-parent.html#benefits) - - Queries do not suffer too greatly from interleaving's [tradeoffs](interleave-in-parent.html#tradeoffs) diff --git a/src/current/_includes/v19.1/json/json-sample.go b/src/current/_includes/v19.1/json/json-sample.go deleted file mode 100644 index ecba73acc55..00000000000 --- a/src/current/_includes/v19.1/json/json-sample.go +++ /dev/null @@ -1,79 +0,0 @@ -package main - -import ( - "database/sql" - "fmt" - "io/ioutil" - "net/http" - "time" - - _ "github.com/lib/pq" -) - -func main() { - db, err := sql.Open("postgres", "user=maxroach dbname=jsonb_test sslmode=disable port=26257") - if err != nil { - panic(err) - } - - // The Reddit API wants us to tell it where to start from. The first request - // we just say "null" to say "from the start"; subsequent requests will use - // the value received from the last call. - after := "null" - - for i := 0; i < 300; i++ { - after, err = makeReq(db, after) - if err != nil { - panic(err) - } - // Reddit limits to 30 requests per minute, so do not do any more than that. - time.Sleep(2 * time.Second) - } -} - -func makeReq(db *sql.DB, after string) (string, error) { - // First, make a request to reddit using the appropriate "after" string.
client := &http.Client{} - req, err := http.NewRequest("GET", fmt.Sprintf("https://www.reddit.com/r/programming.json?after=%s", after), nil) - if err != nil { - return "", err - } - - req.Header.Add("User-Agent", `Go`) - - resp, err := client.Do(req) - if err != nil { - return "", err - } - // Make sure the response body is closed once we are done reading it. - defer resp.Body.Close() - - res, err := ioutil.ReadAll(resp.Body) - if err != nil { - return "", err - } - - // We've gotten back our JSON from reddit, so we can use a couple SQL tricks to - // accomplish multiple things at once. - // The JSON reddit returns looks like this: - // { - // "data": { - // "children": [ ... ] - // }, - // "after": ... - // } - // We structure our query so that we extract the `children` field, and then - // expand that and insert each individual element into the database as a - // separate row. We then return the "after" field so we know how to make the - // next request. - r, err := db.Query(` - INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements($1->'data'->'children') - RETURNING $1->'data'->'after'`, - string(res)) - if err != nil { - return "", err - } - defer r.Close() - - // Since we did a RETURNING, we need to grab the result of our query and - // check for errors as we read it. - var newAfter string - if !r.Next() { - return "", r.Err() - } - if err := r.Scan(&newAfter); err != nil { - return "", err - } - - return newAfter, nil -} diff --git a/src/current/_includes/v19.1/json/json-sample.py b/src/current/_includes/v19.1/json/json-sample.py deleted file mode 100644 index 68b7fd1ef37..00000000000 --- a/src/current/_includes/v19.1/json/json-sample.py +++ /dev/null @@ -1,44 +0,0 @@ -import json -import psycopg2 -import requests -import time - -conn = psycopg2.connect(database="jsonb_test", user="maxroach", host="localhost", port=26257) -conn.set_session(autocommit=True) -cur = conn.cursor() - -# The Reddit API wants us to tell it where to start from. The first request -# we just say "null" to say "from the start"; subsequent requests will use -# the value received from the last call. -url = "https://www.reddit.com/r/programming.json" -after = {"after": "null"} - -for n in range(300): - # First, make a request to reddit using the appropriate "after" string. - req = requests.get(url, params=after, headers={"User-Agent": "Python"}) - - # Decode the JSON and set "after" for the next request. - resp = req.json() - after = {"after": str(resp['data']['after'])} - - # Convert the JSON to a string to send to the database. - data = json.dumps(resp) - - # The JSON reddit returns looks like this: - # { - # "data": { - # "children": [ ... ] - # }, - # "after": ... - # } - # We structure our query so that we extract the `children` field, and then - # expand that and insert each individual element into the database as a - # separate row. - cur.execute("""INSERT INTO jsonb_test.programming (posts) - SELECT json_array_elements(%s->'data'->'children')""", (data,)) - - # Reddit limits to 30 requests per minute, so do not do any more than that. - time.sleep(2) - -cur.close() -conn.close() diff --git a/src/current/_includes/v19.1/known-limitations/adding-stores-to-node.md b/src/current/_includes/v19.1/known-limitations/adding-stores-to-node.md deleted file mode 100644 index ee4844e2433..00000000000 --- a/src/current/_includes/v19.1/known-limitations/adding-stores-to-node.md +++ /dev/null @@ -1,5 +0,0 @@ -After a node has initially joined a cluster, it is not possible to add additional [stores](start-a-node.html#store) to the node. Stopping the node and restarting it with additional stores prevents the node from reconnecting to the cluster.
- -To work around this limitation, [decommission the node](remove-nodes.html), remove its data directory, and then run [`cockroach start`](start-a-node.html) to join the cluster again as a new node. - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/39415) diff --git a/src/current/_includes/v19.1/known-limitations/cdc.md b/src/current/_includes/v19.1/known-limitations/cdc.md deleted file mode 100644 index 7d647fd8c9a..00000000000 --- a/src/current/_includes/v19.1/known-limitations/cdc.md +++ /dev/null @@ -1,11 +0,0 @@ -The following are limitations in the v19.1 release and will be addressed in the future: - -- Changefeeds only work on tables with a single [column family](column-families.html) (which is the default for new tables). -- Many DDL queries (including [`TRUNCATE`](truncate.html) and [`DROP TABLE`](drop-table.html)) will cause errors on a changefeed watching the affected tables. You will need to [start a new changefeed](create-changefeed.html#start-a-new-changefeed-where-another-ended). -- Changefeeds cannot be [backed up](backup.html) or [restored](restore.html). -- Partial or intermittent sink unavailability may impact changefeed stability; however, [ordering guarantees](change-data-capture.html#ordering-guarantees) will still hold for as long as a changefeed [remains active](change-data-capture.html#monitor-a-changefeed). -- Changefeeds cannot be altered. To alter, cancel the changefeed and [create a new one with updated settings from where it left off](create-changefeed.html#start-a-new-changefeed-where-another-ended). -- Additional target options will be added, including partitions and ranges of primary key rows. -- There is an open correctness issue with changefeeds using resolved timestamps connected to cloud storage sinks. While this issue is unlikely, new row information could display with a lower timestamp than what has already been emitted, which violates our [ordering guarantees](change-data-capture.html#ordering-guarantees). This issue is fixed in v19.2 and beyond. -- In v19.1.0, when emitting deletes, [cloud storage sinks](create-changefeed.html#cloud-storage-sink) do not emit the record's keys; therefore, the deleted record is not identifiable. This has been fixed in v19.1.1 and above. -- Using a [cloud storage sink](create-changefeed.html#cloud-storage-sink) only works with `JSON` and emits [newline-delimited json](http://ndjson.org) files. diff --git a/src/current/_includes/v19.1/known-limitations/cte-by-name.md b/src/current/_includes/v19.1/known-limitations/cte-by-name.md deleted file mode 100644 index d33a6f8c7e8..00000000000 --- a/src/current/_includes/v19.1/known-limitations/cte-by-name.md +++ /dev/null @@ -1,10 +0,0 @@ -It is currently not possible to refer to a [common table expression](common-table-expressions.html) by name more than once. 
- -For example, the following query is invalid because the CTE `a` is referred to twice: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (VALUES (1), (2), (3)) - SELECT * FROM a, a; -~~~ diff --git a/src/current/_includes/v19.1/known-limitations/dump-cyclic-foreign-keys.md b/src/current/_includes/v19.1/known-limitations/dump-cyclic-foreign-keys.md deleted file mode 100644 index 4e3c43644ea..00000000000 --- a/src/current/_includes/v19.1/known-limitations/dump-cyclic-foreign-keys.md +++ /dev/null @@ -1 +0,0 @@ -The [`cockroach dump`](sql-dump.html) command will successfully create a dump file for a table with a [foreign key](foreign-key.html) reference to itself, or a set of tables with a cyclic foreign key dependency (e.g., a depends on b depends on a). That dump file, however, can only be executed after manually editing the output to remove the foreign key definitions from the `CREATE TABLE` statements and adding them as `ALTER TABLE ... ADD CONSTRAINT` statements after the `INSERT` statements. diff --git a/src/current/_includes/v19.1/known-limitations/dump-table-with-no-columns.md b/src/current/_includes/v19.1/known-limitations/dump-table-with-no-columns.md deleted file mode 100644 index f09b2229d8b..00000000000 --- a/src/current/_includes/v19.1/known-limitations/dump-table-with-no-columns.md +++ /dev/null @@ -1 +0,0 @@ -It is not currently possible to use [`cockroach dump`](sql-dump.html) to dump the schema and data of a table with no user-defined columns. See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for more details. diff --git a/src/current/_includes/v19.1/known-limitations/import-interleaved-table.md b/src/current/_includes/v19.1/known-limitations/import-interleaved-table.md deleted file mode 100644 index 2198a72933a..00000000000 --- a/src/current/_includes/v19.1/known-limitations/import-interleaved-table.md +++ /dev/null @@ -1 +0,0 @@ -After using [`cockroach dump`](sql-dump.html) to dump the schema and data of an interleaved table, the output must be edited before it can be imported via [`IMPORT`](import.html). See [#35462](https://github.com/cockroachdb/cockroach/issues/35462) for the workaround and more details. diff --git a/src/current/_includes/v19.1/known-limitations/node-map.md b/src/current/_includes/v19.1/known-limitations/node-map.md deleted file mode 100644 index 863f09c3ac2..00000000000 --- a/src/current/_includes/v19.1/known-limitations/node-map.md +++ /dev/null @@ -1,8 +0,0 @@ -You cannot assign latitude/longitude coordinates to localities if the components of your localities have the same name. For example, consider the following partial configuration: - -| Node | Region | Datacenter | -| ------ | ------ | ------ | -| Node1 | us-east | datacenter-1 | -| Node2 | us-west | datacenter-1 | - -In this case, if you try to set the latitude/longitude coordinates to the datacenter level of the localities, you will get the "primary key exists" error and the **Node Map** will not be displayed. You can, however, set the latitude/longitude coordinates to the region components of the localities, and the **Node Map** will be displayed.
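As a sketch of that workaround, assuming the `us-east` and `us-west` regions from the table above (the coordinates here are illustrative, borrowed from the AWS locations listed later in this document):

{% include copy-clipboard.html %}
~~~ sql
> INSERT INTO system.locations VALUES
    ('region', 'us-east', 37.478397, -76.453077),
    ('region', 'us-west', 43.804133, -120.554201);
~~~

With coordinates set at the region level rather than the datacenter level, the two localities no longer collide on the shared `datacenter-1` name, and the **Node Map** can render both nodes.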
diff --git a/src/current/_includes/v19.1/known-limitations/partitioning-with-placeholders.md b/src/current/_includes/v19.1/known-limitations/partitioning-with-placeholders.md deleted file mode 100644 index b3c3345200d..00000000000 --- a/src/current/_includes/v19.1/known-limitations/partitioning-with-placeholders.md +++ /dev/null @@ -1 +0,0 @@ -When defining a [table partition](partitioning.html), either during table creation or table alteration, it is not possible to use placeholders in the `PARTITION BY` clause. diff --git a/src/current/_includes/v19.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md b/src/current/_includes/v19.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md deleted file mode 100644 index 952766dbeed..00000000000 --- a/src/current/_includes/v19.1/known-limitations/schema-change-ddl-inside-multi-statement-transactions.md +++ /dev/null @@ -1,52 +0,0 @@ -Schema change [DDL](https://en.wikipedia.org/wiki/Data_definition_language#ALTER_statement) statements that run inside a multi-statement transaction with non-DDL statements can fail at [`COMMIT`](commit-transaction.html) time, even if other statements in the transaction succeed. This leaves such transactions in a "partially committed, partially aborted" state that may require manual intervention to determine whether the DDL statements succeeded. - -If such a failure occurs, CockroachDB will emit the Postgres error code `40003`, `"statement completion unknown"`. - -{{site.data.alerts.callout_danger}} -If you must execute schema change DDL statements inside a multi-statement transaction, we **strongly recommend** checking for this error code and handling it appropriately every time you execute such transactions. -{{site.data.alerts.end}} - -This error will occur in various scenarios, including but not limited to: - -- Creating a unique index fails because values aren't unique. -- The evaluation of a computed value fails. -- Adding a constraint (or a column with a constraint) fails because the constraint is violated for the default/computed values in the column. - -To see an example of this error, start by creating the following table. - -{% include copy-clipboard.html %} -~~~ sql -CREATE TABLE T(x INT); -INSERT INTO T(x) VALUES (1), (2), (3); -~~~ - -Then, enter the following multi-statement transaction, which will trigger the error. - -{% include copy-clipboard.html %} -~~~ sql -BEGIN; -ALTER TABLE t ADD CONSTRAINT unique_x UNIQUE(x); -INSERT INTO T(x) VALUES (3); -COMMIT; -~~~ - -~~~ -pq: sql/row/errors.go:138 in NewUniquenessConstraintViolationError(): (23505) duplicate key value (x)=(3) violates unique constraint "unique_x" -~~~ - -In this example, the [`INSERT`](insert.html) statement committed, but the [`ALTER TABLE`](alter-table.html) statement adding a [`UNIQUE` constraint](unique.html) failed. We can verify this by looking at the data in table `t` and seeing that the additional non-unique value `3` was successfully inserted. 
- -{% include copy-clipboard.html %} -~~~ sql -SELECT * FROM t; -~~~ - -~~~ - x -+---+ - 1 - 2 - 3 - 3 -(4 rows) -~~~ diff --git a/src/current/_includes/v19.1/known-limitations/schema-changes-between-prepared-statements.md b/src/current/_includes/v19.1/known-limitations/schema-changes-between-prepared-statements.md deleted file mode 100644 index 736fe99df61..00000000000 --- a/src/current/_includes/v19.1/known-limitations/schema-changes-between-prepared-statements.md +++ /dev/null @@ -1,33 +0,0 @@ -When the schema of a table targeted by a prepared statement changes after the prepared statement is created, future executions of the prepared statement could result in an error. For example, adding a column to a table referenced in a prepared statement with a `SELECT *` clause will result in an error: - -{% include_cached copy-clipboard.html %} -~~~ sql -CREATE TABLE users (id INT PRIMARY KEY); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -PREPARE prep1 AS SELECT * FROM users; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -ALTER TABLE users ADD COLUMN name STRING; -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -INSERT INTO users VALUES (1, 'Max Roach'); -~~~ - -{% include_cached copy-clipboard.html %} -~~~ sql -EXECUTE prep1; -~~~ - -~~~ -ERROR: cached plan must not change result type -SQLSTATE: 0A000 -~~~ - -It's therefore recommended to explicitly list result columns instead of using `SELECT *` in prepared statements, when possible. diff --git a/src/current/_includes/v19.1/known-limitations/schema-changes-within-transactions.md b/src/current/_includes/v19.1/known-limitations/schema-changes-within-transactions.md deleted file mode 100644 index 3747cb0fad8..00000000000 --- a/src/current/_includes/v19.1/known-limitations/schema-changes-within-transactions.md +++ /dev/null @@ -1,10 +0,0 @@ -Within a single [transaction](transactions.html): - -- DDL statements cannot be mixed with DML statements. As a workaround, you can split the statements into separate transactions. For more details, [see examples of unsupported statements](online-schema-changes.html#examples-of-statements-that-fail). -- A [`CREATE TABLE`](create-table.html) statement containing [`FOREIGN KEY`](foreign-key.html) or [`INTERLEAVE`](interleave-in-parent.html) clauses cannot be followed by statements that reference the new table. -- A table cannot be dropped and then recreated with the same name. This is not possible within a single transaction because `DROP TABLE` does not immediately drop the name of the table. As a workaround, split the [`DROP TABLE`](drop-table.html) and [`CREATE TABLE`](create-table.html) statements into separate transactions. -- [Schema change DDL statements inside a multi-statement transaction can fail while other statements succeed](#schema-change-ddl-statements-inside-a-multi-statement-transaction-can-fail-while-other-statements-succeed) - -{{site.data.alerts.callout_success}} -As of version v2.1, you can run schema changes inside the same transaction as a `CREATE TABLE` statement. For more information, [see this example](online-schema-changes.html#run-schema-changes-inside-a-transaction-with-create-table). Also, as of v19.1, some schema changes can be used in combination in a single `ALTER TABLE` statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically). 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/metric-names.md b/src/current/_includes/v19.1/metric-names.md deleted file mode 100644 index 7eebed323d8..00000000000 --- a/src/current/_includes/v19.1/metric-names.md +++ /dev/null @@ -1,246 +0,0 @@ -Name | Help ------|----- -`addsstable.applications` | Number of SSTable ingestions applied (i.e., applied by Replicas) -`addsstable.copies` | Number of SSTable ingestions that required copying files during application -`addsstable.proposals` | Number of SSTable ingestions proposed (i.e., sent to Raft by lease holders) -`build.timestamp` | Build information -`capacity.available` | Available storage capacity -`capacity.reserved` | Capacity reserved for snapshots -`capacity.used` | Used storage capacity -`capacity` | Total storage capacity -`clock-offset.meannanos` | Mean clock offset with other nodes in nanoseconds -`clock-offset.stddevnanos` | Std dev clock offset with other nodes in nanoseconds -`compactor.compactingnanos` | Number of nanoseconds spent compacting ranges -`compactor.compactions.failure` | Number of failed compaction requests sent to the storage engine -`compactor.compactions.success` | Number of successful compaction requests sent to the storage engine -`compactor.suggestionbytes.compacted` | Number of logical bytes compacted from suggested compactions -`compactor.suggestionbytes.queued` | Number of logical bytes in suggested compactions in the queue -`compactor.suggestionbytes.skipped` | Number of logical bytes in suggested compactions which were not compacted -`distsender.batches.partial` | Number of partial batches processed -`distsender.batches` | Number of batches processed -`distsender.errors.notleaseholder` | Number of NotLeaseHolderErrors encountered -`distsender.rpc.sent.local` | Number of local RPCs sent -`distsender.rpc.sent.nextreplicaerror` | Number of RPCs sent due to per-replica errors -`distsender.rpc.sent` | Number of RPCs sent -`exec.error` | Number of batch KV requests that failed to execute on this node -`exec.latency` | Latency in nanoseconds of batch KV requests executed on this node -`exec.success` | Number of batch KV requests executed successfully on this node -`gcbytesage` | Cumulative age of non-live data in seconds -`gossip.bytes.received` | Number of received gossip bytes -`gossip.bytes.sent` | Number of sent gossip bytes -`gossip.connections.incoming` | Number of active incoming gossip connections -`gossip.connections.outgoing` | Number of active outgoing gossip connections -`gossip.connections.refused` | Number of refused incoming gossip connections -`gossip.infos.received` | Number of received gossip Info objects -`gossip.infos.sent` | Number of sent gossip Info objects -`intentage` | Cumulative age of intents in seconds -`intentbytes` | Number of bytes in intent KV pairs -`intentcount` | Count of intent keys -`keybytes` | Number of bytes taken up by keys -`keycount` | Count of all keys -`lastupdatenanos` | Time in nanoseconds since Unix epoch at which bytes/keys/intents metrics were last updated -`leases.epoch` | Number of replica leaseholders using epoch-based leases -`leases.error` | Number of failed lease requests -`leases.expiration` | Number of replica leaseholders using expiration-based leases -`leases.success` | Number of successful lease requests -`leases.transfers.error` | Number of failed lease transfers -`leases.transfers.success` | Number of successful lease transfers -`livebytes` | Number of bytes of live data (keys plus values) -`livecount` | Count of live keys 
-`liveness.epochincrements` | Number of times this node has incremented its liveness epoch -`liveness.heartbeatfailures` | Number of failed node liveness heartbeats from this node -`liveness.heartbeatlatency` | Node liveness heartbeat latency in nanoseconds -`liveness.heartbeatsuccesses` | Number of successful node liveness heartbeats from this node -`liveness.livenodes` | Number of live nodes in the cluster (will be 0 if this node is not itself live) -`node-id` | node ID with labels for advertised RPC and HTTP addresses -`queue.consistency.pending` | Number of pending replicas in the consistency checker queue -`queue.consistency.process.failure` | Number of replicas which failed processing in the consistency checker queue -`queue.consistency.process.success` | Number of replicas successfully processed by the consistency checker queue -`queue.consistency.processingnanos` | Nanoseconds spent processing replicas in the consistency checker queue -`queue.gc.info.abortspanconsidered` | Number of AbortSpan entries old enough to be considered for removal -`queue.gc.info.abortspangcnum` | Number of AbortSpan entries fit for removal -`queue.gc.info.abortspanscanned` | Number of transactions present in the AbortSpan scanned from the engine -`queue.gc.info.intentsconsidered` | Number of 'old' intents -`queue.gc.info.intenttxns` | Number of associated distinct transactions -`queue.gc.info.numkeysaffected` | Number of keys with GC'able data -`queue.gc.info.pushtxn` | Number of attempted pushes -`queue.gc.info.resolvesuccess` | Number of successful intent resolutions -`queue.gc.info.resolvetotal` | Number of attempted intent resolutions -`queue.gc.info.transactionspangcaborted` | Number of GC'able entries corresponding to aborted txns -`queue.gc.info.transactionspangccommitted` | Number of GC'able entries corresponding to committed txns -`queue.gc.info.transactionspangcpending` | Number of GC'able entries corresponding to pending txns -`queue.gc.info.transactionspanscanned` | Number of entries in transaction spans scanned from the engine -`queue.gc.pending` | Number of pending replicas in the GC queue -`queue.gc.process.failure` | Number of replicas which failed processing in the GC queue -`queue.gc.process.success` | Number of replicas successfully processed by the GC queue -`queue.gc.processingnanos` | Nanoseconds spent processing replicas in the GC queue -`queue.raftlog.pending` | Number of pending replicas in the Raft log queue -`queue.raftlog.process.failure` | Number of replicas which failed processing in the Raft log queue -`queue.raftlog.process.success` | Number of replicas successfully processed by the Raft log queue -`queue.raftlog.processingnanos` | Nanoseconds spent processing replicas in the Raft log queue -`queue.raftsnapshot.pending` | Number of pending replicas in the Raft repair queue -`queue.raftsnapshot.process.failure` | Number of replicas which failed processing in the Raft repair queue -`queue.raftsnapshot.process.success` | Number of replicas successfully processed by the Raft repair queue -`queue.raftsnapshot.processingnanos` | Nanoseconds spent processing replicas in the Raft repair queue -`queue.replicagc.pending` | Number of pending replicas in the replica GC queue -`queue.replicagc.process.failure` | Number of replicas which failed processing in the replica GC queue -`queue.replicagc.process.success` | Number of replicas successfully processed by the replica GC queue -`queue.replicagc.processingnanos` | Nanoseconds spent processing replicas in the replica GC queue 
-`queue.replicagc.removereplica` | Number of replica removals attempted by the replica gc queue -`queue.replicate.addreplica` | Number of replica additions attempted by the replicate queue -`queue.replicate.pending` | Number of pending replicas in the replicate queue -`queue.replicate.process.failure` | Number of replicas which failed processing in the replicate queue -`queue.replicate.process.success` | Number of replicas successfully processed by the replicate queue -`queue.replicate.processingnanos` | Nanoseconds spent processing replicas in the replicate queue -`queue.replicate.purgatory` | Number of replicas in the replicate queue's purgatory, awaiting allocation options -`queue.replicate.rebalancereplica` | Number of replica rebalancer-initiated additions attempted by the replicate queue -`queue.replicate.removedeadreplica` | Number of dead replica removals attempted by the replicate queue (typically in response to a node outage) -`queue.replicate.removereplica` | Number of replica removals attempted by the replicate queue (typically in response to a rebalancer-initiated addition) -`queue.replicate.transferlease` | Number of range lease transfers attempted by the replicate queue -`queue.split.pending` | Number of pending replicas in the split queue -`queue.split.process.failure` | Number of replicas which failed processing in the split queue -`queue.split.process.success` | Number of replicas successfully processed by the split queue -`queue.split.processingnanos` | Nanoseconds spent processing replicas in the split queue -`queue.tsmaintenance.pending` | Number of pending replicas in the time series maintenance queue -`queue.tsmaintenance.process.failure` | Number of replicas which failed processing in the time series maintenance queue -`queue.tsmaintenance.process.success` | Number of replicas successfully processed by the time series maintenance queue -`queue.tsmaintenance.processingnanos` | Nanoseconds spent processing replicas in the time series maintenance queue -`raft.commandsapplied` | Count of Raft commands applied -`raft.enqueued.pending` | Number of pending outgoing messages in the Raft Transport queue -`raft.heartbeats.pending` | Number of pending heartbeats and responses waiting to be coalesced -`raft.process.commandcommit.latency` | Latency histogram in nanoseconds for committing Raft commands -`raft.process.logcommit.latency` | Latency histogram in nanoseconds for committing Raft log entries -`raft.process.tickingnanos` | Nanoseconds spent in store.processRaft() processing replica.Tick() -`raft.process.workingnanos` | Nanoseconds spent in store.processRaft() working -`raft.rcvd.app` | Number of MsgApp messages received by this store -`raft.rcvd.appresp` | Number of MsgAppResp messages received by this store -`raft.rcvd.dropped` | Number of dropped incoming Raft messages -`raft.rcvd.heartbeat` | Number of (coalesced, if enabled) MsgHeartbeat messages received by this store -`raft.rcvd.heartbeatresp` | Number of (coalesced, if enabled) MsgHeartbeatResp messages received by this store -`raft.rcvd.prevote` | Number of MsgPreVote messages received by this store -`raft.rcvd.prevoteresp` | Number of MsgPreVoteResp messages received by this store -`raft.rcvd.prop` | Number of MsgProp messages received by this store -`raft.rcvd.snap` | Number of MsgSnap messages received by this store -`raft.rcvd.timeoutnow` | Number of MsgTimeoutNow messages received by this store -`raft.rcvd.transferleader` | Number of MsgTransferLeader messages received by this store -`raft.rcvd.vote` | 
Number of MsgVote messages received by this store -`raft.rcvd.voteresp` | Number of MsgVoteResp messages received by this store -`raft.ticks` | Number of Raft ticks queued -`raftlog.behind` | Number of Raft log entries followers on other stores are behind -`raftlog.truncated` | Number of Raft log entries truncated -`range.adds` | Number of range additions -`range.raftleadertransfers` | Number of raft leader transfers -`range.removes` | Number of range removals -`range.snapshots.generated` | Number of generated snapshots -`range.snapshots.normal-applied` | Number of applied snapshots -`range.snapshots.preemptive-applied` | Number of applied preemptive snapshots -`range.splits` | Number of range splits -`ranges.unavailable` | Number of ranges with fewer live replicas than needed for quorum -`ranges.underreplicated` | Number of ranges with fewer live replicas than the replication target -`ranges` | Number of ranges -`rebalancing.writespersecond` | Number of keys written (i.e., applied by raft) per second to the store, averaged over a large time period as used in rebalancing decisions -`replicas.commandqueue.combinedqueuesize` | Number of commands in all CommandQueues combined -`replicas.commandqueue.combinedreadcount` | Number of read-only commands in all CommandQueues combined -`replicas.commandqueue.combinedwritecount` | Number of read-write commands in all CommandQueues combined -`replicas.commandqueue.maxoverlaps` | Largest number of overlapping commands seen when adding to any CommandQueue -`replicas.commandqueue.maxreadcount` | Largest number of read-only commands in any CommandQueue -`replicas.commandqueue.maxsize` | Largest number of commands in any CommandQueue -`replicas.commandqueue.maxtreesize` | Largest number of intervals in any CommandQueue's interval tree -`replicas.commandqueue.maxwritecount` | Largest number of read-write commands in any CommandQueue -`replicas.leaders_not_leaseholders` | Number of replicas that are Raft leaders whose range lease is held by another store -`replicas.leaders` | Number of raft leaders -`replicas.leaseholders` | Number of lease holders -`replicas.quiescent` | Number of quiesced replicas -`replicas.reserved` | Number of replicas reserved for snapshots -`replicas` | Number of replicas -`requests.backpressure.split` | Number of backpressured writes waiting on a Range split -`requests.slow.commandqueue` | Number of requests that have been stuck for a long time in the command queue -`requests.slow.distsender` | Number of requests that have been stuck for a long time in the dist sender -`requests.slow.lease` | Number of requests that have been stuck for a long time acquiring a lease -`requests.slow.raft` | Number of requests that have been stuck for a long time in raft -`rocksdb.block.cache.hits` | Count of block cache hits -`rocksdb.block.cache.misses` | Count of block cache misses -`rocksdb.block.cache.pinned-usage` | Bytes pinned by the block cache -`rocksdb.block.cache.usage` | Bytes used by the block cache -`rocksdb.bloom.filter.prefix.checked` | Number of times the bloom filter was checked -`rocksdb.bloom.filter.prefix.useful` | Number of times the bloom filter helped avoid iterator creation -`rocksdb.compactions` | Number of table compactions -`rocksdb.flushes` | Number of table flushes -`rocksdb.memtable.total-size` | Current size of memtable in bytes -`rocksdb.num-sstables` | Number of rocksdb SSTables -`rocksdb.read-amplification` | Number of disk reads per query -`rocksdb.table-readers-mem-estimate` | Memory used by index and filter blocks 
-`round-trip-latency` | Distribution of round-trip latencies with other nodes in nanoseconds -`security.certificate.expiration.ca` | Expiration timestamp in seconds since Unix epoch for the CA certificate. 0 means no certificate or error. -`security.certificate.expiration.node` | Expiration timestamp in seconds since Unix epoch for the node certificate. 0 means no certificate or error. -`sql.bytesin` | Number of sql bytes received -`sql.bytesout` | Number of sql bytes sent -`sql.conns` | Number of active sql connections -`sql.ddl.count` | Number of SQL DDL statements -`sql.delete.count` | Number of SQL DELETE statements -`sql.distsql.exec.latency` | Latency in nanoseconds of DistSQL statement execution -`sql.distsql.flows.active` | Number of distributed SQL flows currently active -`sql.distsql.flows.total` | Number of distributed SQL flows executed -`sql.distsql.queries.active` | Number of distributed SQL queries currently active -`sql.distsql.queries.total` | Number of distributed SQL queries executed -`sql.distsql.select.count` | Number of DistSQL SELECT statements -`sql.distsql.service.latency` | Latency in nanoseconds of DistSQL request execution -`sql.exec.latency` | Latency in nanoseconds of SQL statement execution -`sql.insert.count` | Number of SQL INSERT statements -`sql.mem.current` | Current sql statement memory usage -`sql.mem.distsql.current` | Current sql statement memory usage for distsql -`sql.mem.distsql.max` | Memory usage per sql statement for distsql -`sql.mem.max` | Memory usage per sql statement -`sql.mem.session.current` | Current sql session memory usage -`sql.mem.session.max` | Memory usage per sql session -`sql.mem.txn.current` | Current sql transaction memory usage -`sql.mem.txn.max` | Memory usage per sql transaction -`sql.misc.count` | Number of other SQL statements -`sql.query.count` | Number of SQL queries -`sql.select.count` | Number of SQL SELECT statements -`sql.service.latency` | Latency in nanoseconds of SQL request execution -`sql.txn.abort.count` | Number of SQL transaction ABORT statements -`sql.txn.begin.count` | Number of SQL transaction BEGIN statements -`sql.txn.commit.count` | Number of SQL transaction COMMIT statements -`sql.txn.rollback.count` | Number of SQL transaction ROLLBACK statements -`sql.update.count` | Number of SQL UPDATE statements -`sys.cgo.allocbytes` | Current bytes of memory allocated by cgo -`sys.cgo.totalbytes` | Total bytes of memory allocated by cgo, but not released -`sys.cgocalls` | Total number of cgo calls -`sys.cpu.sys.ns` | Total system cpu time in nanoseconds -`sys.cpu.sys.percent` | Current system cpu percentage -`sys.cpu.user.ns` | Total user cpu time in nanoseconds -`sys.cpu.user.percent` | Current user cpu percentage -`sys.fd.open` | Process open file descriptors -`sys.fd.softlimit` | Process open FD soft limit -`sys.gc.count` | Total number of GC runs -`sys.gc.pause.ns` | Total GC pause in nanoseconds -`sys.gc.pause.percent` | Current GC pause percentage -`sys.go.allocbytes` | Current bytes of memory allocated by go -`sys.go.totalbytes` | Total bytes of memory allocated by go, but not released -`sys.goroutines` | Current number of goroutines -`sys.rss` | Current process RSS -`sys.uptime` | Process uptime in seconds -`sysbytes` | Number of bytes in system KV pairs -`syscount` | Count of system KV pairs -`timeseries.write.bytes` | Total size in bytes of metric samples written to disk -`timeseries.write.errors` | Total errors encountered while attempting to write metrics to disk -`timeseries.write.samples` | Total
number of metric samples written to disk -`totalbytes` | Total number of bytes taken up by keys and values including non-live data -`tscache.skl.read.pages` | Number of pages in the read timestamp cache -`tscache.skl.read.rotations` | Number of page rotations in the read timestamp cache -`tscache.skl.write.pages` | Number of pages in the write timestamp cache -`tscache.skl.write.rotations` | Number of page rotations in the write timestamp cache -`txn.abandons` | Number of abandoned KV transactions -`txn.aborts` | Number of aborted KV transactions -`txn.autoretries` | Number of automatic retries to avoid serializable restarts -`txn.commits1PC` | Number of committed one-phase KV transactions -`txn.commits` | Number of committed KV transactions (including 1PC) -`txn.durations` | KV transaction durations in nanoseconds -`txn.restarts.deleterange` | Number of restarts due to a forwarded commit timestamp and a DeleteRange command -`txn.restarts.possiblereplay` | Number of restarts due to possible replays of command batches at the storage layer -`txn.restarts.serializable` | Number of restarts due to a forwarded commit timestamp and isolation=SERIALIZABLE -`txn.restarts.writetooold` | Number of restarts due to a concurrent writer committing first -`txn.restarts` | Number of restarted KV transactions -`valbytes` | Number of bytes taken up by values -`valcount` | Count of all values diff --git a/src/current/_includes/v19.1/misc/available-capacity-metric.md b/src/current/_includes/v19.1/misc/available-capacity-metric.md deleted file mode 100644 index 11511de2d37..00000000000 --- a/src/current/_includes/v19.1/misc/available-capacity-metric.md +++ /dev/null @@ -1 +0,0 @@ -If you are running multiple nodes on a single machine (not recommended in production) and didn't specify the maximum allocated storage capacity for each node using the [`--store`](start-a-node.html#store) flag, the capacity metrics in the Admin UI are incorrect. This is because when multiple nodes are running on a single machine, the machine's hard disk is treated as an available store for each node, while in reality, only one hard disk is available for all nodes. The total available capacity is then calculated as the hard disk size multiplied by the number of nodes on the machine. diff --git a/src/current/_includes/v19.1/misc/aws-locations.md b/src/current/_includes/v19.1/misc/aws-locations.md deleted file mode 100644 index 8b073c1f230..00000000000 --- a/src/current/_includes/v19.1/misc/aws-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| US East (N. Virginia) | `INSERT into system.locations VALUES ('region', 'us-east-1', 37.478397, -76.453077)`| -| US East (Ohio) | `INSERT into system.locations VALUES ('region', 'us-east-2', 40.417287, -82.907123)` | -| US West (N.
California) | `INSERT into system.locations VALUES ('region', 'us-west-1', 38.837522, -120.895824)` | -| US West (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west-2', 43.804133, -120.554201)` | -| Canada (Central) | `INSERT into system.locations VALUES ('region', 'ca-central-1', 56.130366, -106.346771)` | -| EU (Frankfurt) | `INSERT into system.locations VALUES ('region', 'eu-central-1', 50.110922, 8.682127)` | -| EU (Ireland) | `INSERT into system.locations VALUES ('region', 'eu-west-1', 53.142367, -7.692054)` | -| EU (London) | `INSERT into system.locations VALUES ('region', 'eu-west-2', 51.507351, -0.127758)` | -| EU (Paris) | `INSERT into system.locations VALUES ('region', 'eu-west-3', 48.856614, 2.352222)` | -| Asia Pacific (Tokyo) | `INSERT into system.locations VALUES ('region', 'ap-northeast-1', 35.689487, 139.691706)` | -| Asia Pacific (Seoul) | `INSERT into system.locations VALUES ('region', 'ap-northeast-2', 37.566535, 126.977969)` | -| Asia Pacific (Osaka-Local) | `INSERT into system.locations VALUES ('region', 'ap-northeast-3', 34.693738, 135.502165)` | -| Asia Pacific (Singapore) | `INSERT into system.locations VALUES ('region', 'ap-southeast-1', 1.352083, 103.819836)` | -| Asia Pacific (Sydney) | `INSERT into system.locations VALUES ('region', 'ap-southeast-2', -33.86882, 151.209296)` | -| Asia Pacific (Mumbai) | `INSERT into system.locations VALUES ('region', 'ap-south-1', 19.075984, 72.877656)` | -| South America (São Paulo) | `INSERT into system.locations VALUES ('region', 'sa-east-1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v19.1/misc/azure-locations.md b/src/current/_includes/v19.1/misc/azure-locations.md deleted file mode 100644 index 7119ff8b7cb..00000000000 --- a/src/current/_includes/v19.1/misc/azure-locations.md +++ /dev/null @@ -1,30 +0,0 @@ -| Location | SQL Statement | -| -------- | ------------- | -| eastasia (East Asia) | `INSERT into system.locations VALUES ('region', 'eastasia', 22.267, 114.188)` | -| southeastasia (Southeast Asia) | `INSERT into system.locations VALUES ('region', 'southeastasia', 1.283, 103.833)` | -| centralus (Central US) | `INSERT into system.locations VALUES ('region', 'centralus', 41.5908, -93.6208)` | -| eastus (East US) | `INSERT into system.locations VALUES ('region', 'eastus', 37.3719, -79.8164)` | -| eastus2 (East US 2) | `INSERT into system.locations VALUES ('region', 'eastus2', 36.6681, -78.3889)` | -| westus (West US) | `INSERT into system.locations VALUES ('region', 'westus', 37.783, -122.417)` | -| northcentralus (North Central US) | `INSERT into system.locations VALUES ('region', 'northcentralus', 41.8819, -87.6278)` | -| southcentralus (South Central US) | `INSERT into system.locations VALUES ('region', 'southcentralus', 29.4167, -98.5)` | -| northeurope (North Europe) | `INSERT into system.locations VALUES ('region', 'northeurope', 53.3478, -6.2597)` | -| westeurope (West Europe) | `INSERT into system.locations VALUES ('region', 'westeurope', 52.3667, 4.9)` | -| japanwest (Japan West) | `INSERT into system.locations VALUES ('region', 'japanwest', 34.6939, 135.5022)` | -| japaneast (Japan East) | `INSERT into system.locations VALUES ('region', 'japaneast', 35.68, 139.77)` | -| brazilsouth (Brazil South) | `INSERT into system.locations VALUES ('region', 'brazilsouth', -23.55, -46.633)` | -| australiaeast (Australia East) | `INSERT into system.locations VALUES ('region', 'australiaeast', -33.86, 151.2094)` | -| australiasoutheast (Australia Southeast) | `INSERT into system.locations 
VALUES ('region', 'australiasoutheast', -37.8136, 144.9631)` | -| southindia (South India) | `INSERT into system.locations VALUES ('region', 'southindia', 12.9822, 80.1636)` | -| centralindia (Central India) | `INSERT into system.locations VALUES ('region', 'centralindia', 18.5822, 73.9197)` | -| westindia (West India) | `INSERT into system.locations VALUES ('region', 'westindia', 19.088, 72.868)` | -| canadacentral (Canada Central) | `INSERT into system.locations VALUES ('region', 'canadacentral', 43.653, -79.383)` | -| canadaeast (Canada East) | `INSERT into system.locations VALUES ('region', 'canadaeast', 46.817, -71.217)` | -| uksouth (UK South) | `INSERT into system.locations VALUES ('region', 'uksouth', 50.941, -0.799)` | -| ukwest (UK West) | `INSERT into system.locations VALUES ('region', 'ukwest', 53.427, -3.084)` | -| westcentralus (West Central US) | `INSERT into system.locations VALUES ('region', 'westcentralus', 40.890, -110.234)` | -| westus2 (West US 2) | `INSERT into system.locations VALUES ('region', 'westus2', 47.233, -119.852)` | -| koreacentral (Korea Central) | `INSERT into system.locations VALUES ('region', 'koreacentral', 37.5665, 126.9780)` | -| koreasouth (Korea South) | `INSERT into system.locations VALUES ('region', 'koreasouth', 35.1796, 129.0756)` | -| francecentral (France Central) | `INSERT into system.locations VALUES ('region', 'francecentral', 46.3772, 2.3730)` | -| francesouth (France South) | `INSERT into system.locations VALUES ('region', 'francesouth', 43.8345, 2.1972)` | diff --git a/src/current/_includes/v19.1/misc/basic-terms.md b/src/current/_includes/v19.1/misc/basic-terms.md deleted file mode 100644 index 8eebde3db17..00000000000 --- a/src/current/_includes/v19.1/misc/basic-terms.md +++ /dev/null @@ -1,9 +0,0 @@ -Term | Definition ------|------------ -**Cluster** | Your CockroachDB deployment, which acts as a single logical application. -**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster. -**Range** | CockroachDB stores all user data (tables, indexes, etc.) and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range.

From a SQL perspective, a table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as that range reaches 64 MiB in size, it splits into two ranges. This process continues for these new ranges as the table and its indexes continue growing. -**Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -**Leaseholder** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.

Unlike writes, read requests access the leaseholder and send the results to the client without needing to coordinate with any of the other range replicas. This reduces the network round trips involved and is possible because the leaseholder is guaranteed to be up-to-date, since all write requests also go to the leaseholder. -**Raft Leader** | For each range, one of the replicas is the "leader" for write requests. Via the [Raft consensus protocol](replication-layer.html#raft), this replica ensures that a majority of replicas (the leader and enough followers) agree, based on their Raft logs, before committing the write. The Raft leader is almost always the same replica as the leaseholder. -**Raft Log** | For each range, a time-ordered log of writes to the range that its replicas have agreed on. This log exists on-disk with each replica and is the range's source of truth for consistent replication. diff --git a/src/current/_includes/v19.1/misc/beta-warning.md b/src/current/_includes/v19.1/misc/beta-warning.md deleted file mode 100644 index 107fc2bfa4b..00000000000 --- a/src/current/_includes/v19.1/misc/beta-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is a beta feature.** It is currently undergoing continued testing. Please [file a GitHub issue](file-an-issue.html) with us if you identify a bug. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/chrome-localhost.md b/src/current/_includes/v19.1/misc/chrome-localhost.md deleted file mode 100644 index 24f9bb159a3..00000000000 --- a/src/current/_includes/v19.1/misc/chrome-localhost.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -If you are using Google Chrome and you are getting an error about not being able to reach `localhost` because its certificate has been revoked, go to chrome://flags/#allow-insecure-localhost, enable "Allow invalid certificates for resources loaded from localhost", and then restart the browser. Enabling this Chrome feature degrades security for all sites running on `localhost`, not just CockroachDB's Admin UI, so be sure to enable the feature only temporarily. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/customizing-the-savepoint-name.md b/src/current/_includes/v19.1/misc/customizing-the-savepoint-name.md deleted file mode 100644 index 855397f712c..00000000000 --- a/src/current/_includes/v19.1/misc/customizing-the-savepoint-name.md +++ /dev/null @@ -1,7 +0,0 @@ -New in v19.1: Set the `force_savepoint_restart` [session variable](set-vars.html#supported-variables) to `true` to enable using a custom name for the restart savepoint (for example, because you are using an ORM that wants to use its own names for savepoints). - -Once this variable is set, the [`SAVEPOINT`](savepoint.html) statement will accept any name for the savepoint, not just `cockroach_restart`. This allows compatibility with existing code that uses a single savepoint per transaction as long as that savepoint occurs before any statements that access data stored in non-virtual tables. - -{{site.data.alerts.callout_danger}} -The `force_savepoint_restart` variable changes the semantics of CockroachDB savepoints so that `RELEASE SAVEPOINT <name>` functions as a real commit. Note that this variable does not change the fact that CockroachDB savepoints can only be used as part of the transaction retry protocol.
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/debug-subcommands.md b/src/current/_includes/v19.1/misc/debug-subcommands.md deleted file mode 100644 index 7a10e4a7fba..00000000000 --- a/src/current/_includes/v19.1/misc/debug-subcommands.md +++ /dev/null @@ -1,3 +0,0 @@ -While the `cockroach debug` command has a few subcommands, users are expected to use only the [`zip`](debug-zip.html), [`encryption-active-key`](debug-encryption-active-key.html), [`merge-logs`](debug-merge-logs.html), and [`ballast`](debug-ballast.html) subcommands. - -The other `debug` subcommands are useful only to CockroachDB's developers and contributors. diff --git a/src/current/_includes/v19.1/misc/delete-statistics.md b/src/current/_includes/v19.1/misc/delete-statistics.md deleted file mode 100644 index a568055e583..00000000000 --- a/src/current/_includes/v19.1/misc/delete-statistics.md +++ /dev/null @@ -1,17 +0,0 @@ -To delete statistics for all tables in all databases: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM system.table_statistics WHERE true; -~~~ - -To delete a named set of statistics (e.g., one named "my_stats"), run a query like the following: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM system.table_statistics WHERE name = 'my_stats'; -~~~ - -After deleting statistics, restart the nodes in your cluster to clear the statistics caches. - -For more information about the `DELETE` statement, see [`DELETE`](delete.html). diff --git a/src/current/_includes/v19.1/misc/diagnostics-callout.html b/src/current/_includes/v19.1/misc/diagnostics-callout.html deleted file mode 100644 index a969a8cf152..00000000000 --- a/src/current/_includes/v19.1/misc/diagnostics-callout.html +++ /dev/null @@ -1 +0,0 @@ -{{site.data.alerts.callout_info}}By default, each node of a CockroachDB cluster periodically shares anonymous usage details with Cockroach Labs. For an explanation of the details that get shared and how to opt out of reporting, see Diagnostics Reporting.{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/drivers.md b/src/current/_includes/v19.1/misc/drivers.md deleted file mode 100644 index 53595fcfeb5..00000000000 --- a/src/current/_includes/v19.1/misc/drivers.md +++ /dev/null @@ -1,18 +0,0 @@ -{{site.data.alerts.callout_info}} -This page features drivers that we have tested enough to claim **beta-level** support. This means that applications using advanced or obscure features of a driver may encounter incompatibilities. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-{{site.data.alerts.end}} - -| App Language | Driver | ORM | -|--------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------| -| Go | [pq](build-a-go-app-with-cockroachdb.html) | [GORM](build-a-go-app-with-cockroachdb-gorm.html) | -| Python | [psycopg2](build-a-python-app-with-cockroachdb.html) | [SQLAlchemy](build-a-python-app-with-cockroachdb-sqlalchemy.html) | -| Ruby | [pg](build-a-ruby-app-with-cockroachdb.html) | [ActiveRecord](build-a-ruby-app-with-cockroachdb-activerecord.html) | -| Java | [JDBC](build-a-java-app-with-cockroachdb.html) | [Hibernate](build-a-java-app-with-cockroachdb-hibernate.html) | -| Node.js | [pg](build-a-nodejs-app-with-cockroachdb.html) | [Sequelize](build-a-nodejs-app-with-cockroachdb-sequelize.html) | -| C | [libpq](http://www.postgresql.org/docs/9.5/static/libpq.html) | No ORMs tested | -| C++ | [libpqxx](build-a-c++-app-with-cockroachdb.html) | No ORMs tested | -| C# (.NET) | [Npgsql](build-a-csharp-app-with-cockroachdb.html) | No ORMs tested | -| Clojure | [java.jdbc](build-a-clojure-app-with-cockroachdb.html) | No ORMs tested | -| PHP | [php-pgsql](build-a-php-app-with-cockroachdb.html) | No ORMs tested | -| Rust | postgres {% comment %} This link is in HTML instead of Markdown because HTML proofer dies bc of https://github.com/rust-lang/crates.io/issues/163 {% endcomment %} | No ORMs tested | -| TypeScript | No drivers tested | [TypeORM](https://typeorm.io/#/) | diff --git a/src/current/_includes/v19.1/misc/enterprise-features.md b/src/current/_includes/v19.1/misc/enterprise-features.md deleted file mode 100644 index fe460329693..00000000000 --- a/src/current/_includes/v19.1/misc/enterprise-features.md +++ /dev/null @@ -1,13 +0,0 @@ -Feature | Description ---------+------------------------- -[Geo-Partitioning](topology-geo-partitioned-replicas.html) | This feature gives you row-level control of how and where your data is stored to dramatically reduce read and write latencies and assist in meeting regulatory requirements in multi-region deployments. -[Follower Reads](follower-reads.html) | New in v19.1: This feature reduces read latency in multi-region deployments by using the closest replica at the expense of reading slightly historical data (currently, at least 48 seconds in the past). -[`BACKUP`](backup.html) | This feature creates full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp, stored on a service such as AWS S3, Google Cloud Storage, NFS, or HTTP storage. -[`RESTORE`](restore.html) | This feature restores your cluster's schemas and data from an enterprise `BACKUP`. -[Change Data Capture](change-data-capture.html) (CDC) | This feature provides efficient, distributed, row-level [change feeds into Apache Kafka](create-changefeed.html) for downstream processing such as reporting, caching, or full-text indexing. -[Node Map](enable-node-map.html) | This feature visualizes the geographical configuration of a cluster by plotting node localities on a world map. 
-[Locality-Aware Index Selection](cost-based-optimizer.html#preferring-the-nearest-index) | New in v19.1: Given [multiple identical indexes](topology-duplicate-indexes.html) that have different locality constraints using [replication zones](configure-replication-zones.html), the cost-based optimizer will prefer the index that is closest to the gateway node that is planning the query. In multi-region deployments, this can lead to performance improvements due to improved data locality and reduced network traffic. -[Encryption at Rest](encryption.html#encryption-at-rest-enterprise) | Supplementing CockroachDB's encryption in flight capabilities, this feature provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using AES in counter mode, with all key sizes allowed. -[GSSAPI with Kerberos Authentication](gssapi_authentication.html) | New in v19.1: CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication, which lets you use an external enterprise directory system that supports Kerberos, such as Active Directory. -[Role-Based Access Control (RBAC)](authorization.html#create-and-manage-roles) | This feature simplifies the process of defining data access policies for groups of authenticated users. -[`EXPORT`](export.html) | New in v19.1: This feature uses the CockroachDB distributed execution engine to quickly get large sets of data out of CockroachDB in a CSV format that can be ingested by downstream systems. diff --git a/src/current/_includes/v19.1/misc/experimental-warning.md b/src/current/_includes/v19.1/misc/experimental-warning.md deleted file mode 100644 index d38a9755593..00000000000 --- a/src/current/_includes/v19.1/misc/experimental-warning.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -**This is an experimental feature**. The interface and output are subject to change. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/explore-benefits-see-also.md b/src/current/_includes/v19.1/misc/explore-benefits-see-also.md deleted file mode 100644 index 0392ed9bb83..00000000000 --- a/src/current/_includes/v19.1/misc/explore-benefits-see-also.md +++ /dev/null @@ -1,8 +0,0 @@ -- [Data Replication](demo-data-replication.html) -- [Fault Tolerance & Recovery](demo-fault-tolerance-and-recovery.html) -- [Automatic Rebalancing](demo-automatic-rebalancing.html) -- [Serializable Transactions](demo-serializable.html) -- [Cross-Cloud Migration](demo-automatic-cloud-migration.html) -- [Follow-the-Workload](demo-follow-the-workload.html) -- [Orchestration](orchestrate-a-local-cluster-with-kubernetes-insecure.html) -- [JSON Support](demo-json-support.html) diff --git a/src/current/_includes/v19.1/misc/external-urls.md b/src/current/_includes/v19.1/misc/external-urls.md deleted file mode 100644 index 7741008573a..00000000000 --- a/src/current/_includes/v19.1/misc/external-urls.md +++ /dev/null @@ -1,48 +0,0 @@ -~~~ -[scheme]://[host]/[path]?[parameters] -~~~ - -| Location | Scheme | Host | Parameters | -|---------------------------------------------+-------------+--------------------------------------------------+----------------------------| -| Amazon S3 | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN` | -| Azure | `azure` | N/A (see [Example file URLs](#example-file-urls)) | `AZURE_ACCOUNT_KEY`, `AZURE_ACCOUNT_NAME` | -| Google Cloud [1](#considerations) | `gs` | Bucket name | `AUTH` (optional; can be `default`, `implicit`, or `specified`), `CREDENTIALS` | -| HTTP [2](#considerations) | `http` | Remote host | N/A | -| NFS/Local [3](#considerations) | `nodelocal` | Empty or `nodeID` [4](#considerations) (see [Example file URLs](#example-file-urls)) | N/A | -| S3-compatible services [5](#considerations) | `s3` | Bucket name | `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`, `AWS_REGION` [6](#considerations) (optional), `AWS_ENDPOINT` | - -{{site.data.alerts.callout_danger}} -If you write to `nodelocal` storage in a multi-node cluster, individual data files will be written to the `extern` directories of arbitrary nodes and will likely not work as intended. To work correctly, each node must have the [`--external-io-dir` flag](start-a-node.html#general) point to the same NFS mount or other network-backed, shared storage. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -The location parameters often contain special characters that need to be URI-encoded. Use JavaScript's [encodeURIComponent](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent) function or Go language's [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape) function to URI-encode the parameters. Other languages provide similar functions to URI-encode special characters. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -If your environment requires an HTTP or HTTPS proxy server for outgoing connections, you can set the standard `HTTP_PROXY` and `HTTPS_PROXY` environment variables when starting CockroachDB. -{{site.data.alerts.end}} - - - -- 1 If the `AUTH` parameter is not specified, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) will be used if it is non-empty; otherwise, the `implicit` behavior is used.
If the `AUTH` parameter is `implicit`, all GCS connections use Google's [default authentication strategy](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application). If the `AUTH` parameter is `default`, the `cloudstorage.gs.default.key` [cluster setting](cluster-settings.html) must be set to the contents of a [service account file](https://cloud.google.com/docs/authentication/production#obtaining_and_providing_service_account_credentials_manually) which will be used during authentication. New in v19.1: If the `AUTH` parameter is `specified`, GCS connections are authenticated on a per-statement basis, which allows the JSON key object to be sent in the `CREDENTIALS` parameter. The JSON key object should be base64-encoded (using the standard encoding in [RFC 4648](https://tools.ietf.org/html/rfc4648)). - -- 2 You can create your own HTTP server with [Caddy or nginx](create-a-file-server.html). A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from HTTPS URLs. - -- 3 The file system backup location on the NFS drive is relative to the path specified by the `--external-io-dir` flag set while [starting the node](start-a-node.html). If the flag is set to `disabled`, then imports from local directories and NFS drives are disabled. - -- 4 New in v19.1: The host component of NFS/Local can either be empty or the `nodeID`. If the `nodeID` is specified, it is currently ignored (i.e., any node can be sent work and it will look in its local input/output directory); however, the `nodeID` will likely be required in the future. - -- 5 A custom root CA can be appended to the system's default CAs by setting the `cloudstorage.http.custom_ca` [cluster setting](cluster-settings.html), which will be used when verifying certificates from an S3-compatible service. - -- 6 The `AWS_REGION` parameter is optional since it is not a required parameter for most S3-compatible services. Specify the parameter only if your S3-compatible service requires it. - -#### Example file URLs - -| Location | Example | -|--------------+----------------------------------------------------------------------------------| -| Amazon S3 | `s3://acme-co/employees.sql?AWS_ACCESS_KEY_ID=123&AWS_SECRET_ACCESS_KEY=456` | -| Azure | `azure://employees.sql?AZURE_ACCOUNT_KEY=123&AZURE_ACCOUNT_NAME=acme-co` | -| Google Cloud | `gs://acme-co/employees.sql` | -| HTTP | `http://localhost:8080/employees.sql` | -| NFS/Local | `nodelocal:///path/employees`, `nodelocal://2/path/employees`

**Note:** If you write to `nodelocal` storage in a multi-node cluster, individual data files will be written to the `extern` directories of arbitrary nodes and will likely not work as intended. To work correctly, each node must have the [`--external-io-dir` flag](start-a-node.html#general) point to the same NFS mount or other network-backed, shared storage. | diff --git a/src/current/_includes/v19.1/misc/force-index-selection.md b/src/current/_includes/v19.1/misc/force-index-selection.md deleted file mode 100644 index c4a5a3b5f9b..00000000000 --- a/src/current/_includes/v19.1/misc/force-index-selection.md +++ /dev/null @@ -1,61 +0,0 @@ -By using the explicit index annotation, you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) when reading from a named table. - -{{site.data.alerts.callout_info}} -Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query. -{{site.data.alerts.end}} - -The syntax to force a scan of a specific index is: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@my_idx; -~~~ - -This is equivalent to the longer expression: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx}; -~~~ - -New in v19.1: The syntax to force a **reverse scan** of a specific index is: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM table@{FORCE_INDEX=my_idx,DESC}; -~~~ - -Forcing a reverse scan is sometimes useful during [performance tuning](performance-best-practices-overview.html). For reference, the full syntax for choosing an index and its scan direction is - -{% include copy-clipboard.html %} -~~~ sql -SELECT * FROM table@{FORCE_INDEX=idx[,DIRECTION]} -~~~ - -where the optional `DIRECTION` is either `ASC` (ascending) or `DESC` (descending). - -When a direction is specified, that scan direction is forced; otherwise the [cost-based optimizer](cost-based-optimizer.html) is free to choose the direction it calculates will result in the best performance. - -You can verify that the optimizer is choosing your desired scan direction using [`EXPLAIN (OPT)`](explain.html#opt-option). For example, given the table - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE kv (K INT PRIMARY KEY, v INT); -~~~ - -you can check the scan direction with: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (opt) SELECT * FROM kv@{FORCE_INDEX=primary,DESC}; -~~~ - -~~~ - text -------------------------------------- - scan kv,rev - └── flags: force-index=primary,rev -(2 rows) -~~~ - -To see all indexes available on a table, use [`SHOW INDEXES`](show-index.html). diff --git a/src/current/_includes/v19.1/misc/gce-locations.md b/src/current/_includes/v19.1/misc/gce-locations.md deleted file mode 100644 index 22122aae78d..00000000000 --- a/src/current/_includes/v19.1/misc/gce-locations.md +++ /dev/null @@ -1,18 +0,0 @@ -| Location | SQL Statement | -| ------ | ------ | -| us-east1 (South Carolina) | `INSERT into system.locations VALUES ('region', 'us-east1', 33.836082, -81.163727)` | -| us-east4 (N. 
Virginia) | `INSERT into system.locations VALUES ('region', 'us-east4', 37.478397, -76.453077)` | -| us-central1 (Iowa) | `INSERT into system.locations VALUES ('region', 'us-central1', 42.032974, -93.581543)` | -| us-west1 (Oregon) | `INSERT into system.locations VALUES ('region', 'us-west1', 43.804133, -120.554201)` | -| northamerica-northeast1 (Montreal) | `INSERT into system.locations VALUES ('region', 'northamerica-northeast1', 56.130366, -106.346771)` | -| europe-west1 (Belgium) | `INSERT into system.locations VALUES ('region', 'europe-west1', 50.44816, 3.81886)` | -| europe-west2 (London) | `INSERT into system.locations VALUES ('region', 'europe-west2', 51.507351, -0.127758)` | -| europe-west3 (Frankfurt) | `INSERT into system.locations VALUES ('region', 'europe-west3', 50.110922, 8.682127)` | -| europe-west4 (Netherlands) | `INSERT into system.locations VALUES ('region', 'europe-west4', 53.4386, 6.8355)` | -| europe-west6 (Zürich) | `INSERT into system.locations VALUES ('region', 'europe-west6', 47.3769, 8.5417)` | -| asia-east1 (Taiwan) | `INSERT into system.locations VALUES ('region', 'asia-east1', 24.0717, 120.5624)` | -| asia-northeast1 (Tokyo) | `INSERT into system.locations VALUES ('region', 'asia-northeast1', 35.689487, 139.691706)` | -| asia-southeast1 (Singapore) | `INSERT into system.locations VALUES ('region', 'asia-southeast1', 1.352083, 103.819836)` | -| australia-southeast1 (Sydney) | `INSERT into system.locations VALUES ('region', 'australia-southeast1', -33.86882, 151.209296)` | -| asia-south1 (Mumbai) | `INSERT into system.locations VALUES ('region', 'asia-south1', 19.075984, 72.877656)` | -| southamerica-east1 (São Paulo) | `INSERT into system.locations VALUES ('region', 'southamerica-east1', -23.55052, -46.633309)` | diff --git a/src/current/_includes/v19.1/misc/haproxy.md b/src/current/_includes/v19.1/misc/haproxy.md deleted file mode 100644 index 6651e178ee4..00000000000 --- a/src/current/_includes/v19.1/misc/haproxy.md +++ /dev/null @@ -1,39 +0,0 @@ -By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - - The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster: - - Field | Description - ------|------------ - `timeout connect`
`timeout client`
`timeout server` | Timeout values that should be suitable for most deployments. - `bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.

This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node. - `balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms. - `option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests. - `server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed via the [`--advertise-addr` flag](start-a-node.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy. - - {{site.data.alerts.callout_info}} - For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html). - {{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/install-next-steps.html b/src/current/_includes/v19.1/misc/install-next-steps.html deleted file mode 100644 index 2111bdbed9c..00000000000 --- a/src/current/_includes/v19.1/misc/install-next-steps.html +++ /dev/null @@ -1,16 +0,0 @@ - diff --git a/src/current/_includes/v19.1/misc/linux-binary-prereqs.md b/src/current/_includes/v19.1/misc/linux-binary-prereqs.md deleted file mode 100644 index 541183fe71b..00000000000 --- a/src/current/_includes/v19.1/misc/linux-binary-prereqs.md +++ /dev/null @@ -1 +0,0 @@ -

The CockroachDB binary for Linux requires glibc, libncurses, and tzdata, which are found by default on nearly all Linux distributions, with Alpine as the notable exception.

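As a quick check on a given host, you can list the shared libraries the binary expects (the binary path is illustrative); `ldd` reports any library it cannot resolve, such as glibc or libncurses, as "not found":

{% include copy-clipboard.html %}
~~~ shell
$ ldd ./cockroach
~~~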
diff --git a/src/current/_includes/v19.1/misc/logging-flags.md b/src/current/_includes/v19.1/misc/logging-flags.md deleted file mode 100644 index 06af86228ee..00000000000 --- a/src/current/_includes/v19.1/misc/logging-flags.md +++ /dev/null @@ -1,9 +0,0 @@ -Flag | Description ------|------------ -`--log-dir` | Enable logging to files and write logs to the specified directory.

Setting `--log-dir` to a blank directory (`--log-dir=`) disables logging to files. Do not use `--log-dir=""`; this creates a new directory named `""` and stores log files in that directory. -`--log-dir-max-size` | After the log directory reaches the specified size, delete the oldest log file. The flag's argument takes standard file sizes, such as `--log-dir-max-size=1GiB`.

**Default**: 100MiB -`--log-file-max-size` | After logs reach the specified size, begin writing logs to a new file. The flag's argument takes standard file sizes, such as `--log-file-max-size=2MiB`.

**Default**: 10MiB -`--log-file-verbosity` | Write messages to log files only if they are at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--log-file-verbosity=WARNING`. **Requires** logging to files.

**Default**: `INFO` -`--logtostderr` | Enable logging to `stderr` for messages at or above the specified [severity level](debug-and-error-logs.html#severity-levels), such as `--logtostderr=ERROR`

If you use this flag without specifying the severity level (e.g., `cockroach start --logtostderr`), it prints messages of *all* severities to `stderr`.

Setting `--logtostderr=NONE` disables logging to `stderr`. -`--no-color` | Do not colorize `stderr`. Possible values: `true` or `false`.

When set to `false`, messages logged to `stderr` are colorized based on [severity level](debug-and-error-logs.html#severity-levels).

**Default:** `false` -`--sql-audit-dir` | New in v2.0: If non-empty, create a SQL audit log in this directory. By default, SQL audit logs are written in the same directory as the other logs generated by CockroachDB. For more information, see [SQL Audit Logging](sql-audit-logging.html). diff --git a/src/current/_includes/v19.1/misc/multi-store-nodes.md b/src/current/_includes/v19.1/misc/multi-store-nodes.md deleted file mode 100644 index 01642597169..00000000000 --- a/src/current/_includes/v19.1/misc/multi-store-nodes.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -In the absence of [special replication constraints](configure-replication-zones.html), CockroachDB rebalances replicas to take advantage of available storage capacity. However, in a 3-node cluster with multiple stores per node, CockroachDB is **not** able to rebalance replicas from one store to another store on the same node because this would temporarily result in the node having multiple replicas of the same range, which is not allowed. This is due to the mechanics of rebalancing, where the cluster first creates a copy of the replica at the target destination before removing the source replica. To allow this type of cross-store rebalancing, the cluster must have 4 or more nodes; this allows the cluster to create a copy of the replica on a node that doesn't already have a replica of the range before removing the source replica and then migrating the new replica to the store with more capacity on the original node. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/remove-user-callout.html b/src/current/_includes/v19.1/misc/remove-user-callout.html deleted file mode 100644 index 925f83d779d..00000000000 --- a/src/current/_includes/v19.1/misc/remove-user-callout.html +++ /dev/null @@ -1 +0,0 @@ -Removing a user does not remove that user's privileges. Therefore, to prevent a future user with an identical username from inheriting an old user's privileges, it's important to revoke a user's privileges before or after removing the user. diff --git a/src/current/_includes/v19.1/misc/savepoint-limitations.md b/src/current/_includes/v19.1/misc/savepoint-limitations.md deleted file mode 100644 index 1232d1d3831..00000000000 --- a/src/current/_includes/v19.1/misc/savepoint-limitations.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_danger}} -CockroachDB's [`SAVEPOINT`](savepoint.html) implementation does not support nested transactions (i.e., subtransactions). It is only used to handle [transaction retries](transactions.html#transaction-retries). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/schema-change-stmt-note.md b/src/current/_includes/v19.1/misc/schema-change-stmt-note.md deleted file mode 100644 index b522b658652..00000000000 --- a/src/current/_includes/v19.1/misc/schema-change-stmt-note.md +++ /dev/null @@ -1,3 +0,0 @@ -{{site.data.alerts.callout_info}} -This statement performs a schema change. For more information about how online schema changes work in CockroachDB, see [Online Schema Changes](online-schema-changes.html). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/misc/schema-change-view-job.md b/src/current/_includes/v19.1/misc/schema-change-view-job.md deleted file mode 100644 index 8861174d621..00000000000 --- a/src/current/_includes/v19.1/misc/schema-change-view-job.md +++ /dev/null @@ -1 +0,0 @@ -This schema change statement is registered as a job. You can view long-running jobs with [`SHOW JOBS`](show-jobs.html). 
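For example, one way to check on jobs that are still in flight is the following sketch (the bracketed-statement filter is optional; column list abbreviated):

{% include copy-clipboard.html %}
~~~ sql
> SELECT job_id, job_type, status, fraction_completed FROM [SHOW JOBS] WHERE status = 'running';
~~~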
diff --git a/src/current/_includes/v19.1/misc/session-vars.html b/src/current/_includes/v19.1/misc/session-vars.html deleted file mode 100644 index 6d311b4fcde..00000000000 --- a/src/current/_includes/v19.1/misc/session-vars.html +++ /dev/null @@ -1,407 +0,0 @@
Variable name | Description | Initial value | Modify with `SET`? | View with `SHOW`?
--------------|-------------|---------------|--------------------|------------------
`application_name` | The current application name for statistics collection. | Empty string, or `cockroach` for sessions from the built-in SQL client. | Yes | Yes
`bytea_output` | The mode for conversions from `STRING` to `BYTES`. | `hex` | Yes | Yes
`database` | The current database. | Database in connection string, or empty if not specified. | Yes | Yes
`default_int_size` | The size, in bytes, of an `INT` type. | `8` | Yes | Yes
`default_transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See Transactions: Isolation levels. | `SERIALIZABLE` | No | Yes
`default_transaction_read_only` | The default transaction access mode for the current session. If set to `on`, only read operations are allowed in transactions in the current session; if set to `off`, both read and write operations are allowed. See `SET TRANSACTION` for more details. | `off` | Yes | Yes
`distsql` | The query distribution mode for the session. By default, CockroachDB determines which queries are faster to execute if distributed across multiple nodes, and all other queries are run through the gateway node. | `auto` | Yes | Yes
`extra_float_digits` | The number of digits displayed for floating-point values. Only values between `-15` and `3` are supported. | `0` | Yes | Yes
`reorder_joins_limit` | Maximum number of joins that the optimizer will attempt to reorder when searching for an optimal query execution plan. For more information, see Join reordering. | `4` | Yes | Yes
`force_savepoint_restart` | When set to `true`, allows the `SAVEPOINT` statement to accept any name for a savepoint. | `off` | Yes | Yes
`node_id` | The ID of the node currently connected to. This variable is particularly useful for verifying load balanced connections. | Node-dependent | No | Yes
`optimizer` | The mode in which a query execution plan is generated. If set to `on`, the cost-based optimizer is enabled by default and the heuristic planner will only be used if the query is not supported by the cost-based optimizer; if set to `off`, all queries are run through the legacy heuristic planner. | `on` | Yes | Yes
`results_buffer_size` | The default size of the buffer that accumulates results for a statement or a batch of statements before they are sent to the client. This can also be set for all connections using the `sql.defaults.results_buffer_size` cluster setting. Note that auto-retries generally only happen while no results have been delivered to the client, so reducing this size can increase the number of retriable errors a client receives. On the other hand, increasing the buffer size can increase the delay until the client receives the first result row. Setting to `0` disables any buffering. | `16384` | Yes | Yes
`search_path` | A list of schemas that will be searched to resolve unqualified table or function names. For more details, see SQL name resolution. | `public` | Yes | Yes
`server_version` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | No | Yes
`server_version_num` | The version of PostgreSQL that CockroachDB emulates. | Version-dependent | Yes | Yes
`session_user` | The user connected for the current session. | User in connection string | No | Yes
`sql_safe_updates` | If `false`, potentially unsafe SQL statements are allowed, including `DROP` of a non-empty database and all dependent objects, `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE .. DROP COLUMN`. See Allow Potentially Unsafe SQL Statements for more details. | `true` for interactive sessions from the built-in SQL client, `false` for sessions from other clients | Yes | Yes
`statement_timeout` | The amount of time a statement can run before being stopped. This value can be an `int` (e.g., `10`) and will be interpreted as milliseconds. It can also be an `interval` or `string` argument, where the string can be parsed as a valid interval (e.g., `'4s'`). A value of `0` turns it off. | `0s` | Yes | Yes
`timezone` | The default time zone for the current session. This session variable was named "time zone" (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `UTC` | Yes | Yes
`tracing` | The trace recording state. | `off` | Yes | Yes
`transaction_isolation` | All transactions execute with `SERIALIZABLE` isolation. See Transactions: Isolation levels. This session variable was called `transaction isolation level` (with spaces) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `SERIALIZABLE` | No | Yes
`transaction_priority` | The priority of the current transaction. See Transactions: Isolation levels for more details. This session variable was called `transaction priority` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NORMAL` | Yes | Yes
`transaction_read_only` | The access mode of the current transaction. See Set Transaction for more details. | `off` | Yes | Yes
`transaction_status` | The state of the current transaction. See Transactions for more details. This session variable was called `transaction status` (with a space) in CockroachDB 1.x. It has been renamed for compatibility with PostgreSQL. | `NoTxn` | No | Yes
`client_encoding` | (Reserved; exposed only for ORM compatibility.) | `UTF8` | No | Yes
`client_min_messages` | (Reserved; exposed only for ORM compatibility.) | `notice` | No | Yes
`datestyle` | (Reserved; exposed only for ORM compatibility.) | `ISO` | No | Yes
`integer_datetimes` | (Reserved; exposed only for ORM compatibility.) | `on` | No | Yes
`intervalstyle` | (Reserved; exposed only for ORM compatibility.) | `postgres` | No | Yes
`max_index_keys` | (Reserved; exposed only for ORM compatibility.) | `32` | No | Yes
`standard_conforming_strings` | (Reserved; exposed only for ORM compatibility.) | `on` | No | Yes
`server_encoding` | (Reserved; exposed only for ORM compatibility.) | `UTF8` | Yes | Yes
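As a quick sketch of the `SET`/`SHOW` pattern for the modifiable variables above (the application name is illustrative):

{% include copy-clipboard.html %}
~~~ sql
> SET application_name = 'my_app';
> SHOW application_name;
~~~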
diff --git a/src/current/_includes/v19.1/misc/sorting-delete-output.md b/src/current/_includes/v19.1/misc/sorting-delete-output.md deleted file mode 100644 index 458376c4466..00000000000 --- a/src/current/_includes/v19.1/misc/sorting-delete-output.md +++ /dev/null @@ -1,8 +0,0 @@ -To sort the output of a `DELETE` statement, use: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ... FROM [DELETE ...] ORDER BY ... -~~~ - -For an example, see [Sort and return deleted rows](delete.html#sort-and-return-deleted-rows). diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-expand-disk-size.md b/src/current/_includes/v19.1/orchestration/kubernetes-expand-disk-size.md deleted file mode 100644 index 5f5f77b4962..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-expand-disk-size.md +++ /dev/null @@ -1,184 +0,0 @@ -You can expand certain [types of persistent volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes -) (including GCE Persistent Disk and Amazon Elastic Block Store) by editing their persistent volume claims. Increasing disk size is often beneficial for CockroachDB performance. Read our [Kubernetes performance guide](kubernetes-performance.html#disk-size) for guidance on disks. - -1. Get the persistent volume claims for the volumes: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc - ~~~ - -
- ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-my-release-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ -
- -
- ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-1 Bound pvc-75e143ca-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - datadir-cockroachdb-2 Bound pvc-75ef409a-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 17m - ~~~ -
- -2. To expand a persistent volume claim, `AllowVolumeExpansion` in its storage class must be `true`. Examine the storage class: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe storageclass standard - ~~~ - - ~~~ - Name: standard - IsDefaultClass: Yes - Annotations: storageclass.kubernetes.io/is-default-class=true - Provisioner: kubernetes.io/gce-pd - Parameters: type=pd-standard - AllowVolumeExpansion: False - MountOptions: - ReclaimPolicy: Delete - VolumeBindingMode: Immediate - Events: - ~~~ - - If necessary, edit the storage class: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl patch storageclass standard -p '{"allowVolumeExpansion": true}' - ~~~ - - ~~~ - storageclass.storage.k8s.io/standard patched - ~~~ - -3. Edit one of the persistent volume claims to request more space: - - {{site.data.alerts.callout_info}} - The requested `storage` value must be larger than the previous value. You cannot use this method to decrease the disk size. - {{site.data.alerts.end}} - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-my-release-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-my-release-cockroachdb-0 patched - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl patch pvc datadir-cockroachdb-0 -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}' - ~~~ - - ~~~ - persistentvolumeclaim/datadir-cockroachdb-0 patched - ~~~ -
- -4. Check the capacity of the persistent volume claim: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 100Gi RWO standard 18m - ~~~ -
- - If the PVC capacity has not changed, this may be because `AllowVolumeExpansion` was initially set to `false` or because the [volume has a file system](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim) that has to be expanded. You will need to start or restart a pod in order to have it reflect the new capacity. - - {{site.data.alerts.callout_success}} - Running `kubectl get pv` will display the persistent volumes with their *requested* capacity and not their actual capacity. This can be misleading, so it's best to use `kubectl get pvc`. - {{site.data.alerts.end}} - -5. Examine the persistent volume claim. If the volume has a file system, you will see a `FileSystemResizePending` condition with an accompanying message: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-my-release-cockroachdb-0 - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe pvc datadir-cockroachdb-0 - ~~~ -
- - ~~~ - Waiting for user to (re-)start a pod to finish file system resize of volume on node. - ~~~ - -6. Delete the corresponding pod to restart it: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-0 - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-0 - ~~~ -
- - The `FileSystemResizePending` condition and message will be removed. - -7. View the updated persistent volume claim: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-my-release-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -datadir-my-release-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pvc datadir-cockroachdb-0 - ~~~ - - ~~~ - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE -datadir-cockroachdb-0 Bound pvc-75dadd4c-01a1-11ea-b065-42010a8e00cb 200Gi RWO standard 20m - ~~~ -
- -8. The CockroachDB cluster needs to be expanded one node at a time. Repeat steps 3 - 6 to increase the capacities of the remaining volumes by the same amount. \ No newline at end of file diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-limitations.md b/src/current/_includes/v19.1/orchestration/kubernetes-limitations.md deleted file mode 100644 index 00c6c0fdd21..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-limitations.md +++ /dev/null @@ -1,7 +0,0 @@ -#### Kubernetes version - -Kubernetes 1.18 or higher is required to use our most up-to-date configuration files. Earlier Kubernetes releases do not support some of the options used in our configuration files. If you need to run on an older version of Kubernetes, we keep configuration files that work with older Kubernetes releases in the versioned subdirectories of [https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes) (e.g., [v1.7](https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes/v1.7)). - -#### Storage - -At this time, orchestrations of CockroachDB with Kubernetes use external persistent volumes that are often replicated by the provider. Because CockroachDB already replicates data automatically, this additional layer of replication is unnecessary and can negatively impact performance. High-performance use cases on a private Kubernetes cluster may want to consider using [local volumes](https://kubernetes.io/docs/concepts/storage/volumes/#local). diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md b/src/current/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md deleted file mode 100644 index bb58d08cae8..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-prometheus-alertmanager.md +++ /dev/null @@ -1,214 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -### Configure Prometheus - -Every node of a CockroachDB cluster exports granular timeseries metrics formatted for easy integration with [Prometheus](https://prometheus.io/), an open source tool for storing, aggregating, and querying timeseries data. This section shows you how to orchestrate Prometheus as part of your Kubernetes cluster and pull these metrics into Prometheus for external monitoring. - -This guidance is based on [CoreOS's Prometheus Operator](https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md), which allows a Prometheus instance to be managed using built-in Kubernetes concepts. - -{{site.data.alerts.callout_info}} -If you're on Hosted GKE, before starting, make sure the email address associated with your Google Cloud account is part of the `cluster-admin` RBAC group, as shown in [Step 1. Start Kubernetes](#hosted-gke). -{{site.data.alerts.end}} - -1. From your local workstation, edit the `cockroachdb` service to add the `prometheus: cockroachdb` label: -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl label svc cockroachdb prometheus=cockroachdb - ~~~ - - ~~~ - service/cockroachdb labeled - ~~~ - - This ensures that there is a Prometheus job and monitoring data only for the `cockroachdb` service, not for the `cockroachdb-public` service. -
- -

 - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl label svc my-release-cockroachdb prometheus=cockroachdb - ~~~ - - ~~~ - service/my-release-cockroachdb labeled - ~~~ - - This ensures that there is a Prometheus job and monitoring data only for the `my-release-cockroachdb` service, not for the `my-release-cockroachdb-public` service. -
- -2. Install [CoreOS's Prometheus Operator](https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply \ - -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.20/bundle.yaml - ~~~ - - ~~~ - clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created - clusterrole.rbac.authorization.k8s.io/prometheus-operator created - serviceaccount/prometheus-operator created - deployment.apps/prometheus-operator created - ~~~ - -3. Confirm that the `prometheus-operator` has started: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get deploy prometheus-operator - ~~~ - - ~~~ - NAME READY UP-TO-DATE AVAILABLE AGE - prometheus-operator 1/1 1 1 27s - ~~~ - -4. Use our [`prometheus.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/prometheus.yaml) file to create the various objects necessary to run a Prometheus instance: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/prometheus.yaml - ~~~ - - ~~~ - serviceaccount/prometheus created - clusterrole.rbac.authorization.k8s.io/prometheus created - clusterrolebinding.rbac.authorization.k8s.io/prometheus created - servicemonitor.monitoring.coreos.com/cockroachdb created - prometheus.monitoring.coreos.com/cockroachdb created - ~~~ - -5. Access the Prometheus UI locally and verify that CockroachDB is feeding data into Prometheus: - - 1. Port-forward from your local machine to the pod running Prometheus: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward prometheus-cockroachdb-0 9090 - ~~~ - - 2. Go to http://localhost:9090 in your browser. - - 3. To verify that each CockroachDB node is connected to Prometheus, go to **Status > Targets**. The screen should look like this: - - Prometheus targets - - 4. To verify that data is being collected, go to **Graph**, enter the `sys_uptime` variable in the field, click **Execute**, and then click the **Graph** tab. The screen should like this: - - Prometheus graph - - {{site.data.alerts.callout_success}} - Prometheus auto-completes CockroachDB time series metrics for you, but if you want to see a full listing, with descriptions, port-forward as described in {% if page.secure == true %}[Access the Admin UI](#step-4-access-the-admin-ui){% else %}[Access the Admin UI](#step-4-access-the-admin-ui){% endif %} and then point your browser to http://localhost:8080/_status/vars. - - For more details on using the Prometheus UI, see their [official documentation](https://prometheus.io/docs/introduction/getting_started/). - {{site.data.alerts.end}} - -### Configure Alertmanager - -Active monitoring helps you spot problems early, but it is also essential to send notifications when there are events that require investigation or intervention. This section shows you how to use [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml) to do this. - -1. Download our alertmanager-config.yaml configuration file: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -OOOOOOOOO \ - https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager-config.yaml - ~~~ - -2. 
Edit the `alertmanager-config.yaml` file to [specify the desired receivers for notifications](https://prometheus.io/docs/alerting/configuration/#receiver). Initially, the file contains a placeholder webhook. - -3. Add this configuration to the Kubernetes cluster as a secret, renaming it to `alertmanager.yaml` and labelling it to make it easier to find: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create secret generic alertmanager-cockroachdb \ - --from-file=alertmanager.yaml=alertmanager-config.yaml - ~~~ - - ~~~ - secret/alertmanager-cockroachdb created - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl label secret alertmanager-cockroachdb app=cockroachdb - ~~~ - - ~~~ - secret/alertmanager-cockroachdb labeled - ~~~ - - {{site.data.alerts.callout_danger}} - The name of the secret, `alertmanager-cockroachdb`, must match the name used in the `alertmanager.yaml` file. If they differ, the Alertmanager instance will start without configuration, and nothing will happen. - {{site.data.alerts.end}} - -4. Use our [`alertmanager.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alertmanager.yaml) file to create the various objects necessary to run an Alertmanager instance, including a ClusterIP service so that Prometheus can forward alerts: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alertmanager.yaml - ~~~ - - ~~~ - alertmanager.monitoring.coreos.com/cockroachdb created - service/alertmanager-cockroachdb created - ~~~ - -5. Verify that Alertmanager is running: - - 1. Port-forward from your local machine to the pod running Alertmanager: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward alertmanager-cockroachdb-0 9093 - ~~~ - - 2. Go to http://localhost:9093 in your browser. The screen should look like this: - - Alertmanager - -6. Ensure that the Alertmanagers are visible to Prometheus by opening http://localhost:9090/status. The screen should look like this: - - Alertmanager - -7. Add CockroachDB's starter [alerting rules](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/prometheus/alert-rules.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl apply \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/prometheus/alert-rules.yaml - ~~~ - - ~~~ - prometheusrule.monitoring.coreos.com/prometheus-cockroachdb-rules created - ~~~ - -8. Ensure that the rules are visible to Prometheus by opening http://localhost:9090/rules. The screen should look like this: - - Alertmanager - -9. Verify that the `TestAlertManager` example alert is firing by opening http://localhost:9090/alerts. The screen should look like this: - - Alertmanager - -10. To remove the example alert: - - 1. Use the `kubectl edit` command to open the rules for editing: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl edit prometheusrules prometheus-cockroachdb-rules - ~~~ - - 2.
Remove the `dummy.rules` block and save the file: - - ~~~ - - name: rules/dummy.rules - rules: - - alert: TestAlertManager - expr: vector(1) - ~~~ diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md b/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md deleted file mode 100644 index b4014260287..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-insecure.md +++ /dev/null @@ -1,130 +0,0 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `Replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -1. Launch a temporary interactive pod and use the `cockroach node status` command to get the internal IDs of nodes: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ - -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node status \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ -
- -2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](view-node-details.html) command to decommission it: - - {{site.data.alerts.callout_info}} - It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node. - {{site.data.alerts.end}} - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- node decommission \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 73 | true | false - (1 row) - ~~~ - - Once the node has been fully decommissioned and stopped, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 0 | true | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -3. Once the node has been decommissioned, remove a pod from your StatefulSet: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset "cockroachdb" scaled - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set Replicas=3 \ - --reuse-values - ~~~ -
diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md b/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md deleted file mode 100644 index de810e5117e..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-remove-nodes-secure.md +++ /dev/null @@ -1,119 +0,0 @@ -To safely remove a node from your cluster, you must first decommission the node and only then adjust the `Replicas` value of your StatefulSet configuration to permanently remove it. This sequence is important because the decommissioning process lets a node finish in-flight requests, rejects any new requests, and transfers all range replicas and range leases off the node. - -{{site.data.alerts.callout_danger}} -If you remove nodes without first telling CockroachDB to decommission them, you may cause data or even cluster unavailability. For more details about how this works and what to consider before removing nodes, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -1. Get a shell into the `cockroachdb-client-secure` pod you created earlier and use the `cockroach node status` command to get the internal IDs of nodes: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | cockroachdb-0.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | cockroachdb-2.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | cockroachdb-1.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | cockroachdb-3.cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node status \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+---------------------------------------------------------------------------------+--------+----------------------------------+----------------------------------+--------------+---------+ - 1 | my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:36.486082+00:00 | 2018-11-29 18:24:24.587454+00:00 | true | true - 2 | my-release-cockroachdb-2.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:55:03.880406+00:00 | 2018-11-29 18:24:23.469302+00:00 | true | true - 3 | my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 16:04:41.383588+00:00 | 2018-11-29 18:24:25.030175+00:00 | true | true - 4 | my-release-cockroachdb-3.my-release-cockroachdb.default.svc.cluster.local:26257 | {{page.release_info.version}} | 2018-11-29 17:31:19.990784+00:00 | 2018-11-29 18:24:26.041686+00:00 | true | true - (4 rows) - ~~~ -
- - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. - -2. Note the ID of the node with the highest number in its address (in this case, the address including `cockroachdb-3`) and use the [`cockroach node decommission`](view-node-details.html) command to decommission it: - - {{site.data.alerts.callout_info}} - It's important to decommission the node with the highest number in its address because, when you reduce the replica count, Kubernetes will remove the pod for that node. - {{site.data.alerts.end}} - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach node decommission \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ -
- - You'll then see the decommissioning status print to `stderr` as it changes: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 73 | true | false - (1 row) - ~~~ - - Once the node has been fully decommissioned and stopped, you'll see a confirmation: - - ~~~ - id | is_live | replicas | is_decommissioning | is_draining - +---+---------+----------+--------------------+-------------+ - 4 | true | 0 | true | false - (1 row) - - No more data reported on target nodes. Please verify cluster health before removing the nodes. - ~~~ - -3. Once the node has been decommissioned, remove a pod from your StatefulSet: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=3 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set Replicas=3 \ - --reuse-values - ~~~ -
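-
-If you decommission a node by mistake and its pod has not yet been removed, you can cancel the process with `cockroach node recommission`. This is a sketch in the style of the commands above, assuming the node ID `4` from the example output and the standard `cockroachdb-public` service name (substitute `my-release-cockroachdb-public` for a Helm deployment):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl exec -it cockroachdb-client-secure \
--- ./cockroach node recommission 4 \
---certs-dir=/cockroach-certs \
---host=cockroachdb-public
-~~~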
diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-scale-cluster.md b/src/current/_includes/v19.1/orchestration/kubernetes-scale-cluster.md
deleted file mode 100644
index d185f1d182d..00000000000
--- a/src/current/_includes/v19.1/orchestration/kubernetes-scale-cluster.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Your Kubernetes cluster includes 3 worker nodes, or instances, that can run pods. A CockroachDB node runs in each pod. As recommended in our [production best practices](recommended-production-settings.html#topology), you should ensure that no two pods are placed on the same worker node; the anti-affinity sketch at the end of this section shows how the standard configuration encourages this.
-
-To do this, add a new worker node and then edit your StatefulSet configuration to add another pod for the new CockroachDB node.
-
-1. Add a worker node, bringing the total from 3 to 4:
-    - On GKE, [resize your cluster](https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster).
-    - On EKS, resize your [Worker Node Group](https://eksctl.io/usage/managing-nodegroups/#scaling).
-    - On GCE, resize your [Managed Instance Group](https://cloud.google.com/compute/docs/instance-groups/).
-    - On AWS, resize your [Auto Scaling Group](https://docs.aws.amazon.com/autoscaling/latest/userguide/as-manual-scaling.html).
-
-2. Add a pod for the new CockroachDB node:
-
-<section class="filter-content" markdown="1" data-scope="kubectl">
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl scale statefulset cockroachdb --replicas=4 - ~~~ - - ~~~ - statefulset.apps/cockroachdb scaled - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set Replicas=4 \ - --reuse-values - ~~~ - - ~~~ - Release "my-release" has been upgraded. Happy Helming! - LAST DEPLOYED: Tue May 14 14:06:43 2019 - NAMESPACE: default - STATUS: DEPLOYED - - RESOURCES: - ==> v1beta1/PodDisruptionBudget - NAME AGE - my-release-cockroachdb-budget 51m - - ==> v1/Pod(related) - - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 38m - my-release-cockroachdb-1 1/1 Running 0 39m - my-release-cockroachdb-2 1/1 Running 0 39m - my-release-cockroachdb-3 0/1 Pending 0 0s - my-release-cockroachdb-init-nwjkh 0/1 Completed 0 39m - - ... - ~~~ -
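-
-The "one pod per worker node" guidance at the top of this section is encouraged by a pod anti-affinity rule in the CockroachDB StatefulSet configurations. A minimal sketch of such a stanza, with illustrative label values (check your own manifest for the exact labels it uses):
-
-~~~ yaml
-affinity:
-  podAntiAffinity:
-    preferredDuringSchedulingIgnoredDuringExecution:
-    - weight: 100
-      podAffinityTerm:
-        labelSelector:
-          matchExpressions:
-          - key: app
-            operator: In
-            values:
-            - cockroachdb
-        # Spread pods across distinct worker nodes by hostname.
-        topologyKey: kubernetes.io/hostname
-~~~
-
-Because this uses `preferred` rather than `required` scheduling, Kubernetes will still co-locate two CockroachDB pods if it has nowhere else to put them, which is why adding the worker node before adding the pod matters.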
diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-simulate-failure.md b/src/current/_includes/v19.1/orchestration/kubernetes-simulate-failure.md deleted file mode 100644 index d5f3e52884f..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-simulate-failure.md +++ /dev/null @@ -1,56 +0,0 @@ -Based on the `replicas: 3` line in the StatefulSet configuration, Kubernetes ensures that three pods/nodes are running at all times. When a pod/node fails, Kubernetes automatically creates another pod/node with the same network identity and persistent storage. - -To see this in action: - -1. Terminate one of the CockroachDB nodes: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod cockroachdb-2 - ~~~ - - ~~~ - pod "cockroachdb-2" deleted - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete pod my-release-cockroachdb-2 - ~~~ - - ~~~ - pod "my-release-cockroachdb-2" deleted - ~~~ -
- - -2. In the Admin UI, the **Cluster Overview** will soon show one node as **Suspect**. As Kubernetes auto-restarts the node, watch how the node once again becomes healthy. - -3. Back in the terminal, verify that the pod was automatically restarted: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pod cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-2 1/1 Running 0 12s - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pod my-release-cockroachdb-2 - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-2 1/1 Running 0 44s - ~~~ -
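-
-If you want to watch the whole recovery cycle as it happens, you can leave a watch running in a second terminal before deleting the pod; Kubernetes then streams each status transition (for example, `Terminating`, then `Pending`, `ContainerCreating`, and `Running`) as the pod is replaced:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ kubectl get pods --watch
-~~~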
diff --git a/src/current/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md b/src/current/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md deleted file mode 100644 index 867b9157af4..00000000000 --- a/src/current/_includes/v19.1/orchestration/kubernetes-upgrade-cluster.md +++ /dev/null @@ -1,265 +0,0 @@ -As new versions of CockroachDB are released, it's strongly recommended to upgrade to newer versions in order to pick up bug fixes, performance improvements, and new features. The [general CockroachDB upgrade documentation](upgrade-cockroach-version.html) provides best practices for how to prepare for and execute upgrades of CockroachDB clusters, but the mechanism of actually stopping and restarting processes in Kubernetes is somewhat special. - -Kubernetes knows how to carry out a safe rolling upgrade process of the CockroachDB nodes. When you tell it to change the Docker image used in the CockroachDB StatefulSet, Kubernetes will go one-by-one, stopping a node, restarting it with the new image, and waiting for it to be ready to receive client requests before moving on to the next one. For more information, see [the Kubernetes documentation](https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets). - -1. Decide how the upgrade will be finalized. - - {{site.data.alerts.callout_info}} - This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step. - {{site.data.alerts.end}} - - By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain performance improvements and bug fixes introduced in v19.1. After finalization, however, it will no longer be possible to perform a downgrade to v2.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade. - - We recommend disabling auto-finalization so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade: - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html): - -
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure \
-    -- ./cockroach sql \
-    --certs-dir=/cockroach-certs \
-    --host=cockroachdb-public
-    ~~~
-</section>
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ -
- - - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- - {% endif %} - - 2. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.1'; - ~~~ - - 3. Exit the SQL shell and delete the temporary pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -2. Kick off the upgrade process by changing the desired Docker image: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl patch statefulset cockroachdb \ - --type='json' \ - -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"cockroachdb/cockroach:v19.1.0"}]' - ~~~ - - ~~~ - statefulset.apps/cockroachdb patched - ~~~ -
- -
- - {{site.data.alerts.callout_info}} - For Helm, you must remove the cluster initialization job from when the cluster was created before the cluster version can be changed. - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl delete job my-release-cockroachdb-init - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ helm upgrade \ - my-release \ - cockroachdb/cockroachdb \ - --set ImageTag=v19.1.0 \ - --reuse-values - ~~~ -
- -3. If you then check the status of your cluster's pods, you should see them being restarted: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - -
- ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 1/1 Running 0 2m - cockroachdb-1 1/1 Running 0 2m - cockroachdb-2 1/1 Running 0 2m - cockroachdb-3 0/1 Terminating 0 1m - ... - ~~~ -
- -
- ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 2m - my-release-cockroachdb-1 1/1 Running 0 3m - my-release-cockroachdb-2 1/1 Running 0 3m - my-release-cockroachdb-3 0/1 ContainerCreating 0 25s - my-release-cockroachdb-init-nwjkh 0/1 ContainerCreating 0 6s - ... - ~~~ - - {{site.data.alerts.callout_info}} - Ignore the pod for cluster initialization. It is re-created as a byproduct of the StatefulSet configuration but does not impact your existing cluster. - {{site.data.alerts.end}} -
-
-4. This will continue until all of the pods have restarted and are running the new image. To check the image of each pod and determine whether they've all been upgraded, run:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl get pods \
-    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
-    ~~~
-
-<section class="filter-content" markdown="1" data-scope="kubectl">
- ~~~ - cockroachdb-0 cockroachdb/cockroach:v19.1.0 - cockroachdb-1 cockroachdb/cockroach:v19.1.0 - cockroachdb-2 cockroachdb/cockroach:v19.1.0 - cockroachdb-3 cockroachdb/cockroach:v19.1.0 - ... - ~~~ -
- -
- ~~~ - my-release-cockroachdb-0 cockroachdb/cockroach:v19.1.0 - my-release-cockroachdb-1 cockroachdb/cockroach:v19.1.0 - my-release-cockroachdb-2 cockroachdb/cockroach:v19.1.0 - my-release-cockroachdb-3 cockroachdb/cockroach:v19.1.0 - ... - ~~~ -
- - You can also check the CockroachDB version of each node in the Admin UI: - - Version in UI after upgrade - -5. Finish the upgrade. - - {{site.data.alerts.callout_info}} - This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step. - {{site.data.alerts.end}} - - If you disabled auto-finalization in step 1 above, monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary. - - Once you are satisfied with the new version, re-enable auto-finalization: - - {% if page.secure == true %} - - 1. Get a shell into the pod with the `cockroach` binary created earlier and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html): - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=my-release-cockroachdb-public - ~~~ -
- - {% else %} - - 1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- - {% endif %} - - 2. Re-enable auto-finalization: - - {% include copy-clipboard.html %} - ~~~ sql - > RESET CLUSTER SETTING cluster.preserve_downgrade_option; - ~~~ - - 3. Exit the SQL shell and delete the temporary pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v19.1/orchestration/local-start-kubernetes.md b/src/current/_includes/v19.1/orchestration/local-start-kubernetes.md deleted file mode 100644 index 081c2274c0f..00000000000 --- a/src/current/_includes/v19.1/orchestration/local-start-kubernetes.md +++ /dev/null @@ -1,24 +0,0 @@ -## Before you begin - -Before getting started, it's helpful to review some Kubernetes-specific terminology: - -Feature | Description ---------|------------ -[minikube](http://kubernetes.io/docs/getting-started-guides/minikube/) | This is the tool you'll use to run a Kubernetes cluster inside a VM on your local workstation. -[pod](http://kubernetes.io/docs/user-guide/pods/) | A pod is a group of one of more Docker containers. In this tutorial, all pods will run on your local workstation, each containing one Docker container running a single CockroachDB node. You'll start with 3 pods and grow to 4. -[StatefulSet](http://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/) | A StatefulSet is a group of pods treated as stateful units, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. StatefulSets are considered stable as of Kubernetes version 1.9 after reaching beta in version 1.5. -[persistent volume](http://kubernetes.io/docs/user-guide/persistent-volumes/) | A persistent volume is a piece of storage mounted into a pod. The lifetime of a persistent volume is decoupled from the lifetime of the pod that's using it, ensuring that each CockroachDB node binds back to the same storage on restart.

When using `minikube`, persistent volumes are external temporary directories that endure until they are manually deleted or until the entire Kubernetes cluster is deleted. -[persistent volume claim](http://kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims) | When pods are created (one per CockroachDB node), each pod will request a persistent volume claim to “claim” durable storage for its node. - -## Step 1. Start Kubernetes - -1. Follow Kubernetes' [documentation](https://kubernetes.io/docs/tasks/tools/install-minikube/) to install `minikube`, the tool used to run Kubernetes locally, for your OS. This includes installing a hypervisor and `kubectl`, the command-line tool used to manage Kubernetes from your local workstation. - - {{site.data.alerts.callout_info}}Make sure you install minikube version 0.21.0 or later. Earlier versions do not include a Kubernetes server that supports the maxUnavailability field and PodDisruptionBudget resource type used in the CockroachDB StatefulSet configuration.{{site.data.alerts.end}} - -2. Start a local Kubernetes cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ minikube start - ~~~ diff --git a/src/current/_includes/v19.1/orchestration/monitor-cluster.md b/src/current/_includes/v19.1/orchestration/monitor-cluster.md deleted file mode 100644 index 600cda289cf..00000000000 --- a/src/current/_includes/v19.1/orchestration/monitor-cluster.md +++ /dev/null @@ -1,69 +0,0 @@ -To access the cluster's [Admin UI](admin-ui-overview.html): - -{% if page.secure == true %} - -1. On secure clusters, [certain pages of the Admin UI](admin-ui-overview.html#admin-ui-access) can only be accessed by `admin` users. - - Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - -1. Assign `roach` to the `admin` role (you only need to do this once): - - {% include copy-clipboard.html %} - ~~~ sql - > GRANT admin TO roach; - ~~~ - -1. Exit the SQL shell and pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ - -{% endif %} - -1. In a new terminal window, port-forward from your local machine to one of the pods: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward cockroachdb-0 8080 - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl port-forward my-release-cockroachdb-0 8080 - ~~~ -
- - ~~~ - Forwarding from 127.0.0.1:8080 -> 8080 - ~~~ - - {{site.data.alerts.callout_info}}The port-forward command must be run on the same machine as the web browser in which you want to view the Admin UI. If you have been running these commands from a cloud instance or other non-local shell, you will not be able to view the UI without configuring kubectl locally and running the above port-forward command on your local machine.{{site.data.alerts.end}} - -{% if page.secure == true %} - -1. Go to https://localhost:8080 and log in with the username and password you created earlier. - - {% include {{ page.version.version }}/misc/chrome-localhost.md %} - -{% else %} - -1. Go to http://localhost:8080. - -{% endif %} - -1. In the UI, verify that the cluster is running as expected: - - Click **View nodes list** on the right to ensure that all nodes successfully joined the cluster. - - Click the **Databases** tab on the left to verify that `bank` is listed. diff --git a/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md b/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md deleted file mode 100644 index 5dc570ff890..00000000000 --- a/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-insecure.md +++ /dev/null @@ -1,121 +0,0 @@ -1. [Install the Helm client](https://docs.helm.sh/using_helm/#installing-the-helm-client). - -2. Install the Helm server, known as Tiller. - - In the likely case that your Kubernetes cluster uses RBAC (e.g., if you are using GKE), you first need to create [RBAC resources](https://docs.helm.sh/using_helm/#role-based-access-control) to grant Tiller access to the Kubernetes API: - - 1. Create a `rbac-config.yaml` file to define a role and service account: - - {% include copy-clipboard.html %} - ~~~ - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - ~~~ - - 2. Create the service account: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f rbac-config.yaml - ~~~ - - ~~~ - serviceaccount/tiller created - clusterrolebinding.rbac.authorization.k8s.io/tiller created - ~~~ - - 3. Start the Helm server and [install Tiller](https://docs.helm.sh/using_helm/#installing-tiller): - - {{site.data.alerts.callout_info}} - Tiller does not currently support [Kubernetes 1.16.0](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/). The following command includes a workaround to install Tiller for use with 1.16.0. - {{site.data.alerts.end}} - - - {% include copy-clipboard.html %} - ~~~ shell - $ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - - ~~~ - -3. Update your Helm chart repositories to ensure that you're using the latest CockroachDB chart: - - {% include copy-clipboard.html %} - ~~~ shell - $ helm repo update - ~~~ - -4. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart: - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. 
If you use a different value, be sure to adjust the release name in subsequent commands. - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - $ helm install --name my-release --set Resources.requests.memory="",CacheSize="",MaxSQLMemory="" cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - - {{site.data.alerts.callout_danger}} - To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. - - We recommend setting `CacheSize` and `MaxSQLMemory` each to 1/4 of the memory allocation specified in your `Resources.requests.memory` parameter. For example, if you are allocating 8GiB of memory to each CockroachDB node, use the following values with the `--set` flag in the `helm install` command: - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - Requests.resources.memory="8GiB",CacheSize="2GiB",MaxSQLMemory="2GiB" - ~~~ - - {{site.data.alerts.callout_info}} - You can customize your deployment by passing [configuration parameters](https://github.com/cockroachdb/helm-charts/tree/master/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD). - {{site.data.alerts.end}} - -5. Confirm that three pods are `Running` successfully and that the one-time cluster initialization has `Completed`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 1m - my-release-cockroachdb-1 1/1 Running 0 1m - my-release-cockroachdb-2 1/1 Running 0 1m - my-release-cockroachdb-init-k6jcr 0/1 Completed 0 1m - ~~~ - -6. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 51s - pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 51s - pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md b/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md deleted file mode 100644 index 3fa40ff074a..00000000000 --- a/src/current/_includes/v19.1/orchestration/start-cockroachdb-helm-secure.md +++ /dev/null @@ -1,206 +0,0 @@ -1. [Install the Helm client](https://docs.helm.sh/using_helm/#installing-the-helm-client). - -2. Install the Helm server, known as Tiller. - - In the likely case that your Kubernetes cluster uses RBAC (e.g., if you are using GKE), you first need to create [RBAC resources](https://docs.helm.sh/using_helm/#role-based-access-control) to grant Tiller access to the Kubernetes API: - - 1. Create a `rbac-config.yaml` file to define a role and service account: - - {% include copy-clipboard.html %} - ~~~ - apiVersion: v1 - kind: ServiceAccount - metadata: - name: tiller - namespace: kube-system - --- - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: tiller - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: cluster-admin - subjects: - - kind: ServiceAccount - name: tiller - namespace: kube-system - ~~~ - - 2. Create the service account: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f rbac-config.yaml - ~~~ - - ~~~ - serviceaccount/tiller created - clusterrolebinding.rbac.authorization.k8s.io/tiller created - ~~~ - - 3. Start the Helm server and [install Tiller](https://docs.helm.sh/using_helm/#installing-tiller): - - {{site.data.alerts.callout_info}} - Tiller does not currently support [Kubernetes 1.16.0](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/). The following command includes a workaround to install Tiller for use with 1.16.0. - {{site.data.alerts.end}} - - - {% include copy-clipboard.html %} - ~~~ shell - $ helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f - - ~~~ - - ~~~ - deployment.apps/tiller-deploy created - service/tiller-deploy created - ~~~ - -3. Install the CockroachDB Helm chart, providing a "release" name to identify and track this particular deployment of the chart and setting the `Secure.Enabled` parameter to `true`: - - {{site.data.alerts.callout_info}} - This tutorial uses `my-release` as the release name. If you use a different value, be sure to adjust the release name in subsequent commands. Also be sure to start and end the name with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names). - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - $ helm install --name my-release --set Secure.Enabled=true,Resources.requests.memory="",CacheSize="",MaxSQLMemory="" cockroachdb/cockroachdb - ~~~ - - Behind the scenes, this command uses our `cockroachdb-statefulset.yaml` file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it, where each pod has distinguishable network identity and always binds back to the same persistent storage on restart. - - {{site.data.alerts.callout_danger}} - To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. 
This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. - - We recommend setting `CacheSize` and `MaxSQLMemory` each to 1/4 of the memory allocation specified in your `Resources.requests.memory` parameter. For example, if you are allocating 8GiB of memory to each CockroachDB node, use the following values with the `--set` flag in the `helm install` command: - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - Requests.resources.memory="8GiB",CacheSize="2GiB",MaxSQLMemory="2GiB" - ~~~ - - {{site.data.alerts.callout_info}} - You can customize your deployment by passing additional [configuration parameters](https://github.com/cockroachdb/helm-charts/tree/master/cockroachdb#configuration) to `helm install` using the `--set key=value[,key=value]` flag. For a production cluster, you should consider modifying the `Storage` and `StorageClass` parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume `StorageClass` in your environment may not be what you want for a database (e.g., on GCE and Azure the default is not SSD). - {{site.data.alerts.end}} - -4. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the CockroachDB node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificate, at which point the CockroachDB node is started in the pod. - - 1. Get the names of the `Pending` CSRs: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.client.root 21s system:serviceaccount:default:my-release-cockroachdb Pending - default.node.my-release-cockroachdb-0 15s system:serviceaccount:default:my-release-cockroachdb Pending - default.node.my-release-cockroachdb-1 16s system:serviceaccount:default:my-release-cockroachdb Pending - default.node.my-release-cockroachdb-2 15s system:serviceaccount:default:my-release-cockroachdb Pending - ... - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - - 2. Examine the CSR for the first pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.my-release-cockroachdb-0 - ~~~ - - ~~~ - Name: default.node.my-release-cockroachdb-0 - Labels: - Annotations: - CreationTimestamp: Mon, 10 Dec 2018 05:36:35 -0500 - Requesting User: system:serviceaccount:default:my-release-cockroachdb - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - my-release-cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local - my-release-cockroachdb-0.my-release-cockroachdb - my-release-cockroachdb-public - my-release-cockroachdb-public.default.svc.cluster.local - IP Addresses: 127.0.0.1 - Events: - ~~~ - - 3. If everything looks correct, approve the CSR for the first pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.my-release-cockroachdb-0 - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.node.my-release-cockroachdb-0 approved - ~~~ - - 4. Repeat steps 2 and 3 for the other 2 pods. - -5. 
Confirm that three pods are `Running` successfully: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 0/1 Running 0 6m - my-release-cockroachdb-1 0/1 Running 0 6m - my-release-cockroachdb-2 0/1 Running 0 6m - my-release-cockroachdb-init-hxzsc 0/1 Init:0/1 0 6m - ~~~ - -6. Approve the CSR for the one-off pod from which cluster initialization happens: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.client.root - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.client.root approved - ~~~ - -7. Confirm that CockroachDB cluster initialization has completed successfully, with the pods for CockroachDB showing `1/1` under `READY` and the pod for initialization showing `COMPLETED` under `STATUS`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - my-release-cockroachdb-0 1/1 Running 0 8m - my-release-cockroachdb-1 1/1 Running 0 8m - my-release-cockroachdb-2 1/1 Running 0 8m - my-release-cockroachdb-init-hxzsc 0/1 Completed 0 1h - ~~~ - -8. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-71019b3a-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-0 standard 11m - pvc-7108e172-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-1 standard 11m - pvc-710dcb66-fc67-11e8-a606-080027ba45e5 100Gi RWO Delete Bound default/datadir-my-release-cockroachdb-2 standard 11m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/orchestration/start-cockroachdb-insecure.md b/src/current/_includes/v19.1/orchestration/start-cockroachdb-insecure.md deleted file mode 100644 index cb3910c0fb0..00000000000 --- a/src/current/_includes/v19.1/orchestration/start-cockroachdb-insecure.md +++ /dev/null @@ -1,121 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. - - Download [`cockroachdb-statefulset.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset.yaml - ~~~ - - {{site.data.alerts.callout_danger}} - To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. Specify this amount by adjusting `resources.requests.memory` and `resources.limits.memory` in `cockroachdb-statefulset.yaml`. Their values should be identical. 
- - We recommend setting `cache` and `max-sql-memory` each to 1/4 of your memory allocation. For example, if you are allocating 8Gi of memory to each CockroachDB node, substitute the following values in [this line](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml#L146): - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - --cache 2Gi --max-sql-memory 2Gi - ~~~ - - Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset.yaml - ~~~ - - ~~~ - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. Download our [performance version of `cockroachdb-statefulset-insecure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-insecure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-insecure.yaml - ~~~ - -2. Confirm that three pods are `Running` successfully. Note that they will not - be considered `Ready` until after the cluster has been initialized: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - -3. Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM REASON AGE - pvc-52f51ecf-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-0 26s - pvc-52fd3a39-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-1 27s - pvc-5315efda-8bd5-11e6-a4f4-42010a800002 1Gi RWO Delete Bound default/datadir-cockroachdb-2 27s - ~~~ - -4. Use our [`cluster-init.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init.yaml - ~~~ - - ~~~ - job.batch/cluster-init created - ~~~ - -5. Confirm that cluster initialization has completed successfully. 
The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init 1/1 7s 27s - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-cqf8l 0/1 Completed 0 56s - cockroachdb-0 1/1 Running 0 7m51s - cockroachdb-1 1/1 Running 0 7m51s - cockroachdb-2 1/1 Running 0 7m51s - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/orchestration/start-cockroachdb-secure.md b/src/current/_includes/v19.1/orchestration/start-cockroachdb-secure.md deleted file mode 100644 index 7220119c637..00000000000 --- a/src/current/_includes/v19.1/orchestration/start-cockroachdb-secure.md +++ /dev/null @@ -1,203 +0,0 @@ -1. From your local workstation, use our [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml) file to create the StatefulSet that automatically creates 3 pods, each with a CockroachDB node running inside it. - - Download [`cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml - ~~~ - - {{site.data.alerts.callout_danger}} - To avoid running out of memory when CockroachDB is not the only pod on a Kubernetes node, you must set memory limits explicitly. This is because CockroachDB does not detect the amount of memory allocated to its pod when run in Kubernetes. Specify this amount by adjusting `resources.requests.memory` and `resources.limits.memory` in `cockroachdb-statefulset-secure.yaml`. Their values should be identical. - - We recommend setting `cache` and `max-sql-memory` each to 1/4 of your memory allocation. For example, if you are allocating 8Gi of memory to each CockroachDB node, substitute the following values in [this line](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset-secure.yaml#L247): - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - --cache 2Gi --max-sql-memory 2Gi - ~~~ - - Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-secure.yaml - ~~~ - - ~~~ - serviceaccount/cockroachdb created - role.rbac.authorization.k8s.io/cockroachdb created - clusterrole.rbac.authorization.k8s.io/cockroachdb created - rolebinding.rbac.authorization.k8s.io/cockroachdb created - clusterrolebinding.rbac.authorization.k8s.io/cockroachdb created - service/cockroachdb-public created - service/cockroachdb created - poddisruptionbudget.policy/cockroachdb-budget created - statefulset.apps/cockroachdb created - ~~~ - - Alternatively, if you'd rather start with a configuration file that has been customized for performance: - - 1. 
Download our [performance version of `cockroachdb-statefulset-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml): - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/performance/cockroachdb-statefulset-secure.yaml - ~~~ - - 2. Modify the file wherever there is a `TODO` comment. - - 3. Use the file to create the StatefulSet and start the cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create -f cockroachdb-statefulset-secure.yaml - ~~~ - - {{site.data.alerts.callout_success}} - If you change the StatefulSet name from the default `cockroachdb`, be sure to start and end with an alphanumeric character and otherwise use lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names). - {{site.data.alerts.end}} - -2. As each pod is created, it issues a Certificate Signing Request, or CSR, to have the node's certificate signed by the Kubernetes CA. You must manually check and approve each node's certificates, at which point the CockroachDB node is started in the pod. - - 1. Get the names of the `Pending` CSRs: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get csr - ~~~ - - ~~~ - NAME AGE REQUESTOR CONDITION - default.node.cockroachdb-0 1m system:serviceaccount:default:cockroachdb Pending - default.node.cockroachdb-1 1m system:serviceaccount:default:cockroachdb Pending - default.node.cockroachdb-2 1m system:serviceaccount:default:cockroachdb Pending - ... - ~~~ - - If you do not see a `Pending` CSR, wait a minute and try again. - - 2. Examine the CSR for the first pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl describe csr default.node.cockroachdb-0 - ~~~ - - ~~~ - Name: default.node.cockroachdb-0 - Labels: - Annotations: - CreationTimestamp: Thu, 09 Nov 2017 13:39:37 -0500 - Requesting User: system:serviceaccount:default:cockroachdb - Status: Pending - Subject: - Common Name: node - Serial Number: - Organization: Cockroach - Subject Alternative Names: - DNS Names: localhost - cockroachdb-0.cockroachdb.default.svc.cluster.local - cockroachdb-0.cockroachdb - cockroachdb-public - cockroachdb-public.default.svc.cluster.local - IP Addresses: 127.0.0.1 - 10.48.1.6 - Events: - ~~~ - - 3. If everything looks correct, approve the CSR for the first pod: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.node.cockroachdb-0 - ~~~ - - ~~~ - certificatesigningrequest "default.node.cockroachdb-0" approved - ~~~ - - 4. Repeat steps 2 and 3 for the other 2 pods. - -3. Initialize the CockroachDB cluster: - - 1. Confirm that three pods are `Running` successfully. Note that they will not be considered `Ready` until after the cluster has been initialized: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cockroachdb-0 0/1 Running 0 2m - cockroachdb-1 0/1 Running 0 2m - cockroachdb-2 0/1 Running 0 2m - ~~~ - - 2. 
Confirm that the persistent volumes and corresponding claims were created successfully for all three pods: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get persistentvolumes - ~~~ - - ~~~ - NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE - pvc-9e435563-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-0 standard 51m - pvc-9e47d820-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-1 standard 51m - pvc-9e4f57f0-fb2e-11e9-a65c-42010a8e0fca 100Gi RWO Delete Bound default/datadir-cockroachdb-2 standard 51m - ~~~ - - 3. Use our [`cluster-init-secure.yaml`](https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml) file to perform a one-time initialization that joins the CockroachDB nodes into a single cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/cluster-init-secure.yaml - ~~~ - - ~~~ - job.batch/cluster-init-secure created - ~~~ - - 4. Approve the CSR for the one-off pod from which cluster initialization happens: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl certificate approve default.client.root - ~~~ - - ~~~ - certificatesigningrequest.certificates.k8s.io/default.client.root approved - ~~~ - - 5. Confirm that cluster initialization has completed successfully. The job should be considered successful and the Kubernetes pods should soon be considered `Ready`: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get job cluster-init-secure - ~~~ - - ~~~ - NAME COMPLETIONS DURATION AGE - cluster-init-secure 1/1 23s 35s - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl get pods - ~~~ - - ~~~ - NAME READY STATUS RESTARTS AGE - cluster-init-secure-q8s7v 0/1 Completed 0 55s - cockroachdb-0 1/1 Running 0 3m - cockroachdb-1 1/1 Running 0 3m - cockroachdb-2 1/1 Running 0 3m - ~~~ - -{{site.data.alerts.callout_success}} -The StatefulSet configuration sets all CockroachDB nodes to log to `stderr`, so if you ever need access to a pod/node's logs to troubleshoot, use `kubectl logs ` rather than checking the log on the persistent volume. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/orchestration/start-kubernetes.md b/src/current/_includes/v19.1/orchestration/start-kubernetes.md deleted file mode 100644 index 350eff2220f..00000000000 --- a/src/current/_includes/v19.1/orchestration/start-kubernetes.md +++ /dev/null @@ -1,98 +0,0 @@ -Choose whether you want to orchestrate CockroachDB with Kubernetes using the hosted Google Kubernetes Engine (GKE) service, the hosted Amazon Elastic Kubernetes Service (EKS), or manually on Google Compute Engine (GCE) or AWS. The instructions below will change slightly depending on your choice. - -- [Hosted GKE](#hosted-gke) -- [Hosted EKS](#hosted-eks) -- [Manual GCE](#manual-gce) -- [Manual AWS](#manual-aws) - -### Hosted GKE - -1. Complete the **Before You Begin** steps described in the [Google Kubernetes Engine Quickstart](https://cloud.google.com/kubernetes-engine/docs/quickstart) documentation. - - This includes installing `gcloud`, which is used to create and delete Kubernetes Engine clusters, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation. - - {{site.data.alerts.callout_success}}The documentation offers the choice of using Google's Cloud Shell product or using a local shell on your machine. 
Choose to use a local shell if you want to be able to view the CockroachDB Admin UI using the steps in this guide.{{site.data.alerts.end}}
-
-2. From your local workstation, start the Kubernetes cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud container clusters create cockroachdb --machine-type n1-standard-4
-    ~~~
-
-    ~~~
-    Creating cluster cockroachdb...done.
-    ~~~
-
-    This creates GKE instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--machine-type` flag tells the node pool to use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
-    The process can take a few minutes, so do not move on to the next step until you see a `Creating cluster cockroachdb...done` message and details about your cluster.
-
-3. Get the email address associated with your Google Cloud account:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gcloud info | grep Account
-    ~~~
-
-    ~~~
-    Account: [your.google.cloud.email@example.org]
-    ~~~
-
-    {{site.data.alerts.callout_danger}}
-    This command returns your email address in all lowercase. However, in the next step, you must enter the address using its exact capitalization. For example, if your address is YourName@example.com, you must use YourName@example.com and not yourname@example.com.
-    {{site.data.alerts.end}}
-
-4. [Create the RBAC roles](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#prerequisites_for_using_role-based_access_control) CockroachDB needs for running on GKE, using the address from the previous step:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl create clusterrolebinding $USER-cluster-admin-binding \
-    --clusterrole=cluster-admin \
-    --user=<your.google.cloud.email@example.org>
-    ~~~
-
-    ~~~
-    clusterrolebinding.rbac.authorization.k8s.io/your.username-cluster-admin-binding created
-    ~~~
-
-### Hosted EKS
-
-1. Complete the steps described in the [EKS Getting Started](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html) documentation.
-
-    This includes installing and configuring the AWS CLI and `eksctl`, which is the command-line tool used to create and delete Kubernetes clusters on EKS, and `kubectl`, which is the command-line tool used to manage Kubernetes from your workstation.
-
-2. From your local workstation, start the Kubernetes cluster:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ eksctl create cluster \
-    --name cockroachdb \
-    --nodegroup-name standard-workers \
-    --node-type m5.xlarge \
-    --nodes 3 \
-    --nodes-min 1 \
-    --nodes-max 4 \
-    --node-ami auto
-    ~~~
-
-    This creates EKS instances and joins them into a single Kubernetes cluster named `cockroachdb`. The `--node-type` flag tells the node pool to use the [`m5.xlarge`](https://aws.amazon.com/ec2/instance-types/) instance type (4 vCPUs, 16 GB memory), which meets our [recommended CPU and memory configuration](recommended-production-settings.html#basic-hardware-recommendations).
-
-    Cluster provisioning usually takes between 10 and 15 minutes. Do not move on to the next step until you see a message like `[✔] EKS cluster "cockroachdb" in "us-east-1" region is ready` and details about your cluster.
-
-3. 
Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/home) to verify that the stacks `eksctl-cockroachdb-cluster` and `eksctl-cockroachdb-nodegroup-standard-workers` were successfully created. Be sure that your region is selected in the console. - -### Manual GCE - -From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on Google Compute Engine](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/gce/) documentation. - -The process includes: - -- Creating a Google Cloud Platform account, installing `gcloud`, and other prerequisites. -- Downloading and installing the latest Kubernetes release. -- Creating GCE instances and joining them into a single Kubernetes cluster. -- Installing `kubectl`, the command-line tool used to manage Kubernetes from your workstation. - -### Manual AWS - -From your local workstation, install prerequisites and start a Kubernetes cluster as described in the [Running Kubernetes on AWS EC2](https://v1-18.docs.kubernetes.io/docs/setup/production-environment/turnkey/aws/) documentation. diff --git a/src/current/_includes/v19.1/orchestration/test-cluster-insecure.md b/src/current/_includes/v19.1/orchestration/test-cluster-insecure.md deleted file mode 100644 index fabe390fc1e..00000000000 --- a/src/current/_includes/v19.1/orchestration/test-cluster-insecure.md +++ /dev/null @@ -1,72 +0,0 @@ -1. Launch a temporary interactive pod and start the [built-in SQL client](use-the-built-in-sql-client.html) inside it: - -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=cockroachdb-public - ~~~ -
- -
- {% include copy-clipboard.html %} - ~~~ shell - $ kubectl run cockroachdb -it \ - --image=cockroachdb/cockroach:{{page.release_info.version}} \ - --rm \ - --restart=Never \ - -- sql \ - --insecure \ - --host=my-release-cockroachdb-public - ~~~ -
- -2. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts ( - id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - balance DECIMAL - ); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts (balance) - VALUES - (1000.50), (20000), (380), (500), (55000); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +--------------------------------------+---------+ - 6f123370-c48c-41ff-b384-2c185590af2b | 380 - 990c9148-1ea0-4861-9da7-fd0e65b0a7da | 1000.50 - ac31c671-40bf-4a7b-8bee-452cff8a4026 | 500 - d58afd93-5be9-42ba-b2e2-dc00dcedf409 | 20000 - e6d8f696-87f5-4d3c-a377-8e152fdc27f7 | 55000 - (5 rows) - ~~~ - -3. Exit the SQL shell and delete the temporary pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ diff --git a/src/current/_includes/v19.1/orchestration/test-cluster-secure.md b/src/current/_includes/v19.1/orchestration/test-cluster-secure.md deleted file mode 100644 index d7a480f93d0..00000000000 --- a/src/current/_includes/v19.1/orchestration/test-cluster-secure.md +++ /dev/null @@ -1,194 +0,0 @@ -To use the built-in SQL client, you need to launch a pod that runs indefinitely with the `cockroach` binary inside it, get a shell into the pod, and then start the built-in SQL client. - -
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely: - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl create \ - -f https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml - ~~~ - - ~~~ - pod/cockroachdb-client-secure created - ~~~ - - {{site.data.alerts.callout_info}} - The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names). - {{site.data.alerts.end}} - -2. Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ kubectl exec -it cockroachdb-client-secure \ - -- ./cockroach sql \ - --certs-dir=/cockroach-certs \ - --host=cockroachdb-public - ~~~ - - ~~~ - # Welcome to the cockroach SQL interface. - # All statements must be terminated by a semicolon. - # To exit: CTRL + D. - # - # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6) - - # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4 - # - # Enter \? for a brief introduction. - # - root@cockroachdb-public:26257/defaultdb> - ~~~ - -3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE bank; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO bank.accounts VALUES (1, 1000.50); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM bank.accounts; - ~~~ - - ~~~ - id | balance - +----+---------+ - 1 | 1000.50 - (1 row) - ~~~ - -4. [Create a user with a password](create-user.html#create-a-user-with-a-password): - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS'; - ~~~ - - You will need this username and password to access the Admin UI later. - -5. Exit the SQL shell and pod: - - {% include copy-clipboard.html %} - ~~~ sql - > \q - ~~~ -
- -
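-If you'd rather script the one-line manifest edit described in step 1 below than make it by hand, a `sed` one-liner along these lines should work (GNU sed syntax; on macOS, use `sed -i ''`):
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Point the client pod at the service account created by the Helm release.
-$ sed -i 's/serviceAccountName: cockroachdb/serviceAccountName: my-release-cockroachdb/' client-secure.yaml
-~~~
-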
-1. From your local workstation, use our [`client-secure.yaml`](https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/client-secure.yaml) file to launch a pod and keep it running indefinitely.
-
-    1. Download the file:
-
-        {% include copy-clipboard.html %}
-        ~~~ shell
-        $ curl -O \
-        https://raw.githubusercontent.com/cockroachdb/cockroach/master/cloud/kubernetes/client-secure.yaml
-        ~~~
-
-    1. In the file, change `serviceAccountName: cockroachdb` to `serviceAccountName: my-release-cockroachdb`.
-
-    1. Use the file to launch a pod and keep it running indefinitely:
-
-        {% include copy-clipboard.html %}
-        ~~~ shell
-        $ kubectl create -f client-secure.yaml
-        ~~~
-
-        ~~~
-        pod "cockroachdb-client-secure" created
-        ~~~
-
-        {{site.data.alerts.callout_info}}
-        The pod uses the `root` client certificate created earlier to initialize the cluster, so there's no CSR approval required. If you issue client certificates for other users, however, be sure your SQL usernames contain only lowercase alphanumeric characters, `-`, or `.` so as to comply with [CSR naming requirements](orchestrate-cockroachdb-with-kubernetes.html#csr-names).
-        {{site.data.alerts.end}}
-
-2. Get a shell into the pod and start the CockroachDB [built-in SQL client](use-the-built-in-sql-client.html):
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ kubectl exec -it cockroachdb-client-secure \
-    -- ./cockroach sql \
-    --certs-dir=/cockroach-certs \
-    --host=my-release-cockroachdb-public
-    ~~~
-
-    ~~~
-    # Welcome to the cockroach SQL interface.
-    # All statements must be terminated by a semicolon.
-    # To exit: CTRL + D.
-    #
-    # Client version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
-    # Server version: CockroachDB CCL v19.1.0 (x86_64-unknown-linux-gnu, built 2019/04/29 18:36:40, go1.11.6)
-
-    # Cluster ID: 256a8705-e348-4e3a-ab12-e1aba96857e4
-    #
-    # Enter \? for a brief introduction.
-    #
-    root@my-release-cockroachdb-public:26257/defaultdb>
-    ~~~
-
-3. Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html):
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE bank;
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > INSERT INTO bank.accounts VALUES (1, 1000.50);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SELECT * FROM bank.accounts;
-    ~~~
-
-    ~~~
-      id | balance
-    +----+---------+
-       1 | 1000.50
-    (1 row)
-    ~~~
-
-4. [Create a user with a password](create-user.html#create-a-user-with-a-password):
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE USER roach WITH PASSWORD 'Q7gc8rEdS';
-    ~~~
-
-    You will need this username and password to access the Admin UI later.
-
-5. Exit the SQL shell and pod:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > \q
-    ~~~
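-
-Because the pod stays up, the same `kubectl exec` pattern extends to the other [`cockroach` client commands](cockroach-commands.html) mentioned in the note below. For example, a sketch for checking node status:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Use the host that matches your deployment: cockroachdb-public, or
-# my-release-cockroachdb-public for the Helm deployment.
-$ kubectl exec -it cockroachdb-client-secure \
--- ./cockroach node status \
---certs-dir=/cockroach-certs \
---host=cockroachdb-public
-~~~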
-
-{{site.data.alerts.callout_success}}
-This pod will continue running indefinitely, so any time you need to reopen the built-in SQL client or run any other [`cockroach` client commands](cockroach-commands.html) (e.g., `cockroach node`), repeat step 2 using the appropriate `cockroach` command.
-
-If you'd prefer to delete the pod and recreate it when needed, run `kubectl delete pod cockroachdb-client-secure`.
-{{site.data.alerts.end}}
diff --git a/src/current/_includes/v19.1/performance/check-rebalancing-after-partitioning.md b/src/current/_includes/v19.1/performance/check-rebalancing-after-partitioning.md
deleted file mode 100644
index 106c746f27c..00000000000
--- a/src/current/_includes/v19.1/performance/check-rebalancing-after-partitioning.md
+++ /dev/null
@@ -1,41 +0,0 @@
-Over the next few minutes, CockroachDB will rebalance all partitions based on the constraints you defined.
-
-To check this at a high level, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is still close to even across all nodes but much higher than before partitioning:
-
-Perf tuning rebalancing
-
-To check at a more granular level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement on the `vehicles` table:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host=
 \
---database=movr \
---execute="SELECT * FROM \
-[SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles] \
-WHERE \"start_key\" IS NOT NULL \
-  AND \"start_key\" NOT LIKE '%Prefix%';"
-~~~
-
-~~~
-     start_key     |          end_key           | range_id | replicas | lease_holder
-+------------------+----------------------------+----------+----------+--------------+
-  /"boston"        | /"boston"/PrefixEnd        |      105 | {1,2,3}  |            3
-  /"los angeles"   | /"los angeles"/PrefixEnd   |      121 | {7,8,9}  |            8
-  /"new york"      | /"new york"/PrefixEnd      |      101 | {1,2,3}  |            3
-  /"san francisco" | /"san francisco"/PrefixEnd |      117 | {7,8,9}  |            8
-  /"seattle"       | /"seattle"/PrefixEnd       |      113 | {4,5,6}  |            5
-  /"washington dc" | /"washington dc"/PrefixEnd |      109 | {1,2,3}  |            1
-(6 rows)
-~~~
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-We can see that, after partitioning, the replicas for New York, Boston, and Washington DC are located on nodes 1-3 in `us-east1-b`, replicas for Seattle are located on nodes 4-6 in `us-west1-a`, and replicas for San Francisco and Los Angeles are located on nodes 7-9 in `us-west2-a`.
diff --git a/src/current/_includes/v19.1/performance/check-rebalancing.md b/src/current/_includes/v19.1/performance/check-rebalancing.md
deleted file mode 100644
index 82995025007..00000000000
--- a/src/current/_includes/v19.1/performance/check-rebalancing.md
+++ /dev/null
@@ -1,33 +0,0 @@
-Since you started each node with the `--locality` flag set to its GCE zone, over the next few minutes, CockroachDB will rebalance data evenly across the zones.
-
-To check this, access the Web UI on any node at `:8080` and look at the **Node List**. You'll see that the range count is more or less even across all nodes:
-
-Perf tuning rebalancing
-
-For reference, here's how the nodes map to zones:
-
-Node IDs | Zone
----------|-----
-1-3 | `us-east1-b` (South Carolina)
-4-6 | `us-west1-a` (Oregon)
-7-9 | `us-west2-a` (Los Angeles)
-
-To verify even balancing at the range level, SSH to one of the instances not running CockroachDB and run the `SHOW EXPERIMENTAL_RANGES` statement:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql \
-{{page.certs}} \
---host=
\ ---database=movr \ ---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles;" -~~~ - -~~~ - start_key | end_key | range_id | replicas | lease_holder -+-----------+---------+----------+----------+--------------+ - NULL | NULL | 33 | {3,4,7} | 7 -(1 row) -~~~ - -In this case, we can see that, for the single range containing `vehicles` data, one replica is in each zone, and the leaseholder is in the `us-west2-a` zone. diff --git a/src/current/_includes/v19.1/performance/configure-network.md b/src/current/_includes/v19.1/performance/configure-network.md deleted file mode 100644 index 7cd3e3cbcc6..00000000000 --- a/src/current/_includes/v19.1/performance/configure-network.md +++ /dev/null @@ -1,18 +0,0 @@ -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster) -- **8080** (`tcp:8080`) for accessing the Admin UI - -Since GCE instances communicate on their internal IP addresses by default, you do not need to take any action to enable inter-node communication. However, to access the Admin UI from your local network, you must [create a firewall rule for your project](https://cloud.google.com/vpc/docs/using-firewalls): - -Field | Recommended Value -------|------------------ -Name | **cockroachweb** -Source filter | IP ranges -Source IP ranges | Your local network's IP ranges -Allowed protocols | **tcp:8080** -Target tags | `cockroachdb` - -{{site.data.alerts.callout_info}} -The **tag** feature will let you easily apply the rule to your instances. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/performance/import-movr.md b/src/current/_includes/v19.1/performance/import-movr.md deleted file mode 100644 index 5d796bf47d2..00000000000 --- a/src/current/_includes/v19.1/performance/import-movr.md +++ /dev/null @@ -1,160 +0,0 @@ -Now you'll import Movr data representing users, vehicles, and rides in 3 eastern US cities (New York, Boston, and Washington DC) and 3 western US cities (Los Angeles, San Francisco, and Seattle). - -1. Still on the fourth instance, start the [built-in SQL shell](use-the-built-in-sql-client.html), pointing it at one of the CockroachDB nodes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql {{page.certs}} --host=
-    ~~~
-
-2. Create the `movr` database and set it as the default:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE movr;
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SET DATABASE = movr;
-    ~~~
-
-3. Use the [`IMPORT`](import.html) statement to create and populate the `users`, `vehicles`, and `rides` tables:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > IMPORT TABLE users (
-        id UUID NOT NULL,
-        city STRING NOT NULL,
-        name STRING NULL,
-        address STRING NULL,
-        credit_card STRING NULL,
-        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC)
-    )
-    CSV DATA (
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/users/n1.0.csv'
-    );
-    ~~~
-
-    ~~~
-            job_id       |  status   | fraction_completed | rows | index_entries | system_records | bytes
-    +--------------------+-----------+--------------------+------+---------------+----------------+--------+
-      390345990764396545 | succeeded |                  1 | 1998 |             0 |              0 | 241052
-    (1 row)
-
-    Time: 2.882582355s
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > IMPORT TABLE vehicles (
-        id UUID NOT NULL,
-        city STRING NOT NULL,
-        type STRING NULL,
-        owner_id UUID NULL,
-        creation_time TIMESTAMP NULL,
-        status STRING NULL,
-        ext JSON NULL,
-        mycol STRING NULL,
-        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
-        INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC)
-    )
-    CSV DATA (
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/vehicles/n1.0.csv'
-    );
-    ~~~
-
-    ~~~
-            job_id       |  status   | fraction_completed | rows  | index_entries | system_records |  bytes
-    +--------------------+-----------+--------------------+-------+---------------+----------------+---------+
-      390346109887250433 | succeeded |                  1 | 19998 |         19998 |              0 | 3558767
-    (1 row)
-
-    Time: 5.803841493s
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > IMPORT TABLE rides (
-        id UUID NOT NULL,
-        city STRING NOT NULL,
-        vehicle_city STRING NULL,
-        rider_id UUID NULL,
-        vehicle_id UUID NULL,
-        start_address STRING NULL,
-        end_address STRING NULL,
-        start_time TIMESTAMP NULL,
-        end_time TIMESTAMP NULL,
-        revenue DECIMAL(10,2) NULL,
-        CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC),
-        INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC),
-        INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city ASC, vehicle_id ASC),
-        CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city)
-    )
-    CSV DATA (
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.0.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.1.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.2.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.3.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.4.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.5.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.6.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.7.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.8.csv',
-        'https://s3-us-west-1.amazonaws.com/cockroachdb-movr/datasets/perf-tuning/rides/n1.9.csv'
-    );
-    ~~~
-
-    ~~~
-            job_id       |  status   | fraction_completed |  rows  | index_entries | system_records |   bytes
-    +--------------------+-----------+--------------------+--------+---------------+----------------+-----------+
-      390346325693792257 | succeeded |                  1 | 999996 |       1999992 |              0 | 339741841
-    (1 row)
-
-    Time: 44.620371424s
-    ~~~
-
-    {{site.data.alerts.callout_success}}
-    You can observe the progress of imports as well as all schema change operations (e.g., adding secondary indexes) on the [**Jobs** page](admin-ui-jobs-page.html) of the Web UI.
-    {{site.data.alerts.end}}
-
-4. Logically, there should be a number of [foreign key](foreign-key.html) relationships between the tables:
-
-    Referencing columns | Referenced columns
-    --------------------|-------------------
-    `vehicles.city`, `vehicles.owner_id` | `users.city`, `users.id`
-    `rides.city`, `rides.rider_id` | `users.city`, `users.id`
-    `rides.vehicle_city`, `rides.vehicle_id` | `vehicles.city`, `vehicles.id`
-
-    As mentioned earlier, it wasn't possible to put these relationships in place during `IMPORT`, but it was possible to create the required secondary indexes. Now, let's add the foreign key constraints:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > ALTER TABLE vehicles
-    ADD CONSTRAINT fk_city_ref_users
-    FOREIGN KEY (city, owner_id)
-    REFERENCES users (city, id);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > ALTER TABLE rides
-    ADD CONSTRAINT fk_city_ref_users
-    FOREIGN KEY (city, rider_id)
-    REFERENCES users (city, id);
-    ~~~
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > ALTER TABLE rides
-    ADD CONSTRAINT fk_vehicle_city_ref_vehicles
-    FOREIGN KEY (vehicle_city, vehicle_id)
-    REFERENCES vehicles (city, id);
-    ~~~
-
-5. Exit the built-in SQL shell:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > \q
-    ~~~
diff --git a/src/current/_includes/v19.1/performance/overview.md b/src/current/_includes/v19.1/performance/overview.md
deleted file mode 100644
index 25cfdd91a75..00000000000
--- a/src/current/_includes/v19.1/performance/overview.md
+++ /dev/null
@@ -1,35 +0,0 @@
-### Topology
-
-You'll start with a 3-node CockroachDB cluster in a single Google Compute Engine (GCE) zone, with an extra instance for running a client application workload:
-
-Perf tuning topology
-
-{{site.data.alerts.callout_info}}
-Within a single GCE zone, network latency between instances should be sub-millisecond.
-{{site.data.alerts.end}}
-
-You'll then scale the cluster to 9 nodes running across 3 GCE regions, with an extra instance in each region for a client application workload:
-
-Perf tuning topology
-
-To reproduce the performance demonstrated in this tutorial:
-
-- For each CockroachDB node, you'll use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/#localssds) disk.
-- For running the client application workload, you'll use smaller instances, such as `n1-standard-1`.
-
-### Schema
-
-Your schema and data will be based on our open-source, fictional peer-to-peer ride-sharing application, [MovR](https://github.com/cockroachdb/movr).
-
-Perf tuning schema
-
-A few notes about the schema:
-
-- There are just three self-explanatory tables: In essence, `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have participated.
-- Each table has a composite primary key, with `city` being first in the key. 
Although not necessary initially in the single-region deployment, once you scale the cluster to multiple regions, these compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. As such, this tutorial demonstrates a schema designed for future scaling. -- The [`IMPORT`](import.html) feature you'll use to import the data does not support foreign keys, so you'll import the data without [foreign key constraints](foreign-key.html). However, the import will create the secondary indexes required to add the foreign keys later. -- The `rides` table contains both `city` and the seemingly redundant `vehicle_city`. This redundancy is necessary because, while it is not possible to apply more than one foreign key constraint to a single column, you will need to apply two foreign key constraints to the `rides` table, and each will require city as part of the constraint. The duplicate `vehicle_city`, which is kept in sync with `city` via a [`CHECK` constraint](check.html), lets you overcome [this limitation](https://github.com/cockroachdb/cockroach/issues/23580). - -### Important concepts - -To understand the techniques in this tutorial, and to be able to apply them in your own scenarios, it's important to first understand [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). Review that document before getting started here. diff --git a/src/current/_includes/v19.1/performance/partition-by-city.md b/src/current/_includes/v19.1/performance/partition-by-city.md deleted file mode 100644 index 5a830230a58..00000000000 --- a/src/current/_includes/v19.1/performance/partition-by-city.md +++ /dev/null @@ -1,419 +0,0 @@ -For this service, the most effective technique for improving read and write latency is to [geo-partition](partitioning.html) the data by city. In essence, this means changing the way data is mapped to ranges. Instead of an entire table and its indexes mapping to a specific range or set of ranges, all rows in the table and its indexes with a given city will map to a range or set of ranges. Once ranges are defined in this way, we can then use the [replication zone](configure-replication-zones.html) feature to pin partitions to specific locations, ensuring that read and write requests from users in a specific city do not have to leave that region. - -1. Partitioning is an enterprise feature, so start off by [registering for a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -2. Once you've received the trial license, SSH to any node in your cluster and [apply the license](enterprise-licensing.html#set-a-license): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING cluster.organization = '';" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --host=
\ - --execute="SET CLUSTER SETTING enterprise.license = '';" - ~~~ - -3. Define partitions for all tables and their secondary indexes. - - Start with the `users` table: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - Now define partitions for the `vehicles` table and its secondary indexes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE vehicles \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX vehicles_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york_idx VALUES IN ('new york'), \ - PARTITION boston_idx VALUES IN ('boston'), \ - PARTITION washington_dc_idx VALUES IN ('washington dc'), \ - PARTITION seattle_idx VALUES IN ('seattle'), \ - PARTITION san_francisco_idx VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx VALUES IN ('los angeles') \ - );" - ~~~ - - Next, define partitions for the `rides` table and its secondary indexes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER TABLE rides \ - PARTITION BY LIST (city) ( \ - PARTITION new_york VALUES IN ('new york'), \ - PARTITION boston VALUES IN ('boston'), \ - PARTITION washington_dc VALUES IN ('washington dc'), \ - PARTITION seattle VALUES IN ('seattle'), \ - PARTITION san_francisco VALUES IN ('san francisco'), \ - PARTITION los_angeles VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_city_ref_users \ - PARTITION BY LIST (city) ( \ - PARTITION new_york_idx1 VALUES IN ('new york'), \ - PARTITION boston_idx1 VALUES IN ('boston'), \ - PARTITION washington_dc_idx1 VALUES IN ('washington dc'), \ - PARTITION seattle_idx1 VALUES IN ('seattle'), \ - PARTITION san_francisco_idx1 VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx1 VALUES IN ('los angeles') \ - );" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
\ - --execute="ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles \ - PARTITION BY LIST (vehicle_city) ( \ - PARTITION new_york_idx2 VALUES IN ('new york'), \ - PARTITION boston_idx2 VALUES IN ('boston'), \ - PARTITION washington_dc_idx2 VALUES IN ('washington dc'), \ - PARTITION seattle_idx2 VALUES IN ('seattle'), \ - PARTITION san_francisco_idx2 VALUES IN ('san francisco'), \ - PARTITION los_angeles_idx2 VALUES IN ('los angeles') \ - );" - ~~~ - - Finally, drop an unused index on `rides` rather than partition it: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql \ - {{page.certs}} \ - --database=movr \ - --host=
 \
-    --execute="DROP INDEX rides_start_time_idx;"
-    ~~~
-
-    {{site.data.alerts.callout_info}}
-    The `rides` table contains 1 million rows, so dropping this index will take a few minutes.
-    {{site.data.alerts.end}}
-
-4. Now [create replication zones](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition) to require city data to be stored on specific nodes based on node locality.
-
-    City | Locality
-    -----|---------
-    New York | `zone=us-east1-b`
-    Boston | `zone=us-east1-b`
-    Washington DC | `zone=us-east1-b`
-    Seattle | `zone=us-west1-a`
-    San Francisco | `zone=us-west2-a`
-    Los Angeles | `zone=us-west2-a`
-
-    {{site.data.alerts.callout_info}}
-    Since our nodes are located in 3 specific GCE zones, we're only going to use the `zone=` portion of node locality. If we were using multiple zones per region, we would likely use the `region=` portion of the node locality instead.
-    {{site.data.alerts.end}}
-
-    Start with the `users` table partitions:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \
-    {{page.certs}} \
-    --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.users CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Move on to the `vehicles` table and secondary index partitions: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles_idx OF TABLE movr.vehicles CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - Finish with the `rides` table and secondary index partitions: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION new_york_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION boston_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc_idx OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION washington_dc_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-east1-b]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION seattle_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west1-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION san_francisco_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles_idx1 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --execute="ALTER PARTITION los_angeles_idx2 OF TABLE movr.rides CONFIGURE ZONE USING constraints='[+zone=us-west2-a]';" \ - {{page.certs}} \ - --host=
- ~~~ diff --git a/src/current/_includes/v19.1/performance/scale-cluster.md b/src/current/_includes/v19.1/performance/scale-cluster.md deleted file mode 100644 index e18069d5185..00000000000 --- a/src/current/_includes/v19.1/performance/scale-cluster.md +++ /dev/null @@ -1,61 +0,0 @@ -1. SSH to one of the `n1-standard-4` instances in the `us-west1-a` zone. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west1,zone=us-west1-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances in the `us-west1-a` zone. - -5. SSH to one of the `n1-standard-4` instances in the `us-west2-a` zone. - -6. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -7. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join= \ - --locality=cloud=gce,region=us-west2,zone=us-west2-a \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -8. Repeat steps 5 - 7 for the other two `n1-standard-4` instances in the `us-west2-a` zone. diff --git a/src/current/_includes/v19.1/performance/start-cluster.md b/src/current/_includes/v19.1/performance/start-cluster.md deleted file mode 100644 index 67b20c15192..00000000000 --- a/src/current/_includes/v19.1/performance/start-cluster.md +++ /dev/null @@ -1,60 +0,0 @@ -#### Start the nodes - -1. SSH to the first `n1-standard-4` instance. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - {{page.certs}} \ - --advertise-host= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-east1,zone=us-east1-b \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two `n1-standard-4` instances. 
Be sure to adjust the `--advertise-host` flag each time.
-
-#### Initialize the cluster
-
-1. SSH to the fourth instance, the one not running a CockroachDB node.
-
-2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \
-    | tar -xz
-    ~~~
-
-3. Copy the binary into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
-    ~~~
-
-4. Run the [`cockroach init`](initialize-a-cluster.html) command:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init {{page.certs}} --host=
-    ~~~
-
-    Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Web UI, and the SQL URL for clients.
diff --git a/src/current/_includes/v19.1/performance/test-performance-after-partitioning.md b/src/current/_includes/v19.1/performance/test-performance-after-partitioning.md
deleted file mode 100644
index 16c07a9f92d..00000000000
--- a/src/current/_includes/v19.1/performance/test-performance-after-partitioning.md
+++ /dev/null
@@ -1,93 +0,0 @@
-After partitioning, reads and writes for a specific city will be much faster because all replicas for that city are now located on the nodes closest to the city.
-
-To check this, let's repeat a few of the read and write queries that we executed before partitioning in [step 12](#step-12-test-performance).
-
-#### Reads
-
-Again imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use:
-
-1. SSH to the instance in `us-east1-b` with the Python client.
-
-2. Query for the data:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ {{page.app}} \
-    --host=
 \
-    --statement="SELECT id, ext FROM vehicles \
-    WHERE city = 'new york' \
-    AND type = 'bike' \
-    AND status = 'in_use'" \
-    --repeat=50 \
-    --times
-    ~~~
-
-    ~~~
-    Result:
-    ['id', 'ext']
-    ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"]
-    ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"]
-    ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"]
-    ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"]
-    ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"]
-    ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"]
-    ...
-
-    Times (milliseconds):
-    [20.065784454345703, 7.866144180297852, 8.362054824829102, 9.08803939819336, 7.925987243652344, 7.543087005615234, 7.786035537719727, 8.227825164794922, 7.907867431640625, 7.654905319213867, 7.793903350830078, 7.627964019775391, 7.833957672119141, 7.858037948608398, 7.474184036254883, 9.459972381591797, 7.726192474365234, 7.194995880126953, 7.364034652709961, 7.25102424621582, 7.650852203369141, 7.663965225219727, 9.334087371826172, 7.810115814208984, 7.543087005615234, 7.134914398193359, 7.922887802124023, 7.220029830932617, 7.606029510498047, 7.208108901977539, 7.333993911743164, 7.464170455932617, 7.679939270019531, 7.436990737915039, 7.62486457824707, 7.235050201416016, 7.420063018798828, 7.795095443725586, 7.39598274230957, 7.546901702880859, 7.582187652587891, 7.9669952392578125, 7.418155670166016, 7.539033889770508, 7.805109024047852, 7.086992263793945, 7.069826126098633, 7.833957672119141, 7.43412971496582, 7.035017013549805]
-
-    Median time (milliseconds):
-    7.62641429901
-    ~~~
-
-Before partitioning, this query took a median time of 72.02ms. After partitioning, the query took a median time of only 7.62ms.
-
-#### Writes
-
-Now let's again imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts:
-
-1. SSH to the instance in `us-west1-a` with the Python client.
-
-2. Create 100 Seattle-based users:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    {{page.app}} \
-    --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [41.8248176574707, 9.701967239379883, 8.725166320800781, 9.058952331542969, 7.819175720214844, 6.247997283935547, 10.265827178955078, 7.627964019775391, 9.120941162109375, 7.977008819580078, 9.247064590454102, 8.929967880249023, 9.610176086425781, 14.40286636352539, 8.588075637817383, 8.67319107055664, 9.417057037353516, 7.652044296264648, 8.917093276977539, 9.135961532592773, 8.604049682617188, 9.220123291015625, 7.578134536743164, 9.096860885620117, 8.942842483520508, 8.63790512084961, 7.722139358520508, 13.59701156616211, 9.176015853881836, 11.484146118164062, 9.212017059326172, 7.563114166259766, 8.793115615844727, 8.80289077758789, 7.827043533325195, 7.6389312744140625, 17.47584342956543, 9.436845779418945, 7.63392448425293, 8.594989776611328, 9.002208709716797, 8.93402099609375, 8.71896743774414, 8.76307487487793, 8.156061172485352, 8.729934692382812, 8.738040924072266, 8.25190544128418, 8.971929550170898, 7.460832595825195, 8.889198303222656, 8.45789909362793, 8.761167526245117, 10.223865509033203, 8.892059326171875, 8.961915969848633, 8.968114852905273, 7.750988006591797, 7.761955261230469, 9.199142456054688, 9.02700424194336, 9.509086608886719, 9.428977966308594, 7.902860641479492, 8.940935134887695, 8.615970611572266, 8.75401496887207, 7.906913757324219, 8.179187774658203, 11.447906494140625, 8.71419906616211, 9.202003479003906, 9.263038635253906, 9.089946746826172, 8.92496109008789, 10.32114028930664, 7.913827896118164, 9.464025497436523, 10.612010955810547, 8.78596305847168, 8.878946304321289, 7.575035095214844, 10.657072067260742, 8.777856826782227, 8.649110794067383, 9.012937545776367, 8.931875228881836, 9.31406021118164, 9.396076202392578, 8.908987045288086, 8.002996444702148, 9.089946746826172, 7.5588226318359375, 8.918046951293945, 12.117862701416016, 7.266998291015625, 8.074045181274414, 8.955001831054688, 8.868932723999023, 8.755922317504883] - - Median time (milliseconds): - 8.90052318573 - ~~~ - - Before partitioning, this query took a median time of 48.40ms. After partitioning, the query took a median time of only 8.90ms. - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [276.3068675994873, 9.830951690673828, 8.772134780883789, 9.304046630859375, 8.24880599975586, 7.959842681884766, 7.848978042602539, 7.879018783569336, 7.754087448120117, 10.724067687988281, 13.960123062133789, 9.825944900512695, 9.60993766784668, 9.273052215576172, 9.41920280456543, 8.040904998779297, 16.484975814819336, 10.178089141845703, 8.322000503540039, 9.468793869018555, 8.002042770385742, 9.185075759887695, 9.54294204711914, 9.387016296386719, 9.676933288574219, 13.051986694335938, 9.506940841674805, 12.327909469604492, 10.377168655395508, 15.023946762084961, 9.985923767089844, 7.853031158447266, 9.43303108215332, 9.164094924926758, 10.941028594970703, 9.37199592590332, 12.359857559204102, 8.975028991699219, 7.728099822998047, 8.310079574584961, 9.792089462280273, 9.448051452636719, 8.057117462158203, 9.37795639038086, 9.753942489624023, 9.576082229614258, 8.192062377929688, 9.392023086547852, 7.97581672668457, 8.165121078491211, 9.660959243774414, 8.270978927612305, 9.901046752929688, 8.085966110229492, 10.581016540527344, 9.831905364990234, 7.883787155151367, 8.077859878540039, 8.161067962646484, 10.02812385559082, 7.9898834228515625, 9.840965270996094, 9.452104568481445, 9.747028350830078, 9.003162384033203, 9.206056594848633, 9.274005889892578, 7.8449249267578125, 8.827924728393555, 9.322881698608398, 12.08186149597168, 8.76307487487793, 8.353948593139648, 8.182048797607422, 7.736921310424805, 9.31406021118164, 9.263992309570312, 9.282112121582031, 7.823944091796875, 9.11712646484375, 8.099079132080078, 9.156942367553711, 8.363962173461914, 10.974884033203125, 8.729934692382812, 9.2620849609375, 9.27591323852539, 8.272886276245117, 8.25190544128418, 8.093118667602539, 9.259939193725586, 8.413076400756836, 8.198976516723633, 9.95182991027832, 8.024930953979492, 8.895158767700195, 8.243083953857422, 9.076833724975586, 9.994029998779297, 10.149955749511719] - - Median time (milliseconds): - 9.26303863525 - ~~~ - - Before partitioning, this query took a median time of 116.86ms. After partitioning, the query took a median time of only 9.26ms. diff --git a/src/current/_includes/v19.1/performance/test-performance.md b/src/current/_includes/v19.1/performance/test-performance.md deleted file mode 100644 index 2009ac9653f..00000000000 --- a/src/current/_includes/v19.1/performance/test-performance.md +++ /dev/null @@ -1,146 +0,0 @@ -In general, all of the tuning techniques featured in the single-region scenario above still apply in a multi-region deployment. However, the fact that data and leaseholders are spread across the US means greater latencies in many cases. - -#### Reads - -For example, imagine we are a Movr administrator in New York, and we want to get the IDs and descriptions of all New York-based bikes that are currently in use: - -1. SSH to the instance in `us-east1-b` with the Python client. - -2. Query for the data: - - {% include copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'new york' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['0068ee24-2dfb-437d-9a5d-22bb742d519e', "{u'color': u'green', u'brand': u'Kona'}"] - ['01b80764-283b-4232-8961-a8d6a4121a08', "{u'color': u'green', u'brand': u'Pinarello'}"] - ['02a39628-a911-4450-b8c0-237865546f7f', "{u'color': u'black', u'brand': u'Schwinn'}"] - ['02eb2a12-f465-4575-85f8-a4b77be14c54', "{u'color': u'black', u'brand': u'Pinarello'}"] - ['02f2fcc3-fea6-4849-a3a0-dc60480fa6c2', "{u'color': u'red', u'brand': u'FujiCervelo'}"] - ['034d42cf-741f-428c-bbbb-e31820c68588', "{u'color': u'yellow', u'brand': u'Santa Cruz'}"] - ... - - Times (milliseconds): - [933.8209629058838, 72.02410697937012, 72.45206832885742, 72.39294052124023, 72.8158950805664, 72.07584381103516, 72.21412658691406, 71.96712493896484, 71.75517082214355, 72.16811180114746, 71.78592681884766, 72.91603088378906, 71.91109657287598, 71.4719295501709, 72.40676879882812, 71.8080997467041, 71.84004783630371, 71.98500633239746, 72.40891456604004, 73.75001907348633, 71.45905494689941, 71.53081893920898, 71.46596908569336, 72.07608222961426, 71.94995880126953, 71.41804695129395, 71.29096984863281, 72.11899757385254, 71.63381576538086, 71.3050365447998, 71.83194160461426, 71.20394706726074, 70.9981918334961, 72.79205322265625, 72.63493537902832, 72.15285301208496, 71.8698501586914, 72.30591773986816, 71.53582572937012, 72.69001007080078, 72.03006744384766, 72.56317138671875, 71.61688804626465, 72.17121124267578, 70.20092010498047, 72.12018966674805, 73.34589958190918, 73.01592826843262, 71.49410247802734, 72.19099998474121] - - Median time (milliseconds): - 72.0270872116 - ~~~ - -As we saw earlier, the leaseholder for the `vehicles` table is in `us-west2-a` (Los Angeles), so our query had to go from the gateway node in `us-east1-b` all the way to the west coast and then back again before returning data to the client. - -For contrast, imagine we are now a Movr administrator in Los Angeles, and we want to get the IDs and descriptions of all Los Angeles-based bikes that are currently in use: - -1. SSH to the instance in `us-west2-a` with the Python client. - -2. Query for the data: - - {% include copy-clipboard.html %} - ~~~ shell - $ {{page.app}} \ - --host=
\ - --statement="SELECT id, ext FROM vehicles \ - WHERE city = 'los angeles' \ - AND type = 'bike' \ - AND status = 'in_use'" \ - --repeat=50 \ - --times - ~~~ - - ~~~ - Result: - ['id', 'ext'] - ['00078349-94d4-43e6-92be-8b0d1ac7ee9f', "{u'color': u'blue', u'brand': u'Merida'}"] - ['003f84c4-fa14-47b2-92d4-35a3dddd2d75', "{u'color': u'red', u'brand': u'Kona'}"] - ['0107a133-7762-4392-b1d9-496eb30ee5f9', "{u'color': u'yellow', u'brand': u'Kona'}"] - ['0144498b-4c4f-4036-8465-93a6bea502a3', "{u'color': u'blue', u'brand': u'Pinarello'}"] - ['01476004-fb10-4201-9e56-aadeb427f98a', "{u'color': u'black', u'brand': u'Merida'}"] - - Times (milliseconds): - [782.6759815216064, 8.564949035644531, 8.226156234741211, 7.949113845825195, 7.86590576171875, 7.842063903808594, 7.674932479858398, 7.555961608886719, 7.642984390258789, 8.024930953979492, 7.717132568359375, 8.46409797668457, 7.520914077758789, 7.6541900634765625, 7.458925247192383, 7.671833038330078, 7.740020751953125, 7.771015167236328, 7.598161697387695, 8.411169052124023, 7.408857345581055, 7.469892501831055, 7.524967193603516, 7.764101028442383, 7.750988006591797, 7.2460174560546875, 6.927967071533203, 7.822990417480469, 7.27391242980957, 7.730960845947266, 7.4710845947265625, 7.4310302734375, 7.33494758605957, 7.455110549926758, 7.021188735961914, 7.083892822265625, 7.812976837158203, 7.625102996826172, 7.447957992553711, 7.179021835327148, 7.504940032958984, 7.224082946777344, 7.257938385009766, 7.714986801147461, 7.4939727783203125, 7.6160430908203125, 7.578849792480469, 7.890939712524414, 7.546901702880859, 7.411956787109375] - - Median time (milliseconds): - 7.6071023941 - ~~~ - -Because the leaseholder for `vehicles` is in the same zone as the client request, this query took just 7.60ms compared to the similar query in New York that took 72.02ms. - -#### Writes - -The geographic distribution of data impacts write performance as well. For example, imagine 100 people in Seattle and 100 people in New York want to create new Movr accounts: - -1. SSH to the instance in `us-west1-a` with the Python client. - -2. Create 100 Seattle-based users: - - {% include copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'seattle', 'Seatller', '111 East Street', '1736352379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [277.4538993835449, 50.12702941894531, 47.75214195251465, 48.13408851623535, 47.872066497802734, 48.65407943725586, 47.78695106506348, 49.14689064025879, 52.770137786865234, 49.00097846984863, 48.68602752685547, 47.387123107910156, 47.36208915710449, 47.6841926574707, 46.49209976196289, 47.06096649169922, 46.753883361816406, 46.304941177368164, 48.90894889831543, 48.63715171813965, 48.37393760681152, 49.23295974731445, 50.13418197631836, 48.310041427612305, 48.57516288757324, 47.62911796569824, 47.77693748474121, 47.505855560302734, 47.89996147155762, 49.79205131530762, 50.76479911804199, 50.21500587463379, 48.73299598693848, 47.55592346191406, 47.35088348388672, 46.7071533203125, 43.00808906555176, 43.1060791015625, 46.02813720703125, 47.91092872619629, 68.71294975280762, 49.241065979003906, 48.9039421081543, 47.82295227050781, 48.26998710632324, 47.631025314331055, 64.51892852783203, 48.12812805175781, 67.33417510986328, 48.603057861328125, 50.31013488769531, 51.02396011352539, 51.45716667175293, 50.85396766662598, 49.07512664794922, 47.49894142150879, 44.67201232910156, 43.827056884765625, 44.412851333618164, 46.69189453125, 49.55601692199707, 49.16882514953613, 49.88598823547363, 49.31306838989258, 46.875, 46.69594764709473, 48.31886291503906, 48.378944396972656, 49.0570068359375, 49.417972564697266, 48.22111129760742, 50.662994384765625, 50.58097839355469, 75.44088363647461, 51.05400085449219, 50.85110664367676, 48.187971115112305, 56.7781925201416, 42.47403144836426, 46.2191104888916, 53.96890640258789, 46.697139739990234, 48.99096488952637, 49.1330623626709, 46.34690284729004, 47.09315299987793, 46.39410972595215, 46.51689529418945, 47.58000373840332, 47.924041748046875, 48.426151275634766, 50.22597312927246, 50.1859188079834, 50.37498474121094, 49.861907958984375, 51.477909088134766, 73.09293746948242, 48.779964447021484, 45.13692855834961, 42.2968864440918] - - Median time (milliseconds): - 48.4025478363 - ~~~ - -3. SSH to the instance in `us-east1-b` with the Python client. - -4. Create 100 new NY-based users: - - {% include copy-clipboard.html %} - ~~~ shell - {{page.app}} \ - --host=
\ - --statement="INSERT INTO users VALUES (gen_random_uuid(), 'new york', 'New Yorker', '111 West Street', '9822222379937347')" \ - --repeat=100 \ - --times - ~~~ - - ~~~ - Times (milliseconds): - [131.05082511901855, 116.88899993896484, 115.15498161315918, 117.095947265625, 121.04082107543945, 115.8750057220459, 113.80696296691895, 113.05880546569824, 118.41201782226562, 125.30899047851562, 117.5389289855957, 115.23890495300293, 116.84799194335938, 120.0411319732666, 115.62800407409668, 115.08989334106445, 113.37089538574219, 115.15498161315918, 115.96989631652832, 133.1961154937744, 114.25995826721191, 118.09396743774414, 122.24102020263672, 116.14608764648438, 114.80998992919922, 131.9139003753662, 114.54391479492188, 115.15307426452637, 116.7759895324707, 135.10799407958984, 117.18511581420898, 120.15485763549805, 118.0570125579834, 114.52388763427734, 115.28396606445312, 130.00011444091797, 126.45292282104492, 142.69423484802246, 117.60401725769043, 134.08493995666504, 117.47002601623535, 115.75007438659668, 117.98381805419922, 115.83089828491211, 114.88890647888184, 113.23404312133789, 121.1700439453125, 117.84791946411133, 115.35286903381348, 115.0820255279541, 116.99700355529785, 116.67394638061523, 116.1041259765625, 114.67289924621582, 112.98894882202148, 117.1119213104248, 119.78602409362793, 114.57300186157227, 129.58717346191406, 118.37983131408691, 126.68204307556152, 118.30306053161621, 113.27195167541504, 114.22920227050781, 115.80777168273926, 116.81294441223145, 114.76683616638184, 115.1430606842041, 117.29192733764648, 118.24417114257812, 116.56999588012695, 113.8620376586914, 114.88819122314453, 120.80597877502441, 132.39002227783203, 131.00910186767578, 114.56179618835449, 117.03896522521973, 117.72680282592773, 115.6010627746582, 115.27681350708008, 114.52317237854004, 114.87483978271484, 117.78903007507324, 116.65701866149902, 122.6949691772461, 117.65193939208984, 120.5449104309082, 115.61179161071777, 117.54202842712402, 114.70890045166016, 113.58809471130371, 129.7171115875244, 117.57993698120117, 117.1119213104248, 117.64001846313477, 140.66505432128906, 136.41691207885742, 116.24789237976074, 115.19908905029297] - - Median time (milliseconds): - 116.868495941 - ~~~ - -It took 48.40ms to create a user in Seattle and 116.86ms to create a user in New York. To better understand this discrepancy, let's look at the distribution of data for the `users` table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ -{{page.certs}} \ ---host=
 \
---database=movr \
---execute="SHOW EXPERIMENTAL_RANGES FROM TABLE users;"
-~~~
-
-~~~
-  start_key | end_key | range_id | replicas | lease_holder
-+-----------+---------+----------+----------+--------------+
-  NULL      | NULL    |       49 | {2,6,8}  |            6
-(1 row)
-~~~
-
-For the single range containing `users` data, one replica is in each zone, with the leaseholder in the `us-west1-a` zone. This means that:
-
-- When creating a user in Seattle, the request doesn't have to leave the zone to reach the leaseholder. However, since a write requires consensus from its replica group, the write has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client.
-- When creating a user in New York, there are more network hops and, thus, increased latency. The request first needs to travel across the continent to the leaseholder in `us-west1-a`. It then has to wait for confirmation from either the replica in `us-west2-a` (Los Angeles) or `us-east1-b` (New York) before committing and then returning confirmation to the client back in the east.
diff --git a/src/current/_includes/v19.1/performance/tuning-secure.py b/src/current/_includes/v19.1/performance/tuning-secure.py
deleted file mode 100644
index a644dbb1c87..00000000000
--- a/src/current/_includes/v19.1/performance/tuning-secure.py
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env python
-
-import argparse
-import psycopg2
-import time
-
-parser = argparse.ArgumentParser(
-    description="test performance of statements against movr database")
-parser.add_argument("--host", required=True,
-    help="ip address of one of the CockroachDB nodes")
-parser.add_argument("--statement", required=True,
-    help="statement to execute")
-parser.add_argument("--repeat", type=int,
-    help="number of times to repeat the statement", default = 20)
-parser.add_argument("--times",
-    help="print time for each repetition of the statement", action="store_true")
-parser.add_argument("--cumulative",
-    help="print cumulative time for all repetitions of the statement", action="store_true")
-args = parser.parse_args()
-
-conn = psycopg2.connect(
-    database='movr',
-    user='root',
-    host=args.host,
-    port=26257,
-    sslmode='require',
-    sslrootcert='certs/ca.crt',
-    sslkey='certs/client.root.key',
-    sslcert='certs/client.root.crt'
-)
-conn.set_session(autocommit=True)
-cur = conn.cursor()
-
-def median(lst):
-    n = len(lst)
-    if n < 1:
-        return None
-    if n % 2 == 1:
-        return sorted(lst)[n//2]
-    else:
-        return sum(sorted(lst)[n//2-1:n//2+1])/2.0
-
-times = list()
-for n in range(args.repeat):
-    start = time.time()
-    statement = args.statement
-    cur.execute(statement)
-    if n < 1:
-        if cur.description is not None:
-            colnames = [desc[0] for desc in cur.description]
-            print("")
-            print("Result:")
-            print(colnames)
-            rows = cur.fetchall()
-            for row in rows:
-                print([str(cell) for cell in row])
-    end = time.time()
-    times.append((end - start)* 1000)
-
-cur.close()
-conn.close()
-
-print("")
-if args.times:
-    print("Times (milliseconds):")
-    print(times)
-    print("")
-# print("Average time (milliseconds):")
-# print(float(sum(times))/len(times))
-# print("")
-print("Median time (milliseconds):")
-print(median(times))
-print("")
-if args.cumulative:
-    print("Cumulative time (milliseconds):")
-    print(sum(times))
-    print("")
diff --git a/src/current/_includes/v19.1/performance/tuning.py b/src/current/_includes/v19.1/performance/tuning.py
deleted file mode 100644
index dcb567dad91..00000000000
--- 
a/src/current/_includes/v19.1/performance/tuning.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python - -import argparse -import psycopg2 -import time - -parser = argparse.ArgumentParser( - description="test performance of statements against movr database") -parser.add_argument("--host", required=True, - help="ip address of one of the CockroachDB nodes") -parser.add_argument("--statement", required=True, - help="statement to execute") -parser.add_argument("--repeat", type=int, - help="number of times to repeat the statement", default = 20) -parser.add_argument("--times", - help="print time for each repetition of the statement", action="store_true") -parser.add_argument("--cumulative", - help="print cumulative time for all repetitions of the statement", action="store_true") -args = parser.parse_args() - -conn = psycopg2.connect( - database='movr', - user='root', - host=args.host, - port=26257 -) -conn.set_session(autocommit=True) -cur = conn.cursor() - -def median(lst): - n = len(lst) - if n < 1: - return None - if n % 2 == 1: - return sorted(lst)[n//2] - else: - return sum(sorted(lst)[n//2-1:n//2+1])/2.0 - -times = list() -for n in range(args.repeat): - start = time.time() - statement = args.statement - cur.execute(statement) - if n < 1: - if cur.description is not None: - colnames = [desc[0] for desc in cur.description] - print("") - print("Result:") - print(colnames) - rows = cur.fetchall() - for row in rows: - print([str(cell) for cell in row]) - end = time.time() - times.append((end - start)* 1000) - -cur.close() -conn.close() - -print("") -if args.times: - print("Times (milliseconds):") - print(times) - print("") -# print("Average time (milliseconds):") -# print(float(sum(times))/len(times)) -# print("") -print("Median time (milliseconds):") -print(median(times)) -print("") -if args.cumulative: - print("Cumulative time (milliseconds):") - print(sum(times)) - print("") diff --git a/src/current/_includes/v19.1/prod-deployment/advertise-addr-join.md b/src/current/_includes/v19.1/prod-deployment/advertise-addr-join.md deleted file mode 100644 index 67019d1fcea..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/advertise-addr-join.md +++ /dev/null @@ -1,4 +0,0 @@ -Flag | Description ------|------------ -`--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.
<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>
In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking). -`--join` | Identifies the address of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising. diff --git a/src/current/_includes/v19.1/prod-deployment/backup.sh b/src/current/_includes/v19.1/prod-deployment/backup.sh deleted file mode 100644 index c1a0bc3c5a6..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/backup.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash - -set -euo pipefail - -# This script creates full backups when run on the configured -# day of the week and incremental backups when run on other days, and tracks -# recently created backups in a file to pass as the base for incremental backups. - -full_day="" # Must match (including case) the output of `LC_ALL=C date +%A`. -what="DATABASE " # The name of the database you want to backup. -base="/backups" # The URL where you want to store the backup. -extra="" # Any additional parameters that need to be appended to the BACKUP URI e.g., AWS key params. -recent=recent_backups.txt # File in which recent backups are tracked. -backup_parameters= # e.g., "WITH revision_history" - -# Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, `--port`, and additional flags as needed to connect to the SQL client. -runsql() { cockroach sql --insecure -e "$1"; } - -destination="${base}/$(date +"%Y%m%d-%H%M")${extra}" - -prev= -while read -r line; do - [[ "$prev" ]] && prev+=", " - prev+="'$line'" -done < "$recent" - -if [[ "$(LC_ALL=C date +%A)" = "$full_day" || ! "$prev" ]]; then - runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters" - echo "$destination" > "$recent" -else - destination="${base}/$(date +"%Y%m%d-%H%M")-inc${extra}" - runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' INCREMENTAL FROM $prev $backup_parameters" - echo "$destination" >> "$recent" -fi - -echo "backed up to ${destination}" diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-initialize-cluster.md b/src/current/_includes/v19.1/prod-deployment/insecure-initialize-cluster.md deleted file mode 100644 index 5d1384c8467..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-initialize-cluster.md +++ /dev/null @@ -1,12 +0,0 @@ -On your local machine, complete the node startup process and have them join together as a cluster: - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Run the [`cockroach init`](initialize-a-cluster.html) command, with the `--host` flag set to the address of any node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
- ~~~ - - Each node then prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-recommendations.md b/src/current/_includes/v19.1/prod-deployment/insecure-recommendations.md deleted file mode 100644 index 11bcbe83d83..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-recommendations.md +++ /dev/null @@ -1,13 +0,0 @@ -- Consider using a [secure cluster](manual-deployment.html) instead. Using an insecure cluster comes with risks: - - Your cluster is open to any client that can access any node's IP addresses. - - Any user, even `root`, can log in without providing a password. - - Any user, connecting as `root`, can read or write any data in your cluster. - - There is no network encryption or authentication, and thus no confidentiality. - -- Decide how you want to access your Admin UI: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI. diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-requirements.md b/src/current/_includes/v19.1/prod-deployment/insecure-requirements.md deleted file mode 100644 index 3b45a14b0d5..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-requirements.md +++ /dev/null @@ -1,7 +0,0 @@ -- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html). - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your Admin UI diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-scale-cluster.md b/src/current/_includes/v19.1/prod-deployment/insecure-scale-cluster.md deleted file mode 100644 index 45211b51203..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-scale-cluster.md +++ /dev/null @@ -1,117 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
-<!-- filter tabs: Manual | systemd -->
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
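
To confirm that a newly added node has joined, you can run `cockroach node status` from any machine that can reach the cluster. This is a minimal sketch; `10.0.0.100` is a placeholder for your own load balancer or node address, not a value from this guide:

~~~ shell
# List the cluster's nodes; a newly added node should appear as a new row.
# 10.0.0.100 is a placeholder address for the load balancer or any node.
$ cockroach node status --insecure --host=10.0.0.100
~~~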
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service): - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory - -8. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -9. Repeat these steps for each additional node that you want in your cluster. - -
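
The systemd steps above stage the unit file but do not start it. As a sketch of the remaining step, assuming you saved the template under the `insecurecockroachdb` unit name used by the sample configuration, you would reload systemd and start the service on each new node:

~~~ shell
# Pick up the newly installed unit file, start the node, and confirm it is active.
$ sudo systemctl daemon-reload
$ sudo systemctl start insecurecockroachdb
$ systemctl status insecurecockroachdb
~~~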
diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-start-nodes.md b/src/current/_includes/v19.1/prod-deployment/insecure-start-nodes.md deleted file mode 100644 index b67edfed311..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-start-nodes.md +++ /dev/null @@ -1,148 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
-<!-- filter tabs: Manual | systemd -->
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}} -After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step. -{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--insecure` | Indicates that the cluster is insecure, with no network encryption or authentication. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.
<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
-    `--join` | Identifies the addresses of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
-    `--cache`<br>
`--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html). - -5. Repeat these steps for each additional node that you want in your cluster. - -
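
To make step 4 concrete, here is a sketch of the start command for the first of three nodes, using the hypothetical internal addresses `10.0.0.1`, `10.0.0.2`, and `10.0.0.3` (placeholders, not values from this guide):

~~~ shell
# Start node 1, advertising its own address and joining all three initial nodes.
$ cockroach start \
--insecure \
--advertise-addr=10.0.0.1 \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~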
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown cockroach /var/lib/cockroach - ~~~ - -7. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/insecurecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/insecurecockroachdb.service %} - ~~~ - -8. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-port=8080`. To set these options manually, see [Start a Node](start-a-node.html). - -9. Start the CockroachDB cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl start insecurecockroachdb - ~~~ - -10. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop insecurecockroachdb` -{{site.data.alerts.end}} - -
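
Because the sample unit file logs to syslog with the identifier `cockroach`, you can inspect a node's output through `journald`. A quick sketch:

~~~ shell
# Follow the unit's log output; Ctrl+C to stop.
$ sudo journalctl -u insecurecockroachdb -f
~~~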
diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-test-cluster.md b/src/current/_includes/v19.1/prod-deployment/insecure-test-cluster.md deleted file mode 100644 index faece96fa42..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
- ~~~
-
-2. Create an `insecurenodetest` database:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE DATABASE insecurenodetest;
-    ~~~
-
-3. View the cluster's databases, which will include `insecurenodetest`:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SHOW DATABASES;
-    ~~~
-
-    ~~~
-    +--------------------+
-    |      Database      |
-    +--------------------+
-    | crdb_internal      |
-    | information_schema |
-    | insecurenodetest   |
-    | pg_catalog         |
-    | system             |
-    +--------------------+
-    (5 rows)
-    ~~~
-
-4. Use `\q` to exit the SQL shell.
diff --git a/src/current/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md b/src/current/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md deleted file mode 100644 index 9e594e0a864..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecure-test-load-balancing.md +++ /dev/null @@ -1,41 +0,0 @@
-CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload.
-
-{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}}
-
-1. SSH to the machine where you want to run the sample TPC-C workload.
-
-    This should be a machine that is not running a CockroachDB node.
-
-2. Download `workload` and make it executable:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST
-    ~~~
-
-3. Rename and copy `workload` into the `PATH`:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cp -i workload.LATEST /usr/local/bin/workload
-    ~~~
-
-4. Start the TPC-C workload, pointing it at the IP address of the load balancer:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ workload run tpcc \
-    --drop \
-    --init \
-    --duration=20m \
-    --tolerate-errors \
-    "postgresql://root@:26257/tpcc?sslmode=disable"
-    ~~~
-
-    This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries.
-
-    {{site.data.alerts.callout_success}}For more `tpcc` options, use `workload run tpcc --help`. For details about other load generators included in `workload`, use `workload run --help`.{{site.data.alerts.end}}
-
-5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup.
-
-    Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes.
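
In addition to the Admin UI graphs, you can spot-check that each node is up by hitting its HTTP health endpoint. A sketch, with `10.0.0.1` standing in for one of your node addresses:

~~~ shell
# /health returns HTTP 200 and a small JSON body when the node is serving.
# 10.0.0.1 is a placeholder for one of your node addresses.
$ curl http://10.0.0.1:8080/health
~~~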
diff --git a/src/current/_includes/v19.1/prod-deployment/insecurecockroachdb.service b/src/current/_includes/v19.1/prod-deployment/insecurecockroachdb.service deleted file mode 100644 index b027b941009..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/insecurecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --insecure --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v19.1/prod-deployment/monitor-cluster.md b/src/current/_includes/v19.1/prod-deployment/monitor-cluster.md deleted file mode 100644 index 363ef1167c1..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/monitor-cluster.md +++ /dev/null @@ -1,3 +0,0 @@ -Despite CockroachDB's various [built-in safeguards against failure](frequently-asked-questions.html#how-does-cockroachdb-survive-failures), it is critical to actively monitor the overall health and performance of a cluster running in production and to create alerting rules that promptly send notifications when there are events that require investigation or intervention. - -For details about available monitoring options and the most important events and metrics to alert on, see [Monitoring and Alerting](monitoring-and-alerting.html). diff --git a/src/current/_includes/v19.1/prod-deployment/prod-see-also.md b/src/current/_includes/v19.1/prod-deployment/prod-see-also.md deleted file mode 100644 index 9dc661f6dfc..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/prod-see-also.md +++ /dev/null @@ -1,7 +0,0 @@ -- [Production Checklist](recommended-production-settings.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Monitoring and Alerting](monitoring-and-alerting.html) -- [Performance Benchmarking](performance-benchmarking-with-tpc-c.html) -- [Performance Tuning](performance-tuning.html) -- [Local Deployment](start-a-local-cluster.html) diff --git a/src/current/_includes/v19.1/prod-deployment/secure-generate-certificates.md b/src/current/_includes/v19.1/prod-deployment/secure-generate-certificates.md deleted file mode 100644 index 8badba4dc8e..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-generate-certificates.md +++ /dev/null @@ -1,201 +0,0 @@ -You can use either `cockroach cert` commands or [`openssl` commands](create-security-certificates-openssl.html) to generate security certificates. This section features the `cockroach cert` commands. - -Locally, you'll need to [create the following certificates and keys](create-security-certificates.html): - -- A certificate authority (CA) key pair (`ca.crt` and `ca.key`). -- A node key pair for each node, issued to its IP addresses and any common names the machine uses, as well as to the IP addresses and common names for machines running load balancers. -- A client key pair for the `root` user. You'll use this to run a sample workload against the cluster as well as some `cockroach` client commands from your local machine. 
- -{{site.data.alerts.callout_success}}Before beginning, it's useful to collect each of your machine's internal and external IP addresses, as well as any server names you want to issue certificates for.{{site.data.alerts.end}} - -1. [Install CockroachDB](install-cockroachdb.html) on your local machine, if you haven't already. - -2. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -3. Create the CA certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -4. Create the certificate and key for the first node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -5. Upload the CA certificate and node certificate and key to the first node: - - {% if page.title contains "Google" %} - {% include copy-clipboard.html %} - ~~~ shell - $ gcloud compute ssh \ - --project \ - --command "mkdir certs" - ~~~ - - {{site.data.alerts.callout_info}} - `gcloud compute ssh` associates your public SSH key with the GCP project and is only needed when connecting to the first node. See the [GCP docs](https://cloud.google.com/sdk/gcloud/reference/compute/ssh) for more details. - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% elsif page.title contains "AWS" %} - {% include copy-clipboard.html %} - ~~~ shell - $ ssh-add /path/.pem - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -6. Delete the local copy of the node certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}} - This is necessary because the certificates and keys for additional nodes will also be named `node.crt` and `node.key`. As an alternative to deleting these files, you can run the next `cockroach cert create-node` commands with the `--overwrite` flag. - {{site.data.alerts.end}} - -7. Create the certificate and key for the second node, issued to all common names you might use to refer to the node as well as to the load balancer instances: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - \ - \ - \ - \ - localhost \ - 127.0.0.1 \ - \ - \ - \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -8. 
Upload the CA certificate and node certificate and key to the second node: - - {% if page.title contains "AWS" %} - {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - - {% else %} - {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - @:~/certs - ~~~ - {% endif %} - -9. Repeat steps 6 - 8 for each additional node. - -10. Create a client certificate and key for the `root` user: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -11. Upload the CA certificate and client certificate and key to the machine where you will run a sample workload: - - {% include copy-clipboard.html %} - ~~~ shell - $ ssh @ "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/client.root.crt \ - certs/client.root.key \ - @:~/certs - ~~~ - - In later steps, you'll also use the `root` user's certificate to run [`cockroach`](cockroach-commands.html) client commands from your local machine. If you might also want to run `cockroach` client commands directly on a node (e.g., for local debugging), you'll need to copy the `root` user's certificate and key to that node as well. - -{{site.data.alerts.callout_info}} -On accessing the Admin UI in a later step, your browser will consider the CockroachDB-created certificate invalid and you’ll need to click through a warning message to get to the UI. You can avoid this issue by [using a certificate issued by a public CA](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/prod-deployment/secure-initialize-cluster.md b/src/current/_includes/v19.1/prod-deployment/secure-initialize-cluster.md deleted file mode 100644 index 0dc9b750307..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-initialize-cluster.md +++ /dev/null @@ -1,8 +0,0 @@ -On your local machine, run the [`cockroach init`](initialize-a-cluster.html) command to complete the node startup process and have them join together as a cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --certs-dir=certs --host=
-~~~ - -After running this command, each node prints helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the admin UI, and the SQL URL for clients. diff --git a/src/current/_includes/v19.1/prod-deployment/secure-recommendations.md b/src/current/_includes/v19.1/prod-deployment/secure-recommendations.md deleted file mode 100644 index 85b0b0b31d0..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-recommendations.md +++ /dev/null @@ -1,7 +0,0 @@ -- Decide how you want to access your Admin UI: - - Access Level | Description - -------------|------------ - Partially open | Set a firewall rule to allow only specific IP addresses to communicate on port `8080`. - Completely open | Set a firewall rule to allow all IP addresses to communicate on port `8080`. - Completely closed | Set a firewall rule to disallow all communication on port `8080`. In this case, a machine with SSH access to a node could use an SSH tunnel to access the Admin UI. diff --git a/src/current/_includes/v19.1/prod-deployment/secure-requirements.md b/src/current/_includes/v19.1/prod-deployment/secure-requirements.md deleted file mode 100644 index d27643bf706..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-requirements.md +++ /dev/null @@ -1,9 +0,0 @@ -- Carefully review the [Production Checklist](recommended-production-settings.html) and recommended [Topology Patterns](topology-patterns.html). - -- You must have [CockroachDB installed](install-cockroachdb.html) locally. This is necessary for generating and managing your deployment's certificates. - -- You must have [SSH access]({{page.ssh-link}}) to each machine. This is necessary for distributing and starting CockroachDB binaries. - -- Your network configuration must allow TCP communication on the following ports: - - `26257` for intra-cluster and client-cluster communication - - `8080` to expose your Admin UI diff --git a/src/current/_includes/v19.1/prod-deployment/secure-scale-cluster.md b/src/current/_includes/v19.1/prod-deployment/secure-scale-cluster.md deleted file mode 100644 index edf43b0d188..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-scale-cluster.md +++ /dev/null @@ -1,124 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
-<!-- filter tabs: Manual | systemd -->
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command, passing the new node's address as the `--advertise-addr` flag and pointing `--join` to the three existing nodes (also include `--locality` if you set it earlier). - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -5. Update your load balancer to recognize the new node. - -
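
As with the insecure deployment, you can confirm that the new node has joined by running `cockroach node status` against the load balancer or any node. A sketch, with `10.0.0.100` as a placeholder address and the `certs` directory from the earlier certificate steps:

~~~ shell
# List the cluster's nodes over a secure connection; the new node should appear.
$ cockroach node status --certs-dir=certs --host=10.0.0.100
~~~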
- -
- -For each additional node you want to add to the cluster, complete the following steps: - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach.cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service): - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - - Save the file in the `/etc/systemd/system/` directory. - -9. Customize the sample configuration template for your deployment: - - Specify values for the following flags in the sample configuration template: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - -10. Repeat these steps for each additional node that you want in your cluster. - -
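
As in the insecure systemd instructions, the steps above stage the unit but do not start it. A sketch of the remaining commands, using the `securecockroachdb` unit name from the sample template:

~~~ shell
# Reload systemd, start the node, and enable it to start again at boot.
$ sudo systemctl daemon-reload
$ sudo systemctl start securecockroachdb
$ sudo systemctl enable securecockroachdb
~~~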
diff --git a/src/current/_includes/v19.1/prod-deployment/secure-start-nodes.md b/src/current/_includes/v19.1/prod-deployment/secure-start-nodes.md deleted file mode 100644 index 6f50dc3d627..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-start-nodes.md +++ /dev/null @@ -1,153 +0,0 @@ -You can start the nodes manually or automate the process using [systemd](https://www.freedesktop.org/wiki/Software/systemd/). - -
-<!-- filter tabs: Manual | systemd -->
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --advertise-addr= \ - --join=,, \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - - This command primes the node to start, using the following flags: - - Flag | Description - -----|------------ - `--certs-dir` | Specifies the directory where you placed the `ca.crt` file and the `node.crt` and `node.key` files for the node. - `--advertise-addr` | Specifies the IP address/hostname and port to tell other nodes to use. The port number can be omitted, in which case it defaults to `26257`.
<br><br>This value must route to an IP address the node is listening on (with `--listen-addr` unspecified, the node listens on all IP addresses).<br><br>In some networking scenarios, you may need to use `--advertise-addr` and/or `--listen-addr` differently. For more details, see [Networking](recommended-production-settings.html#networking).
-    `--join` | Identifies the addresses of 3-5 of the initial nodes of the cluster. These addresses should match the addresses that the target nodes are advertising.
-    `--cache`<br>
`--max-sql-memory` | Increases the node's cache and temporary SQL memory size to 25% of available system memory to improve read performance and increase capacity for in-memory SQL processing. For more details, see [Cache and SQL Memory Size](recommended-production-settings.html#cache-and-sql-memory-size). - `--background` | Starts the node in the background so you gain control of the terminal to issue more commands. - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=:8080`. To set these options manually, see [Start a Node](start-a-node.html). - -5. Repeat these steps for each additional node that you want in your cluster. - -
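
To make the `--locality` recommendation above concrete, here is a sketch of a secure start command for a node in one datacenter of a multi-region cluster; the addresses and locality tiers are placeholders, not values from this guide:

~~~ shell
# Start a node with locality tiers ordered from most to least inclusive.
$ cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.1.1 \
--join=10.0.1.1,10.0.2.1,10.0.3.1 \
--locality=region=us-east1,zone=us-east1-b \
--cache=.25 \
--max-sql-memory=.25 \
--background
~~~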
- -
- -For each initial node of your cluster, complete the following steps: - -{{site.data.alerts.callout_info}}After completing these steps, nodes will not yet be live. They will complete the startup process and join together to form a cluster as soon as the cluster is initialized in the next step.{{site.data.alerts.end}} - -1. SSH to the machine where you want the node to run. Ensure you are logged in as the `root` user. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -3. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -4. Create the Cockroach directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir /var/lib/cockroach - ~~~ - -5. Create a Unix user named `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ useradd cockroach - ~~~ - -6. Move the `certs` directory to the `cockroach` directory. - - {% include copy-clipboard.html %} - ~~~ shell - $ mv certs /var/lib/cockroach/ - ~~~ - -7. Change the ownership of `Cockroach` directory to the user `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ chown -R cockroach.cockroach /var/lib/cockroach - ~~~ - -8. Download the [sample configuration template](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service) and save the file in the `/etc/systemd/system/` directory: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/securecockroachdb.service - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - {% include {{ page.version.version }}/prod-deployment/securecockroachdb.service %} - ~~~ - -9. In the sample configuration template, specify values for the following flags: - - {% include {{ page.version.version }}/prod-deployment/advertise-addr-join.md %} - - When deploying across multiple datacenters, or when there is otherwise high latency between nodes, it is recommended to set `--locality` as well. It is also required to use certain enterprise features. For more details, see [Locality](start-a-node.html#locality). - - For other flags not explicitly set, the command uses default values. For example, the node stores data in `--store=cockroach-data` and binds Admin UI HTTP requests to `--http-addr=localhost:8080`. To set these options manually, see [Start a Node](start-a-node.html). - -10. Start the CockroachDB cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl start securecockroachdb - ~~~ - -11. Repeat these steps for each additional node that you want in your cluster. - -{{site.data.alerts.callout_info}} -`systemd` handles node restarts in case of node failure. To stop a node without `systemd` restarting it, run `systemctl stop securecockroachdb` -{{site.data.alerts.end}} - -
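
If the service fails to stay up, `systemd` itself is the first place to look. A sketch of two useful checks, using the `securecockroachdb` unit name from the sample template:

~~~ shell
# Show the unit's current state, then review its recent log output.
$ systemctl status securecockroachdb
$ sudo journalctl -u securecockroachdb --since "10 minutes ago"
~~~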
diff --git a/src/current/_includes/v19.1/prod-deployment/secure-test-cluster.md b/src/current/_includes/v19.1/prod-deployment/secure-test-cluster.md deleted file mode 100644 index 25af5af0414..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-test-cluster.md +++ /dev/null @@ -1,41 +0,0 @@ -CockroachDB replicates and distributes data behind-the-scenes and uses a [Gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) to enable each node to locate data across the cluster. Once a cluster is live, any node can be used as a SQL gateway. - -When using a load balancer, you should issue commands directly to the load balancer, which then routes traffic to the nodes. - -Use the [built-in SQL client](use-the-built-in-sql-client.html) locally as follows: - -1. On your local machine, launch the built-in SQL client, with the `--host` flag set to the address of the load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --host=
- ~~~ - -2. Create a `securenodetest` database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE securenodetest; - ~~~ - -3. View the cluster's databases, which will include `securenodetest`: - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW DATABASES; - ~~~ - - ~~~ - +--------------------+ - | Database | - +--------------------+ - | crdb_internal | - | information_schema | - | securenodetest | - | pg_catalog | - | system | - +--------------------+ - (5 rows) - ~~~ - -4. Use `\q` to exit the SQL shell. \ No newline at end of file diff --git a/src/current/_includes/v19.1/prod-deployment/secure-test-load-balancing.md b/src/current/_includes/v19.1/prod-deployment/secure-test-load-balancing.md deleted file mode 100644 index 45fb876eaf6..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/secure-test-load-balancing.md +++ /dev/null @@ -1,43 +0,0 @@ -CockroachDB offers a pre-built `workload` binary for Linux that includes several load generators for simulating client traffic against your cluster. This step features CockroachDB's version of the [TPC-C](http://www.tpc.org/tpcc/) workload. - -{{site.data.alerts.callout_success}}For comprehensive guidance on benchmarking CockroachDB with TPC-C, see our Performance Benchmarking white paper.{{site.data.alerts.end}} - -1. SSH to the machine where you want to run the sample TPC-C workload. - - This should be a machine that is not running a CockroachDB node, and it should already have a `certs` directory containing `ca.crt`, `client.root.crt`, and `client.root.key` files. - -2. Download `workload` and make it executable: - - {% include copy-clipboard.html %} - ~~~ shell - $ wget https://edge-binaries.cockroachdb.com/cockroach/workload.LATEST ; chmod 755 workload.LATEST - ~~~ - -3. Rename and copy `workload` into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i workload.LATEST /usr/local/bin/workload - ~~~ - -4. Start the TPC-C workload, pointing it at the IP address of the load balancer and the location of the `ca.crt`, `client.root.crt`, and `client.root.key` files: - - {% include copy-clipboard.html %} - ~~~ shell - $ workload run tpcc \ - --drop \ - --init \ - --duration=20m \ - --tolerate-errors \ - "postgresql://root@:26257/tpcc?sslmode=verify-full&sslrootcert=certs/ca.crt&sslcert=certs/client.root.crt&sslkey=certs/client.root.key" - ~~~ - - This command runs the TPC-C workload against the cluster for 20 minutes, loading 1 "warehouse" of data initially and then issuing about 12 queries per minute via 10 "worker" threads. These workers share SQL connections since individual workers are idle for long periods of time between queries. - - {{site.data.alerts.callout_success}}For more tpcc options, use workload run tpcc --help. For details about other load generators included in workload, use workload run --help. - -5. To monitor the load generator's progress, open the [Admin UI](admin-ui-access-and-navigate.html) by pointing a browser to the address in the `admin` field in the standard output of any node on startup. - - For each user who should have access to the Admin UI for a secure cluster, [create a user with a password](create-user.html#create-a-user-with-a-password) and [assign them to an `admin` role if necessary](admin-ui-overview.html#admin-ui-access). On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords. 
- - Since the load generator is pointed at the load balancer, the connections will be evenly distributed across nodes. To verify this, click **Metrics** on the left, select the **SQL** dashboard, and then check the **SQL Connections** graph. You can use the **Graph** menu to filter the graph for specific nodes. diff --git a/src/current/_includes/v19.1/prod-deployment/securecockroachdb.service b/src/current/_includes/v19.1/prod-deployment/securecockroachdb.service deleted file mode 100644 index 39054cf2e1d..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/securecockroachdb.service +++ /dev/null @@ -1,16 +0,0 @@ -[Unit] -Description=Cockroach Database cluster node -Requires=network.target -[Service] -Type=notify -WorkingDirectory=/var/lib/cockroach -ExecStart=/usr/local/bin/cockroach start --certs-dir=certs --advertise-addr= --join=,, --cache=.25 --max-sql-memory=.25 -TimeoutStopSec=60 -Restart=always -RestartSec=10 -StandardOutput=syslog -StandardError=syslog -SyslogIdentifier=cockroach -User=cockroach -[Install] -WantedBy=default.target diff --git a/src/current/_includes/v19.1/prod-deployment/synchronize-clocks.md b/src/current/_includes/v19.1/prod-deployment/synchronize-clocks.md deleted file mode 100644 index 476880914bd..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/synchronize-clocks.md +++ /dev/null @@ -1,179 +0,0 @@ -CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), it spontaneously shuts down. This avoids the risk of consistency anomalies, but it's best to prevent clocks from drifting too far in the first place by running clock synchronization software on each node. - -{% if page.title contains "Digital Ocean" or page.title contains "On-Premises" %} - -[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here, but other methods of clock synchronization are suitable as well. - -1. SSH to the first machine. - -2. Disable `timesyncd`, which tends to be active by default on some Linux distributions: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo timedatectl set-ntp no - ~~~ - - Verify that `timesyncd` is off: - - {% include copy-clipboard.html %} - ~~~ shell - $ timedatectl - ~~~ - - Look for `Network time on: no` or `NTP enabled: no` in the output. - -3. Install the `ntp` package: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -4. Stop the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -5. 
Sync the machine's clock with Google's NTP service:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo ntpd -b time.google.com
-    ~~~
-
-    To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines:
-
-    {% include copy-clipboard.html %}
-    ~~~
-    server time1.google.com iburst
-    server time2.google.com iburst
-    server time3.google.com iburst
-    server time4.google.com iburst
-    ~~~
-
-    Restart the NTP daemon:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo service ntp start
-    ~~~
-
-    {{site.data.alerts.callout_info}}
-    We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
-    {{site.data.alerts.end}}
-
-6. Verify that the machine is using a Google NTP server:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ sudo ntpq -p
-    ~~~
-
-    The active NTP server will be marked with an asterisk.
-
-7. Repeat these steps for each machine where a CockroachDB node will run.
-
-{% elsif page.title contains "Google" %}
-
-Compute Engine instances are preconfigured to use [NTP](http://www.ntp.org/), which should keep offsets in the single-digit milliseconds. However, Google can't predict how external NTP services, such as `pool.ntp.org`, will handle the leap second. Therefore, you should:
-
-- [Configure each GCE instance to use Google's internal NTP service](https://cloud.google.com/compute/docs/instances/configure-ntp#configure_ntp_for_your_instances).
-- If you plan to run a hybrid cluster across GCE and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
-
-{% elsif page.title contains "AWS" %}
-
-Amazon provides the [Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html), which uses a fleet of satellite-connected and atomic reference clocks in each AWS Region to deliver accurate current time readings. The service also smears the leap second.
-
-- [Configure each AWS instance to use the internal Amazon Time Sync Service](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-time.html#configure-amazon-time-service).
-    - Per the above instructions, ensure that `/etc/chrony.conf` on the instance contains the line `server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4` and that other `server` or `pool` lines are commented out.
-    - To verify that Amazon Time Sync Service is being used, run `chronyc sources -v` and check for a line containing `* 169.254.169.123`. The `*` denotes the preferred time server.
-- If you plan to run a hybrid cluster across AWS and other cloud providers or environments, note that all of the nodes must be synced to the same time source, or to different sources that implement leap second smearing in the same way. See the [Production Checklist](recommended-production-settings.html#considerations) for details.
-
-{% elsif page.title contains "Azure" %}
-
-[`ntpd`](http://doc.ntp.org/) should keep offsets in the single-digit milliseconds, so that software is featured here.
However, to run `ntpd` properly on Azure VMs, it's necessary to first unbind the Time Synchronization device used by the Hyper-V technology running Azure VMs; this device aims to synchronize time between the VM and its host operating system but has been known to cause problems. - -1. SSH to the first machine. - -2. Find the ID of the Hyper-V Time Synchronization device: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl -O https://raw.githubusercontent.com/torvalds/linux/master/tools/hv/lsvmbus - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ python lsvmbus -vv | grep -w "Time Synchronization" -A 3 - ~~~ - - ~~~ - VMBUS ID 12: Class_ID = {9527e630-d0ae-497b-adce-e80ab0175caf} - [Time Synchronization] - Device_ID = {2dd1ce17-079e-403c-b352-a1921ee207ee} - Sysfs path: /sys/bus/vmbus/devices/2dd1ce17-079e-403c-b352-a1921ee207ee - Rel_ID=12, target_cpu=0 - ~~~ - -3. Unbind the device, using the `Device_ID` from the previous command's output: - - {% include copy-clipboard.html %} - ~~~ shell - $ echo | sudo tee /sys/bus/vmbus/drivers/hv_util/unbind - ~~~ - -4. Install the `ntp` package: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install ntp - ~~~ - -5. Stop the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp stop - ~~~ - -6. Sync the machine's clock with Google's NTP service: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpd -b time.google.com - ~~~ - - To make this change permanent, in the `/etc/ntp.conf` file, remove or comment out any lines starting with `server` or `pool` and add the following lines: - - {% include copy-clipboard.html %} - ~~~ - server time1.google.com iburst - server time2.google.com iburst - server time3.google.com iburst - server time4.google.com iburst - ~~~ - - Restart the NTP daemon: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo service ntp start - ~~~ - - {{site.data.alerts.callout_info}} - We recommend Google's NTP service because it handles ["smearing" the leap second](https://developers.google.com/time/smear). If you use a different NTP service that doesn't smear the leap second, be sure to configure client-side smearing in the same way on each machine. See the [Production Checklist](recommended-production-settings.html#considerations) for details. - {{site.data.alerts.end}} - -7. Verify that the machine is using a Google NTP server: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo ntpq -p - ~~~ - - The active NTP server will be marked with an asterisk. - -8. Repeat these steps for each machine where a CockroachDB node will run. - -{% endif %} diff --git a/src/current/_includes/v19.1/prod-deployment/use-cluster.md b/src/current/_includes/v19.1/prod-deployment/use-cluster.md deleted file mode 100644 index 134f9fc6912..00000000000 --- a/src/current/_includes/v19.1/prod-deployment/use-cluster.md +++ /dev/null @@ -1,11 +0,0 @@ -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the load balancer, not to a CockroachDB node. - -You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. 
You may also want to adjust the way the cluster replicates data. For example, by default, a multi-node cluster replicates all data 3 times; you can change this replication factor or create additional rules for replicating individual databases and tables differently. For more information, see [Configure Replication Zones](configure-replication-zones.html).

{{site.data.alerts.callout_danger}}
When running a cluster of 5 nodes or more, it's safest to [increase the replication factor for important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) to 5, even if you do not do so for user data. For the cluster as a whole to remain available, the ranges for this internal data must always retain a majority of their replicas.
{{site.data.alerts.end}}
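As an illustrative sketch, raising the replication factor to 5 for both user data and internal system data, assuming the pre-defined `default` and `system` zones (the value `5` is an example, not a recommendation):

{% include copy-clipboard.html %}
~~~ sql
> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
> ALTER RANGE system CONFIGURE ZONE USING num_replicas = 5;
~~~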
diff --git a/src/current/_includes/v19.1/sql/begin-transaction-as-of-system-time-example.md b/src/current/_includes/v19.1/sql/begin-transaction-as-of-system-time-example.md
deleted file mode 100644
index ca8735152cd..00000000000
--- a/src/current/_includes/v19.1/sql/begin-transaction-as-of-system-time-example.md
+++ /dev/null
@@ -1,19 +0,0 @@
{% include copy-clipboard.html %}
~~~ sql
> BEGIN AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00';
~~~

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM orders;
~~~

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM products;
~~~

{% include copy-clipboard.html %}
~~~ sql
> COMMIT;
~~~

diff --git a/src/current/_includes/v19.1/sql/combine-alter-table-commands.md b/src/current/_includes/v19.1/sql/combine-alter-table-commands.md
deleted file mode 100644
index 15e74e202df..00000000000
--- a/src/current/_includes/v19.1/sql/combine-alter-table-commands.md
+++ /dev/null
@@ -1,3 +0,0 @@
{{site.data.alerts.callout_success}}
New in v19.1: This command can be combined with other `ALTER TABLE` commands in a single statement. For a list of commands that can be combined, see [`ALTER TABLE`](alter-table.html). For a demonstration, see [Add and rename columns atomically](rename-column.html#add-and-rename-columns-atomically).
{{site.data.alerts.end}}
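As a rough sketch of what such a combined statement looks like (the table and column names are hypothetical):

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE users ADD COLUMN location STRING, RENAME COLUMN name TO full_name;
~~~

Because the commands are part of one statement, they take effect together; see the linked pages for which combinations are supported.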
diff --git a/src/current/_includes/v19.1/sql/connection-parameters.md b/src/current/_includes/v19.1/sql/connection-parameters.md
deleted file mode 100644
index 0a0ad048ead..00000000000
--- a/src/current/_includes/v19.1/sql/connection-parameters.md
+++ /dev/null
@@ -1,8 +0,0 @@
Flag | Description
-----|------------
`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost:26257`
`--port`<br>`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
`--user`<br>`-u` | The [SQL user](create-and-manage-users.html) that will own the client session.<br><br>**Env Variable:** `COCKROACH_USER`<br>**Default:** `root`
`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
`--certs-dir` | The path to the [certificate directory](create-security-certificates.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.<br><br>**Env Variable:** `COCKROACH_URL`<br>**Default:** no URL
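To illustrate how these flags fit together, a sketch of two equivalent ways to open a SQL shell against an insecure cluster; the host and user names are hypothetical:

{% include copy-clipboard.html %}
~~~ shell
# Using discrete flags:
$ cockroach sql --insecure --host=node1.example.com:26257 --user=maxroach
~~~

{% include copy-clipboard.html %}
~~~ shell
# Using a single connection URL instead:
$ cockroach sql --url="postgresql://maxroach@node1.example.com:26257?sslmode=disable"
~~~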
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ADD - - -COLUMN - - -IF - - -NOT - - -EXISTS - - - -column_name - - - - -typename - - - - -col_qualification - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/add_constraint.html b/src/current/_includes/v19.1/sql/diagrams/add_constraint.html deleted file mode 100644 index a8f3b1c9c61..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/add_constraint.html +++ /dev/null @@ -1,38 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ADD - - -CONSTRAINT - - - -constraint_name - - - - -constraint_elem - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_column.html b/src/current/_includes/v19.1/sql/diagrams/alter_column.html deleted file mode 100644 index 773613a76e6..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_column.html +++ /dev/null @@ -1,110 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ALTER - - -COLUMN - - - -column_name - - - -SET - - -DEFAULT - - - -a_expr - - - -DATA - - -TYPE - - - -typename - - - -COLLATE - - - -collation_name - - - -USING - - - -a_expr - - - -DROP - - -DEFAULT - - -NOT - - -NULL - - -STORED - - -TYPE - - - -typename - - - -COLLATE - - - -collation_name - - - -USING - - - -a_expr - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_sequence_options.html b/src/current/_includes/v19.1/sql/diagrams/alter_sequence_options.html deleted file mode 100644 index ee56ccdaee6..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_sequence_options.html +++ /dev/null @@ -1,63 +0,0 @@ -
- - - - - - ALTER - - - SEQUENCE - - - IF - - - EXISTS - - - - sequence_name - - - - INCREMENT - - - BY - - - MINVALUE - - - MAXVALUE - - - START - - - WITH - - - - integer - - - - NO - - - MINVALUE - - - MAXVALUE - - - CYCLE - - - CYCLE - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_table_partition_by.html b/src/current/_includes/v19.1/sql/diagrams/alter_table_partition_by.html deleted file mode 100644 index 073c8794394..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_table_partition_by.html +++ /dev/null @@ -1,81 +0,0 @@ -
- - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - table_name - - - - PARTITION - - - BY - - - LIST - - - ( - - - - name_list - - - - ) - - - ( - - - - list_partitions - - - - RANGE - - - ( - - - - name_list - - - - ) - - - ( - - - - range_partitions - - - - ) - - - NOTHING - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_type.html b/src/current/_includes/v19.1/sql/diagrams/alter_type.html deleted file mode 100644 index ace962f3b99..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_type.html +++ /dev/null @@ -1,45 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -ALTER - - -COLUMN - - -column_name - - -SET - - -DATA - - -TYPE - - - -typename - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_user_password.html b/src/current/_includes/v19.1/sql/diagrams/alter_user_password.html deleted file mode 100644 index 0e014933d1b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_user_password.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - -ALTER - - -USER - - -IF - - -EXISTS - - -name - - -WITH - - -PASSWORD - - -password - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_view.html b/src/current/_includes/v19.1/sql/diagrams/alter_view.html deleted file mode 100644 index 2e481fa60aa..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_view.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - - - ALTER - - - VIEW - - - IF - - - EXISTS - - - - view_name - - - - RENAME - - - TO - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_zone_database.html b/src/current/_includes/v19.1/sql/diagrams/alter_zone_database.html deleted file mode 100644 index 11eeb471abb..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_zone_database.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - -ALTER - - -DATABASE - - -database_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_zone_index.html b/src/current/_includes/v19.1/sql/diagrams/alter_zone_index.html deleted file mode 100644 index ef64e2314d3..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_zone_index.html +++ /dev/null @@ -1,66 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -table_name - -@ - - -index_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_zone_range.html b/src/current/_includes/v19.1/sql/diagrams/alter_zone_range.html deleted file mode 100644 index 890dcc7240c..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_zone_range.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - -ALTER - - -RANGE - - -range_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/alter_zone_table.html b/src/current/_includes/v19.1/sql/diagrams/alter_zone_table.html deleted file mode 100644 index ddf2af850f2..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/alter_zone_table.html +++ /dev/null @@ -1,69 +0,0 @@ -
- - - - -ALTER - - -PARTITION - - -partition_name - -OF - - -TABLE - - -table_name - -CONFIGURE - - -ZONE - - -USING - - -variable - -= - - -COPY - - -FROM - - -PARENT - - -value - -, - - -variable - -= - - -value - -COPY - - -FROM - - -PARENT - - -DISCARD - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/backup.html b/src/current/_includes/v19.1/sql/diagrams/backup.html deleted file mode 100644 index 1974cb5bcb0..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/backup.html +++ /dev/null @@ -1,73 +0,0 @@ -
- - - - - - BACKUP - - - TABLE - - - - table_pattern - - - - , - - - DATABASE - - - - name - - - - , - - - TO - - - - string_or_placeholder - - - - AS OF SYSTEM TIME - - - - timestamp - - - - INCREMENTAL FROM - - - - full_backup_location - - - - , - - - - incremental_backup_location - - - - WITH - - - - kv_option_list - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/begin_transaction.html b/src/current/_includes/v19.1/sql/diagrams/begin_transaction.html deleted file mode 100644 index 7e40de65c56..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/begin_transaction.html +++ /dev/null @@ -1,50 +0,0 @@ -
- - - - -BEGIN - - -TRANSACTION - - -PRIORITY - - -LOW - - -NORMAL - - -HIGH - - -READ - - -ONLY - - -WRITE - - -AS - - -OF - - -SYSTEM - - -TIME - - -a_expr - -, - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/cancel_job.html b/src/current/_includes/v19.1/sql/diagrams/cancel_job.html deleted file mode 100644 index e8cbeb150fe..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/cancel_job.html +++ /dev/null @@ -1,24 +0,0 @@ -
- - - - -CANCEL - - -JOB - - -job_id - - -JOBS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/cancel_query.html b/src/current/_includes/v19.1/sql/diagrams/cancel_query.html deleted file mode 100644 index 612db072eb4..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/cancel_query.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - -CANCEL - - -QUERY - - -IF - - -EXISTS - - -query_id - - -QUERIES - - -IF - - -EXISTS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/cancel_session.html b/src/current/_includes/v19.1/sql/diagrams/cancel_session.html deleted file mode 100644 index 857f87adb18..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/cancel_session.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - -CANCEL - - -SESSION - - -IF - - -EXISTS - - -session_id - - -SESSIONS - - -IF - - -EXISTS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/check_column_level.html b/src/current/_includes/v19.1/sql/diagrams/check_column_level.html deleted file mode 100644 index 59eec3e3c15..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/check_column_level.html +++ /dev/null @@ -1,70 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - CHECK - - - ( - - - - check_expr - - - - ) - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/check_table_level.html b/src/current/_includes/v19.1/sql/diagrams/check_table_level.html deleted file mode 100644 index 6066d637220..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/check_table_level.html +++ /dev/null @@ -1,60 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_def - - - - , - - - CONSTRAINT - - - - name - - - - CHECK - - - ( - - - - check_expr - - - - ) - - - - table_constraints - - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/col_qualification.html b/src/current/_includes/v19.1/sql/diagrams/col_qualification.html deleted file mode 100644 index 8b9b2d4fa1d..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/col_qualification.html +++ /dev/null @@ -1,132 +0,0 @@ -
- - - - - - CONSTRAINT - - - - constraint_name - - - - NOT - - - NULL - - - UNIQUE - - - PRIMARY - - - KEY - - - CHECK - - - ( - - - - a_expr - - - - ) - - - DEFAULT - - - - b_expr - - - - REFERENCES - - - - table_name - - - - - opt_name_parens - - - - - reference_actions - - - - AS - - - ( - - - - a_expr - - - - ) - - - STORED - - - COLLATE - - - - collation_name - - - - FAMILY - - - - family_name - - - - CREATE - - - FAMILY - - - - family_name - - - - IF - - - NOT - - - EXISTS - - - FAMILY - - - - family_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/column_def.html b/src/current/_includes/v19.1/sql/diagrams/column_def.html deleted file mode 100644 index 284e8dc5838..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/column_def.html +++ /dev/null @@ -1,23 +0,0 @@ - \ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/comment.html b/src/current/_includes/v19.1/sql/diagrams/comment.html deleted file mode 100644 index 79b80364258..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/comment.html +++ /dev/null @@ -1,32 +0,0 @@ -
- - - - -COMMENT - - -ON - - -DATABASE - - -database_name - -TABLE - - -table_name - -COLUMN - - -column_path - -IS - - -comment_text - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/commit_transaction.html b/src/current/_includes/v19.1/sql/diagrams/commit_transaction.html deleted file mode 100644 index 12914f3e1cb..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/commit_transaction.html +++ /dev/null @@ -1,17 +0,0 @@ -
- - - - - - COMMIT - - - END - - - TRANSACTION - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_changefeed.html b/src/current/_includes/v19.1/sql/diagrams/create_changefeed.html deleted file mode 100644 index 82b77b8360e..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_changefeed.html +++ /dev/null @@ -1,46 +0,0 @@ -
- - - - -CREATE - - -CHANGEFEED - - -FOR - - -TABLE - - -table_name - - -, - - -INTO - - -sink - - -WITH - - -option - - -= - - -value - - -, - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/create_database.html b/src/current/_includes/v19.1/sql/diagrams/create_database.html deleted file mode 100644 index c621b08e138..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_database.html +++ /dev/null @@ -1,42 +0,0 @@ -
- - - - - - CREATE - - - DATABASE - - - IF - - - NOT - - - EXISTS - - - - name - - - - WITH - - - ENCODING - - - = - - - - encoding - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_index.html b/src/current/_includes/v19.1/sql/diagrams/create_index.html deleted file mode 100644 index 901c899ed0b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_index.html +++ /dev/null @@ -1,74 +0,0 @@ -
- - - - -CREATE - - -UNIQUE - - -INDEX - - -opt_index_name - -IF - - -NOT - - -EXISTS - - -index_name - -ON - - -table_name - -USING - - -name - -( - - -column_name - -ASC - - -DESC - - -, - - -) - - -COVERING - - -STORING - - -( - - -name_list - -) - - -opt_interleave - - -opt_partition_by - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/create_inverted_index.html b/src/current/_includes/v19.1/sql/diagrams/create_inverted_index.html deleted file mode 100644 index 266281c12c1..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_inverted_index.html +++ /dev/null @@ -1,64 +0,0 @@ -
- - - - - - CREATE - - - INVERTED - - - INDEX - - - - opt_index_name - - - - IF - - - NOT - - - EXISTS - - - - index_name - - - - ON - - - - table_name - - - - ( - - - - column_name - - - - ASC - - - DESC - - - , - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_role.html b/src/current/_includes/v19.1/sql/diagrams/create_role.html deleted file mode 100644 index 3c9c43dedf3..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_role.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - - - CREATE - - - ROLE - - - IF - - - NOT - - - EXISTS - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_sequence.html b/src/current/_includes/v19.1/sql/diagrams/create_sequence.html deleted file mode 100644 index 4363cc0b087..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_sequence.html +++ /dev/null @@ -1,58 +0,0 @@ -
- - - - -CREATE - - -SEQUENCE - - -IF - - -NOT - - -EXISTS - - -sequence_name - - -NO - - -CYCLE - - -MINVALUE - - -MAXVALUE - - -INCREMENT - - -BY - - -MINVALUE - - -MAXVALUE - - -START - - -WITH - - -integer - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/create_stats.html b/src/current/_includes/v19.1/sql/diagrams/create_stats.html deleted file mode 100644 index c02186ee5cb..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_stats.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - -CREATE - - -STATISTICS - - -statistics_name - - -opt_stats_columns - -FROM - - -create_stats_target - - -opt_as_of_clause - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/create_table.html b/src/current/_includes/v19.1/sql/diagrams/create_table.html deleted file mode 100644 index 456c9f64ab7..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_table.html +++ /dev/null @@ -1,67 +0,0 @@ - \ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_table_as.html b/src/current/_includes/v19.1/sql/diagrams/create_table_as.html deleted file mode 100644 index dbf1028099a..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_table_as.html +++ /dev/null @@ -1,50 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - IF - - - NOT - - - EXISTS - - - - table_name - - - - ( - - - - name - - - - , - - - ) - - - AS - - - - select_stmt - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_user.html b/src/current/_includes/v19.1/sql/diagrams/create_user.html deleted file mode 100644 index 1dc78bb289a..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_user.html +++ /dev/null @@ -1,39 +0,0 @@ -
- - - - - - CREATE - - - USER - - - IF - - - NOT - - - EXISTS - - - - name - - - - WITH - - - PASSWORD - - - - password - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/create_view.html b/src/current/_includes/v19.1/sql/diagrams/create_view.html deleted file mode 100644 index 044db4c888c..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/create_view.html +++ /dev/null @@ -1,38 +0,0 @@ -
- - - - - - CREATE - - - VIEW - - - - view_name - - - - ( - - - - name_list - - - - ) - - - AS - - - - select_stmt - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/default_value_column_level.html b/src/current/_includes/v19.1/sql/diagrams/default_value_column_level.html deleted file mode 100644 index 0ba9afca9c4..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/default_value_column_level.html +++ /dev/null @@ -1,64 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - DEFAULT - - - - default_value - - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/delete.html b/src/current/_includes/v19.1/sql/diagrams/delete.html deleted file mode 100644 index d79cbd6e082..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/delete.html +++ /dev/null @@ -1,66 +0,0 @@ -
- - - - -WITH - - - -common_table_expr - - - -, - - -DELETE - - -FROM - - - -table_name - - - -AS - - - -table_alias_name - - - -WHERE - - - -a_expr - - - - -sort_clause - - - - -limit_clause - - - -RETURNING - - - -target_list - - - -NOTHING - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_column.html b/src/current/_includes/v19.1/sql/diagrams/drop_column.html deleted file mode 100644 index 384f5219d9d..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_column.html +++ /dev/null @@ -1,43 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -DROP - - -COLUMN - - -IF - - -EXISTS - - -name - - -CASCADE - - -RESTRICT - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_constraint.html b/src/current/_includes/v19.1/sql/diagrams/drop_constraint.html deleted file mode 100644 index 77cea230ccd..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_constraint.html +++ /dev/null @@ -1,45 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - - -DROP - - -CONSTRAINT - - -IF - - -EXISTS - - - -name - - - -CASCADE - - -RESTRICT - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_database.html b/src/current/_includes/v19.1/sql/diagrams/drop_database.html deleted file mode 100644 index 038eb0befc1..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_database.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - - - DROP - - - DATABASE - - - IF - - - EXISTS - - - - name - - - - CASCADE - - - RESTRICT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_index.html b/src/current/_includes/v19.1/sql/diagrams/drop_index.html deleted file mode 100644 index 2dd8b3636ee..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_index.html +++ /dev/null @@ -1,31 +0,0 @@ -
- - - - -DROP - - -INDEX - - -IF - - -EXISTS - - -table_name - -@ - - -index_name - -CASCADE - - -RESTRICT - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_role.html b/src/current/_includes/v19.1/sql/diagrams/drop_role.html deleted file mode 100644 index 0037ebf56ce..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_role.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - - - DROP - - - ROLE - - - IF - - - EXISTS - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_sequence.html b/src/current/_includes/v19.1/sql/diagrams/drop_sequence.html deleted file mode 100644 index 6507f7dec30..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_sequence.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - - - DROP - - - SEQUENCE - - - IF - - - EXISTS - - - - sequence_name - - - - , - - - CASCADE - - - RESTRICT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_table.html b/src/current/_includes/v19.1/sql/diagrams/drop_table.html deleted file mode 100644 index 18ad4fdd502..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_table.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - - - DROP - - - TABLE - - - IF - - - EXISTS - - - - table_name - - - - , - - - CASCADE - - - RESTRICT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_user.html b/src/current/_includes/v19.1/sql/diagrams/drop_user.html deleted file mode 100644 index 57c3db991b9..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_user.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - - - DROP - - - USER - - - IF - - - EXISTS - - - - user_name - - - - , - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/drop_view.html b/src/current/_includes/v19.1/sql/diagrams/drop_view.html deleted file mode 100644 index d95db116000..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/drop_view.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - - - DROP - - - VIEW - - - IF - - - EXISTS - - - - table_name - - - - , - - - CASCADE - - - RESTRICT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/experimental_audit.html b/src/current/_includes/v19.1/sql/diagrams/experimental_audit.html deleted file mode 100644 index 46cc527074a..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/experimental_audit.html +++ /dev/null @@ -1,39 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - - -table_name - - - -EXPERIMENTAL_AUDIT - - -SET - - -READ - - -WRITE - - -OFF - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/explain.html b/src/current/_includes/v19.1/sql/diagrams/explain.html deleted file mode 100644 index 61716ec485b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/explain.html +++ /dev/null @@ -1,32 +0,0 @@ -
- - - - -EXPLAIN - - -( - - -VERBOSE - - -TYPES - - -OPT - - -DISTSQL - - -, - - -) - - -preparable_stmt - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/explain_analyze.html b/src/current/_includes/v19.1/sql/diagrams/explain_analyze.html deleted file mode 100644 index e79e76f6fc0..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/explain_analyze.html +++ /dev/null @@ -1,23 +0,0 @@ -
- - - - -EXPLAIN - - -ANALYZE - - -( - - -DISTSQL - - -) - - -preparable_stmt - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/export.html b/src/current/_includes/v19.1/sql/diagrams/export.html deleted file mode 100644 index 05ad8e2a864..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/export.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - -EXPORT - - -INTO - - -CSV - - -file_location - - - -opt_with_options - - - -FROM - - -select_stmt - - -TABLE - - -table_name - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/family_def.html b/src/current/_includes/v19.1/sql/diagrams/family_def.html deleted file mode 100644 index 1dda01d9e79..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/family_def.html +++ /dev/null @@ -1,30 +0,0 @@ -
- - - - - - FAMILY - - - - opt_family_name - - - - ( - - - - name - - - - , - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/foreign_key_column_level.html b/src/current/_includes/v19.1/sql/diagrams/foreign_key_column_level.html deleted file mode 100644 index a963e586425..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/foreign_key_column_level.html +++ /dev/null @@ -1,75 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - REFERENCES - - - - parent_table - - - - ( - - - - ref_column_name - - - - ) - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/foreign_key_table_level.html b/src/current/_includes/v19.1/sql/diagrams/foreign_key_table_level.html deleted file mode 100644 index 2eb3498af46..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/foreign_key_table_level.html +++ /dev/null @@ -1,85 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_def - - - - , - - - CONSTRAINT - - - - name - - - - FOREIGN KEY - - - ( - - - - fk_column_name - - - - , - - - ) - - - REFERENCES - - - - parent_table - - - - ( - - - - ref_column_name - - - - , - - - ) - - - - table_constraints - - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/grant_privileges.html b/src/current/_includes/v19.1/sql/diagrams/grant_privileges.html deleted file mode 100644 index da7f44e5160..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/grant_privileges.html +++ /dev/null @@ -1,74 +0,0 @@ -
- - - - - - GRANT - - - ALL - - - CREATE - - - GRANT - - - SELECT - - - DROP - - - INSERT - - - DELETE - - - UPDATE - - - , - - - ON - - - TABLE - - - - table_name - - - - , - - - DATABASE - - - - database_name - - - - , - - - TO - - - - user_name - - - - , - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/grant_roles.html b/src/current/_includes/v19.1/sql/diagrams/grant_roles.html deleted file mode 100644 index f8eee0dc766..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/grant_roles.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - -GRANT - - -role_name - - -, - - -TO - - -user_name - - -, - - -WITH - - -ADMIN - - -OPTION - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/import_csv.html b/src/current/_includes/v19.1/sql/diagrams/import_csv.html deleted file mode 100644 index ad4f863f5ab..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/import_csv.html +++ /dev/null @@ -1,52 +0,0 @@ -
- - - - -IMPORT - - -TABLE - - -table_name - -CREATE - - -USING - - -file_location - -( - - -table_elem_list - -) - - -CSV - - -DATA - - -( - - -file_location - -, - - -) - - -WITH - - -kv_option_list - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/import_dump.html b/src/current/_includes/v19.1/sql/diagrams/import_dump.html deleted file mode 100644 index 1c94207f03e..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/import_dump.html +++ /dev/null @@ -1,27 +0,0 @@ -
- - - - -IMPORT - - -TABLE - - -table_name - -FROM - - -import_format - - -file_location - -WITH - - -kv_option_list - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/index_def.html b/src/current/_includes/v19.1/sql/diagrams/index_def.html deleted file mode 100644 index 7808b2e4800..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/index_def.html +++ /dev/null @@ -1,85 +0,0 @@ -
- - - - - - UNIQUE - - - INDEX - - - - opt_index_name - - - - ( - - - - index_elem - - - - , - - - ) - - - COVERING - - - STORING - - - ( - - - - name_list - - - - ) - - - - opt_interleave - - - - - opt_partition_by - - - - INVERTED - - - INDEX - - - - name - - - - ( - - - - index_elem - - - - , - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/insert.html b/src/current/_includes/v19.1/sql/diagrams/insert.html deleted file mode 100644 index 81576677379..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/insert.html +++ /dev/null @@ -1,81 +0,0 @@ -
- - - - -WITH - - - -common_table_expr - - - -, - - -INSERT - - -INTO - - - -table_name - - - -AS - - - -table_alias_name - - - -( - - - -column_name - - - -, - - -) - - - -select_stmt - - - -DEFAULT - - -VALUES - - - -on_conflict - - - -RETURNING - - - -target_elem - - - -, - - -NOTHING - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/interleave.html b/src/current/_includes/v19.1/sql/diagrams/interleave.html deleted file mode 100644 index 09bb9c35b5b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/interleave.html +++ /dev/null @@ -1,69 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - IF - - - NOT - - - EXISTS - - - - table_name - - - - ( - - - - table_definition - - - - ) - - - INTERLEAVE - - - IN - - - PARENT - - - - table_name - - - - ( - - - - name_list - - - - ) - - - - opt_partition_by - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/joined_table.html b/src/current/_includes/v19.1/sql/diagrams/joined_table.html deleted file mode 100644 index 68b66314702..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/joined_table.html +++ /dev/null @@ -1,100 +0,0 @@ -
- - - - -( - - - -joined_table - - - -) - - - -table_ref - - - -CROSS - - -NATURAL - - -FULL - - -LEFT - - -RIGHT - - -OUTER - - -INNER - - -JOIN - - - -table_ref - - - -FULL - - -LEFT - - -RIGHT - - -OUTER - - -INNER - - -JOIN - - - -table_ref - - - -USING - - -( - - - -name - - - -, - - -) - - -ON - - - -a_expr - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/limit_clause.html b/src/current/_includes/v19.1/sql/diagrams/limit_clause.html deleted file mode 100644 index 98d5114a88e..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/limit_clause.html +++ /dev/null @@ -1,38 +0,0 @@ -
- - - - -LIMIT - - - -count - - - -FETCH - - -FIRST - - -NEXT - - - -count - - - -ROW - - -ROWS - - -ONLY - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/not_null_column_level.html b/src/current/_includes/v19.1/sql/diagrams/not_null_column_level.html deleted file mode 100644 index 52e17e9d57d..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/not_null_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - NOT NULL - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/offset_clause.html b/src/current/_includes/v19.1/sql/diagrams/offset_clause.html deleted file mode 100644 index d6dc4873ee5..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/offset_clause.html +++ /dev/null @@ -1,26 +0,0 @@ -
- - - - -OFFSET - - - -a_expr - - - - -c_expr - - - -ROW - - -ROWS - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/on_conflict.html b/src/current/_includes/v19.1/sql/diagrams/on_conflict.html deleted file mode 100644 index 7a64a45547b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/on_conflict.html +++ /dev/null @@ -1,107 +0,0 @@ -
- - - - -ON - - -CONFLICT - - -( - - - -name - - - -, - - -) - - -WHERE - - - -a_expr - - - -DO - - -UPDATE - - -SET - - - -column_name - - - -= - - - -a_expr - - - -( - - - -column_name - - - -, - - -) - - -= - - -( - - - -select_stmt - - - - -a_expr - - - -, - - -) - - -, - - -WHERE - - - -a_expr - - - -NOTHING - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/opt_interleave.html b/src/current/_includes/v19.1/sql/diagrams/opt_interleave.html deleted file mode 100644 index 5825c01b310..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/opt_interleave.html +++ /dev/null @@ -1,33 +0,0 @@ -
- - - - - - INTERLEAVE - - - IN - - - PARENT - - - - table_name - - - - ( - - - - name_list - - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/pause_job.html b/src/current/_includes/v19.1/sql/diagrams/pause_job.html deleted file mode 100644 index 3d0949c6088..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/pause_job.html +++ /dev/null @@ -1,24 +0,0 @@ -
- - - - -PAUSE - - -JOB - - -job_id - - -JOBS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/primary_key_column_level.html b/src/current/_includes/v19.1/sql/diagrams/primary_key_column_level.html deleted file mode 100644 index f938b641654..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/primary_key_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_name - - - - - column_type - - - - PRIMARY KEY - - - - column_constraints - - - - , - - - - column_def - - - - - table_constraints - - - - ) - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/primary_key_table_level.html b/src/current/_includes/v19.1/sql/diagrams/primary_key_table_level.html deleted file mode 100644 index db8ece49c39..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/primary_key_table_level.html +++ /dev/null @@ -1,63 +0,0 @@ -
- - - - - - CREATE - - - TABLE - - - - table_name - - - - ( - - - - column_def - - - - , - - - CONSTRAINT - - - - name - - - - PRIMARY KEY - - - ( - - - - column_name - - - - , - - - ) - - - - table_constraints - - - - ) - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/release_savepoint.html b/src/current/_includes/v19.1/sql/diagrams/release_savepoint.html deleted file mode 100644 index 194ce6573ca..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/release_savepoint.html +++ /dev/null @@ -1,19 +0,0 @@ -
- - - - - - RELEASE - - - SAVEPOINT - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_column.html b/src/current/_includes/v19.1/sql/diagrams/rename_column.html deleted file mode 100644 index 2d275bc9de7..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_column.html +++ /dev/null @@ -1,44 +0,0 @@ -
- - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - table_name - - - - RENAME - - - COLUMN - - - - current_name - - - - TO - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_constraint.html b/src/current/_includes/v19.1/sql/diagrams/rename_constraint.html deleted file mode 100644 index 36b2c9dfe1f..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_constraint.html +++ /dev/null @@ -1,33 +0,0 @@ -
- - - - -ALTER - - -TABLE - - -IF - - -EXISTS - - -table_name - -RENAME - - -CONSTRAINT - - -current_name - -TO - - -name - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_database.html b/src/current/_includes/v19.1/sql/diagrams/rename_database.html deleted file mode 100644 index ce9ddd3ddba..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_database.html +++ /dev/null @@ -1,30 +0,0 @@ -
- - - - - - ALTER - - - DATABASE - - - - name - - - - RENAME - - - TO - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_index.html b/src/current/_includes/v19.1/sql/diagrams/rename_index.html deleted file mode 100644 index 82ed2e90255..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_index.html +++ /dev/null @@ -1,33 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -IF - - -EXISTS - - -table_name - -@ - - -index_name - -RENAME - - -TO - - -index_name - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_sequence.html b/src/current/_includes/v19.1/sql/diagrams/rename_sequence.html deleted file mode 100644 index a564d9db425..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_sequence.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - - - ALTER - - - SEQUENCE - - - IF - - - EXISTS - - - - current_name - - - - RENAME - - - TO - - - - new_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/rename_table.html b/src/current/_includes/v19.1/sql/diagrams/rename_table.html deleted file mode 100644 index 316c56482eb..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rename_table.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - - - ALTER - - - TABLE - - - IF - - - EXISTS - - - - current_name - - - - RENAME - - - TO - - - - new_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/reset_csetting.html b/src/current/_includes/v19.1/sql/diagrams/reset_csetting.html deleted file mode 100644 index 49e120ffc69..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/reset_csetting.html +++ /dev/null @@ -1,22 +0,0 @@ -
- - - - - - RESET - - - CLUSTER - - - SETTING - - - - var_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/reset_session.html b/src/current/_includes/v19.1/sql/diagrams/reset_session.html deleted file mode 100644 index 0a47ec52d49..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/reset_session.html +++ /dev/null @@ -1,19 +0,0 @@ -
- - - - - - RESET - - - SESSION - - - - session_var - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/restore.html b/src/current/_includes/v19.1/sql/diagrams/restore.html deleted file mode 100644 index 4aec1b4819f..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/restore.html +++ /dev/null @@ -1,67 +0,0 @@ -
- - - - -RESTORE - - -TABLE - - - -table_pattern - - - -, - - -DATABASE - - - -database_name - - - -, - - -FROM - - -full_backup_location - - -incremental_backup_location - - -, - - -AS - - -OF - - -SYSTEM - - -TIME - - -timestamp - - -WITH - - - -kv_option_list - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/resume_job.html b/src/current/_includes/v19.1/sql/diagrams/resume_job.html deleted file mode 100644 index 552bef86bce..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/resume_job.html +++ /dev/null @@ -1,24 +0,0 @@ -
- - - - -RESUME - - -JOB - - -job_id - - -JOBS - - - -select_stmt - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/revoke_privileges.html b/src/current/_includes/v19.1/sql/diagrams/revoke_privileges.html deleted file mode 100644 index a6f9a1dee8e..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/revoke_privileges.html +++ /dev/null @@ -1,74 +0,0 @@ -
- - - - - - REVOKE - - - ALL - - - CREATE - - - GRANT - - - SELECT - - - DROP - - - INSERT - - - DELETE - - - UPDATE - - - , - - - ON - - - TABLE - - - - table_name - - - - , - - - DATABASE - - - - database_name - - - - , - - - FROM - - - - user_name - - - - , - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/revoke_roles.html b/src/current/_includes/v19.1/sql/diagrams/revoke_roles.html deleted file mode 100644 index a30aee75474..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/revoke_roles.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - -REVOKE - - -ADMIN - - -OPTION - - -FOR - - -role_name - - -, - - -FROM - - -user_name - - -, - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/rollback_transaction.html b/src/current/_includes/v19.1/sql/diagrams/rollback_transaction.html deleted file mode 100644 index c34d5d12047..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/rollback_transaction.html +++ /dev/null @@ -1,22 +0,0 @@ -
- - - - - - ROLLBACK - - - TO - - - SAVEPOINT - - - - cockroach_restart - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/savepoint.html b/src/current/_includes/v19.1/sql/diagrams/savepoint.html deleted file mode 100644 index 9b7dc70608b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/savepoint.html +++ /dev/null @@ -1,16 +0,0 @@ -
- - - - - - SAVEPOINT - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/select.html b/src/current/_includes/v19.1/sql/diagrams/select.html deleted file mode 100644 index 9f743234e06..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/select.html +++ /dev/null @@ -1,38 +0,0 @@ - diff --git a/src/current/_includes/v19.1/sql/diagrams/select_clause.html b/src/current/_includes/v19.1/sql/diagrams/select_clause.html deleted file mode 100644 index 88dc35507df..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/select_clause.html +++ /dev/null @@ -1,53 +0,0 @@ - diff --git a/src/current/_includes/v19.1/sql/diagrams/set_cluster_setting.html b/src/current/_includes/v19.1/sql/diagrams/set_cluster_setting.html deleted file mode 100644 index b6554c7be52..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/set_cluster_setting.html +++ /dev/null @@ -1,36 +0,0 @@ -
- - - - - - SET - - - CLUSTER - - - SETTING - - - - var_name - - - - = - - - TO - - - - var_value - - - - DEFAULT - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/set_operation.html b/src/current/_includes/v19.1/sql/diagrams/set_operation.html deleted file mode 100644 index aa0e63023dc..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/set_operation.html +++ /dev/null @@ -1,32 +0,0 @@ -
- - - - - -select_clause - - - -UNION - - -INTERSECT - - -EXCEPT - - -ALL - - -DISTINCT - - - -select_clause - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/set_transaction.html b/src/current/_includes/v19.1/sql/diagrams/set_transaction.html deleted file mode 100644 index 3b3ca38af19..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/set_transaction.html +++ /dev/null @@ -1,50 +0,0 @@ -
- - - - -SET - - -TRANSACTION - - -PRIORITY - - -LOW - - -NORMAL - - -HIGH - - -READ - - -ONLY - - -WRITE - - -AS - - -OF - - -SYSTEM - - -TIME - - -a_expr - -, - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/set_var.html b/src/current/_includes/v19.1/sql/diagrams/set_var.html deleted file mode 100644 index 96bb04e7cf6..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/set_var.html +++ /dev/null @@ -1,33 +0,0 @@ -
- - - - - - SET - - - SESSION - - - - var_name - - - - TO - - - = - - - - var_value - - - - , - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_backup.html b/src/current/_includes/v19.1/sql/diagrams/show_backup.html deleted file mode 100644 index 0f4f4e2c379..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_backup.html +++ /dev/null @@ -1,19 +0,0 @@ -
- - - - - - SHOW - - - BACKUP - - - - location - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_cluster_setting.html b/src/current/_includes/v19.1/sql/diagrams/show_cluster_setting.html deleted file mode 100644 index d575106689f..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_cluster_setting.html +++ /dev/null @@ -1,34 +0,0 @@ -
- - - - - - SHOW - - - CLUSTER - - - SETTING - - - - var_name - - - - ALL - - - ALL - - - CLUSTER - - - SETTINGS - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_columns.html b/src/current/_includes/v19.1/sql/diagrams/show_columns.html deleted file mode 100644 index 7b47a3b3123..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_columns.html +++ /dev/null @@ -1,22 +0,0 @@ -
- - - - - - SHOW - - - COLUMNS - - - FROM - - - - table_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_constraints.html b/src/current/_includes/v19.1/sql/diagrams/show_constraints.html deleted file mode 100644 index 9c520ae9bc6..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_constraints.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - - - SHOW - - - CONSTRAINT - - - CONSTRAINTS - - - FROM - - - - table_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_create.html b/src/current/_includes/v19.1/sql/diagrams/show_create.html deleted file mode 100644 index 09c0fa4c2a1..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_create.html +++ /dev/null @@ -1,16 +0,0 @@ -
- - - - -SHOW - - -CREATE - - -object_name - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_databases.html b/src/current/_includes/v19.1/sql/diagrams/show_databases.html deleted file mode 100644 index 487bfc4e629..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_databases.html +++ /dev/null @@ -1,14 +0,0 @@ -
- - - - - - SHOW - - - DATABASES - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_grants.html b/src/current/_includes/v19.1/sql/diagrams/show_grants.html deleted file mode 100644 index 92a7932dc22..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_grants.html +++ /dev/null @@ -1,61 +0,0 @@ -
- - - - - - SHOW - - - GRANTS - - - ON - - - ROLE - - - - role_name - - - - , - - - TABLE - - - - table_name - - - - , - - - DATABASE - - - - database_name - - - - , - - - FOR - - - - user_name - - - - , - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_index.html b/src/current/_includes/v19.1/sql/diagrams/show_index.html deleted file mode 100644 index 3014183c521..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_index.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - - - SHOW - - - INDEX - - - INDEXES - - - KEYS - - - FROM - - - - table_name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_jobs.html b/src/current/_includes/v19.1/sql/diagrams/show_jobs.html deleted file mode 100644 index 26c55164c47..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_jobs.html +++ /dev/null @@ -1,15 +0,0 @@ -
- - - - -SHOW - - -AUTOMATIC - - -JOBS - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_queries.html b/src/current/_includes/v19.1/sql/diagrams/show_queries.html deleted file mode 100644 index 26376243dac..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_queries.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - - - SHOW - - - CLUSTER - - - LOCAL - - - QUERIES - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_ranges.html b/src/current/_includes/v19.1/sql/diagrams/show_ranges.html deleted file mode 100644 index 70614d61fa9..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_ranges.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - -SHOW - - -EXPERIMENTAL_RANGES - - -FROM - - -TABLE - - -table_name - -INDEX - - -table_index_name - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_roles.html b/src/current/_includes/v19.1/sql/diagrams/show_roles.html deleted file mode 100644 index fd508395e0b..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_roles.html +++ /dev/null @@ -1,14 +0,0 @@ -
- - - - - - SHOW - - - ROLES - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_schemas.html b/src/current/_includes/v19.1/sql/diagrams/show_schemas.html deleted file mode 100644 index efa07764533..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_schemas.html +++ /dev/null @@ -1,22 +0,0 @@ -
- - - - - - SHOW - - - SCHEMAS - - - FROM - - - - name - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_sequences.html b/src/current/_includes/v19.1/sql/diagrams/show_sequences.html deleted file mode 100644 index 4f3fe915c12..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_sequences.html +++ /dev/null @@ -1,17 +0,0 @@ -
- - - - -SHOW - - -SEQUENCES - - -FROM - - -name - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_sessions.html b/src/current/_includes/v19.1/sql/diagrams/show_sessions.html deleted file mode 100644 index 3b2aa5b16ee..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_sessions.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - - - SHOW - - - CLUSTER - - - LOCAL - - - SESSIONS - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_stats.html b/src/current/_includes/v19.1/sql/diagrams/show_stats.html deleted file mode 100644 index 0e350b93c0f..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_stats.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - -SHOW - - -STATISTICS - - -FOR - - -TABLE - - -table_name - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_tables.html b/src/current/_includes/v19.1/sql/diagrams/show_tables.html deleted file mode 100644 index 84b221efaf2..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_tables.html +++ /dev/null @@ -1,28 +0,0 @@ -
- - - - -SHOW - - -TABLES - - -FROM - - -database_name - -. - - -schema_name - -WITH - - -COMMENT - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_trace.html b/src/current/_includes/v19.1/sql/diagrams/show_trace.html deleted file mode 100644 index 37271dc87b5..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_trace.html +++ /dev/null @@ -1,25 +0,0 @@ -
- - - - -SHOW - - -COMPACT - - -KV - - -TRACE - - -FOR - - -SESSION - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/show_users.html b/src/current/_includes/v19.1/sql/diagrams/show_users.html deleted file mode 100644 index 7c33b7f00b4..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_users.html +++ /dev/null @@ -1,14 +0,0 @@ -
- - - - - - SHOW - - - USERS - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_var.html b/src/current/_includes/v19.1/sql/diagrams/show_var.html deleted file mode 100644 index fb7ec6f4ce8..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_var.html +++ /dev/null @@ -1,20 +0,0 @@ -
- - - - - - SHOW - - - SESSION - - - var_name - - - ALL - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/show_zone.html b/src/current/_includes/v19.1/sql/diagrams/show_zone.html deleted file mode 100644 index 83052dd1d5c..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/show_zone.html +++ /dev/null @@ -1,73 +0,0 @@ -
- - - - -SHOW - - -ZONE - - -CONFIGURATION - - -FOR - - -RANGE - - -zone_name - -DATABASE - - -database_name - -TABLE - - -table_name - -PARTITION - - -partition_name - -PARTITION - - -partition_name - -OF - - -TABLE - - -table_name - -INDEX - - -table_name - -@ - - -index_name - -CONFIGURATIONS - - -ALL - - -ZONE - - -CONFIGURATIONS - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/simple_select_clause.html b/src/current/_includes/v19.1/sql/diagrams/simple_select_clause.html deleted file mode 100644 index 4f91c71493a..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/simple_select_clause.html +++ /dev/null @@ -1,107 +0,0 @@ -
- - - - -SELECT - - -ALL - - -DISTINCT - - -ON - - -( - - - -a_expr - - - -, - - -) - - - -target_elem - - - -, - - -FROM - - - -table_ref - - - -, - - -AS - - -OF - - -SYSTEM - - -TIME - - - -a_expr - - - -WHERE - - - -a_expr - - - -GROUP - - -BY - - - -a_expr - - - -, - - -HAVING - - - -a_expr - - - -WINDOW - - - -window_definition_list - - - - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/sort_clause.html b/src/current/_includes/v19.1/sql/diagrams/sort_clause.html deleted file mode 100644 index dbac057629e..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/sort_clause.html +++ /dev/null @@ -1,55 +0,0 @@ -
- - - - - - ORDER - - - BY - - - - a_expr - - - - PRIMARY - - - KEY - - - - table_name - - - - INDEX - - - - table_name - - - - @ - - - - index_name - - - - ASC - - - DESC - - - , - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/split_index_at.html b/src/current/_includes/v19.1/sql/diagrams/split_index_at.html deleted file mode 100644 index 51daee7e3c7..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/split_index_at.html +++ /dev/null @@ -1,35 +0,0 @@ -
- - - - -ALTER - - -INDEX - - -table_name - -@ - - -index_name - -SPLIT - - -AT - - -select_stmt - -WITH - - -EXPIRATION - - -a_expr - -
diff --git a/src/current/_includes/v19.1/sql/diagrams/split_table_at.html b/src/current/_includes/v19.1/sql/diagrams/split_table_at.html deleted file mode 100644 index a694595b9b5..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/split_table_at.html +++ /dev/null @@ -1,30 +0,0 @@ -
- - - - - - ALTER - - - TABLE - - - - table_name - - - - SPLIT - - - AT - - - - select_stmt - - - - -
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/stmt_block.html b/src/current/_includes/v19.1/sql/diagrams/stmt_block.html deleted file mode 100644 index 22083b1d592..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/stmt_block.html +++ /dev/null @@ -1,11433 +0,0 @@ -
- [deleted railroad diagram bundle: stmt_block, the complete v19.1 SQL grammar rendered as one diagram per production; generated by Railroad Diagram Generator]
- Top-level productions: stmt_block, stmt (HELPTOKEN, preparable_stmt, copy_from_stmt, comment_stmt, execute_stmt, deallocate_stmt, discard_stmt, export_stmt, grant_stmt, prepare_stmt, revoke_stmt, savepoint_stmt, release_stmt, nonpreparable_set_stmt, transaction_stmt)
- preparable_stmt branches: alter_stmt, backup_stmt, cancel_stmt, create_stmt, delete_stmt, drop_stmt, explain_stmt, import_stmt, insert_stmt, pause_stmt, reset_stmt, restore_stmt, resume_stmt, scrub_stmt, select_stmt, preparable_set_stmt, show_stmt, truncate_stmt, update_stmt, upsert_stmt
- Supporting productions: the expression grammar (a_expr, b_expr, c_expr, d_expr, in_expr, case_expr, func_expr), clause grammars (with_clause, where_clause, sort_clause, limit_clause, group_clause, having_clause, window_clause, returning_clause), DDL grammars (the create_*, alter_*, and drop_* statements with their option clauses), name and keyword classes (unreserved_keyword, col_name_keyword, type_func_name_keyword, reserved_keyword), and type grammars (typename, simple_typename, const_typename, numeric, const_datetime, interval_qualifier)
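As an illustration only, a single statement that exercises several of the productions listed above (all names hypothetical):

```sql
-- Uses opt_with_clause, insert_stmt, on_conflict, and returning_clause.
WITH new_rows AS (SELECT 1 AS id, 100 AS balance)
INSERT INTO accounts (id, balance)
SELECT id, balance FROM new_rows
ON CONFLICT (id) DO UPDATE SET balance = excluded.balance
RETURNING id;
```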

\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/table_clause.html b/src/current/_includes/v19.1/sql/diagrams/table_clause.html deleted file mode 100644 index 97691481d76..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/table_clause.html +++ /dev/null @@ -1,15 +0,0 @@ -
-table_clause ::= TABLE table_ref
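The `table_clause` form is shorthand for selecting every row and column from a table. For example, with a hypothetical `customers` table:

~~~ sql
> TABLE customers;   -- equivalent to: SELECT * FROM customers
~~~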
diff --git a/src/current/_includes/v19.1/sql/diagrams/table_constraint.html b/src/current/_includes/v19.1/sql/diagrams/table_constraint.html deleted file mode 100644 index ac37f0f1eac..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/table_constraint.html +++ /dev/null @@ -1,120 +0,0 @@ -
-table_constraint ::= ( CONSTRAINT constraint_name )? ( CHECK '(' a_expr ')' | PRIMARY KEY '(' index_params ')' | UNIQUE '(' index_params ')' ( ( COVERING | STORING ) '(' name_list ')' )? opt_interleave opt_partition_by | FOREIGN KEY '(' name_list ')' REFERENCES table_name opt_column_list reference_actions )
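As a sketch of how these alternatives combine, using hypothetical `orders` and `customers` tables:

~~~ sql
> CREATE TABLE orders (
    id INT PRIMARY KEY,
    customer_id INT,
    total DECIMAL,
    CONSTRAINT total_positive CHECK (total > 0),
    CONSTRAINT fk_customer FOREIGN KEY (customer_id)
      REFERENCES customers (id) ON DELETE CASCADE
  );
~~~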
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/table_ref.html b/src/current/_includes/v19.1/sql/diagrams/table_ref.html deleted file mode 100644 index 0010ffa90f8..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/table_ref.html +++ /dev/null @@ -1,59 +0,0 @@ -
-table_ref ::= table_name ( '@' index_name )? | func_application | '[' preparable_stmt ']' | '(' ( select_stmt | joined_table ) ')' ( WITH ORDINALITY )? ( AS table_alias_name ( '(' name ( ',' name )* ')' )? )? | joined_table
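For example, a `table_ref` can pin a specific index with `@` and take an alias; the tables here are hypothetical:

~~~ sql
> SELECT o.id, c.name
  FROM orders@primary AS o
  JOIN customers AS c ON o.customer_id = c.id;
~~~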
diff --git a/src/current/_includes/v19.1/sql/diagrams/truncate.html b/src/current/_includes/v19.1/sql/diagrams/truncate.html deleted file mode 100644 index 06cb91a310c..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/truncate.html +++ /dev/null @@ -1,28 +0,0 @@ -
-truncate_stmt ::= TRUNCATE TABLE? table_name ( ',' table_name )* ( CASCADE | RESTRICT )?
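A minimal example of the statement this diagram describes, assuming a hypothetical `orders` table with dependent rows elsewhere:

~~~ sql
> TRUNCATE TABLE orders CASCADE;
~~~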
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/unique_column_level.html b/src/current/_includes/v19.1/sql/diagrams/unique_column_level.html deleted file mode 100644 index c7c178e9351..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/unique_column_level.html +++ /dev/null @@ -1,59 +0,0 @@ -
-unique_column_level ::= CREATE TABLE table_name '(' column_name column_type UNIQUE column_constraints? ( ',' column_def )* ( ',' table_constraints )? ')'
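A column-level `UNIQUE` constraint in context, using a hypothetical `users` table:

~~~ sql
> CREATE TABLE users (
    id INT PRIMARY KEY,
    email STRING UNIQUE
  );
~~~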
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/unique_table_level.html b/src/current/_includes/v19.1/sql/diagrams/unique_table_level.html deleted file mode 100644 index e77a972161a..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/unique_table_level.html +++ /dev/null @@ -1,63 +0,0 @@ -
-unique_table_level ::= CREATE TABLE table_name '(' column_def ( ',' column_def )* ',' ( CONSTRAINT name )? UNIQUE '(' column_name ( ',' column_name )* ')' ( ',' table_constraints )? ')'
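A table-level `UNIQUE` constraint spanning multiple columns, using a hypothetical `logins` table:

~~~ sql
> CREATE TABLE logins (
    id INT PRIMARY KEY,
    user_id INT,
    device_id INT,
    CONSTRAINT user_device_unique UNIQUE (user_id, device_id)
  );
~~~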
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/update.html b/src/current/_includes/v19.1/sql/diagrams/update.html deleted file mode 100644 index 7ead70594b4..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/update.html +++ /dev/null @@ -1,118 +0,0 @@ -
-update_stmt ::= ( WITH common_table_expr ( ',' common_table_expr )* )? UPDATE table_name ( AS table_alias_name )? SET assignment ( ',' assignment )* ( WHERE a_expr )? sort_clause? limit_clause? ( RETURNING ( target_list | NOTHING ) )?
-assignment ::= column_name '=' a_expr | '(' column_name ( ',' column_name )* ')' '=' '(' ( select_stmt | a_expr ( ',' a_expr )* ) ')'
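The multi-column `SET (...) = (...)` form and `RETURNING` clause in one sketch, assuming a hypothetical `accounts` table with `balance` and `updated_at` columns:

~~~ sql
> UPDATE accounts
     SET (balance, updated_at) = (balance - 50, now())
   WHERE id = 1
   RETURNING balance;
~~~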
diff --git a/src/current/_includes/v19.1/sql/diagrams/upsert.html b/src/current/_includes/v19.1/sql/diagrams/upsert.html deleted file mode 100644 index b4d7987ddfe..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/upsert.html +++ /dev/null @@ -1,71 +0,0 @@ -
-upsert_stmt ::= ( WITH common_table_expr ( ',' common_table_expr )* )? UPSERT INTO table_name ( AS table_alias_name )? ( '(' column_name ( ',' column_name )* ')' )? ( select_stmt | DEFAULT VALUES ) ( RETURNING ( target_list | NOTHING ) )?
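A minimal `UPSERT`, again assuming a hypothetical `accounts` table; the row is inserted if `id = 1` does not exist and updated if it does:

~~~ sql
> UPSERT INTO accounts (id, balance) VALUES (1, 500.00);
~~~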
diff --git a/src/current/_includes/v19.1/sql/diagrams/validate_constraint.html b/src/current/_includes/v19.1/sql/diagrams/validate_constraint.html deleted file mode 100644 index d470d8dd98f..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/validate_constraint.html +++ /dev/null @@ -1,36 +0,0 @@ -
-validate_constraint ::= ALTER TABLE ( IF EXISTS )? table_name VALIDATE CONSTRAINT constraint_name
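For example, validating a previously unvalidated foreign key (the table and constraint names are hypothetical):

~~~ sql
> ALTER TABLE orders VALIDATE CONSTRAINT fk_customer;
~~~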
\ No newline at end of file diff --git a/src/current/_includes/v19.1/sql/diagrams/values_clause.html b/src/current/_includes/v19.1/sql/diagrams/values_clause.html deleted file mode 100644 index 34f78e982b4..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/values_clause.html +++ /dev/null @@ -1,27 +0,0 @@ -
-values_clause ::= VALUES '(' a_expr ( ',' a_expr )* ')' ( ',' '(' a_expr ( ',' a_expr )* ')' )*
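A `VALUES` clause can stand alone or act as a small inline table; the alias and column names below are arbitrary:

~~~ sql
> SELECT * FROM (VALUES (1, 'one'), (2, 'two')) AS v (num, word);
~~~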
diff --git a/src/current/_includes/v19.1/sql/diagrams/with_clause.html b/src/current/_includes/v19.1/sql/diagrams/with_clause.html deleted file mode 100644 index 0f746306ae3..00000000000 --- a/src/current/_includes/v19.1/sql/diagrams/with_clause.html +++ /dev/null @@ -1,71 +0,0 @@ - diff --git a/src/current/_includes/v19.1/sql/function-special-forms.md b/src/current/_includes/v19.1/sql/function-special-forms.md deleted file mode 100644 index bb4b06bbe39..00000000000 --- a/src/current/_includes/v19.1/sql/function-special-forms.md +++ /dev/null @@ -1,27 +0,0 @@
-| Special form | Equivalent to |
-|-----------------------------------------------------------|---------------------------------------------|
-| `CURRENT_CATALOG` | `current_catalog()` |
-| `CURRENT_DATE` | `current_date()` |
-| `CURRENT_ROLE` | `current_user()` |
-| `CURRENT_SCHEMA` | `current_schema()` |
-| `CURRENT_TIMESTAMP` | `current_timestamp()` |
-| `CURRENT_TIME` | `current_time()` |
-| `CURRENT_USER` | `current_user()` |
-| `EXTRACT(<part> FROM <value>)` | `extract("<part>", <value>)` |
-| `EXTRACT_DURATION(<part> FROM <value>)` | `extract_duration("<part>", <value>)` |
-| `OVERLAY(<text1> PLACING <text2> FROM <int1> FOR <int2>)` | `overlay(<text1>, <text2>, <int1>, <int2>)` |
-| `OVERLAY(<text1> PLACING <text2> FROM <int>)` | `overlay(<text1>, <text2>, <int>)` |
-| `POSITION(<text1> IN <text2>)` | `strpos(<text2>, <text1>)` |
-| `SESSION_USER` | `current_user()` |
-| `SUBSTRING(<text> FOR <int1> FROM <int2>)` | `substring(<text>, <int2>, <int1>)` |
-| `SUBSTRING(<text> FOR <int>)` | `substring(<text>, 1, <int>)` |
-| `SUBSTRING(<text> FROM <int1> FOR <int2>)` | `substring(<text>, <int1>, <int2>)` |
-| `SUBSTRING(<text> FROM <int>)` | `substring(<text>, <int>)` |
-| `TRIM(<text1> FROM <text2>)` | `btrim(<text2>, <text1>)` |
-| `TRIM(<text1>, <text2>)` | `btrim(<text1>, <text2>)` |
-| `TRIM(FROM <text>)` | `btrim(<text>)` |
-| `TRIM(LEADING <text1> FROM <text2>)` | `ltrim(<text2>, <text1>)` |
-| `TRIM(LEADING FROM <text>)` | `ltrim(<text>)` |
-| `TRIM(TRAILING <text1> FROM <text2>)` | `rtrim(<text2>, <text1>)` |
-| `TRIM(TRAILING FROM <text>)` | `rtrim(<text>)` |
-| `USER` | `current_user()` |
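Each special form is shorthand for the ordinary built-in function shown next to it, so the paired expressions below return the same results:

~~~ sql
> SELECT POSITION('roach' IN 'cockroach'), strpos('cockroach', 'roach');
> SELECT TRIM(LEADING 'x' FROM 'xxlabs'), ltrim('xxlabs', 'x');
~~~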
diff --git a/src/current/_includes/v19.1/sql/operators.md b/src/current/_includes/v19.1/sql/operators.md deleted file mode 100644 index 97fce772d2c..00000000000 --- a/src/current/_includes/v19.1/sql/operators.md +++ /dev/null @@ -1,448 +0,0 @@
-| `#` | Return |
-|-----|--------|
-| `int # int` | `int` |
-| `varbit # varbit` | `varbit` |
-
-| `#>` | Return |
-|------|--------|
-| `jsonb #> string[]` | `jsonb` |
-
-| `#>>` | Return |
-|-------|--------|
-| `jsonb #>> string[]` | `string` |
-
-| `%` | Return |
-|-----|--------|
-| `decimal % decimal` | `decimal` |
-| `decimal % int` | `decimal` |
-| `float % float` | `float` |
-| `int % decimal` | `decimal` |
-| `int % int` | `int` |
-
-| `&` | Return |
-|-----|--------|
-| `inet & inet` | `inet` |
-| `int & int` | `int` |
-| `varbit & varbit` | `varbit` |
-
-| `*` | Return |
-|-----|--------|
-| `decimal * decimal` | `decimal` |
-| `decimal * int` | `decimal` |
-| `decimal * interval` | `interval` |
-| `float * float` | `float` |
-| `float * interval` | `interval` |
-| `int * decimal` | `decimal` |
-| `int * int` | `int` |
-| `int * interval` | `interval` |
-| `interval * decimal` | `interval` |
-| `interval * float` | `interval` |
-| `interval * int` | `interval` |
-
-| `+` | Return |
-|-----|--------|
-| `date + int` | `date` |
-| `date + interval` | `timestamptz` |
-| `date + time` | `timestamp` |
-| `decimal + decimal` | `decimal` |
-| `decimal + int` | `decimal` |
-| `float + float` | `float` |
-| `inet + int` | `inet` |
-| `int + date` | `date` |
-| `int + decimal` | `decimal` |
-| `int + inet` | `inet` |
-| `int + int` | `int` |
-| `interval + date` | `timestamptz` |
-| `interval + interval` | `interval` |
-| `interval + time` | `time` |
-| `interval + timestamp` | `timestamp` |
-| `interval + timestamptz` | `timestamptz` |
-| `time + date` | `timestamp` |
-| `time + interval` | `time` |
-| `timestamp + interval` | `timestamp` |
-| `timestamptz + interval` | `timestamptz` |
-
-| `-` | Return |
-|-----|--------|
-| `-decimal` | `decimal` |
-| `-float` | `float` |
-| `-int` | `int` |
-| `-interval` | `interval` |
-| `date - date` | `int` |
-| `date - int` | `date` |
-| `date - interval` | `timestamptz` |
-| `date - time` | `timestamp` |
-| `decimal - decimal` | `decimal` |
-| `decimal - int` | `decimal` |
-| `float - float` | `float` |
-| `inet - inet` | `int` |
-| `inet - int` | `inet` |
-| `int - decimal` | `decimal` |
-| `int - int` | `int` |
-| `interval - interval` | `interval` |
-| `jsonb - int` | `jsonb` |
-| `jsonb - string` | `jsonb` |
-| `jsonb - string[]` | `jsonb` |
-| `time - interval` | `time` |
-| `time - time` | `interval` |
-| `timestamp - interval` | `timestamp` |
-| `timestamp - timestamp` | `interval` |
-| `timestamp - timestamptz` | `interval` |
-| `timestamptz - interval` | `timestamptz` |
-| `timestamptz - timestamp` | `interval` |
-| `timestamptz - timestamptz` | `interval` |
-
-| `->` | Return |
-|------|--------|
-| `jsonb -> int` | `jsonb` |
-| `jsonb -> string` | `jsonb` |
-
-| `->>` | Return |
-|-------|--------|
-| `jsonb ->> int` | `string` |
-| `jsonb ->> string` | `string` |
-
-| `/` | Return |
-|-----|--------|
-| `decimal / decimal` | `decimal` |
-| `decimal / int` | `decimal` |
-| `float / float` | `float` |
-| `int / decimal` | `decimal` |
-| `int / int` | `decimal` |
-| `interval / float` | `interval` |
-| `interval / int` | `interval` |
-
-| `//` | Return |
-|------|--------|
-| `decimal // decimal` | `decimal` |
-| `decimal // int` | `decimal` |
-| `float // float` | `float` |
-| `int // decimal` | `decimal` |
-| `int // int` | `int` |
-
-| `<` | Return |
-|-----|--------|
-| `bool < bool` | `bool` |
-| `bytes < bytes` | `bool` |
-| `collatedstring < collatedstring` | `bool` |
-| `date < date` | `bool` |
-| `date < timestamp` | `bool` |
-| `date < timestamptz` | `bool` |
-| `decimal < decimal` | `bool` |
-| `decimal < float` | `bool` |
-| `decimal < int` | `bool` |
-| `float < decimal` | `bool` |
-| `float < float` | `bool` |
-| `float < int` | `bool` |
-| `inet < inet` | `bool` |
-| `int < decimal` | `bool` |
-| `int < float` | `bool` |
-| `int < int` | `bool` |
-| `interval < interval` | `bool` |
-| `oid < oid` | `bool` |
-| `string < string` | `bool` |
-| `time < time` | `bool` |
-| `timestamp < date` | `bool` |
-| `timestamp < timestamp` | `bool` |
-| `timestamp < timestamptz` | `bool` |
-| `timestamptz < date` | `bool` |
-| `timestamptz < timestamp` | `bool` |
-| `timestamptz < timestamptz` | `bool` |
-| `tuple < tuple` | `bool` |
-| `uuid < uuid` | `bool` |
-| `varbit < varbit` | `bool` |
-
-| `<<` | Return |
-|------|--------|
-| `inet << inet` | `bool` |
-| `int << int` | `int` |
-| `varbit << int` | `varbit` |
-
-| `<=` | Return |
-|------|--------|
-| `bool <= bool` | `bool` |
-| `bytes <= bytes` | `bool` |
-| `collatedstring <= collatedstring` | `bool` |
-| `date <= date` | `bool` |
-| `date <= timestamp` | `bool` |
-| `date <= timestamptz` | `bool` |
-| `decimal <= decimal` | `bool` |
-| `decimal <= float` | `bool` |
-| `decimal <= int` | `bool` |
-| `float <= decimal` | `bool` |
-| `float <= float` | `bool` |
-| `float <= int` | `bool` |
-| `inet <= inet` | `bool` |
-| `int <= decimal` | `bool` |
-| `int <= float` | `bool` |
-| `int <= int` | `bool` |
-| `interval <= interval` | `bool` |
-| `oid <= oid` | `bool` |
-| `string <= string` | `bool` |
-| `time <= time` | `bool` |
-| `timestamp <= date` | `bool` |
-| `timestamp <= timestamp` | `bool` |
-| `timestamp <= timestamptz` | `bool` |
-| `timestamptz <= date` | `bool` |
-| `timestamptz <= timestamp` | `bool` |
-| `timestamptz <= timestamptz` | `bool` |
-| `tuple <= tuple` | `bool` |
-| `uuid <= uuid` | `bool` |
-| `varbit <= varbit` | `bool` |
-
-| `<@` | Return |
-|------|--------|
-| `jsonb <@ jsonb` | `bool` |
-
-| `=` | Return |
-|-----|--------|
-| `bool = bool` | `bool` |
-| `bool[] = bool[]` | `bool` |
-| `bytes = bytes` | `bool` |
-| `bytes[] = bytes[]` | `bool` |
-| `collatedstring = collatedstring` | `bool` |
-| `date = date` | `bool` |
-| `date = timestamp` | `bool` |
-| `date = timestamptz` | `bool` |
-| `date[] = date[]` | `bool` |
-| `decimal = decimal` | `bool` |
-| `decimal = float` | `bool` |
-| `decimal = int` | `bool` |
-| `decimal[] = decimal[]` | `bool` |
-| `float = decimal` | `bool` |
-| `float = float` | `bool` |
-| `float = int` | `bool` |
-| `float[] = float[]` | `bool` |
-| `inet = inet` | `bool` |
-| `inet[] = inet[]` | `bool` |
-| `int = decimal` | `bool` |
-| `int = float` | `bool` |
-| `int = int` | `bool` |
-| `int[] = int[]` | `bool` |
-| `interval = interval` | `bool` |
-| `interval[] = interval[]` | `bool` |
-| `jsonb = jsonb` | `bool` |
-| `oid = oid` | `bool` |
-| `string = string` | `bool` |
-| `string[] = string[]` | `bool` |
-| `time = time` | `bool` |
-| `time[] = time[]` | `bool` |
-| `timestamp = date` | `bool` |
-| `timestamp = timestamp` | `bool` |
-| `timestamp = timestamptz` | `bool` |
-| `timestamp[] = timestamp[]` | `bool` |
-| `timestamptz = date` | `bool` |
-| `timestamptz = timestamp` | `bool` |
-| `timestamptz = timestamptz` | `bool` |
-| `tuple = tuple` | `bool` |
-| `uuid = uuid` | `bool` |
-| `uuid[] = uuid[]` | `bool` |
-| `varbit = varbit` | `bool` |
-
-| `>>` | Return |
-|------|--------|
-| `inet >> inet` | `bool` |
-| `int >> int` | `int` |
-| `varbit >> int` | `varbit` |
-
-| `?` | Return |
-|-----|--------|
-| `jsonb ? string` | `bool` |
-
-| `?&` | Return |
-|------|--------|
-| `jsonb ?& string[]` | `bool` |
-
-| `?\|` | Return |
-|-------|--------|
-| `jsonb ?\| string[]` | `bool` |
-
-| `@>` | Return |
-|------|--------|
-| `jsonb @> jsonb` | `bool` |
-
-| `ILIKE` | Return |
-|---------|--------|
-| `string ILIKE string` | `bool` |
-
-| `IN` | Return |
-|------|--------|
-| `bool IN tuple` | `bool` |
-| `bytes IN tuple` | `bool` |
-| `collatedstring IN tuple` | `bool` |
-| `date IN tuple` | `bool` |
-| `decimal IN tuple` | `bool` |
-| `float IN tuple` | `bool` |
-| `inet IN tuple` | `bool` |
-| `int IN tuple` | `bool` |
-| `interval IN tuple` | `bool` |
-| `jsonb IN tuple` | `bool` |
-| `oid IN tuple` | `bool` |
-| `string IN tuple` | `bool` |
-| `time IN tuple` | `bool` |
-| `timestamp IN tuple` | `bool` |
-| `timestamptz IN tuple` | `bool` |
-| `tuple IN tuple` | `bool` |
-| `uuid IN tuple` | `bool` |
-| `varbit IN tuple` | `bool` |
-
-| `IS NOT DISTINCT FROM` | Return |
-|------------------------|--------|
-| `bool IS NOT DISTINCT FROM bool` | `bool` |
-| `bool[] IS NOT DISTINCT FROM bool[]` | `bool` |
-| `bytes IS NOT DISTINCT FROM bytes` | `bool` |
-| `bytes[] IS NOT DISTINCT FROM bytes[]` | `bool` |
-| `collatedstring IS NOT DISTINCT FROM collatedstring` | `bool` |
-| `date IS NOT DISTINCT FROM date` | `bool` |
-| `date IS NOT DISTINCT FROM timestamp` | `bool` |
-| `date IS NOT DISTINCT FROM timestamptz` | `bool` |
-| `date[] IS NOT DISTINCT FROM date[]` | `bool` |
-| `decimal IS NOT DISTINCT FROM decimal` | `bool` |
-| `decimal IS NOT DISTINCT FROM float` | `bool` |
-| `decimal IS NOT DISTINCT FROM int` | `bool` |
-| `decimal[] IS NOT DISTINCT FROM decimal[]` | `bool` |
-| `float IS NOT DISTINCT FROM decimal` | `bool` |
-| `float IS NOT DISTINCT FROM float` | `bool` |
-| `float IS NOT DISTINCT FROM int` | `bool` |
-| `float[] IS NOT DISTINCT FROM float[]` | `bool` |
-| `inet IS NOT DISTINCT FROM inet` | `bool` |
-| `inet[] IS NOT DISTINCT FROM inet[]` | `bool` |
-| `int IS NOT DISTINCT FROM decimal` | `bool` |
-| `int IS NOT DISTINCT FROM float` | `bool` |
-| `int IS NOT DISTINCT FROM int` | `bool` |
-| `int[] IS NOT DISTINCT FROM int[]` | `bool` |
-| `interval IS NOT DISTINCT FROM interval` | `bool` |
-| `interval[] IS NOT DISTINCT FROM interval[]` | `bool` |
-| `jsonb IS NOT DISTINCT FROM jsonb` | `bool` |
-| `oid IS NOT DISTINCT FROM oid` | `bool` |
-| `string IS NOT DISTINCT FROM string` | `bool` |
-| `string[] IS NOT DISTINCT FROM string[]` | `bool` |
-| `time IS NOT DISTINCT FROM time` | `bool` |
-| `time[] IS NOT DISTINCT FROM time[]` | `bool` |
-| `timestamp IS NOT DISTINCT FROM date` | `bool` |
-| `timestamp IS NOT DISTINCT FROM timestamp` | `bool` |
-| `timestamp IS NOT DISTINCT FROM timestamptz` | `bool` |
-| `timestamp[] IS NOT DISTINCT FROM timestamp[]` | `bool` |
-| `timestamptz IS NOT DISTINCT FROM date` | `bool` |
-| `timestamptz IS NOT DISTINCT FROM timestamp` | `bool` |
-| `timestamptz IS NOT DISTINCT FROM timestamptz` | `bool` |
-| `tuple IS NOT DISTINCT FROM tuple` | `bool` |
-| `unknown IS NOT DISTINCT FROM unknown` | `bool` |
-| `uuid IS NOT DISTINCT FROM uuid` | `bool` |
-| `uuid[] IS NOT DISTINCT FROM uuid[]` | `bool` |
-| `varbit IS NOT DISTINCT FROM varbit` | `bool` |
-
-| `LIKE` | Return |
-|--------|--------|
-| `string LIKE string` | `bool` |
-
-| `SIMILAR TO` | Return |
-|--------------|--------|
-| `string SIMILAR TO string` | `bool` |
-
-| `^` | Return |
-|-----|--------|
-| `decimal ^ decimal` | `decimal` |
-| `decimal ^ int` | `decimal` |
-| `float ^ float` | `float` |
-| `int ^ decimal` | `decimal` |
-| `int ^ int` | `int` |
-
-| `\|` | Return |
-|------|--------|
-| `inet \| inet` | `inet` |
-| `int \| int` | `int` |
-| `varbit \| varbit` | `varbit` |
-
-| `\|\|` | Return |
-|--------|--------|
-| `bool \|\| bool[]` | `bool[]` |
-| `bool[] \|\| bool` | `bool[]` |
-| `bool[] \|\| bool[]` | `bool[]` |
-| `bytes \|\| bytes` | `bytes` |
-| `bytes \|\| bytes[]` | `bytes[]` |
-| `bytes[] \|\| bytes` | `bytes[]` |
-| `bytes[] \|\| bytes[]` | `bytes[]` |
-| `date \|\| date[]` | `date[]` |
-| `date[] \|\| date` | `date[]` |
-| `date[] \|\| date[]` | `date[]` |
-| `decimal \|\| decimal[]` | `decimal[]` |
-| `decimal[] \|\| decimal` | `decimal[]` |
-| `decimal[] \|\| decimal[]` | `decimal[]` |
-| `float \|\| float[]` | `float[]` |
-| `float[] \|\| float` | `float[]` |
-| `float[] \|\| float[]` | `float[]` |
-| `inet \|\| inet[]` | `inet[]` |
-| `inet[] \|\| inet` | `inet[]` |
-| `inet[] \|\| inet[]` | `inet[]` |
-| `int \|\| int[]` | `int[]` |
-| `int[] \|\| int` | `int[]` |
-| `int[] \|\| int[]` | `int[]` |
-| `interval \|\| interval[]` | `interval[]` |
-| `interval[] \|\| interval` | `interval[]` |
-| `interval[] \|\| interval[]` | `interval[]` |
-| `jsonb \|\| jsonb` | `jsonb` |
-| `oid \|\| oid` | `oid` |
-| `string \|\| string` | `string` |
-| `string \|\| string[]` | `string[]` |
-| `string[] \|\| string` | `string[]` |
-| `string[] \|\| string[]` | `string[]` |
-| `time \|\| time[]` | `time[]` |
-| `time[] \|\| time` | `time[]` |
-| `time[] \|\| time[]` | `time[]` |
-| `timestamp \|\| timestamp[]` | `timestamp[]` |
-| `timestamp[] \|\| timestamp` | `timestamp[]` |
-| `timestamp[] \|\| timestamp[]` | `timestamp[]` |
-| `timestamptz \|\| timestamptz` | `timestamptz` |
-| `uuid \|\| uuid[]` | `uuid[]` |
-| `uuid[] \|\| uuid` | `uuid[]` |
-| `uuid[] \|\| uuid[]` | `uuid[]` |
-| `varbit \|\| varbit` | `varbit` |
-
-| `~` | Return |
-|-----|--------|
-| `~inet` | `inet` |
-| `~int` | `int` |
-| `~varbit` | `varbit` |
-| `string ~ string` | `bool` |
-
-| `~*` | Return |
-|------|--------|
-| `string ~* string` | `bool` |
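For reference, a few of the rows above exercised directly; the JSONB, `TIME`, and bit-array values are arbitrary examples:

~~~ sql
> SELECT '{"a": {"b": 1}}'::JSONB #> ARRAY['a','b'];   -- jsonb #> string[] returns jsonb
> SELECT TIME '10:00:00' + INTERVAL '30 minutes';      -- time + interval returns time
> SELECT B'1010' & B'0110';                            -- varbit & varbit returns varbit
~~~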
diff --git a/src/current/_includes/v19.1/sql/physical-plan-url.md b/src/current/_includes/v19.1/sql/physical-plan-url.md deleted file mode 100644 index 774ea72f5c8..00000000000 --- a/src/current/_includes/v19.1/sql/physical-plan-url.md +++ /dev/null @@ -1 +0,0 @@ -The physical query plan is encoded into a byte string after the [fragment identifier (`#`)](https://en.wikipedia.org/wiki/Fragment_identifier) in the generated URL. The fragment is not sent to the web server; instead, the browser waits for the web server to return a `decode.html` resource, and then JavaScript on the web page decodes the fragment into a physical query plan diagram. The query plan is, therefore, not logged by a server external to the CockroachDB cluster and not exposed to the public internet. diff --git a/src/current/_includes/v19.1/sql/set-transaction-as-of-system-time-example.md b/src/current/_includes/v19.1/sql/set-transaction-as-of-system-time-example.md deleted file mode 100644 index f4cecd43fc7..00000000000 --- a/src/current/_includes/v19.1/sql/set-transaction-as-of-system-time-example.md +++ /dev/null @@ -1,24 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET TRANSACTION AS OF SYSTEM TIME '2019-04-09 18:02:52.0+00:00'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM products; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ diff --git a/src/current/_includes/v19.1/start-in-docker/mac-linux-steps.md b/src/current/_includes/v19.1/start-in-docker/mac-linux-steps.md deleted file mode 100644 index c2f8ef608df..00000000000 --- a/src/current/_includes/v19.1/start-in-docker/mac-linux-steps.md +++ /dev/null @@ -1,148 +0,0 @@ -## Before you begin - -If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**. - -## Step 1. Create a bridge network - -Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks. - -{% include copy-clipboard.html %} -~~~ shell -$ docker network create -d bridge roachnet -~~~ - -We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like. - -## Step 2. Start the first node - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach1 \ ---hostname=roach1 \ ---net=roachnet \ --p 26257:26257 -p 8080:8080 \ --v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure -~~~ - -This command creates a container and starts the first CockroachDB node inside it. Let's look at each part: - -- `docker run`: The Docker command to start a new container. -- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell. -- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container. 
-- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster. -- `--net`: The bridge network for the container to join. See step 1 for more details. -- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser. -- `-v "${PWD}/cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `${PWD}/cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic. -- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode. - -## Step 3. Add nodes to the cluster - -At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. - -To simulate a real deployment, scale your cluster by adding two more nodes: - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach2 \ ---hostname=roach2 \ ---net=roachnet \ --v "${PWD}/cockroach-data/roach2:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker run -d \ ---name=roach3 \ ---hostname=roach3 \ ---net=roachnet \ --v "${PWD}/cockroach-data/roach3:/cockroach/cockroach-data" \ -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1 -~~~ - -These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2: - -- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `${PWD}/cockroach-data/roach2` and `${PWD}/cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted. -- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Otherwise, all [`cockroach start`](start-a-node.html) defaults are accepted. Note that since each node is in a unique container, using identical default ports won’t cause conflicts. - -## Step 4. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. 
To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container: - -{% include copy-clipboard.html %} -~~~ shell -$ docker exec -it roach1 ./cockroach sql --insecure -~~~ - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then start the SQL shell in the second container: - -{% include copy-clipboard.html %} -~~~ shell -$ docker exec -it roach2 ./cockroach sql --insecure -~~~ - -Now run the same `SELECT` query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -When you're done, exit the SQL shell on node 2: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ diff --git a/src/current/_includes/v19.1/topology-patterns/fundamentals.md b/src/current/_includes/v19.1/topology-patterns/fundamentals.md deleted file mode 100644 index 1a13ce9443b..00000000000 --- a/src/current/_includes/v19.1/topology-patterns/fundamentals.md +++ /dev/null @@ -1,6 +0,0 @@ -- Multi-region topology patterns are almost always table-specific. If you haven't already, [review the full range of patterns](topology-patterns.html#multi-region-patterns) to ensure you choose the right one for each of your tables. -- Review how data is replicated and distributed across a cluster, and how this affects performance. It is especially important to understand the concept of the "leaseholder". For a summary, see [Reads and Writes in CockroachDB](architecture/reads-and-writes-overview.html). For a deeper dive, see the [CockroachDB Architecture](architecture/overview.html) documentation. -- Review the concept of [locality](start-a-node.html#locality), which makes CockroachDB aware of the location of nodes and able to intelligently place and balance data based on how you define [replication controls](configure-replication-zones.html). -- Review the recommendations and requirements in our [Production Checklist](recommended-production-settings.html). -- This topology doesn't account for hardware specifications, so be sure to follow our [hardware recommendations](recommended-production-settings.html#hardware) and perform a POC to size hardware for your use case. -- Adopt relevant [SQL Best Practices](performance-best-practices-overview.html) to ensure optimal performance. 
diff --git a/src/current/_includes/v19.1/topology-patterns/multi-region-cluster-setup.md b/src/current/_includes/v19.1/topology-patterns/multi-region-cluster-setup.md deleted file mode 100644 index 28682afaec1..00000000000 --- a/src/current/_includes/v19.1/topology-patterns/multi-region-cluster-setup.md +++ /dev/null @@ -1,29 +0,0 @@
-Each [multi-region topology pattern](topology-patterns.html#multi-region-patterns) assumes the following setup:
-
-Multi-region hardware setup
-
-#### Hardware
-
-- 3 regions
-
-- Per region, 3+ AZs with 3+ VMs evenly distributed across them
-
-- Region-specific app instances and load balancers
-    - Each load balancer redirects to CockroachDB nodes in its region.
-    - When CockroachDB nodes are unavailable in a region, the load balancer redirects to nodes in other regions.
-
-#### Cluster
-
-Each node is started with the [`--locality`](start-a-node.html#locality) flag specifying its region and AZ combination. For example, the following command starts a node in the west1 AZ of the us-west region:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---locality=region=us-west,zone=west1 \
---certs-dir=certs \
---advertise-addr=<node1 address> \
---join=<node1 address>:26257,<node2 address>:26257,<node3 address>:26257 \
---cache=.25 \
---max-sql-memory=.25 \
---background
-~~~ diff --git a/src/current/_includes/v19.1/topology-patterns/see-also.md b/src/current/_includes/v19.1/topology-patterns/see-also.md deleted file mode 100644 index 03844ca34fd..00000000000 --- a/src/current/_includes/v19.1/topology-patterns/see-also.md +++ /dev/null @@ -1,11 +0,0 @@
-- [Topology Patterns Overview](topology-patterns.html)
-
-    - Single-region
-        - [Development](topology-development.html)
-        - [Basic Production](topology-basic-production.html)
-
-    - Multi-region
-        - [Geo-Partitioned Replicas](topology-geo-partitioned-replicas.html)
-        - [Geo-Partitioned Leaseholders](topology-geo-partitioned-leaseholders.html)
-        - [Duplicate Indexes](topology-duplicate-indexes.html)
-        - [Follow-the-Workload](topology-follow-the-workload.html) diff --git a/src/current/_includes/v19.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md b/src/current/_includes/v19.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md deleted file mode 100644 index 58f06d6c0e7..00000000000 --- a/src/current/_includes/v19.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md +++ /dev/null @@ -1,32 +0,0 @@
-In addition to [constraining replicas to specific datacenters](configure-replication-zones.html#per-replica-constraints-to-specific-datacenters), you may also specify preferences for where the range's leaseholders should be placed. This can result in increased performance in some scenarios.
-
-The [`ALTER TABLE ... CONFIGURE ZONE`](configure-zone.html) statement below requires that the cluster try to place the ranges' leaseholders in zone `us-east-1b`; if that is not possible, it will try to place them in zone `us-east-1a`.
-
- -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE kv CONFIGURE ZONE USING num_replicas = 3, constraints = '{"+zone=us-east-1a": 1, "+zone=us-east-1b": 1}', lease_preferences = '[[+zone=us-east-1b], [+zone=us-east-1a]]'; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR TABLE kv; -~~~ - -~~~ - zone_name | config_sql ------------+------------------------------------------------------------------------ - test.kv | ALTER TABLE kv CONFIGURE ZONE USING + - | range_min_bytes = 1048576, + - | range_max_bytes = 67108864, + - | gc.ttlseconds = 90000, + - | num_replicas = 3, + - | constraints = '{+zone=us-east-1a: 1, +zone=us-east-1b: 1}', + - | lease_preferences = '[[+zone=us-east-1b], [+zone=us-east-1a]]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-database.md b/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-database.md deleted file mode 100644 index b5e5b3e9347..00000000000 --- a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-database.md +++ /dev/null @@ -1,28 +0,0 @@ -To control replication for a specific database, use the `ALTER DATABASE ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER DATABASE test CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR DATABASE test; -~~~ - -~~~ - zone_name | config_sql -+-----------+------------------------------------------+ - test | ALTER DATABASE test CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md b/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md deleted file mode 100644 index 4b6b2e9e6fd..00000000000 --- a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md +++ /dev/null @@ -1,42 +0,0 @@ -{{site.data.alerts.callout_success}} -New in v19.1: The [Cost-based Optimizer](cost-based-optimizer.html) can take advantage of replication zones for secondary indexes when optimizing queries. For more information, see [Cost-based optimizer - preferring the nearest index](cost-based-optimizer.html#preferring-the-nearest-index). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -This is an [enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. - -To control replication for a specific secondary index, use the `ALTER INDEX ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected). - -{{site.data.alerts.callout_success}} -To get the name of a secondary index, which you need for the `CONFIGURE ZONE` statement, use the [`SHOW INDEX`](show-index.html) or [`SHOW CREATE TABLE`](show-create.html) statements. 
-{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX tpch.frequent_customers CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR INDEX tpch.customer@frequent_customers; -~~~ - -~~~ - zone_name | config_sql -+----------------------------------+--------------------------------------------------------------------------+ - tpch.customer@frequent_customers | ALTER INDEX tpch.public.customer@frequent_customers CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-system-range.md b/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-system-range.md deleted file mode 100644 index 1222f7cdc7c..00000000000 --- a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-system-range.md +++ /dev/null @@ -1,41 +0,0 @@ -In addition to the databases and tables that are visible via the SQL interface, CockroachDB stores internal data in what are called system ranges. CockroachDB comes with pre-configured replication zones for some of these ranges: - -Zone Name | Description -----------|----------------------------- -`.meta` | The "meta" ranges contain the authoritative information about the location of all data in the cluster.

<br><br>These ranges must retain a majority of replicas for the cluster as a whole to remain available and historical queries are never run on them, so CockroachDB comes with a **pre-configured** `.meta` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance.<br><br>If your cluster is running in multiple datacenters, it's a best practice to configure the meta ranges to have a copy in each datacenter.
-`.liveness` | The "liveness" range contains the authoritative information about which nodes are live at any given time.<br><br>These ranges must retain a majority of replicas for the cluster as a whole to remain available and historical queries are never run on them, so CockroachDB comes with a **pre-configured** `.liveness` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure and a lower-than-default `gc.ttlseconds` to keep these ranges smaller for reliable performance.
-`.system` | There are system ranges for a variety of other important internal data, including information needed to allocate new table IDs and track the status of a cluster's nodes.<br><br>
These ranges must retain a majority of replicas for the cluster as a whole to remain available, so CockroachDB comes with a **pre-configured** `.system` replication zone with `num_replicas` set to 5 to make these ranges more resilient to node failure. -`.timeseries` | The "timeseries" ranges contain monitoring data about the cluster that powers the graphs in CockroachDB's Admin UI. If necessary, you can add a `.timeseries` replication zone to control the replication of this data. - -{{site.data.alerts.callout_danger}} -Use caution when editing replication zones for system ranges, as they could cause some (or all) parts of your cluster to stop working. -{{site.data.alerts.end}} - -To control replication for one of the above sets of system ranges, use the [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) statement to define the values you want to change (other values will not be affected): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR RANGE meta; -~~~ - -~~~ - zone_name | config_sql -+-----------+---------------------------------------+ - .meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 3600, - | num_replicas = 7, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table-partition.md b/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table-partition.md deleted file mode 100644 index 6e2ac1677fd..00000000000 --- a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table-partition.md +++ /dev/null @@ -1,36 +0,0 @@ -{{site.data.alerts.callout_info}} -This is an [enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -To [control replication for table partitions](partitioning.html#replication-zones), use the `ALTER PARTITION ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER PARTITION north_america OF TABLE customers CONFIGURE ZONE USING num_replicas = 5, constraints = '[-region=EU]'; -~~~ - -~~~ sql -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR PARTITION north_america OF TABLE customers; -~~~ - -~~~ - zone_name | config_sql -+------------------------------+-------------------------------------------------------------------------------+ - test.customers.north_america | ALTER PARTITION north_america OF INDEX customers@primary CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[-region=EU]', - | lease_preferences = '[]' -(1 row) -~~~ - -{{site.data.alerts.callout_success}} -Since the syntax is the same for defining a replication zone for a table or index partition (e.g., `database.table.partition`), give partitions names that communicate what they are partitioning, e.g., `north_america_table` vs `north_america_idx1`. 
-{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table.md b/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table.md deleted file mode 100644 index 468df8f9bac..00000000000 --- a/src/current/_includes/v19.1/zone-configs/create-a-replication-zone-for-a-table.md +++ /dev/null @@ -1,28 +0,0 @@ -To control replication for a specific table, use the `ALTER TABLE ... CONFIGURE ZONE` statement to define the values you want to change (other values will not be affected): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE customers CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR TABLE customers; -~~~ - -~~~ - zone_name | config_sql -+----------------+--------------------------------------------+ - test.customers | ALTER TABLE customers CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/edit-the-default-replication-zone.md b/src/current/_includes/v19.1/zone-configs/edit-the-default-replication-zone.md deleted file mode 100644 index 18ec82d91a4..00000000000 --- a/src/current/_includes/v19.1/zone-configs/edit-the-default-replication-zone.md +++ /dev/null @@ -1,28 +0,0 @@ -To edit the default replication zone, use the `ALTER RANGE ... CONFIGURE ZONE` statement to define the values you want to change (other values will remain the same): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5, gc.ttlseconds = 100000; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR RANGE default; -~~~ - -~~~ - zone_name | config_sql -+-----------+------------------------------------------+ - .default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 100000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/remove-a-replication-zone.md b/src/current/_includes/v19.1/zone-configs/remove-a-replication-zone.md deleted file mode 100644 index b379652c8c8..00000000000 --- a/src/current/_includes/v19.1/zone-configs/remove-a-replication-zone.md +++ /dev/null @@ -1,8 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE DISCARD; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/reset-a-replication-zone.md b/src/current/_includes/v19.1/zone-configs/reset-a-replication-zone.md deleted file mode 100644 index 60474c84a5d..00000000000 --- a/src/current/_includes/v19.1/zone-configs/reset-a-replication-zone.md +++ /dev/null @@ -1,8 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE USING DEFAULT; -~~~ - -~~~ -CONFIGURE ZONE 1 -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/variables.md b/src/current/_includes/v19.1/zone-configs/variables.md deleted file mode 100644 index 32ab6d7798c..00000000000 --- a/src/current/_includes/v19.1/zone-configs/variables.md +++ /dev/null @@ -1,14 +0,0 @@ -Variable | Description -------|------------ -`range_min_bytes` | The minimum size, in bytes, for a range of data in the zone. 
When a range is less than this size, CockroachDB will merge it with an adjacent range.<br><br>**Default:** `16777216` (16MiB)
-`range_max_bytes` | The maximum size, in bytes, for a range of data in the zone. When a range reaches this size, CockroachDB will split it into two ranges.<br><br>**Default:** `67108864` (64MiB)
-`gc.ttlseconds` | The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for `AS OF SYSTEM TIME` queries, also known as [Time Travel Queries](select-clause.html#select-historical-data-time-travel).<br><br>It is not recommended to set this below `600` (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 64MiB; such oversized ranges could contribute to the server running out of memory or other problems.<br><br>**Default:** `90000` (25 hours)
-`num_replicas` | The number of replicas in the zone.<br><br>**Default:** `3`<br><br>For the `system` database and `.meta`, `.liveness`, and `.system` ranges, the default value is `5`.
-`constraints` | An array of required (`+`) and/or prohibited (`-`) constraints influencing the location of replicas. See [Types of Constraints](configure-replication-zones.html#types-of-constraints) and [Scope of Constraints](configure-replication-zones.html#scope-of-constraints) for more details.<br><br>To prevent hard-to-detect typos, constraints placed on [store attributes and node localities](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) must match the values passed to at least one node in the cluster. If not, an error is signalled.<br><br>**Default:** No constraints, with CockroachDB locating each replica on a unique node and attempting to spread replicas evenly across localities.
-`lease_preferences` | An ordered list of required and/or prohibited constraints influencing the location of [leaseholders](architecture/overview.html#glossary). Whether each constraint is required or prohibited is expressed with a leading `+` or `-`, respectively. Note that lease preference constraints do not have to be shared with the `constraints` field. For example, it's valid for your configuration to define a `lease_preferences` field that does not reference any values from the `constraints` field. It's also valid to define a `lease_preferences` field with no `constraints` field at all.<br><br>If the first preference cannot be satisfied, CockroachDB will attempt to satisfy the second preference, and so on. If none of the preferences can be met, the lease will be placed using the default lease placement algorithm, which is to base lease placement decisions on how many leases each node already has, trying to make all the nodes have around the same amount.<br><br>Each value in the list can include multiple constraints. For example, the list `[[+zone=us-east-1b, +ssd], [+zone=us-east-1a], [+zone=us-east-1c, +ssd]]` means "prefer nodes with an SSD in `us-east-1b`, then any nodes in `us-east-1a`, then nodes in `us-east-1c` with an SSD."<br><br>For a usage example, see [Constrain leaseholders to specific datacenters](configure-replication-zones.html#constrain-leaseholders-to-specific-datacenters).<br><br>
**Default**: No lease location preferences are applied if this field is not specified. - -{{site.data.alerts.callout_info}} -If a value is not set, new zone configurations will inherit their values from their parent zone (e.g., a partition zone inherits from the table zone), which is not necessarily `.default`. - -If a variable is set to `COPY FROM PARENT` (e.g., `range_max_bytes = COPY FROM PARENT`), the variable will copy its value from its parent [replication zone](configure-replication-zones.html). The `COPY FROM PARENT` value is a convenient shortcut to use so you do not have to look up the parent's current value. For example, the `range_max_bytes` and `range_min_bytes` variables must be set together, so when editing one value, you can use `COPY FROM PARENT` for the other. Note that if the variable in the parent replication zone is changed after the child replication zone is copied, the change will not be reflected in the child zone. -{{site.data.alerts.end}} diff --git a/src/current/_includes/v19.1/zone-configs/view-all-replication-zones.md b/src/current/_includes/v19.1/zone-configs/view-all-replication-zones.md deleted file mode 100644 index 076286064a1..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-all-replication-zones.md +++ /dev/null @@ -1,52 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ALL ZONE CONFIGURATIONS; -~~~ - -~~~ - zone_name | config_sql -+-------------+-----------------------------------------------------+ - .default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' - system | ALTER DATABASE system CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - system.jobs | ALTER TABLE system.public.jobs CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 600, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - .meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 3600, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - .system | ALTER RANGE system CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' - .liveness | ALTER RANGE liveness CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 600, - | num_replicas = 5, - | constraints = '[]', - | lease_preferences = '[]' -(6 rows) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/view-the-default-replication-zone.md b/src/current/_includes/v19.1/zone-configs/view-the-default-replication-zone.md deleted file mode 100644 index 05120116574..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-the-default-replication-zone.md +++ /dev/null @@ -1,17 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR RANGE default; -~~~ - -~~~ - zone_name | config_sql -+-----------+------------------------------------------+ - .default | ALTER RANGE default CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | 
lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-database.md b/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-database.md deleted file mode 100644 index 2d65a6aebdd..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-database.md +++ /dev/null @@ -1,16 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR DATABASE tpch; -~~~ -~~~ - zone_name | config_sql -+-----------+------------------------------------------+ - tpch | ALTER DATABASE tpch CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-partition.md b/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-partition.md deleted file mode 100644 index 8ffd40b827c..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-partition.md +++ /dev/null @@ -1,16 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR PARTITION north_america OF TABLE roachlearn.students; -~~~ - -~~~ - zone_name | config_sql -+-----------------------------------+------------------------------------------------------------------------------------------------+ - roachlearn.students.north_america | ALTER PARTITION north_america OF INDEX roachlearn.public.students@primary CONFIGURE ZONE USING - | range_min_bytes = 16777216, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[+region=us]', - | lease_preferences = '[]' -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-table.md b/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-table.md deleted file mode 100644 index 5de95591be7..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-a-table.md +++ /dev/null @@ -1,16 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR TABLE tpch.customer; -~~~ -~~~ - zone_name | config_sql -+---------------+-------------------------------------------------------+ - tpch.customer | ALTER TABLE tpch.public.customer CONFIGURE ZONE USING - | range_min_bytes = 40000, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-an-index.md b/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-an-index.md deleted file mode 100644 index e50f56ed779..00000000000 --- a/src/current/_includes/v19.1/zone-configs/view-the-replication-zone-for-an-index.md +++ /dev/null @@ -1,16 +0,0 @@ -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATION FOR INDEX tpch.customer@frequent_customers; -~~~ -~~~ - zone_name | config_sql -+---------------+-------------------------------------------------------+ - tpch.customer | ALTER TABLE tpch.public.customer CONFIGURE ZONE USING - | range_min_bytes = 40000, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' -(1 row) -~~~ diff --git a/src/current/images/v19.1/CockroachDB_Training_Wide.png 
[Binary file deletions condensed for readability: this diff removes roughly 190 images under src/current/images/v19.1/, each recorded as "deleted file mode 100644 ... Binary files a/<path> and /dev/null differ". They include CockroachDB_Training_Wide.png; the Parallel/Sequential statement execution diagrams; Admin UI screenshots and GIFs (admin-ui-*, admin_ui_*); decommission, recovery, and remove-dead-node walkthrough screenshots; cloudformation screenshots; DBeaver screenshots (dbeaver-01 through dbeaver-05); EXPLAIN plan images; follow-workload figures; geo-partitioning maps and latency charts; Kubernetes monitoring screenshots; perf_tuning diagrams; range-lookup.png; raw-status-endpoints.png; replication and scalability figures; serializable_schema.png; topology-patterns figures and GIFs; trace.png; training-*.png screenshots; and window-functions.png. Also removed: icon_info.svg, whose four lines of inline SVG markup were stripped during extraction and are not recoverable.]
diff --git a/src/current/releases/v19.1.md b/src/current/releases/v19.1.md
index e66b1d10e5e..5cd774fe67e 100644
--- a/src/current/releases/v19.1.md
+++ b/src/current/releases/v19.1.md
@@ -1,5 +1,5 @@
 ---
-title: What's New in v19.1
+title: What's New in v19.1
 toc: true
 toc_not_nested: true
 summary: Additions and changes in CockroachDB version v19.1 since version v2.1
@@ -8,16 +8,30 @@ docs_area: releases
 keywords: gin, gin index, gin indexes, inverted index, inverted indexes, accelerated index, accelerated indexes
 ---
-{% assign rel = site.data.releases | where_exp: "rel", "rel.major_version == page.major_version" | sort: "release_date" | reverse %}
+[~25 added lines whose HTML markup was stripped during extraction and is not recoverable]
-{% assign vers = site.data.versions | where_exp: "vers", "vers.major_version == page.major_version" | first %}
+This release is no longer supported. For more information, see our [Release support policy]({% link releases/release-support-policy.md %}).
-{% assign today = "today" | date: "%Y-%m-%d" %}
-
-{% include releases/testing-release-notice.md major_version=vers %}
-
-{% include releases/whats-new-intro.md major_version=vers %}
-
-{% for r in rel %}
-{% include releases/{{ page.major_version }}/{{ r.release_name }}.md release=r.release_name release_date=r.release_date %}
-{% endfor %}
+To download the archived documentation for this release, see [Archived Documentation]({% link releases/archived-documentation.md %}).
\ No newline at end of file
diff --git a/src/current/v19.1/404.md b/src/current/v19.1/404.md
deleted file mode 100644
index 13a69ddde5c..00000000000
--- a/src/current/v19.1/404.md
+++ /dev/null
@@ -1,19 +0,0 @@
----
-title: Page Not Found
-description: "Page not found."
-sitemap: false
-search: exclude
-related_pages: none
-toc: false
----
-
-
-{%comment%}
-
-
-{%endcomment%}
\ No newline at end of file
diff --git a/src/current/v19.1/add-column.md b/src/current/v19.1/add-column.md
deleted file mode 100644
index 5584ce60376..00000000000
--- a/src/current/v19.1/add-column.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-title: ADD COLUMN
-summary: Use the ADD COLUMN statement to add columns to tables.
-toc: true
----
-
-The `ADD COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and adds one or more columns to a table. (A plain-text sketch of the general form follows the synopsis below.)
-
-{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %}
-
-## Synopsis
-
-<div>
-{% include {{ page.version.version }}/sql/diagrams/add_column.html %}
-</div>
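Because the synopsis diagram above is pulled in by an include, here is a plain-text sketch of the general form. The placeholder names match the Parameters table below; the second statement is illustrative only (the `phone` column is hypothetical):

{% include copy-clipboard.html %}
~~~ sql
-- General form: the qualification (constraints, collation, family) is optional.
> ALTER TABLE table_name ADD COLUMN column_name typename col_qualification;

-- Illustrative only: add a hypothetical phone column with a default value.
> ALTER TABLE accounts ADD COLUMN phone STRING NOT NULL DEFAULT '';
~~~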
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table.
-
-## Parameters
-
- Parameter | Description
------------|-------------
- `table_name` | The name of the table to which you want to add the column.
- `column_name` | The name of the column you want to add. The column name must follow these [identifier rules](keywords-and-identifiers.html#identifiers) and must be unique within the table, but it can have the same name as an index or constraint.
- `typename` | The [data type](data-types.html) of the new column.
- `col_qualification` | An optional list of column definitions, which may include [column-level constraints](constraints.html), [collation](collate.html), or [column family assignments](column-families.html).<br><br>If the column family is not specified, the column is added to the first column family. For more information about how column families are assigned, see [Column Families](column-families.html#assign-column-families-when-adding-columns).<br><br>Note that it is not possible to add a column with a [foreign key](foreign-key.html) constraint in the same statement. As a workaround, add the column without the constraint, use [`CREATE INDEX`](create-index.html) to index the column, and then use [`ADD CONSTRAINT`](add-constraint.html) to add the foreign key constraint to the column (this sequence is sketched after the `ADD CONSTRAINT` synopsis below).
-
-## Viewing schema changes
-
-{% include {{ page.version.version }}/misc/schema-change-view-job.md %}
-
-## Examples
-
-### Add a single column
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN names STRING;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression |   indices   |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id          | INT       | false       | NULL           |                       | {"primary"} |
-| balance     | DECIMAL   | true        | NULL           |                       | {}          |
-| names       | STRING    | true        | NULL           |                       | {}          |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(3 rows)
-~~~
-
-### Add multiple columns
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location STRING, ADD COLUMN amount DECIMAL;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression |   indices   |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| id          | INT       | false       | NULL           |                       | {"primary"} |
-| balance     | DECIMAL   | true        | NULL           |                       | {}          |
-| names       | STRING    | true        | NULL           |                       | {}          |
-| location    | STRING    | true        | NULL           |                       | {}          |
-| amount      | DECIMAL   | true        | NULL           |                       | {}          |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(5 rows)
-~~~
-
-### Add a column with a `NOT NULL` constraint and a `DEFAULT` value
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN interest DECIMAL NOT NULL DEFAULT (DECIMAL '1.3');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM accounts;
-~~~
-~~~
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default         | generation_expression |   indices   |
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-| id          | INT       | false       | NULL                   |                       | {"primary"} |
-| balance     | DECIMAL   | true        | NULL                   |                       | {}          |
-| names       | STRING    | true        | NULL                   |                       | {}          |
-| location    | STRING    | true        | NULL                   |                       | {}          |
-| amount      | DECIMAL   | true        | NULL                   |                       | {}          |
-| interest    | DECIMAL   | false       | 1.3:::DECIMAL::DECIMAL |                       | {}          |
-+-------------+-----------+-------------+------------------------+-----------------------+-------------+
-(6 rows)
-~~~
-
-### Add a column with `NOT NULL` and `UNIQUE` constraints
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN cust_number DECIMAL UNIQUE NOT NULL;
-~~~
-
-### Add a column with collation
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN more_names STRING COLLATE en;
-~~~
-
-### Add a column and assign it to a column family
-
-#### Add a column and assign it to a new column family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location1 STRING CREATE FAMILY new_family;
-~~~
-
-#### Add a column and assign it to an existing column family
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN location2 STRING FAMILY existing_family;
-~~~
-
-#### Add a column and create a new column family if the column family does not exist
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE accounts ADD COLUMN new_name STRING CREATE IF NOT EXISTS FAMILY f1;
-~~~
-
-## See also
-
-- [`ALTER TABLE`](alter-table.html)
-- [Column-level Constraints](constraints.html)
-- [Collation](collate.html)
-- [Column Families](column-families.html)
-- [`SHOW JOBS`](show-jobs.html)
diff --git a/src/current/v19.1/add-constraint.md b/src/current/v19.1/add-constraint.md
deleted file mode 100644
index c311f0000c4..00000000000
--- a/src/current/v19.1/add-constraint.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-title: ADD CONSTRAINT
-summary: Use the ADD CONSTRAINT statement to add constraints to columns.
-toc: true
----
-
-The `ADD CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and can add the following [constraints](constraints.html) to columns (a general-form sketch follows the synopsis below):
-
-- [`UNIQUE`](#add-the-unique-constraint)
-- [`CHECK`](#add-the-check-constraint)
-- [Foreign key](#add-the-foreign-key-constraint-with-cascade)
-
-{{site.data.alerts.callout_info}}
-The [`PRIMARY KEY`](primary-key.html) and [`NOT NULL`](not-null.html) constraints can only be applied through [`CREATE TABLE`](create-table.html). The [`DEFAULT`](default-value.html) constraint is managed through [`ALTER COLUMN`](alter-column.html).
-{{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %}
-
-## Synopsis
-
-<div>
-{% include {{ page.version.version }}/sql/diagrams/add_constraint.html %} -
- -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table containing the column you want to constrain. - `constraint_name` | The name of the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). - `constraint_elem` | The [`CHECK`](check.html), [foreign key](foreign-key.html), or [`UNIQUE`](unique.html) constraint you want to add.<br><br>Adding/changing a `DEFAULT` constraint is done through [`ALTER COLUMN`](alter-column.html).<br><br>
Adding/changing the table's `PRIMARY KEY` is not supported through `ALTER TABLE`; it can only be specified during [table creation](create-table.html#create-a-table-primary-key-defined). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Add the `UNIQUE` constraint - -Adding the [`UNIQUE` constraint](unique.html) requires that all of a column's values be distinct from one another (except for *NULL* values). - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT id_customer_unique UNIQUE (id, customer); -~~~ - -### Add the `CHECK` constraint - -Adding the [`CHECK` constraint](check.html) requires that all of a column's values evaluate to `TRUE` for a Boolean expression. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT check_id_non_zero CHECK (id > 0); -~~~ - -New in v19.1: Check constraints can be added to columns that were created earlier in the transaction. For example: - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; -> ALTER TABLE customers ADD COLUMN gdpr_status STRING; -> ALTER TABLE customers ADD CONSTRAINT check_gdpr_status CHECK (gdpr_status IN ('yes', 'no', 'unknown')); -> COMMIT; -~~~ - -~~~ -BEGIN -ALTER TABLE -ALTER TABLE -COMMIT -~~~ - -{{site.data.alerts.callout_info}} -The entire transaction will be rolled back, including any new columns that were added, in the following cases: - -- If an existing column is found containing values that violate the new constraint. -- If a new column has a default value or is a [computed column](computed-columns.html) that would have contained values that violate the new constraint. -{{site.data.alerts.end}} - -### Add the foreign key constraint with `CASCADE` - -To add a foreign key constraint, use the steps shown below. - -Given two tables, `customers` and `orders`: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE customers; -~~~ - -~~~ - table_name | create_statement -------------+---------------------------------------------------- - customers | CREATE TABLE customers ( + - | id INT8 NOT NULL, + - | name STRING NOT NULL, + - | address STRING NULL, + - | CONSTRAINT "primary" PRIMARY KEY (id ASC),+ - | FAMILY "primary" (id, name, address) + - | ) -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE orders; -~~~ - -~~~ - table_name | create_statement -------------+---------------------------------------------------------------------------------------------------------------- - orders | CREATE TABLE orders ( + - | id INT8 NOT NULL, + - | customer_id INT8 NULL, + - | status STRING NOT NULL, + - | CONSTRAINT "primary" PRIMARY KEY (id ASC), + - | FAMILY "primary" (id, customer_id, status), + - | CONSTRAINT check_status CHECK (status IN ('open':::STRING, 'complete':::STRING, 'cancelled':::STRING))+ - | ) -(1 row) -~~~ - -You can include a [foreign key action](foreign-key.html#foreign-key-actions) to specify what happens when a foreign key is updated or deleted. - -Using `ON DELETE CASCADE` will ensure that when the referenced row is deleted, all dependent objects are also deleted. - -{{site.data.alerts.callout_danger}} -`CASCADE` does not list the objects it drops or updates, so it should be used with caution. 
-{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE; -~~~ - -New in v19.1: An index on the referencing columns is automatically created for you when you add a foreign key constraint to an empty table, if an appropriate index does not already exist. You can see it using [`SHOW INDEXES`](show-index.html): - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEXES FROM orders; -~~~ - -~~~ - table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit -------------+-------------------------------+------------+--------------+-------------+-----------+---------+---------- - orders | primary | f | 1 | id | ASC | f | f - orders | orders_auto_index_customer_fk | t | 1 | customer_id | ASC | f | f - orders | orders_auto_index_customer_fk | t | 2 | id | ASC | f | t -(3 rows) -~~~ - -{{site.data.alerts.callout_info}} -Adding a foreign key for a non-empty table without an appropriate index will fail, since foreign key columns must be indexed. For more information about the requirements for creating foreign keys, see [Rules for creating foreign keys](foreign-key.html#rules-for-creating-foreign-keys). -{{site.data.alerts.end}} - -## See also - -- [Constraints](constraints.html) -- [Foreign Key Constraint](foreign-key.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`RENAME CONSTRAINT`](rename-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`VALIDATE CONSTRAINT`](validate-constraint.html) -- [`ALTER COLUMN`](alter-column.html) -- [`CREATE TABLE`](create-table.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/admin-ui-access-and-navigate.md b/src/current/v19.1/admin-ui-access-and-navigate.md deleted file mode 100644 index 7191d4ff3af..00000000000 --- a/src/current/v19.1/admin-ui-access-and-navigate.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: Use the CockroachDB Admin UI -summary: Learn how to access and navigate the Admin UI. -toc: true ---- - -The built-in Admin UI helps you monitor and troubleshoot CockroachDB by providing information about the cluster's health, configuration, and operations. - -## Access the Admin UI - -For insecure clusters, anyone can access and view the Admin UI. For secure clusters, only authorized users can [access and view the Admin UI](#accessing-the-admin-ui-for-a-secure-cluster). In addition, certain areas of the Admin UI can only be [accessed by `admin` users](admin-ui-overview.html#admin-ui-access). - -You can access the Admin UI from any node in the cluster. - -The Admin UI is reachable at the IP address/hostname and port set via the `--http-addr` flag when [starting each node](start-a-node.html), for example, `http://
<address>:<port>` for an insecure cluster or `https://
<address>:<port>` for a secure cluster. - -If `--http-addr` is not specified when starting a node, the Admin UI is reachable at the IP address/hostname set via the `--listen-addr` flag and port `8080`. - -For additional guidance on accessing the Admin UI in the context of cluster deployment, see [Start a Local Cluster](start-a-local-cluster.html) and [Manual Deployment](manual-deployment.html). - -### Accessing the Admin UI for a secure cluster - -Note that on secure clusters, certain areas of the Admin UI can only be accessed by `admin` users. For details on providing access to users, see [this page](admin-ui-overview.html#admin-ui-access). - -On [accessing the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), your browser will consider the CockroachDB-created certificate invalid, so you’ll need to click through a warning message to get to the UI. For secure clusters, you can avoid getting the warning message by using a certificate issued by a public CA. For more information, refer to [Use a UI certificate and key to access the Admin UI](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). - -For each user who should have access to the Admin UI for a secure cluster, [create a user with a password](create-user.html). On accessing the Admin UI, the users will see a Login screen, where they will need to enter their usernames and passwords. - -{{site.data.alerts.callout_info}} -This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the Admin UI. -{{site.data.alerts.end}} - -To log out of the Admin UI, click the **Log Out** link at the bottom of the left-hand navigation bar. - -## Navigate the Admin UI - -The left-hand navigation bar allows you to navigate to the [Cluster Overview page](admin-ui-access-and-navigate.html), [cluster metrics dashboards](admin-ui-overview.html), the [Databases page](admin-ui-databases-page.html), the [Statements page](admin-ui-statements-page.html), the [Jobs page](admin-ui-jobs-page.html), and the [Advanced Debugging page](admin-ui-debug-pages.html). - -The main panel display changes for each page: - -Page | Main Panel Component ------------|------------ -Cluster Overview | • [Cluster Overview panel](admin-ui-cluster-overview-page.html)<br>• [Node List](admin-ui-cluster-overview-page.html#node-list)<br>• [Enterprise users](enterprise-licensing.html) can enable and switch to the [Node Map](admin-ui-cluster-overview-page.html#node-map-enterprise) view. -Cluster Metrics | • [Time Series graphs](admin-ui-access-and-navigate.html#cluster-metrics)<br>• [Summary Panel](admin-ui-access-and-navigate.html#summary-panel)<br>• [Events List](admin-ui-access-and-navigate.html#events-panel)
-Databases | Information about the tables and grants in your [databases](admin-ui-databases-page.html). -Statements | Information about the SQL [statements](admin-ui-statements-page.html) running in the cluster. -Jobs | Information about all currently active schema changes and backup/restore [jobs](admin-ui-jobs-page.html). -Advanced Debugging | Advanced monitoring and troubleshooting [reports](admin-ui-debug-pages.html). These pages are experimental. If you find an issue, let us know through [these channels](https://www.cockroachlabs.com/community/). - -### Cluster Metrics - -The **Cluster Metrics** dashboards display the time series graphs that are useful to visualize and monitor data trends. To access the time series graphs, click **Metrics** on the left. - -You can hover over each graph to see actual point-in-time values. - -CockroachDB Admin UI - -{{site.data.alerts.callout_info}} -By default, CockroachDB stores time series metrics for the last 30 days, but you can reduce the interval for timeseries storage. Alternatively, if you are exclusively using a third-party tool such as [Prometheus](monitor-cockroachdb-with-prometheus.html) for time series monitoring, you can disable time series storage entirely. For more details, see this [FAQ](operational-faqs.html#can-i-reduce-or-disable-the-storage-of-timeseries-data). -{{site.data.alerts.end}} - -#### Change time range - -You can change the time range by clicking on the time window. -CockroachDB Admin UI - -{{site.data.alerts.callout_info}}The Admin UI shows time in UTC, even if you set a different time zone for your cluster. {{site.data.alerts.end}} - -#### View metrics for a single node - -By default, the time series panel displays the metrics for the entire cluster. To view the metrics for an individual node, select the node from the **Graph** drop-down list. -CockroachDB Admin UI - -### Summary panel - -The **Cluster Metrics** dashboards display the **Summary** panel of key metrics. To view the **Summary** panel, click **Metrics** on the left. - -CockroachDB Admin UI Summary Panel - -The **Summary** panel provides the following metrics: - -Metric | Description --------|---- -Total Nodes | The total number of nodes in the cluster. Decommissioned nodes are not included in the Total Nodes count.<br><br>You can further drill down into the node details by clicking on [**View nodes list**](admin-ui-cluster-overview-page.html#node-list). -Dead Nodes | The number of [dead nodes](admin-ui-cluster-overview-page.html#dead-nodes) in the cluster. -Capacity Used | The storage capacity used as a percentage of total storage capacity allocated across all nodes. -Unavailable Ranges | The number of unavailable ranges in the cluster. A non-zero number indicates an unstable cluster. -Queries per second | The total number of `SELECT`, `UPDATE`, `INSERT`, and `DELETE` queries executed per second across the cluster. -P50 Latency | The 50th percentile of service latency. Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client. -P99 Latency | The 99th percentile of service latency. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -### Events panel - -The **Cluster Metrics** dashboards display the **Events** panel that lists the 10 most recent events logged for all nodes across the cluster. To view the **Events** panel, click **Metrics** on the left-hand navigation bar. To see the list of all events, click **View all events** in the **Events** panel. - -CockroachDB Admin UI Events - -The following types of events are listed: - -- Database created -- Database dropped -- Table created -- Table dropped -- Table altered -- Index created -- Index dropped -- View created -- View dropped -- Schema change reversed -- Schema change finished -- Node joined -- Node decommissioned -- Node restarted -- Cluster setting changed - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-cdc-dashboard.md b/src/current/v19.1/admin-ui-cdc-dashboard.md deleted file mode 100644 index 61ebdf51810..00000000000 --- a/src/current/v19.1/admin-ui-cdc-dashboard.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: Changefeeds Dashboard -summary: The Changefeeds dashboard lets you monitor the changefeeds created across your cluster. -toc: true ---- - -The **Changefeeds** dashboard in the CockroachDB Admin UI lets you monitor the [changefeeds](change-data-capture.html) created across your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Changefeeds**. - - -The **Changefeeds** dashboard displays the following time series graphs: - -## Max Changefeed Latency - -CockroachDB Admin UI Max Changefeed Latency graph - -- In the node view, the graph shows the maximum latency for resolved timestamps of any running changefeed for the node. - -- In the cluster view, the graph shows the maximum latency for resolved timestamps of any running changefeed across all nodes. - -{{site.data.alerts.callout_info}} -The maximum latency for resolved timestamps is distinct from and slower than the commit-to-emit latency for individual change messages. For more information about resolved timestamps, see [Ordering guarantees](change-data-capture.html#ordering-guarantees).
-{{site.data.alerts.end}} - -## Sink Byte Traffic - -CockroachDB Admin UI Sink Byte Traffic graph - -- In the node view, the graph shows the number of bytes emitted by CockroachDB into the sink across all changefeeds for the selected node. - -- In the cluster view, the graph shows the number of bytes emitted by CockroachDB into the sink across all changefeeds and across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -**Emitted Bytes** | The number of bytes emitted by CockroachDB into the sink for all changefeeds. - -## Sink Counts - -CockroachDB Admin UI Sink Counts graph - -- In the node view, the graph shows the number of messages that CockroachDB sent to the sink as well as the number of flushes that the sink performed for all changefeeds. - -- In the cluster view, the graph shows the number of messages that CockroachDB sent to the sink as well as the number of flushes that the sink performed for all changefeeds across the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -**Messages** | The number of messages that CockroachDB sent to the sink for all changefeeds. -**Flushes** | The number of flushes that the sink performed for all changefeeds. - -## Sink Timings - -CockroachDB Admin UI Sink Timings graph - -- In the node view, the graph shows the time in milliseconds per second required by CockroachDB to send messages to the sink as well as the time CockroachDB spent waiting for the sink to flush the messages for all changefeeds. - -- In the cluster view, the graph shows the time in milliseconds per second required by CockroachDB to send messages to the sink as well as the time CockroachDB spent waiting for the sink to flush the messages for all changefeeds across the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -**Message Emit Time** | The time in milliseconds per second required by CockroachDB to send messages to the sink for all changefeeds. -**Flush Time** | The time in milliseconds per second that CockroachDB spent waiting for the sink to flush the messages for all changefeeds. - -## See also - -- [Change Data Capture](change-data-capture.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-cluster-overview-page.md b/src/current/v19.1/admin-ui-cluster-overview-page.md deleted file mode 100644 index a831103d53c..00000000000 --- a/src/current/v19.1/admin-ui-cluster-overview-page.md +++ /dev/null @@ -1,90 +0,0 @@ ---- -title: Cluster Overview Page -toc: true ---- - -The **Cluster Overview** page of the Admin UI provides details of the cluster nodes and their liveness status, replication status, uptime, and key hardware metrics. [Enterprise users](enterprise-licensing.html) can enable and switch to the [Node Map](admin-ui-cluster-overview-page.html#node-map-enterprise) view. - -## Cluster Overview Panel - -CockroachDB Admin UI - -The **Cluster Overview** panel provides the following metrics: - -Metric | Description --------|----
-Capacity Usage | • Used capacity: The storage capacity used by CockroachDB (represented as a percentage of total storage capacity allocated across all nodes).<br>• Usable capacity: The space available for CockroachDB data storage (i.e., the storage capacity of the machine excluding the capacity used by the Cockroach binary, operating system, and other system files). -Node Status | • The number of [live nodes](#live-nodes) in the cluster.<br>• The number of suspect nodes in the cluster. A node is considered suspect if its liveness status is unavailable or the node is in the process of decommissioning.<br>• The number of [dead nodes](#dead-nodes) in the cluster. -Replication Status | • The total number of [ranges](architecture/overview.html#glossary) in the cluster.<br>• The number of [under-replicated ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster.<br>• The number of [unavailable ranges](admin-ui-replication-dashboard.html#review-of-cockroachdb-terminology) in the cluster. A non-zero number indicates an unstable cluster. - -## Node List - -The **Node List** is the default view on the **Overview** page. -CockroachDB Admin UI - -### Live Nodes -Live nodes are nodes that are online and responding. They are marked with a green dot. If a node is removed or dies, the dot turns yellow to indicate that it is not responding. If the node remains unresponsive for a certain amount of time (5 minutes by default), the node turns red and is moved to the [**Dead Nodes**](#dead-nodes) section, indicating that it is no longer expected to come back. - -The following details are shown for each live node: - -Column | Description --------|------------ -ID | The ID of the node. -Address | The address of the node. You can click on the address to view further details about the node. -Uptime | How long the node has been running. -Replicas | The number of replicas on the node. -CPUs | The number of CPU cores on the machine. -Capacity Usage | The storage capacity used by CockroachDB as a percentage of the total usable capacity on the node. The value is represented numerically and as a bar graph. -Mem Usage | The memory used by CockroachDB as a percentage of the total memory on the node. The value is represented numerically and as a bar graph. -Version | The build tag of the CockroachDB version installed on the node. -Logs | Click **Logs** to see detailed logs for the node. [Requires `admin` privileges](admin-ui-overview.html#admin-ui-access) on secure clusters. - -### Dead Nodes - -Nodes are considered dead once they have not responded for a certain amount of time (5 minutes by default). At this point, the automated repair process starts, wherein CockroachDB automatically rebalances replicas from the dead node, using the unaffected replicas as sources. See [Stop a Node](stop-a-node.html#how-it-works) for more information. - -The following details are shown for each dead node: - -Column | Description --------|------------ -ID | The ID of the node. -Address | The address of the node. You can click on the address to view further details about the node. -Down Since | How long the node has been down. - -### Decommissioned Nodes - -Nodes that have been decommissioned for removal from the cluster are listed in the **Decommissioned Nodes** table. - -When you initiate the [decommissioning process](remove-nodes.html#how-it-works) on a node, CockroachDB transfers all range replicas and range leases off the node so that it can be safely shut down. - -## Node Map (Enterprise) - -The **Node Map** is an [enterprise-only](enterprise-licensing.html) feature that gives you a visual representation of the geographical configuration of your cluster. - -CockroachDB Admin UI Summary Panel - -The Node Map consists of the following components: - -### Region component - -CockroachDB Admin UI Summary Panel - -{{site.data.alerts.callout_info}} -For multi-core systems, the user CPU percent can be greater than 100%. Full utilization of one core is considered as 100% CPU usage. If you have n cores, then the user CPU percent can range from 0% (indicating an idle system) to (n*100)% (indicating full utilization). -{{site.data.alerts.end}} - -### Node component - -CockroachDB Admin UI Summary Panel - -{{site.data.alerts.callout_info}} -For multi-core systems, the user CPU percent can be greater than 100%. Full utilization of one core is considered as 100% CPU usage. If you have n cores, then the user CPU percent can range from 0% (indicating an idle system) to (n*100)% (indicating full utilization).
-{{site.data.alerts.end}} - -For guidance on enabling and using the node map, see [Enable Node Map](enable-node-map.html). - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-custom-chart-debug-page.md b/src/current/v19.1/admin-ui-custom-chart-debug-page.md deleted file mode 100644 index 9f0441e7297..00000000000 --- a/src/current/v19.1/admin-ui-custom-chart-debug-page.md +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Custom Chart Debug Page -toc: true ---- - -The **Custom Chart** debug page in the Admin UI can be used to create one or multiple custom charts showing any combination of over [200 available metrics](#available-metrics). - -The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc. - - -## Accessing the **Custom Chart** page - -To access the **Custom Chart** debug page, [access the Admin UI](admin-ui-access-and-navigate.html), and either: - -- Open http://localhost:8080/#/debug/chart in your browser (replacing `localhost` and `8080` with your node's host and port). - -- Click the gear icon on the left to access the **Advanced Debugging Page**. In the **Reports** section, click **Custom TimeSeries Chart**. - -## Using the **Custom Chart** page - -CockroachDB Admin UI - -On the **Custom Chart** page, you can set the time span for all charts, add new custom charts, and customize each chart: - -- To set the time span for the page, use the dropdown menu above the charts and select the desired time span. - -- To add a chart, click **Add Chart** and customize the new chart. - -- To customize each chart, use the **Units** dropdown menu to set the units to display. Then use the table below the chart to select the metrics being queried, and how they'll be combined and displayed. Options include: -{% include {{page.version.version}}/admin-ui-custom-chart-debug-page-00.html %} - -## Examples - -### Query user and system CPU usage - -CockroachDB Admin UI - -To compare system vs. userspace CPU usage, select the following values under **Metric Name**: - -- `sys.cpu.sys.percent` -- `sys.cpu.user.percent` - -The Y-axis label is the **Count**. A count of 1 represents 100% utilization. The **Aggregator** of **Sum** can show the count to be above 1, which would mean CPU utilization is greater than 100%. - -Checking **Per Node** displays statistics for each node, which could show whether an individual node's CPU usage was higher or lower than the average. - -## Available metrics - -{{site.data.alerts.callout_info}} -This list is taken directly from the source code and is subject to change. Some of the metrics listed below are already visible in other areas of the [Admin UI](admin-ui-overview.html). 
-{{site.data.alerts.end}} - -{% include {{page.version.version}}/metric-names.md %} - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-databases-page.md b/src/current/v19.1/admin-ui-databases-page.md deleted file mode 100644 index 39ea705afb4..00000000000 --- a/src/current/v19.1/admin-ui-databases-page.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Database Page -toc: true ---- - -{{site.data.alerts.callout_info}} -On a secure cluster, this area of the Admin UI can only be accessed by an `admin` user. See [Admin UI access](admin-ui-overview.html#admin-ui-access). -{{site.data.alerts.end}} - -The **Databases** page of the Admin UI provides details of the databases configured, the tables in each database, and the grants assigned to each user. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Databases** on the left-hand navigation bar. - - -## Tables view - -The **Tables** view shows details of the system table as well as the tables in your databases. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then select **Databases** from the left-hand navigation bar. - -CockroachDB Admin UI Database Tables View - -The following details are displayed for each table: - -Metric | Description --------|---- -Table Name | The name of the table. -Size | Approximate total disk size of the table across all replicas. -Ranges | The number of ranges in the table. -\# of Columns | The number of columns in the table. -\# of Indices | The number of indices for the table. - -## Grants view - -The **Grants** view shows the [privileges](authorization.html#assign-privileges) granted to users for each database. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), select **Databases** from the left-hand navigation bar, and then select **Grants** from the **View** menu. - -For more details about grants and privileges, see [Grants](grant.html). - -CockroachDB Admin UI Database Grants View - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-debug-pages.md b/src/current/v19.1/admin-ui-debug-pages.md deleted file mode 100644 index 72a19461140..00000000000 --- a/src/current/v19.1/admin-ui-debug-pages.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -title: Advanced Debugging Page -toc: true ---- - -The **Advanced Debugging** page of the Admin UI provides links to advanced monitoring and troubleshooting reports and cluster configuration details. To view the **Advanced Debugging** page, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click the gear icon on the left-hand navigation bar. - -{{site.data.alerts.callout_info}} -These pages are experimental and undocumented. If you find an issue, let us know through [these channels](https://www.cockroachlabs.com/community/). - {{site.data.alerts.end}} - -## License and node information - -On the right side of the page, the following information is displayed: - -- CockroachDB license type: Helps determine if you have access to Enterprise features.
-- Current node ID: Helps identify the current node when viewing the Admin UI through a load balancer. - -## Reports and Configuration - -The following debug reports and configuration views are useful for monitoring and troubleshooting CockroachDB: - -Report | Description | Access level ---------|-----|-------- -[Custom Time Series Chart](admin-ui-custom-chart-debug-page.html) | Create a custom chart of time series data. | All users -Problem Ranges | View ranges in your cluster that are unavailable, underreplicated, slow, or have other problems. | [`admin` users only on secure clusters](admin-ui-overview.html#admin-ui-access) -Network Latency | Check latencies between all nodes in your cluster. | All users -Data Distribution and Zone Configs | View the distribution of table data across nodes and verify zone configuration. | [`admin` users only on secure clusters](admin-ui-overview.html#admin-ui-access) -Cluster Settings | View cluster settings and their configured values. | All users can view data according to their privileges -Localities | Check node localities for your cluster. | [`admin` users only on secure clusters](admin-ui-overview.html#admin-ui-access) - -## Even More Advanced Debugging - -The **Even More Advanced Debugging** section of the page lists additional reports that are largely internal and intended for use by CockroachDB developers. You can ignore this section while monitoring and troubleshooting CockroachDB. Alternatively, if you want to learn how to use these pages, feel free to contact us through [these channels](https://www.cockroachlabs.com/community/). - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-hardware-dashboard.md b/src/current/v19.1/admin-ui-hardware-dashboard.md deleted file mode 100644 index f63177910b1..00000000000 --- a/src/current/v19.1/admin-ui-hardware-dashboard.md +++ /dev/null @@ -1,107 +0,0 @@ ---- -title: Hardware Dashboard -summary: The Hardware dashboard lets you monitor CPU usage, disk throughput, network traffic, storage capacity, and memory. -toc: true ---- - -The **Hardware** dashboard lets you monitor CPU usage, disk throughput, network traffic, storage capacity, and memory. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left, and then select **Dashboard** > **Hardware**. - -The **Hardware** dashboard displays the following time series graphs: - -## CPU Percent - -CockroachDB Admin UI CPU Percent graph - -- In the node view, the graph shows the percentage of CPU in use by the CockroachDB process for the selected node. - -- In the cluster view, the graph shows the percentage of CPU in use by the CockroachDB process across all nodes. - -{{site.data.alerts.callout_info}} -For multi-core systems, the percentage of CPU usage is calculated by normalizing the CPU usage across all cores, whereby 100% utilization indicates that all cores are fully utilized. -{{site.data.alerts.end}} - -## Memory Usage - -CockroachDB Admin UI Memory Usage graph - -- In the node view, the graph shows the memory in use by CockroachDB for the selected node. - -- In the cluster view, the graph shows the memory in use by CockroachDB across all nodes in the cluster. 
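
The CPU and memory figures above are also exported as raw internal metrics, so you can sample them from SQL instead of the dashboard. A minimal sketch, assuming the `crdb_internal.node_metrics` virtual table is available in your build and that the metric names match those listed on the [Custom Chart debug page](admin-ui-custom-chart-debug-page.html):

{% include copy-clipboard.html %}
~~~ sql
> SELECT name, value
  FROM crdb_internal.node_metrics
  WHERE name IN ('sys.cpu.user.percent', 'sys.cpu.sys.percent', 'sys.rss');
~~~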
- -## Disk Read Bytes - -CockroachDB Admin UI Disk Read Bytes graph - -- In the node view, the graph shows the 10-second average of the number of bytes read per second by all processes, including CockroachDB, for the selected node. - -- In the cluster view, the graph shows the 10-second average of the number of bytes read per second by all processes, including CockroachDB, across all nodes. - -## Disk Write Bytes - -CockroachDB Admin UI Disk Write Bytes graph - -- In the node view, the graph shows the 10-second average of the number of bytes written per second by all processes, including CockroachDB, for the node. - -- In the cluster view, the graph shows the 10-second average of the number of bytes written per second by all processes, including CockroachDB, across all nodes. - -## Disk Read Ops - -CockroachDB Admin UI Disk Read Ops graph - -- In the node view, the graph shows the 10-second average of the number of disk read ops per second for all processes, including CockroachDB, for the selected node. - -- In the cluster view, the graph shows the 10-second average of the number of disk read ops per second for all processes, including CockroachDB, across all nodes. - -## Disk Write Ops - -CockroachDB Admin UI Disk Write Ops graph - -- In the node view, the graph shows the 10-second average of the number of disk write ops per second for all processes, including CockroachDB, for the node. - -- In the cluster view, the graph shows the 10-second average of the number of disk write ops per second for all processes, including CockroachDB, across all nodes. - -## Disk IOPS in Progress - -CockroachDB Admin UI Disk IOPS in Progress graph - -- In the node view, the graph shows the number of disk reads and writes in queue for all processes, including CockroachDB, for the selected node. - -- In the cluster view, the graph shows the number of disk reads and writes in queue for all processes, including CockroachDB, across all nodes in the cluster. - -{{site.data.alerts.callout_info}} -For Mac OS, this graph is not populated and shows zero disk IOPS in progress. This is a [known limitation](https://github.com/cockroachdb/cockroach/issues/27927) that may be lifted in the future. -{{site.data.alerts.end}} - -## Available Disk Capacity - -CockroachDB Admin UI Disk Capacity graph - -- In the node view, the graph shows the available storage capacity for the selected node. - -- In the cluster view, the graph shows the available storage capacity across all nodes in the cluster. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -## Network Bytes Received - -CockroachDB Admin UI Network Bytes Received graph - -- In the node view, the graph shows the 10-second average of the number of network bytes received per second for all processes, including CockroachDB, for the node. - -- In the cluster view, the graph shows the 10-second average of the number of network bytes received for all processes, including CockroachDB, per second across all nodes. - -## Network Bytes Sent - -CockroachDB Admin UI Network Bytes Sent graph - -- In the node view, the graph shows the 10-second average of the number of network bytes sent per second by all processes, including CockroachDB, for the node. - -- In the cluster view, the graph shows the 10-second average of the number of network bytes sent per second by all processes, including CockroachDB, across all nodes. 
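
Per-store capacity can also be checked from SQL, which can help when a node's Admin UI is unreachable. A minimal sketch, assuming the `crdb_internal.kv_store_status` virtual table exposes `capacity`, `available`, and `used` columns in your build:

{% include copy-clipboard.html %}
~~~ sql
> SELECT node_id, store_id, capacity, available, used
  FROM crdb_internal.kv_store_status;
~~~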
- -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-jobs-page.md b/src/current/v19.1/admin-ui-jobs-page.md deleted file mode 100644 index 7ce1e5fe89f..00000000000 --- a/src/current/v19.1/admin-ui-jobs-page.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Jobs Page -toc: true ---- - -{{site.data.alerts.callout_info}} -On a secure cluster, this area of the Admin UI can only be accessed by an `admin` user. See [Admin UI access](admin-ui-overview.html#admin-ui-access). -{{site.data.alerts.end}} - -The **Jobs** page of the Admin UI provides details about the backup/restore jobs, schema changes, [user-created table statistics](create-statistics.html) and [automatic table statistics](cost-based-optimizer.html#table-statistics) jobs, and changefeeds performed across all nodes in the cluster. To view these details, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Jobs** on the left-hand navigation bar. - - -## Job details - -The **Jobs** table displays the ID, description, user, creation time, and status of each backup and restore job, schema changes, user-created table statistics and automatic table statistics jobs, and changefeeds performed across all nodes in the cluster. To view the job's full description, click the drop-down arrow in the first column. - -CockroachDB Admin UI Jobs Page - -For changefeeds, the table displays a [high-water timestamp that advances as the changefeed progresses](change-data-capture.html#monitor-a-changefeed). This is a guarantee that all changes before or at the timestamp have been emitted. Hover over the high-water timestamp to view the [system time](as-of-system-time.html). - -The automatic table statistics jobs are not displayed even when the **TYPE** drop-down is set to **All**. To view the automatic statistics creation jobs, filter the results to **Automatic-Statistics Creation** as described in the [Filtering results](#filtering-results) section. - -## Filtering results - -You can filter the results based on the status of the jobs or the type of jobs (backups, restores, schema changes, changefeeds, user-created table statistics, and automatic table statistics). You can also choose to view either the latest 50 jobs or all the jobs across all nodes. - -Filter By | Description -----------|------------ -Job Status | From the **Status** menu, select the required status filter. -Job Type | From the **Type** menu, select **Backups**, **Restores**, **Imports**, **Schema Changes**, **Changefeed**, **Statistics Creation**, or **Auto-Statistics Creation**. -Jobs Shown | From the **Show** menu, select **First 50** or **All**. - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-overview-dashboard.md b/src/current/v19.1/admin-ui-overview-dashboard.md deleted file mode 100644 index d5373f82886..00000000000 --- a/src/current/v19.1/admin-ui-overview-dashboard.md +++ /dev/null @@ -1,72 +0,0 @@ ---- -title: Overview Dashboard -summary: The Overview dashboard lets you monitor important SQL performance, replication, and storage metrics. -toc: true ---- - -The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. 
To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default. - - -The **Overview** dashboard displays the following time series graphs: - -## SQL Queries - -CockroachDB Admin UI SQL Queries graph - -- In the node view, the graph shows the 10-second average of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -## Service Latency: SQL, 99th percentile - -CockroachDB Admin UI Service Latency graph - -Service latency is calculated as the time between when the cluster receives a query and finishes executing the query. This time does not include returning results to the client. - -- In the node view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the node. - -- In the cluster view, the graph shows the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency across all nodes in the cluster. - -## Replicas per Node - -CockroachDB Admin UI Replicas per node graph - -Ranges are subsets of your data, which are replicated to ensure survivability. Ranges are replicated to a configurable number of CockroachDB nodes. - -- In the node view, the graph shows the number of range replicas on the selected node. - -- In the cluster view, the graph shows the number of range replicas on each node in the cluster. - -For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html). - -{{site.data.alerts.callout_info}} -The timeseries data used to power the graphs in the Admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this [FAQ](operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes). -{{site.data.alerts.end}} - -## Capacity - -CockroachDB Admin UI Capacity graph - -You can monitor the **Capacity** graph to determine when additional storage is needed. - -- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node. - -- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -**Capacity** | The maximum storage capacity allocated to CockroachDB. You can configure the maximum storage capacity for a given node using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store). -**Available** | The free storage capacity available to CockroachDB. -**Used** | Disk space used by the data in the CockroachDB store. 
Note that this value is less than (**Capacity** - **Available**) because the **Capacity** and **Available** metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the **Used** metric tracks only the store's disk usage. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-overview.md b/src/current/v19.1/admin-ui-overview.md deleted file mode 100644 index fa6fd5e6e90..00000000000 --- a/src/current/v19.1/admin-ui-overview.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: Admin UI Overview -summary: Use the Admin UI to monitor and optimize cluster performance. -toc: true -key: explore-the-admin-ui.html ---- - -The CockroachDB Admin UI provides details about your cluster and database configuration, and helps you optimize cluster performance. - -## Admin UI areas - -Area | Description --------|---- -[Node Map](enable-node-map.html) | View and monitor the metrics and geographical configuration of your cluster. -[Cluster Health](admin-ui-access-and-navigate.html#summary-panel) | View essential metrics about the cluster's health, such as the number of live, dead, and suspect nodes, the number of unavailable ranges, and the queries per second and service latency across the cluster. -[Overview Metrics](admin-ui-overview-dashboard.html) | View important SQL performance, replication, and storage metrics. -[Hardware Metrics](admin-ui-hardware-dashboard.html) | View metrics about CPU usage, disk throughput, network traffic, storage capacity, and memory. -[Runtime Metrics](admin-ui-runtime-dashboard.html) | View metrics about node count, CPU time, and memory usage. -[SQL Performance](admin-ui-sql-dashboard.html) | View metrics about SQL connections, byte traffic, queries, transactions, and service latency. -[Storage Utilization](admin-ui-storage-dashboard.html) | View metrics about storage capacity and file descriptors. -[Replication Details](admin-ui-replication-dashboard.html) | View metrics about how data is replicated across the cluster, such as range status, replicas per store, and replica quiescence. -[Changefeed Details](admin-ui-cdc-dashboard.html) | View metrics about the [changefeeds](change-data-capture.html) created across your cluster. -[Nodes Details](admin-ui-access-and-navigate.html#summary-panel) | View details of live, dead, and decommissioned nodes. -[Events](admin-ui-access-and-navigate.html#events-panel) | View a list of recent cluster events. -[Database Details](admin-ui-databases-page.html) | View details about the system and user databases in the cluster. -[Statements Details](admin-ui-statements-page.html) | Identify frequently executed or high latency [SQL statements](sql-statements.html). -[Jobs Details](admin-ui-jobs-page.html) | View details of the jobs running in the cluster. -[Advanced Debugging Pages](admin-ui-debug-pages.html) | View advanced monitoring and troubleshooting reports. These include details about data distribution, the state of specific queues, and slow query metrics. These details are largely intended for use by CockroachDB developers. - -## Admin UI access - -On insecure clusters, all areas of the Admin UI are accessible to all users.
- -On secure clusters, certain areas of the Admin UI can only be accessed by [`admin` users](authorization.html#terminology). These areas display information from privileged HTTP endpoints that operate with `admin` privilege. - -For security reasons, non-admin users access only the data over which they have privileges (e.g., their tables and list of sessions), and data that does not require privileges (e.g., cluster health, node status, metrics). - -{{site.data.alerts.callout_info}} -The default `root` user is a member of the `admin` role, but on CockroachDB clusters prior to v20.1, the Admin UI cannot be accessed by `root`. To access the secure Admin UI areas, [grant a user membership to the `admin` role](grant-roles.html): - -GRANT admin TO \<username\>; -{{site.data.alerts.end}} - -Secure area | Privileged information -----|----- -[Node Map](enable-node-map.html) | Database and table names -[Database Details](admin-ui-databases-page.html) | Stored table data -[Statements Details](admin-ui-statements-page.html) | SQL statements -[Jobs Details](admin-ui-jobs-page.html) | SQL statements and operational details -[Advanced Debugging Pages](admin-ui-debug-pages.html) (some reports) | Stored table data, operational details, internal IP addresses, names, credentials, application data (depending on report) - -{{site.data.alerts.callout_info}} -By default, the Admin UI shares anonymous usage details with Cockroach Labs. For information about the details shared and how to opt out of reporting, see [Diagnostics Reporting](diagnostics-reporting.html). -{{site.data.alerts.end}} - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-replication-dashboard.md b/src/current/v19.1/admin-ui-replication-dashboard.md deleted file mode 100644 index 46b744f58ff..00000000000 --- a/src/current/v19.1/admin-ui-replication-dashboard.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: Replication Dashboard -summary: The Replication dashboard lets you monitor the replication metrics for your cluster. -toc: true ---- - -The **Replication** dashboard in the CockroachDB Admin UI enables you to monitor the replication metrics for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Replication**. - - -## Review of CockroachDB terminology - -- **Range**: CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -- **Range Lease:** For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. -- **Under-replicated Ranges:** When a cluster is first initialized, the few default starting ranges will only have a single replica, but as soon as other nodes are available, they will replicate to them until they've reached their desired replication factor, the default being 3. If a range does not have enough replicas, the range is said to be "under-replicated".
-- **Unavailable Ranges:** If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries. - -For more details, see [Scalable SQL Made Easy: How CockroachDB Automates Operations](https://www.cockroachlabs.com/blog/automated-rebalance-and-repair/). - -## Replication dashboard - -The **Replication** dashboard displays the following time series graphs: - -### Ranges - -CockroachDB Admin UI Replicas per Store - -The **Ranges** graph shows you various details about the status of ranges. - -- In the node view, the graph shows details about ranges on the node. - -- In the cluster view, the graph shows details about ranges across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -Ranges | The number of ranges. -Leaders | The number of ranges with leaders. If the number does not match the number of ranges for a long time, troubleshoot your cluster. -Lease Holders | The number of ranges that have leases. -Leaders w/o Leases | The number of Raft leaders without leases. If the number is non-zero for a long time, troubleshoot your cluster. -Unavailable | The number of unavailable ranges. If the number is non-zero for a long time, troubleshoot your cluster. -Under-replicated | The number of under-replicated ranges. - -### Replicas Per Store - -CockroachDB Admin UI Replicas per Store - -- In the node view, the graph shows the number of range replicas on the store. - -- In the cluster view, the graph shows the number of range replicas on each store. - -You can [configure replication zones](configure-replication-zones.html) to set the number and location of replicas. You can monitor the configuration changes using the Admin UI, as described in [Fault tolerance and recovery](demo-fault-tolerance-and-recovery.html). - -### Replica Quiescence - -CockroachDB Admin UI Replica Quiescence - -- In the node view, the graph shows the number of replicas on the node. - -- In the cluster view, the graph shows the number of replicas across all nodes. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -Replicas | The number of replicas. -Quiescent | The number of replicas that haven't been accessed for a while. - -### Snapshots - -CockroachDB Admin UI Replica Snapshots - -Usually the nodes in a [Raft group](architecture/replication-layer.html#raft) stay synchronized by following along the log message by message. However, if a node is far enough behind the log (e.g., if it was offline or is a new node getting up to speed), rather than send all the individual messages that changed the range, the cluster can send it a snapshot of the range and it can start following along from there. Commonly this is done preemptively, when the cluster can predict that a node will need to catch up, but occasionally the Raft protocol itself will request the snapshot. - -Metric | Description -------|------------ -Generated | The number of snapshots created per second. -Applied (Raft-initiated) | The number of snapshots applied to nodes per second that were initiated within Raft. -Applied (Preemptive) | The number of snapshots applied to nodes per second that were anticipated ahead of time (e.g., because a node was about to be added to a Raft group). -Reserved | The number of slots reserved per second for incoming snapshots that will be sent to a node.
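
As noted under [**Replicas Per Store**](#replicas-per-store), the replica counts on these graphs follow your zone configurations. A minimal sketch of raising the default replication factor from 3 to 5, after which the **Replicas per Store** graph should climb as the cluster rebalances:

{% include copy-clipboard.html %}
~~~ sql
> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
~~~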
- -### Other graphs - -The **Replication** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Leaseholders per Store -- Average Queries per Store -- Logical Bytes per Store -- Range Operations - -For monitoring CockroachDB, it is sufficient to use the [**Ranges**](#ranges), [**Replicas per Store**](#replicas-per-store), and [**Replica Quiescence**](#replica-quiescence) graphs. - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-runtime-dashboard.md b/src/current/v19.1/admin-ui-runtime-dashboard.md deleted file mode 100644 index 9dd6c964039..00000000000 --- a/src/current/v19.1/admin-ui-runtime-dashboard.md +++ /dev/null @@ -1,75 +0,0 @@ ---- -title: Runtime Dashboard -toc: true ---- - -The **Runtime** dashboard in the CockroachDB Admin UI lets you monitor runtime metrics for your cluster, such as node count, memory usage, and CPU time. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Runtime**. - - -The **Runtime** dashboard displays the following time series graphs: - -## Live Node Count - -CockroachDB Admin UI Node Count - -In the node view as well as the cluster view, the graph shows the number of live nodes in the cluster. - -A dip in the graph indicates decommissioned nodes, dead nodes, or nodes that are not responding. To troubleshoot the dip in the graph, refer to the [Summary panel](admin-ui-access-and-navigate.html#summary-panel). - -## Memory Usage - -CockroachDB Admin UI Memory Usage - -- In the node view, the graph shows the memory in use for the selected node. - -- In the cluster view, the graph shows the memory in use across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -RSS | Total memory in use by CockroachDB. -Go Allocated | Memory allocated by the Go layer. -Go Total | Total memory managed by the Go layer. -CGo Allocated | Memory allocated by the C layer. -CGo Total | Total memory managed by the C layer. - -{{site.data.alerts.callout_info}}If Go Total or CGo Total fluctuates or grows steadily over time, contact us.{{site.data.alerts.end}} - -## CPU Time - -CockroachDB Admin UI CPU Time - - -- In the node view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations for the selected node. -- In the cluster view, the graph shows the [CPU time](https://en.wikipedia.org/wiki/CPU_time) used by CockroachDB user and system-level operations across all nodes in the cluster. - -On hovering over the CPU Time graph, the values for the following metrics are displayed: - -Metric | Description --------|---- -User CPU Time | Total CPU seconds per second used by the CockroachDB process across all nodes. -Sys CPU Time | Total CPU seconds per second used for CockroachDB system-level operations across all nodes. - -## Clock Offset - -CockroachDB Admin UI Clock Offset - -- In the node view, the graph shows the mean clock offset of the node against the rest of the cluster. -- In the cluster view, the graph shows the mean clock offset of each node against the rest of the cluster.
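
If the **Live Node Count** graph dips, you can cross-check node liveness from SQL as well. A minimal sketch, assuming the `crdb_internal.gossip_liveness` virtual table and these column names are available in your build:

{% include copy-clipboard.html %}
~~~ sql
> SELECT node_id, epoch, draining, decommissioning, expiration
  FROM crdb_internal.gossip_liveness;
~~~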
- -## Other graphs - -The **Runtime** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Goroutine Count -- GC Runs -- GC Pause Time - -For monitoring CockroachDB, it is sufficient to use the [**Live Node Count**](#live-node-count), [**Memory Usage**](#memory-usage), [**CPU Time**](#cpu-time), and [**Clock Offset**](#clock-offset) graphs. - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-sql-dashboard.md b/src/current/v19.1/admin-ui-sql-dashboard.md deleted file mode 100644 index eca960e3a13..00000000000 --- a/src/current/v19.1/admin-ui-sql-dashboard.md +++ /dev/null @@ -1,82 +0,0 @@ ---- -title: SQL Dashboard -summary: The SQL dashboard lets you monitor the performance of your SQL queries. -toc: true ---- - -The **SQL** dashboard in the CockroachDB Admin UI lets you monitor the performance of your SQL queries. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **SQL**. - - -The **SQL** dashboard displays the following time series graphs: - -## SQL Connections - -CockroachDB Admin UI SQL Connections - -- In the node view, the graph shows the number of connections currently open between the client and the selected node. - -- In the cluster view, the graph shows the total number of SQL client connections to all nodes combined. - -## SQL Byte Traffic - -CockroachDB Admin UI SQL Byte Traffic - -The **SQL Byte Traffic** graph helps you correlate SQL query count to byte traffic, especially in bulk data inserts or analytic queries that return data in bulk. - -- In the node view, the graph shows the current byte throughput (bytes/second) between all the currently connected SQL clients and the node. - -- In the cluster view, the graph shows the aggregate client throughput across all nodes. - -## SQL Queries - -CockroachDB Admin UI SQL Queries - -- In the node view, the graph shows the 10-second average of the number of `SELECT`/`INSERT`/`UPDATE`/`DELETE` queries per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current query load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -## SQL Query Errors - -CockroachDB Admin UI SQL Query Errors - -- In the node view, the graph shows the 10-second average of the number of SQL statements issued to the node that returned a [planning](architecture/sql-layer.html#sql-parser-planner-executor), [runtime](architecture/sql-layer.html#sql-parser-planner-executor), or [retry error](transactions.html#error-handling). - -- In the cluster view, the graph shows the 10-second average of the number of SQL statements that returned a [planning](architecture/sql-layer.html#sql-parser-planner-executor), [runtime](architecture/sql-layer.html#sql-parser-planner-executor), or [retry error](transactions.html#error-handling) across all nodes. - -## Service Latency: SQL, 99th percentile - -CockroachDB Admin UI Service Latency - -Service latency is calculated as the time between when the cluster receives a query and when it finishes executing the query. This time does not include returning results to the client.
- -- In the node view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for the selected node. - -- In the cluster view, the graph displays the 99th [percentile](https://en.wikipedia.org/wiki/Percentile#The_normal_distribution_and_percentiles) of service latency for each node in the cluster. - -## Transactions - -CockroachDB Admin UI Transactions - -- In the node view, the graph shows the 10-second average of the number of opened, committed, aborted, and rolled back [transactions](transactions.html) per second issued by SQL clients on the node. - -- In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current [transactions](transactions.html) load over the cluster, assuming the last 10 seconds of activity per node are representative of this load. - -If the graph shows excessive aborts or rollbacks, it might indicate issues with the SQL queries. In that case, re-examine queries to lower contention. - -## Other graphs - -The **SQL** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Execution Latency -- Active Distributed SQL Queries -- Active Flows for Distributed SQL Queries -- Service Latency: DistSQL -- Schema Changes - -For monitoring CockroachDB, it is sufficient to use the [**SQL Connections**](#sql-connections), [**SQL Byte Traffic**](#sql-byte-traffic), [**SQL Queries**](#sql-queries), [**Service Latency**](#service-latency-sql-99th-percentile), and [**Transactions**](#transactions) graphs. - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-statements-page.md b/src/current/v19.1/admin-ui-statements-page.md deleted file mode 100644 index 95a0c988b2c..00000000000 --- a/src/current/v19.1/admin-ui-statements-page.md +++ /dev/null @@ -1,139 +0,0 @@ ---- -title: Statements Page -toc: true ---- - -{{site.data.alerts.callout_info}} -On a secure cluster, this area of the Admin UI can only be accessed by an `admin` user. See [Admin UI access](admin-ui-overview.html#admin-ui-access). -{{site.data.alerts.end}} - -The **Statements** page helps you identify frequently executed or high latency [SQL statements](sql-statements.html). The **Statements** page also allows you to view the details of an individual SQL statement by clicking the statement to open the [**Statement Details** page](#statement-details-page). - -To view the **Statements** page, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and then click **Statements** on the left. - -CockroachDB Admin UI Statements Page - -## Limitation - -The **Statements** page displays the details of the SQL statements executed within a specified time interval. At the end of the interval, the display is wiped clean, and you will not see any statements on the page until the next set of statements is executed. By default, the time interval is set to one hour; however, you can customize the interval using the [`diagnostics.reporting.interval`](cluster-settings.html#settings) cluster setting. - -## Filtering by application - -If you have multiple applications running on the cluster, the **Statements** page shows the statements from all of the applications by default.
To view the statements pertaining to a particular application, select the [application name](connection-parameters.html#additional-connection-parameters) from the **App** dropdown menu. If you haven't set the application name in the connection string, it appears as `unset` in the dropdown menu. - -## Understanding the Statements page - -### SQL statement fingerprint - -The **Statements** page displays the details of SQL statement fingerprints instead of individual SQL statements. - -A statement fingerprint is a group of similar SQL statements in their abstracted form, with the literal values replaced by underscores (`_`). Grouping similar SQL statements as fingerprints helps you quickly identify frequently executed SQL statements and their latencies. - -A statement fingerprint is generated when two or more statements are the same after any literal values in them (e.g., numbers and strings) are replaced with underscores. For example, the following statements have the same fingerprint once their numbers have been replaced with underscores: - -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (192, 891, 20)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (784, 452, 78)` - -Thus, they can have the same fingerprint: - -`INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (_, _, _)` - -The following statements are different enough to not have the same fingerprint: - -- `INSERT INTO orders(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES (380, 11, 11098)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, 11, 11098)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, $2, 11098)` -- `INSERT INTO new_order(product_id, customer_id, transaction_id) VALUES ($1, $2, $3)` - -### Parameters - -The **Statements** page displays the time, execution count, number of [retries](transactions.html#transaction-retries), number of rows affected, and latency for each statement fingerprint. By default, the statement fingerprints are sorted by time; however, you can sort the table by execution count, retries, rows affected, and latency. - -The following details are provided for each statement fingerprint: - -Parameter | Description ------|------------ -Statement | The SQL statement or the fingerprint of similar SQL statements.<br><br>

      To view additional details of a statement fingerprint, click on the statement fingerprint in the **Statement** column to see the [**Statement Details** page](#statement-details-page). -Time | The cumulative time taken to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Execution Count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed within the last hour or the [specified time interval](#limitation).

      The execution count is displayed as a numerical value as well as a horizontal bar. The bar is color-coded to indicate the ratio of runtime success (blue) to runtime failure (red) for the fingerprint's execution count. The bar also helps you compare the execution count across all SQL fingerprints in the table.<br><br>

      You can sort the table by count. -Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Rows Affected | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).

      The number of rows returned is represented in two ways: The numerical value shows the number of rows returned, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of the number of rows returned). The bar helps you compare the mean rows across all SQL fingerprints in the table.<br><br>

      You can sort the table by rows returned. -Latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).

      The latency is represented in two ways: The numerical value shows the mean latency, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of latency). The bar also helps you compare the mean latencies across all SQL fingerprints in the table.

      You can sort the table by latency. - -## Statement Details page - -The **Statement Details** page displays the logical plan as well as the details of the time, execution count, retries, rows returned, and latency by phase and by gateway node for the selected statement fingerprint. - -CockroachDB Admin UI Statements Page - -### Logical plan - -New in v19.1 The **Logical Plan** section displays CockroachDB's query plan for an [explainable statement](sql-grammar.html#preparable_stmt). You can then use this information to optimize the query. For more information about logical plans, see [`EXPLAIN`](explain.html). - -By default, the logical plan for each fingerprint is sampled every 5 minutes. You can use the `sql.metrics.statement_details.plan_collection.period` [cluster setting](cluster-settings.html) to change this time interval. For example, to change the interval to 2 minutes, run the following [`SET CLUSTER SETTING`](set-cluster-setting.html) command: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING sql.metrics.statement_details.plan_collection.period = '2m0s'; -~~~ - -### Latency by Phase - -The **Latency by Phase** table provides the mean value and one standard deviation of the mean value of the overall service latency as well as latency for each execution phase (parse, plan, run) for the SQL statement (or multiple statements having the same fingerprint). The table provides the service latency details in numerical values as well as color-coded bar graphs: blue indicates the mean value and yellow indicates one standard deviation of the mean value of latency. - -### Statistics by Gateway Node - -The **Statistics by Gateway Node** table provides a breakdown of the number of statements of the selected fingerprint per gateway node. For each gateway node, the table also provides the following details: - -Parameter | Description ------|------------ -Node | The ID of the gateway node. -Time | The cumulative time taken to execute the statement within the last hour or the [specified time interval](#limitation). -Execution Count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed. -Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Rows Affected | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).

      The number of rows returned is represented in two ways: The numerical value shows the number of rows returned, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value of the number of rows returned). The bar helps you compare the mean rows across all SQL fingerprints in the table.<br><br>

      You can sort the table by rows returned. -Latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation).

      The latency is represented in two ways: The numerical value shows the mean latency, while the horizontal bar is color-coded (blue indicates the mean value and yellow indicates one standard deviation of the mean value). The bar also helps you compare the mean latencies across all SQL fingerprints in the table.

      You can sort the table by latency. - -### Execution Count - -The **Execution Count** table provides information about the following parameters in numerical values as well as bar graphs: - -Parameter | Description ------|------------ -First Attempts | The cumulative number of first attempts to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Retries | The cumulative number of retries to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Max Retries | The highest number of retries for a single SQL statement with this fingerprint within the last hour or the [specified time interval](#limitation).

      For example, if three statements having the same fingerprint had to be retried 0, 1, and 5 times, then the Max Retries value for the fingerprint is 5. -Total | The total number of executions of statements with this fingerprint. It is calculated as the sum of first attempts and cumulative retries. - -### Row Count - -The **Row Count** table provides the mean value and one standard deviation of the mean value of the cumulative count of rows returned by the SQL statement (or multiple statements having the same fingerprint). The table provides the row count details in numerical values as well as a bar graph. - -### Statistics - -The statistics box on the right-hand side of the **Statement Details** page provides the following details for the statement fingerprint: - -Parameter | Description ------|------------ -Total time | The cumulative time taken to execute the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Execution count | The total number of times the SQL statement (or multiple statements having the same fingerprint) is executed within the last hour or the [specified time interval](#limitation). -Executed without retry | The percentage of successful executions of the SQL statement (or multiple statements having the same fingerprint) on the first attempt within the last hour or the [specified time interval](#limitation). -Mean service latency | The average service latency of the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). -Mean number of rows | The average number of rows returned while executing the SQL statement (or multiple statements having the same fingerprint) within the last hour or the [specified time interval](#limitation). - -The table below the statistics box provides the following details: - -Parameter | Description ------|------------ -App | Name of the application specified by the [`application_name`](show-vars.html#supported-variables) session setting. The **Statement Details** page shows the details for this application. -Distributed execution? | Indicates whether the statement execution was distributed. -Used cost-based optimizer? | Indicates whether the statement (or multiple statements having the same fingerprint) was executed using the [cost-based optimizer](cost-based-optimizer.html). -Failed? | Indicates whether the statement (or multiple statements having the same fingerprint) executed successfully. - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/admin-ui-storage-dashboard.md b/src/current/v19.1/admin-ui-storage-dashboard.md deleted file mode 100644 index 16e8867ea78..00000000000 --- a/src/current/v19.1/admin-ui-storage-dashboard.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Storage Dashboard -summary: The Storage dashboard lets you monitor the storage utilization for your cluster. -toc: true ---- - -The **Storage** dashboard in the CockroachDB Admin UI lets you monitor the storage utilization for your cluster. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui), click **Metrics** on the left-hand navigation bar, and then select **Dashboard** > **Storage**.
- - -The **Storage** dashboard displays the following time series graphs: - -## Capacity - -CockroachDB Admin UI Capacity graph - -You can monitor the **Capacity** graph to determine when additional storage is needed. - -- In the node view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB for the selected node. - -- In the cluster view, the graph shows the maximum allocated capacity, available storage capacity, and capacity used by CockroachDB across all nodes in the cluster. - -On hovering over the graph, the values for the following metrics are displayed: - -Metric | Description ---------|---- -**Capacity** | The maximum storage capacity allocated to CockroachDB. You can configure the maximum storage capacity for a given node using the `--store` flag. For more information, see [Start a Node](start-a-node.html#store). -**Available** | The free storage capacity available to CockroachDB. -**Used** | Disk space used by the data in the CockroachDB store. Note that this value is less than (**Capacity** - **Available**) because the **Capacity** and **Available** metrics consider the entire disk and all applications on the disk, including CockroachDB, whereas the **Used** metric tracks only the store's disk usage. - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/misc/available-capacity-metric.md %} -{{site.data.alerts.end}} - -## File Descriptors - -CockroachDB Admin UI File Descriptors - -- In the node view, the graph shows the number of open file descriptors for that node, compared with the file descriptor limit. - -- In the cluster view, the graph shows the number of open file descriptors across all nodes, compared with the file descriptor limit. - -If the Open count is almost equal to the Limit count, increase [File Descriptors](recommended-production-settings.html#file-descriptors-limit). - -{{site.data.alerts.callout_info}} -If you are running multiple nodes on a single machine (not recommended), the machine's open file descriptors are counted against each node. Thus the Open count displayed in the Admin UI is the actual number of open file descriptors multiplied by the number of nodes, compared with the file descriptor limit. -{{site.data.alerts.end}} - -For Windows systems, you can ignore the File Descriptors graph because the concept of file descriptors is not applicable to Windows. - -## Other graphs - -The **Storage** dashboard shows other time series graphs that are important for CockroachDB developers: - -- Live Bytes -- Log Commit Latency -- Command Commit Latency -- RocksDB Read Amplification -- RocksDB SSTables -- Time Series Writes -- Time Series Bytes Written - -For monitoring CockroachDB, it is sufficient to use the [**Capacity**](#capacity) and [**File Descriptors**](#file-descriptors) graphs.
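-To spot-check these values outside the Admin UI, the following is a minimal sketch; it assumes the `crdb_internal.kv_store_status` virtual table and its `capacity`, `available`, and `used` columns:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT node_id, store_id, capacity, available, used FROM crdb_internal.kv_store_status;
-~~~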
- -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) -- [Raw Status Endpoints](monitoring-and-alerting.html#raw-status-endpoints) diff --git a/src/current/v19.1/advanced-client-side-transaction-retries.md b/src/current/v19.1/advanced-client-side-transaction-retries.md deleted file mode 100644 index 6657a4c0385..00000000000 --- a/src/current/v19.1/advanced-client-side-transaction-retries.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -title: Advanced Client-side Transaction Retries -summary: Advanced client-side transaction retry features for library authors -toc: true ---- - -This page has instructions for authors of [database drivers and ORMs](install-client-drivers.html) who would like to implement client-side retries in their database driver or ORM for maximum efficiency and ease of use by application developers. - -{{site.data.alerts.callout_info}} -If you are an application developer who needs to implement an application-level retry loop, see the [Client-side intervention example](transactions.html#client-side-intervention-example). -{{site.data.alerts.end}} - -## Overview - -To improve the performance of transactions that fail due to contention, CockroachDB includes a set of statements (listed below) that let you retry those transactions. Retrying transactions using these statements has the following benefits: - -1. When you use savepoints, you "hold your place in line" between attempts. Without savepoints, you're starting from scratch every time. -2. Transactions increase their priority each time they're retried, increasing the likelihood they will succeed. This has a lesser effect than #1. - -## How transaction retries work - -A retryable transaction goes through the process described below, which maps to the following SQL statements: - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; -- #1 -> SAVEPOINT cockroach_restart; -- #2 --- ... various transaction statements ... -- #3 -> RELEASE SAVEPOINT cockroach_restart; -- #5 (Or #4, ROLLBACK, in case of retry error) -> COMMIT; -~~~ - -1. The transaction starts with the [`BEGIN`](begin-transaction.html) statement. - -2. The [`SAVEPOINT`](savepoint.html) statement declares the intention to retry the transaction in the case of contention errors. Note that CockroachDB's savepoint implementation does not support all savepoint functionality, such as nested transactions. It must be executed after [`BEGIN`](begin-transaction.html) but before the first statement that manipulates a database. - -3. The statements in the transaction are executed. - -4. If a statement returns a retry error (identified via the `40001` error code or `"retry transaction"` string at the start of the error message), you can issue the [`ROLLBACK TO SAVEPOINT`](rollback-transaction.html) statement to restart the transaction and increase the transaction's priority. Alternately, the original [`SAVEPOINT`](savepoint.html) statement can be reissued to restart the transaction. - - You must now issue the statements in the transaction again. - - In cases where you do not want the application to retry the transaction, you can simply issue [`ROLLBACK`](rollback-transaction.html) at this point. Any other statements will be rejected by the server, as is generally the case after an error has been encountered and the transaction has not been closed. - -5. Once the transaction executes all statements without encountering contention errors, execute [`RELEASE SAVEPOINT`](release-savepoint.html) to commit the changes. 
If this succeeds, all changes made by the transaction become visible to subsequent transactions and are guaranteed to be durable if a crash occurs. - - In some cases, the [`RELEASE SAVEPOINT`](release-savepoint.html) statement itself can fail with a retry error, mainly because transactions in CockroachDB only realize that they need to be restarted when they attempt to commit. If this happens, the retry error is handled as described in step 4. - -## Customizing the savepoint name - -{% include {{ page.version.version }}/misc/customizing-the-savepoint-name.md %} - -## Examples - -For examples showing how to use [`SAVEPOINT`](savepoint.html) and the other statements described on this page to implement library support for a programming language, see the following: - -- [Build a Java app with CockroachDB](build-a-java-app-with-cockroachdb.html), in particular the logic in the `runSQL` method. -- The source code of the [sqlalchemy-cockroachdb](https://github.com/cockroachdb/sqlalchemy-cockroachdb) adapter for SQLAlchemy. - -## See also - -- [Transactions](transactions.html) -- [`BEGIN`](begin-transaction.html) -- [`COMMIT`](commit-transaction.html) -- [`ROLLBACK`](rollback-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`SHOW`](show-vars.html) -- [CockroachDB Architecture: Transaction Layer](architecture/transaction-layer.html) diff --git a/src/current/v19.1/alter-column.md b/src/current/v19.1/alter-column.md deleted file mode 100644 index 6461c315989..00000000000 --- a/src/current/v19.1/alter-column.md +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: ALTER COLUMN -summary: Use the ALTER COLUMN statement to set, change, or drop a column's DEFAULT constraint or to drop the NOT NULL constraint. -toc: true ---- - -The `ALTER COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and sets, changes, or drops a column's [`DEFAULT` constraint](default-value.html) or drops the [`NOT NULL` constraint](not-null.html). - -{{site.data.alerts.callout_info}} -To manage other constraints, see [`ADD CONSTRAINT`](add-constraint.html) and [`DROP CONSTRAINT`](drop-constraint.html). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
      -{% include {{ page.version.version }}/sql/diagrams/alter_column.html %} -
      - ## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table with the column you want to modify. | -| `column_name` | The name of the column you want to modify. | -| `a_expr` | The new [Default Value](default-value.html) you want to use. | - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Set or change a `DEFAULT` value - -Setting the [`DEFAULT` value constraint](default-value.html) inserts the value when data is written to the table without explicitly defining the value for the column. If the column already has a `DEFAULT` value set, you can use this statement to change it. - -The example below inserts the Boolean value `true` whenever you insert data into the `subscriptions` table without defining a value for the `newsletter` column. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter SET DEFAULT true; -~~~ - -### Remove `DEFAULT` constraint - -If the column has a defined [`DEFAULT` value](default-value.html), you can remove the constraint, which means the column will no longer insert a value by default if one is not explicitly defined for the column. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP DEFAULT; -~~~ - -### Remove `NOT NULL` constraint - -If the column has the [`NOT NULL` constraint](not-null.html) applied to it, you can remove the constraint, which means the column becomes optional and can have *NULL* values written into it. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE subscriptions ALTER COLUMN newsletter DROP NOT NULL; -~~~ - -### Convert a computed column into a regular column - -{% include {{ page.version.version }}/computed-columns/convert-computed-column.md %} - -## See also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/alter-database.md b/src/current/v19.1/alter-database.md deleted file mode 100644 index 7d57b67e356..00000000000 --- a/src/current/v19.1/alter-database.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: ALTER DATABASE -summary: Use the ALTER DATABASE statement to change an existing database. -toc: false ---- - -The `ALTER DATABASE` [statement](sql-statements.html) applies a schema change to a database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -For information on using `ALTER DATABASE`, see the documents for its relevant subcommands. - -Subcommand | Description ------------|------------ -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a database. -[`RENAME`](rename-database.html) | Change the name of a database. diff --git a/src/current/v19.1/alter-index.md b/src/current/v19.1/alter-index.md deleted file mode 100644 index b4e630ec84b..00000000000 --- a/src/current/v19.1/alter-index.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: ALTER INDEX -summary: Use the ALTER INDEX statement to change an existing index. -toc: false ---- - -The `ALTER INDEX` [statement](sql-statements.html) applies a schema change to an index.
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -For information on using `ALTER INDEX`, see the documents for its relevant subcommands. - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -Subcommand | Description ------------|------------ -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for an index. -[`RENAME`](rename-index.html) | Change the name of an index. -[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the index. diff --git a/src/current/v19.1/alter-range.md b/src/current/v19.1/alter-range.md deleted file mode 100644 index 85461295d17..00000000000 --- a/src/current/v19.1/alter-range.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: ALTER RANGE -summary: Use the ALTER RANGE statement to change an existing system range. -toc: false ---- - -The `ALTER RANGE` [statement](sql-statements.html) applies a schema change to a system range. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -For information on using `ALTER RANGE`, see the documents for its relevant subcommands. - -Subcommand | Description ------------|------------ -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a system range. diff --git a/src/current/v19.1/alter-sequence.md b/src/current/v19.1/alter-sequence.md deleted file mode 100644 index c09c49d2087..00000000000 --- a/src/current/v19.1/alter-sequence.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: ALTER SEQUENCE -summary: Use the ALTER SEQUENCE statement to change the name, increment values, and other settings of a sequence. -toc: true ---- - -The `ALTER SEQUENCE` [statement](sql-statements.html) [changes the name](rename-sequence.html), increment values, and other settings of a sequence. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database. - -## Synopsis -
      {% include {{ page.version.version }}/sql/diagrams/alter_sequence_options.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------ -`IF EXISTS` | Modify the sequence only if it exists; if it does not exist, do not return an error. -`sequence_name` | The name of the sequence you want to modify. -`INCREMENT` | The new value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence. -`MINVALUE` | The new minimum value of the sequence.

      Default: `1` -`MAXVALUE` | The new maximum value of the sequence.

      Default: `9223372036854775807` -`START` | The value the sequence starts at if you `RESTART` or if the sequence hits the `MAXVALUE` and `CYCLE` is set.

      `RESTART` and `CYCLE` are not implemented yet. -`CYCLE` | The sequence will wrap around when the sequence value hits the maximum or minimum value. If `NO CYCLE` is set, the sequence will not wrap. - -## Examples - -### Change the increment value of a sequence - -In this example, we're going to change the increment value of a sequence from its current state (i.e., `1`) to `2`. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER SEQUENCE customer_seq INCREMENT 2; -~~~ - -Next, we'll add another record to the table and check that the new record adheres to the new sequence. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customer_list (customer, address) VALUES ('Marie', '333 Ocean Ave'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_list; -~~~ -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -| 5 | Marie | 333 Ocean Ave | -+----+----------+--------------------+ -~~~ - -### Set the next value of a sequence - -In this example, we're going to change the next value of the example sequence (`customer_seq`). Currently, the next value will be `7` (i.e., `5` + `INCREMENT 2`). We will change the next value to `20`. - -{{site.data.alerts.callout_info}}You cannot set a value outside the MAXVALUE or MINVALUE of the sequence. {{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SELECT setval('customer_seq', 20, false); -~~~ -~~~ -+--------+ -| setval | -+--------+ -| 20 | -+--------+ -~~~ - -{{site.data.alerts.callout_info}} -The `setval('seq_name', value, is_called)` function in CockroachDB SQL mimics the `setval()` function in PostgreSQL, but it does not store the `is_called` flag. Instead, it sets the value to `val - increment` for `false` or `val` for `true`. -{{site.data.alerts.end}} - -Let's add another record to the table to check that the new record adheres to the new next value. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customer_list (customer, address) VALUES ('Lola', '333 Schermerhorn'); -~~~ -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -| 5 | Marie | 333 Ocean Ave | -| 20 | Lola | 333 Schermerhorn | -+----+----------+--------------------+ -~~~ - -## See also - -- [`RENAME SEQUENCE`](rename-sequence.html) -- [`CREATE SEQUENCE`](create-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [`SHOW SEQUENCES`](show-sequences.html) -- [Functions and Operators](functions-and-operators.html) -- [Other SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/alter-table.md b/src/current/v19.1/alter-table.md deleted file mode 100644 index bb3382c11fb..00000000000 --- a/src/current/v19.1/alter-table.md +++ /dev/null @@ -1,42 +0,0 @@ ---- -title: ALTER TABLE -summary: Use the ALTER TABLE statement to change the schema of a table. -toc: true ---- - -The `ALTER TABLE` [statement](sql-statements.html) applies a schema change to a table. 
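-For example, because some subcommands can be combined in a single statement (see the [subcommand table](#subcommands) below), you can rename an existing column and add a new column with the old name atomically. A sketch with illustrative table and column names:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE users RENAME COLUMN name TO last_name, ADD COLUMN name STRING;
-~~~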
- -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Subcommands - -For information on using `ALTER TABLE`, see the documents for its relevant subcommands. - -{{site.data.alerts.callout_success}} -New in v19.1: Some subcommands can be used in combination in a single `ALTER TABLE` statement. For example, you can [atomically rename a column and add a new column with the old name of the existing column](rename-column.html#add-and-rename-columns-atomically). -{{site.data.alerts.end}} - -Subcommand | Description | Can combine with other subcommands? ------------|-------------|------------------------------------ -[`ADD COLUMN`](add-column.html) | Add columns to tables. | Yes -[`ADD CONSTRAINT`](add-constraint.html) | Add constraints to columns. | Yes -[`ALTER COLUMN`](alter-column.html) | Change or drop a column's [`DEFAULT` constraint](default-value.html) or drop the [`NOT NULL` constraint](not-null.html). | Yes -[`ALTER TYPE`](alter-type.html) | Change a column's [data type](data-types.html). | Yes -[`CONFIGURE ZONE`](configure-zone.html) | [Configure replication zones](configure-replication-zones.html) for a table. | No -[`DROP COLUMN`](drop-column.html) | Remove columns from tables. | Yes -[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from columns. | Yes -[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Enable per-table audit logs. | Yes -[`PARTITION BY`](partition-by.html) | Repartition or unpartition a table with partitions ([Enterprise-only](enterprise-licensing.html)). | Yes -[`RENAME COLUMN`](rename-column.html) | Change the names of columns. | Yes -[`RENAME CONSTRAINT`](rename-constraint.html) | Change the names of constraints on columns. | Yes -[`RENAME TABLE`](rename-table.html) | Change the names of tables. | No -[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table. | No -[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column. | Yes diff --git a/src/current/v19.1/alter-type.md b/src/current/v19.1/alter-type.md deleted file mode 100644 index 6994c69d31f..00000000000 --- a/src/current/v19.1/alter-type.md +++ /dev/null @@ -1,85 +0,0 @@ ---- -title: ALTER TYPE -summary: Use the ALTER TYPE statement to change a column's data type. -toc: true ---- - -The `ALTER TYPE` [statement](sql-statements.html) is part of [`ALTER TABLE`](alter-table.html) and changes a column's [data type](data-types.html). - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Considerations - -You can use the `ALTER TYPE` subcommand if the following conditions are met: - -- The on-disk representation of the column remains unchanged. For example, you cannot change the column data type from `STRING` to `INT`, even if the string is just a number. -- The existing data remains valid. For example, you can change the column data type from `STRING[10]` to `STRING[20]`, but not to `STRING[5]` since that will invalidate the existing data. - -## Synopsis -
      -{% include {{ page.version.version }}/sql/diagrams/alter_type.html %} -
      - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - -| Parameter | Description -|-----------|------------- -| `table_name` | The name of the table with the column whose data type you want to change. -| `column_name` | The name of the column whose data type you want to change. -| `typename` | The new [data type](data-types.html) you want to use. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Success scenario - -The [TPC-C](performance-benchmarking-with-tpc-c.html) database has a `customer` table with a column `c_credit_lim DECIMAL (10,2)`. Suppose you want to change the data type to `DECIMAL (12,2)`: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE customer ALTER c_credit_lim type DECIMAL (12,2); -~~~ - -~~~ -ALTER TABLE - -Time: 80.814044ms -~~~ - -### Error scenarios - -Changing a column data type from `DECIMAL` to `INT` would change the on-disk representation of the column. Therefore, attempting to do so results in an error: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE customer ALTER c_credit_lim type INT; -~~~ - -~~~ -pq: type conversion not yet implemented -~~~ - -Changing a column data type from `DECIMAL(12,2)` to `DECIMAL (8,2)` would invalidate the existing data. Therefore, attempting to do so results in an error: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE customer ALTER c_credit_lim type DECIMAL (8,2); -~~~ - -~~~ -pq: type conversion not yet implemented -~~~ - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [Other SQL Statements](sql-statements.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/alter-user.md b/src/current/v19.1/alter-user.md deleted file mode 100644 index db54cbb9b7f..00000000000 --- a/src/current/v19.1/alter-user.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: ALTER USER -summary: The ALTER USER statement can be used to add or change a user's password. -toc: true ---- - -The `ALTER USER` [statement](sql-statements.html) can be used to add or change a [user's](create-and-manage-users.html) password. - -{{site.data.alerts.callout_success}} -You can also use the [`cockroach user`](create-and-manage-users.html#update-a-users-password) command to add or change a user's password. -{{site.data.alerts.end}} - - -## Considerations - -- Password creation and alteration is supported only in secure clusters for non-`root` users. - -## Required privileges - -The user must have the `INSERT` and `UPDATE` [privileges](authorization.html#assign-privileges) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/alter_user_password.html %}
      - ## Parameters - - -Parameter | Description -----------|------------- -`name` | The name of the user whose password you want to create or change. -`password` | Let the user [authenticate their access to a secure cluster](authentication.html#client-authentication) using this new password. Passwords should be entered as a [string literal](sql-constants.html#string-literals). For compatibility with PostgreSQL, a password can also be entered as an [identifier](#change-password-using-an-identifier), although this is discouraged. - -## Examples - -### Change password using a string literal - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD 'ilov3beefjerky'; -~~~ -~~~ -ALTER USER 1 -~~~ - -### Change password using an identifier - -The following statement changes the password to `ilov3beefjerky`, as above: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD ilov3beefjerky; -~~~ - -This is equivalent to the example in the previous section because the password contains only lowercase characters. - -In contrast, the following statement changes the password to `thereisnotomorrow`, even though the password in the syntax contains capitals, because identifiers are normalized automatically: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD ThereIsNoTomorrow; -~~~ - -To preserve case in a password specified using identifier syntax, use double quotes: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER USER carl WITH PASSWORD "ThereIsNoTomorrow"; -~~~ - -## See also - -- [`cockroach user` command](create-and-manage-users.html) -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/alter-view.md b/src/current/v19.1/alter-view.md deleted file mode 100644 index 48fee782202..00000000000 --- a/src/current/v19.1/alter-view.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: ALTER VIEW -summary: The ALTER VIEW statement changes the name of a view. -toc: true ---- - -The `ALTER VIEW` [statement](sql-statements.html) changes the name of a [view](views.html). - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -{{site.data.alerts.callout_info}} -It is not currently possible to change the `SELECT` statement executed by a view. Instead, you must drop the existing view and create a new view. Also, it is not currently possible to rename a view that other views depend on, but this ability may be added in the future (see [this issue](https://github.com/cockroachdb/cockroach/issues/10083)). -{{site.data.alerts.end}} - -## Required privileges - -The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the view and the `CREATE` privilege on the parent database. - -## Synopsis -
      - {% include {{ page.version.version }}/sql/diagrams/alter_view.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Rename the view only if a view of `view_name` exists; if one does not exist, do not return an error. -`view_name` | The name of the view to rename. To find view names, use:

      `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';` -`name` | The new [`name`](sql-grammar.html#name) for the view, which must be unique to its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 2 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER VIEW bank.user_emails RENAME TO bank.user_email_addresses; -~~~ - -~~~ -RENAME VIEW -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+----------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+----------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 2 | -| def | bank | user_email_addresses | VIEW | 3 | -+---------------+-------------------+----------------------+------------+---------+ -(2 rows) -~~~ - -## See also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE`](show-create.html) -- [`DROP VIEW`](drop-view.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/architecture/distribution-layer.md b/src/current/v19.1/architecture/distribution-layer.md deleted file mode 100644 index c51bb63c13c..00000000000 --- a/src/current/v19.1/architecture/distribution-layer.md +++ /dev/null @@ -1,203 +0,0 @@ ---- -title: Distribution Layer -summary: The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data. -toc: true ---- - -The distribution layer of CockroachDB's architecture provides a unified view of your cluster's data. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -To make all data in your cluster accessible from any node, CockroachDB stores data in a monolithic sorted map of key-value pairs. This key-space describes all of the data in your cluster, as well as its location, and is divided into what we call "ranges", contiguous chunks of the key-space, so that every key can always be found in a single range. - -CockroachDB implements a sorted map to enable: - - - **Simple lookups**: Because we identify which nodes are responsible for certain portions of the data, queries are able to quickly locate where to find the data they want. - - **Efficient scans**: By defining the order of data, it's easy to find data within a particular range during a scan.
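-To see this key-to-range mapping for yourself, you can list the ranges that back a table. For example (the table name is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW EXPERIMENTAL_RANGES FROM TABLE users;
-~~~
-
-The output lists each range's start and end keys, its range ID, and the nodes holding its replicas and lease, mirroring the structures described below.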
- -### Monolithic sorted map structure - -The monolithic sorted map comprises two fundamental elements: - -- System data, which include **meta ranges** that describe the locations of data in your cluster (among many other cluster-wide and local data elements) -- User data, which store your cluster's **table data** - -#### Meta ranges - -The locations of all ranges in your cluster are stored in a two-level index at the beginning of your key-space, known as meta ranges, where the first level (`meta1`) addresses the second, and the second (`meta2`) addresses data in the cluster. - -This two-level index plus user data can be visualized as a tree, with the root at `meta1`, the second level at `meta2`, and the leaves of the tree made up of the ranges that hold user data. - -![range-lookup.png](../../images/{{page.version.version}}/range-lookup.png "Meta ranges plus user data tree diagram") - -Importantly, every node has information on where to locate the `meta1` range (known as its range descriptor, detailed below), and the range is never split. - -This meta range structure lets us address up to 4 EiB of user data by default: we can address 2^(18 + 18) = 2^36 ranges; each range addresses 2^26 B, and altogether we address 2^(36+26) B = 2^62 B = 4 EiB. However, with larger range sizes, it's possible to expand this capacity even further. - -Meta ranges are treated mostly like normal ranges and are accessed and replicated just like other elements of your cluster's KV data. - -Each node caches values of the `meta2` range it has accessed before, which optimizes access of that data in the future. Whenever a node discovers that its `meta2` cache is invalid for a specific key, the cache is updated by performing a regular read on the `meta2` range. - -#### Table data - -After the meta ranges comes the KV data your cluster stores. - -Each table and its secondary indexes initially map to a single range, where each key-value pair in the range represents a single row in the table (also called the primary index because the table is sorted by the primary key) or a single row in a secondary index. As soon as a range reaches 64 MiB in size, it splits into two ranges. This process continues as a table and its indexes continue growing. Once a table is split across multiple ranges, it's likely that the table and secondary indexes will be stored in separate ranges. However, a range can still contain data for both the table and a secondary index. - -The default 64 MiB range size represents a sweet spot for us between a size that's small enough to move quickly between nodes and large enough to store a meaningfully contiguous set of data whose keys are more likely to be accessed together. These ranges are then shuffled around your cluster to ensure survivability. - -These table ranges are replicated (in the aptly named replication layer), and have the addresses of each replica stored in the `meta2` range. - -### Using the monolithic sorted map - -As described in the [meta ranges section](#meta-ranges), the locations of all the ranges in a cluster are stored in a two-level index: - -- The first level (`meta1`) addresses the second level. -- The second level (`meta2`) addresses user data. - -This can also be visualized as a tree, with the root at `meta1`, the second level at `meta2`, and the leaves of the tree made up of the ranges that hold user data.
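-Ranges ordinarily split automatically as they grow, but you can force a split at a chosen key and watch a new leaf appear in this tree. A sketch using the `SPLIT AT` statement (table name and split value are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> ALTER TABLE users SPLIT AT VALUES (1000);
-~~~
-
-Running `SHOW EXPERIMENTAL_RANGES FROM TABLE users` again afterward shows the additional range.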
- -When a node receives a request, it looks up the location of the range(s) that include the keys in the request in a bottom-up fashion, starting with the leaves of this tree. This process works as follows: - -1. For each key, the node looks up the location of the range containing the specified key in the second level of range metadata (`meta2`). That information is cached for performance; if the range's location is found in the cache, it is returned immediately. - -2. If the range's location is not found in the cache, the node looks up the location of the range where the actual value of `meta2` resides. This information is also cached; if the location of the `meta2` range is found in the cache, the node sends an RPC to the `meta2` range to get the location of the keys the request wants to operate on, and returns that information. - -3. Finally, if the location of the `meta2` range is not found in the cache, the node looks up the location of the range where the actual value of the first level of range metadata (`meta1`) resides. This lookup always succeeds because the location of `meta1` is distributed among all the nodes in the cluster using a gossip protocol. The node then uses the information from `meta1` to look up the location of `meta2`, and from `meta2` it looks up the locations of the ranges that include the keys in the request. - -Note that the process described above is recursive; every time a lookup is performed, it either (1) gets a location from the cache, or (2) performs another lookup on the value in the next level "up" in the tree. Because the range metadata is cached, a lookup can usually be performed without having to send an RPC to another node. - -Now that the node has the location of the range where the key from the request resides, it sends the KV operations from the request along to the range (using the [`DistSender`](#distsender) machinery) in a [`BatchRequest`](#batchrequest). - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the distribution layer: - -- Receives requests from the transaction layer on the same node. -- Identifies which nodes should receive the request, and then sends the request to the proper node's replication layer. - -## Technical details and components - -### gRPC - -gRPC is the software nodes use to communicate with one another. Because the distribution layer is the first layer to communicate with other nodes, CockroachDB implements gRPC here. - -gRPC requires inputs and outputs to be formatted as protocol buffers (protobufs). To leverage gRPC, CockroachDB implements a protocol-buffer-based API defined in `api.proto`. - -For more information about gRPC, see the [official gRPC documentation](http://www.grpc.io/docs/guides/). - -### BatchRequest - -All KV operation requests are bundled into a [protobuf](https://en.wikipedia.org/wiki/Protocol_Buffers), known as a `BatchRequest`. The destination of this batch is identified in the `BatchRequest` header, as well as a pointer to the request's transaction record. (On the other side, when a node is replying to a `BatchRequest`, it uses a protobuf––`BatchResponse`.) - -This `BatchRequest` is also what's used to send requests between nodes using gRPC, which accepts and sends protocol buffers. - -### DistSender - -The gateway/coordinating node's `DistSender` receives `BatchRequest`s from its own `TxnCoordSender`. 
`DistSender` is then responsible for breaking up the `BatchRequest` and routing a new set of `BatchRequest`s to the nodes it identifies as containing the data, using the system [meta ranges](#meta-ranges). For a description of the process by which this lookup from a key to the node holding the key's range is performed, see [Using the monolithic sorted map](#using-the-monolithic-sorted-map). - -It sends the `BatchRequest`s to the replicas of a range, ordered by expected request latency; the leaseholder is tried first if the request requires it. Requests received by a non-leaseholder may fail with an error pointing at the replica's last known leaseholder. These requests are retried transparently with the updated lease by the gateway node and never reach the client. - -As nodes begin replying to these commands, `DistSender` also aggregates the results in preparation for returning them to the client. - -### Meta range KV structure - -Like all other data in your cluster, meta ranges are structured as KV pairs. Both meta ranges have a similar structure: - -~~~ -metaX/successorKey -> [list of nodes containing data] -~~~ - -Element | Description ---------|------------------------ -`metaX` | The level of meta range. Here we use a simplified `meta1` or `meta2`, but these are actually represented in `cockroach` as `\x02` and `\x03` respectively. -`successorKey` | The first key *greater* than the key you're scanning for. This makes CockroachDB's scans efficient: it simply scans the meta keys until it finds the first one greater than the key it's looking for, and that entry points to the relevant data.

      The `successorKey` for the end of a keyspace is identified as `maxKey`. - -Here's an example: - -~~~ -meta2/M -> node1:26257, node2:26257, node3:26257 -~~~ - -In this case, the replica on `node1` is the leaseholder, and nodes 2 and 3 also contain replicas. - -#### Example - -Let's imagine we have an alphabetically sorted column, which we use for lookups. Here's approximately what the meta ranges would look like: - -1. `meta1` contains the addresses of the nodes containing the `meta2` replicas. - - ~~~ - # Points to meta2 range for keys [A-M) - meta1/M -> node1:26257, node2:26257, node3:26257 - - # Points to meta2 range for keys [M-maxKey) - meta1/maxKey -> node4:26257, node5:26257, node6:26257 - ~~~ - -2. `meta2` contains addresses for the nodes containing the replicas of each range in the cluster: - - ~~~ - # Contains [A-G) - meta2/G -> node1:26257, node2:26257, node3:26257 - - # Contains [G-M) - meta2/M -> node1:26257, node2:26257, node3:26257 - - # Contains [M-Z) - meta2/Z -> node4:26257, node5:26257, node6:26257 - - # Contains [Z-maxKey) - meta2/maxKey -> node4:26257, node5:26257, node6:26257 - ~~~ - -### Table data KV structure - -Key-value data represents the data in your tables using the following structure: - -~~~ -/<tableID>/<indexID>/<indexed column values> -> <non-indexed/STORING column values> -~~~ - -The table itself is stored with an `index_id` of 1 for its `PRIMARY KEY` columns, with the rest of the columns in the table considered as stored/covered columns. - -### Range descriptors - -Each range in CockroachDB contains metadata, known as a range descriptor. A range descriptor comprises the following: - -- A sequential RangeID -- The keyspace (i.e., the set of keys) the range contains; for example, the first and last `<indexed column values>` in the table data KV structure above. This determines the `meta2` range's keys. -- The addresses of nodes containing replicas of the range. This determines the values of the `meta2` range's keys. - -Because range descriptors comprise the key-value data of the `meta2` range, each node's `meta2` cache also stores range descriptors. - -Range descriptors are updated whenever there are: - -- Membership changes to a range's Raft group (discussed in more detail in the [Replication Layer](replication-layer.html#membership-changes-rebalance-repair)) -- Range splits - -All of these updates to the range descriptor occur locally on the range, and then propagate to the `meta2` range. - -### Range splits - -By default, CockroachDB attempts to keep ranges/replicas at 64 MiB. Once a range reaches that limit, it splits into two 32 MiB ranges (composed of contiguous key spaces). - -During this range split, the node creates a new Raft group containing all of the same members as the range that was split. Because there are now two ranges, a transaction also updates `meta2` with the new keyspace boundaries, as well as the addresses of the nodes, using the range descriptors. - -## Technical interactions with other layers - -### Distribution and transaction layer - -The Distribution layer's `DistSender` receives `BatchRequests` from its own node's `TxnCoordSender`, housed in the Transaction layer. - -### Distribution and replication layer - -The Distribution layer routes `BatchRequests` to nodes containing ranges of data, where they are ultimately routed to the Raft group leader or leaseholder, both of which are handled in the replication layer. - -## What's next? - -Learn how CockroachDB copies data and ensures consistency in the [replication layer](replication-layer.html). 
diff --git a/src/current/v19.1/architecture/life-of-a-distributed-transaction.md b/src/current/v19.1/architecture/life-of-a-distributed-transaction.md deleted file mode 100644 index 434a7fff0e3..00000000000 --- a/src/current/v19.1/architecture/life-of-a-distributed-transaction.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -title: Life of a Distributed Transaction -summary: Learn how a query moves through the layers of CockroachDB's architecture. -toc: true ---- - -Because CockroachDB is a distributed transactional database, the path queries take is dramatically different from that of many other database architectures. To help familiarize you with CockroachDB's internals, this guide covers what that path actually is. - -If you've already read the [CockroachDB architecture documentation](overview.html), this guide serves as another way to conceptualize how the database works. This time, instead of focusing on the layers of CockroachDB's architecture, we're going to focus on the linear path that a query takes through the system (and then back out again). - -To get the most out of this guide, we recommend beginning with the architecture documentation's [overview](overview.html) and progressing through all of the following sections. This guide provides brief descriptions of each component's function and links to other documentation where appropriate, but assumes the reader has a basic understanding of the architecture in the first place. - -## Overview - -This guide is organized by the physical actors in the system, and then broken down into the components of each actor in the sequence in which they're involved. - -Here's a brief overview of the physical actors, in the sequence in which they're involved in executing a query: - -1. [**SQL Client**](#sql-client-postgres-wire-protocol) sends a query to your cluster. -1. [**Load Balancing**](#load-balancing-routing) routes the request to a CockroachDB node in your cluster, which will act as the gateway. -1. [**Gateway**](#gateway) is a CockroachDB node that processes the SQL request and responds to the client. -1. [**Leaseholder**](#leaseholder-node) is a CockroachDB node responsible for serving reads and coordinating writes of a specific range of keys in your query. -1. [**Raft leader**](#raft-leader) is a CockroachDB node responsible for maintaining consensus among your CockroachDB replicas. - -Once the transaction completes, queries traverse these actors in approximately reverse order. We say "approximately" because there might be many leaseholders and Raft leaders involved in a single query, and there is little-to-no interaction with the load balancer during the response. - -## SQL Client/Postgres Wire Protocol - -To begin, a SQL client (e.g., an app) performs some kind of business logic against your CockroachDB cluster, such as inserting a new customer record. - -This request is sent over a connection to your CockroachDB cluster that's established using a PostgreSQL driver. - -## Load Balancing & Routing - -Modern architectures require distributing your cluster across machines to improve throughput, latency, and uptime. This means queries are routed through load balancers, which choose the best CockroachDB node to connect to. Because all CockroachDB nodes have perfectly symmetrical access to data, your load balancer can connect your client to any node in the cluster and access any data while still guaranteeing strong consistency. 
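As a minimal sketch of the client's side of this flow, the snippet below connects through a hypothetical load balancer address and runs a read. The host `lb.example.com`, the `bank` database, the `accounts` table, and the `maxroach` user are all assumptions for illustration; any PostgreSQL-compatible driver would work in place of `lib/pq`:

~~~ go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // a PostgreSQL wire-protocol driver
)

func main() {
	// The client only knows the load balancer's address. The load balancer
	// picks a node, and whichever node it picks becomes the gateway.
	db, err := sql.Open("postgres",
		"postgresql://maxroach@lb.example.com:26257/bank?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var id, balance int
	if err := db.QueryRow(
		"SELECT id, balance FROM accounts LIMIT 1").Scan(&id, &balance); err != nil {
		log.Fatal(err)
	}
	fmt.Println("account", id, "has balance", balance)
}
~~~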
- -Your architecture might also have additional layers of routing to enforce regulatory compliance, such as GDPR. - -Once your router and load balancer determine the best node to connect to, your client's connection is established to the gateway node. - -## Gateway - -The gateway node handles the connection with the client, both receiving and responding to the request. - -### SQL parsing & planning - -The gateway node first [parses](sql-layer.html#sql-parser-planner-executor) the client's SQL statement to ensure it's valid according to the CockroachDB dialect of SQL, and uses that information to [generate a logical SQL plan](sql-layer.html#logical-planning). - -Given that CockroachDB is a distributed database, though, it's also important to take a cluster's topology into account, so the logical plan is then converted into a physical plan—this means sometimes pushing operations onto the physical machines that contain the data. - -### SQL executor - -While CockroachDB presents a SQL interface to clients, the actual database is built on top of a key-value store. To mediate this, the physical plan generated at the end of SQL parsing is passed to the SQL executor, which executes the plan by performing key-value operations through the `TxnCoordSender`. For example, the SQL executor converts `INSERT` statements into `Put()` operations. - -### TxnCoordSender - -The `TxnCoordSender` provides an API to perform key-value operations on your database. - -On its back end, the `TxnCoordSender` performs a large amount of the accounting and tracking for a transaction, including: - -- Accounting for all keys involved in a transaction. This is used, among other things, to manage the transaction's state. -- Packaging all key-value operations into a `BatchRequest`, which is forwarded on to the node's `DistSender`. - -### DistSender - -The gateway node's `DistSender` receives `BatchRequests` from the `TxnCoordSender`. It dismantles the initial `BatchRequest` by taking each operation and finding which physical machine should receive the request for the range—known as the range's leaseholder. The address of the range's current leaseholder is readily available both in local caches and in the [cluster's `meta` ranges](distribution-layer.html#meta-range-kv-structure). - -These dismantled `BatchRequests` are reassembled into new `BatchRequests` containing the address of the range's leaseholder. - -All write operations also propagate the leaseholder's address back to the `TxnCoordSender`, so it can track and clean up write operations as necessary. - -The `DistSender` sends out the first `BatchRequest` for each range in parallel. As soon as it receives a provisional acknowledgment from the leaseholder node's evaluator (details below), it sends out the next `BatchRequest` for that range. - -The `DistSender` then waits to receive acknowledgments for all of its write operations, as well as values for all of its read operations. However, this wait isn't necessarily blocking, and the `DistSender` may continue issuing operations for ongoing transactions. - -## Leaseholder node - -The gateway node's `DistSender` tries to send its `BatchRequests` to the replica identified as the range's [leaseholder](replication-layer.html#leases), which is a single replica that serves all reads for a range and coordinates all writes. Leaseholders play a crucial role in CockroachDB's architecture, so they're a good concept to be familiar with. 
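Before looking at how the leaseholder responds, here's a simplified sketch of the routing step described above. The types, range boundaries, and addresses are hypothetical, and the real `DistSender` logic is far more involved; this only illustrates the idea of grouping a batch's keys by the range that contains them:

~~~ go
package main

import (
	"fmt"
	"sort"
)

// rangeInfo is a toy stand-in for a cached range descriptor: the range
// covers [startKey, endKey) and has a last-known leaseholder address.
type rangeInfo struct {
	startKey, endKey string
	leaseholder      string
}

// splitByRange groups the keys in a batch by the range that contains them,
// mimicking how a batch is dismantled into per-range batches addressed to
// each range's leaseholder. ranges must be sorted by endKey.
func splitByRange(keys []string, ranges []rangeInfo) map[string][]string {
	batches := make(map[string][]string) // leaseholder address -> keys
	for _, k := range keys {
		i := sort.Search(len(ranges), func(i int) bool { return k < ranges[i].endKey })
		if i < len(ranges) && k >= ranges[i].startKey {
			batches[ranges[i].leaseholder] = append(batches[ranges[i].leaseholder], k)
		}
	}
	return batches
}

func main() {
	ranges := []rangeInfo{
		{"a", "g", "node1:26257"},
		{"g", "m", "node1:26257"},
		{"m", "z", "node4:26257"},
	}
	for addr, keys := range splitByRange([]string{"carl", "ghost", "max"}, ranges) {
		fmt.Println(addr, "<-", keys)
	}
}
~~~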
- -### Request response - -Because the leaseholder replica can shift between nodes, all nodes must be able to handle a request for any key and return a response indicating one of these scenarios: - -##### No Longer Leaseholder - -If a node is no longer the leaseholder, but still contains a replica of the range, it denies the request but includes the last known address for the leaseholder of that range. - -Upon receipt of this response, the `DistSender` will update the header of the `BatchRequest` with the new address, and then resend the `BatchRequest` to the newly identified leaseholder. - -##### No Longer Has/Never Had Range - -If a node doesn't have a replica for the requested range, it denies the request without providing any further information. - -In this case, the `DistSender` must look up the current leaseholder using the [cluster's `meta` ranges](distribution-layer.html#meta-range-kv-structure). - -##### Success - -Once the node that contains the leaseholder of the range receives the `BatchRequest`, it begins processing it, and proceeds to check the timestamp cache. - -### Timestamp cache - -The timestamp cache tracks the highest timestamp (i.e., most recent) for any read operation that a given range has served. - -Each write operation in a `BatchRequest` checks its own timestamp against the timestamp cache to ensure that the write operation has a higher timestamp; this guarantees that history is never rewritten and you can trust that reads always serve the most recent data. It's one of the crucial mechanisms CockroachDB uses to ensure serializability. If a write operation fails this check, it must be restarted at a timestamp higher than the timestamp cache's value. - -### Latch manager - -Operations in the `BatchRequest` are serialized through the leaseholder's latch manager. - -This works by giving each write operation a latch on a row. Any reads or writes that come in after the latch has been granted on the row must wait for the write to complete, at which point the latch is released and the subsequent operations can continue. - -### Batch Evaluation - -The batch evaluator ensures that write operations are valid. Our architecture makes this fairly trivial. First, the evaluator can simply check the leaseholder's data to ensure the write is valid; because it has coordinated all writes to the range, it must have the most up-to-date versions of the range's data. Second, because of the latch manager, each write operation is guaranteed uncontested access to the range (i.e., there is no contention with other write operations). - -If the write operation is valid according to the evaluator, the leaseholder sends a provisional acknowledgment to the gateway node's `DistSender`; this lets the `DistSender` begin to send its subsequent `BatchRequests` for this range. - -Importantly, this feature is built entirely as a transactional optimization (known as [transaction pipelining](transaction-layer.html#transaction-pipelining)). There are no issues if an operation passes the evaluator but doesn't end up committing. - -### Reads from RocksDB - -All operations (including writes) begin by reading from the local instance of RocksDB to check for write intents for the operation's key. 
We talk much more about [write intents in the transaction layer of the CockroachDB architecture](transaction-layer.html#write-intents), which is worth reading, but a simplified explanation is that these are provisional, uncommitted writes that express that some other concurrent transaction plans to write a value to the key. - -What we detail below is a simplified version of the CockroachDB transaction model. For more detail, check out [the transaction architecture documentation](transaction-layer.html). - -#### Resolving Write Intents - -If an operation encounters a write intent for a key, it attempts to "resolve" the write intent by checking the state of the write intent's transaction. If the write intent's transaction record is... - -- `COMMITTED`, this operation converts the write intent to a regular key-value pair, and then proceeds as if it had read that value instead of a write intent. -- `ABORTED`, this operation discards the write intent and reads the next-most-recent value from RocksDB. -- `PENDING`, the new transaction attempts to "push" the write intent's transaction by moving that transaction's timestamp forward (i.e., ahead of this transaction's timestamp); however, this only succeeds if the write intent's transaction has become inactive. - - If the push succeeds, the operation continues. - - If this push fails (which is the majority of the time), this transaction goes into the [`TxnWaitQueue`](transaction-layer.html#txnwaitqueue) on this node. The incoming transaction can only continue once the blocking transaction completes (i.e., commits or aborts). -- `MISSING`, the resolver consults the write intent's timestamp. - - If it was created within the transaction liveness threshold, the resolver treats the transaction record as exhibiting the `PENDING` behavior, with the addition of tracking the push in the range's timestamp cache, which will inform the transaction that its timestamp was pushed once the transaction record gets created. - - If the write intent is older than the transaction liveness threshold, the resolution exhibits the `ABORTED` behavior. - - Note that transaction records might be missing because we've avoided writing the record until the transaction commits. For more information, see [Transaction Layer: Transaction records](transaction-layer.html#transaction-records). - -Check out our architecture documentation for more information about [CockroachDB's transactional model](transaction-layer.html). - -#### Read Operations - -If the read doesn't encounter a write intent and the key-value operation is meant to serve a read, it can simply use the value it read from the leaseholder's instance of RocksDB. This works because the leaseholder had to be part of the Raft consensus group for any writes to complete, meaning it must have the most up-to-date version of the range's data. - -The leaseholder aggregates all read responses into a `BatchResponse` that will get returned to the gateway node's `DistSender`. - -As we mentioned before, each read operation also updates the timestamp cache. - -### Write Operations - -After guaranteeing that there are no existing write intents for the keys, the `BatchRequest`'s key-value operations are converted to [Raft operations](replication-layer.html#raft) and have their values converted into write intents. - -The leaseholder then proposes these Raft operations to the Raft group leader. The leaseholder and the Raft leader are almost always the same node, but there are situations where the roles might drift to different nodes. 
However, when the two roles are not collocated on the same physical machine, CockroachDB will attempt to relocate them onto the same node at the next opportunity. - -## Raft Leader - -CockroachDB leverages Raft as its consensus protocol. If you aren't familiar with it, we recommend checking out the details about [how CockroachDB leverages Raft](replication-layer.html#raft), as well as [learning more about how the protocol works in general](http://thesecretlivesofdata.com/raft/). - -In terms of executing transactions, the Raft leader receives proposed Raft commands from the leaseholder. Each Raft command is a write that is used to represent an atomic state change of the underlying key-value pairs stored in RocksDB. - -### Consensus - -For each command the Raft leader receives, it proposes the command to the other members of the Raft group. - -Once the command achieves consensus (i.e., a majority of replicas, including the leader itself, acknowledge the Raft command), it is committed to the Raft leader's Raft log and written to RocksDB. At the same time, the Raft leader also sends a command to all other nodes to include the command in their Raft logs. - -Once the leader commits the Raft log entry, the write is considered committed. At this point the value is written, and any subsequent operation that performs a read on RocksDB for this key will encounter this value. - -Note that this write operation creates a write intent; these writes will not be fully committed until the gateway node sets the transaction record's status to `COMMITTED`. - -## On the way back up - -Now that we have followed an operation all the way down from the SQL client to RocksDB, we can pretty quickly cover what happens on the way back up (i.e., when generating a response to the client). - -1. Once the leaseholder applies a write to its Raft log, it sends a commit acknowledgment to the gateway node's `DistSender`, which was waiting for this signal (having already received the provisional acknowledgment from the leaseholder's evaluator). -1. The gateway node's `DistSender` aggregates commit acknowledgments from all of the write operations in the `BatchRequest`, as well as any values from read operations that should be returned to the client. -1. Once all operations have successfully completed (i.e., reads have returned values and write intents have been committed), the `DistSender` tries to record the transaction's success in the transaction record (which provides a durable mechanism for tracking the transaction's state), which can cause a few situations to arise: - - It checks the timestamp cache of the range where the first write occurred to see if its timestamp got pushed forward. If it did, the transaction performs a [read refresh](transaction-layer.html#read-refreshing) to see if any values it needed have been changed. If the read refresh is successful, the transaction can commit at the pushed timestamp. If the read refresh fails, the transaction must be restarted. - - If the transaction is in an `ABORTED` state, the `DistSender` sends a response indicating as much, which ends up back at the SQL interface. - - Upon passing these checks, the transaction record is either written for the first time with the `COMMITTED` state, or if it was in a `PENDING` state, it is moved to `COMMITTED`. Only at this point is the transaction considered committed. -1. 
The `DistSender` propagates any values that should be returned to the client (e.g., reads or the number of affected rows) to the `TxnCoordSender`, which in turn responds to the SQL interface with the values. - The `TxnCoordSender` also begins asynchronous intent cleanup by sending a request to the `DistSender` to convert all write intents it created for the transaction to fully committed values. However, this process is largely an optimization; if any operation encounters a write intent, it checks the write intent's transaction record. If the transaction record is `COMMITTED`, the operation can perform the same cleanup and convert the write intent to a fully committed value. -1. The SQL interface then responds to the client, and is now prepared to continue accepting new connections. diff --git a/src/current/v19.1/architecture/overview.md b/src/current/v19.1/architecture/overview.md deleted file mode 100644 index fb818f8ec71..00000000000 --- a/src/current/v19.1/architecture/overview.md +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: Architecture Overview -summary: Learn about the inner-workings of the CockroachDB architecture. -toc: true -key: cockroachdb-architecture.html ---- - -CockroachDB was designed to create the source-available database our developers would want to use: one that is both scalable and consistent. Developers often have questions about how we've achieved this, and this guide sets out to detail the inner workings of the `cockroach` process as a means of explanation. - -However, you definitely do not need to understand the underlying architecture to use CockroachDB. These pages give serious users and database enthusiasts a high-level framework to explain what's happening under the hood. - -## Using this guide - -This guide is broken out into pages detailing each layer of CockroachDB. We recommend reading through the layers sequentially, starting with this overview and then proceeding to the SQL layer. - -If you're looking for a high-level understanding of CockroachDB, you can simply read the **Overview** section of each layer. For more technical detail––for example, if you're interested in [contributing to the project](https://wiki.crdb.io/wiki/spaces/CRDB/pages/73204033/Contributing+to+CockroachDB)––you should read the **Components** sections as well. - -{{site.data.alerts.callout_info}}This guide details how CockroachDB is built, but does not explain how you should architect an application using CockroachDB. For help with your own application's architecture using CockroachDB, check out our user documentation.{{site.data.alerts.end}} - -## Goals of CockroachDB - -CockroachDB was designed in service of the following goals: - -- Make life easier for humans. This means being low-touch and highly automated for operators and simple to reason about for developers. -- Offer industry-leading consistency, even on massively scaled deployments. This means enabling distributed transactions, as well as removing the pain of eventual consistency issues and stale reads. -- Create an always-on database that accepts reads and writes on all nodes without generating conflicts. -- Allow flexible deployment in any environment, without tying you to any platform or vendor. -- Support familiar tools for working with relational data (i.e., SQL). - -With the confluence of these features, we hope that CockroachDB lets teams easily build global, scalable, resilient cloud services. - -## Glossary - -### Terms - -It's helpful to understand a few terms before reading our architecture documentation. 
- -{% include {{ page.version.version }}/misc/basic-terms.md %} - -### Concepts - -CockroachDB heavily relies on the following concepts, so being familiar with them will help you understand what our architecture achieves. - -Term | Definition ------|----------- -**Consistency** | CockroachDB uses "consistency" in both the sense of [ACID semantics](https://en.wikipedia.org/wiki/Consistency_(database_systems)) and the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem), albeit less formally than either definition. What we try to express with this term is that your data should be anomaly-free. -**Consensus** | When a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.

      When a write *doesn't* achieve consensus, forward progress halts to maintain consistency within the cluster. -**Replication** | Replication involves creating and distributing copies of data, as well as ensuring copies remain consistent. However, there are multiple types of replication: namely, synchronous and asynchronous.

      Synchronous replication requires all writes to propagate to a quorum of copies of the data before being considered committed. This is the kind of replication CockroachDB uses to ensure the consistency of your data.

      Asynchronous replication requires only a single node to receive the write for it to be considered committed; it's propagated to each copy of the data after the fact. This is more or less equivalent to "eventual consistency", which was popularized by NoSQL databases. This method of replication is likely to cause anomalies and loss of data. -**Transactions** | A set of operations performed on your database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/Database_transaction). This is a crucial component for a consistent system to ensure developers can trust the data in their database. -**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast. - -## Overview - -CockroachDB starts running on machines with two commands: - -- `cockroach start` with a `--join` flag for all of the initial nodes in the cluster, so the process knows all of the other machines it can communicate with -- `cockroach init` to perform a one-time initialization of the cluster - -Once the `cockroach` process is running, developers interact with CockroachDB through a SQL API, which we've modeled after PostgreSQL. Thanks to the symmetrical behavior of all nodes, you can send SQL requests to any of them; this makes CockroachDB really easy to integrate with load balancers. - -After receiving SQL RPCs, nodes convert them into operations that work with our distributed key-value store. As these RPCs start filling your cluster with data, CockroachDB algorithmically starts distributing your data among your nodes, breaking the data up into 64 MiB chunks that we call ranges. Each range is replicated to at least 3 nodes to ensure survivability. This way, if nodes go down, you still have copies of the data that can be used for reads and writes, as well as for replicating the data to other nodes. - -If a node receives a read or write request it cannot directly serve, it simply finds the node that can handle the request, and communicates with it. This way, you do not need to know where your data lives; CockroachDB tracks it for you and enables symmetric behavior for each node. - -Any changes made to the data in a range rely on a consensus algorithm to ensure a majority of its replicas agree to commit the change, ensuring industry-leading isolation guarantees and providing your application with consistent reads, regardless of which node you communicate with. - -Ultimately, data is written to and read from disk using an efficient storage engine, which is able to keep track of the data's timestamp. This has the benefit of letting us support the SQL standard `AS OF SYSTEM TIME` clause, letting you read historical data from a point in time. - -However, while that high-level overview gives you a notion of what CockroachDB does, looking at how the `cockroach` process operates on each of these nodes will give you a much greater understanding of our architecture. - -### Layers - -At the highest level, CockroachDB converts clients' SQL statements into key-value (KV) data, which is distributed among nodes and written to disk. 
Our architecture accomplishes that through a number of layers, each of which interacts with the layers directly above and below it as a relatively opaque service. - -The following pages describe the function each layer performs, but mostly ignore the details of other layers. This description is true to the experience of the layers themselves, which generally treat the other layers as black-box APIs. There are interactions between layers which *are not* clearly articulated and which require an understanding of each layer's function to understand the entire process. - -Layer | Order | Purpose ------|------------|-------- -[SQL](sql-layer.html) | 1 | Translate client SQL queries to KV operations. -[Transactional](transaction-layer.html) | 2 | Allow atomic changes to multiple KV entries. -[Distribution](distribution-layer.html) | 3 | Present replicated KV ranges as a single entity. -[Replication](replication-layer.html) | 4 | Consistently and synchronously replicate KV ranges across many nodes. This layer also enables consistent reads via leases. -[Storage](storage-layer.html) | 5 | Write and read KV data on disk. - -## What's next? - -Begin understanding our architecture by learning how CockroachDB works with applications in the [SQL layer](sql-layer.html). diff --git a/src/current/v19.1/architecture/reads-and-writes-overview.md b/src/current/v19.1/architecture/reads-and-writes-overview.md deleted file mode 100644 index 72d21e5d643..00000000000 --- a/src/current/v19.1/architecture/reads-and-writes-overview.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: Reads and Writes in CockroachDB -summary: Learn how reads and writes are affected by the replicated and distributed nature of data in CockroachDB. -toc: true ---- - -This page explains how reads and writes are affected by the replicated and distributed nature of data in CockroachDB. It starts by summarizing some important [CockroachDB architectural concepts](overview.html) and then walks you through a few simple read and write scenarios. - -{{site.data.alerts.callout_info}} -For a more detailed trace of a query through the layers of CockroachDB's architecture, see [Life of a Distributed Transaction](life-of-a-distributed-transaction.html). -{{site.data.alerts.end}} - -## Important concepts - -{% include {{ page.version.version }}/misc/basic-terms.md %} - -As mentioned above, when a query is executed, the cluster routes the request to the leaseholder for the range containing the relevant data. If the query touches multiple ranges, the request goes to multiple leaseholders. For a read request, only the leaseholder of the relevant range retrieves the data. For a write request, the Raft consensus protocol dictates that a majority of the replicas of the relevant range must agree before the write is committed. - -Let's consider how these mechanics play out in some hypothetical queries. - -## Read scenario - -First, imagine a simple read scenario where: - -- There are 3 nodes in the cluster. -- There are 3 small tables, each fitting in a single range. -- Ranges are replicated 3 times (the default). -- A query is executed against node 2 to read from table 3. - -*(Figure: Perf tuning concepts)* - -In this case: - -1. Node 2 (the gateway node) receives the request to read from table 3. -2. The leaseholder for table 3 is on node 3, so the request is routed there. -3. Node 3 returns the data to node 2. -4. Node 2 responds to the client. 
- -If the query is received by the node that has the leaseholder for the relevant range, there are fewer network hops: - -*(Figure: Perf tuning concepts)* - -## Write scenario - -Now imagine a simple write scenario where a query is executed against node 3 to write to table 1: - -*(Figure: Perf tuning concepts)* - -In this case: - -1. Node 3 (the gateway node) receives the request to write to table 1. -2. The leaseholder for table 1 is on node 1, so the request is routed there. -3. The leaseholder is the same replica as the Raft leader (as is typical), so it simultaneously appends the write to its own Raft log and notifies its follower replicas on nodes 2 and 3. -4. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leader and the write is committed to the key-values on the agreeing replicas. In this diagram, the follower on node 2 acknowledged the write, but it could just as well have been the follower on node 3. Also note that the follower not involved in the consensus agreement usually commits the write very soon after the others. -5. Node 1 returns acknowledgement of the commit to node 3. -6. Node 3 responds to the client. - -Just as in the read scenario, if the write request is received by the node that has the leaseholder and Raft leader for the relevant range, there are fewer network hops: - -*(Figure: Perf tuning concepts)* - -## Network and I/O bottlenecks - -With the above examples in mind, it's always important to consider network latency and disk I/O as potential performance bottlenecks. In summary: - -- For reads, hops between the gateway node and the leaseholder add latency. -- For writes, hops between the gateway node and the leaseholder/Raft leader, and hops between the leaseholder/Raft leader and Raft followers, add latency. In addition, since Raft log entries are persisted to disk before a write is committed, disk I/O is important. diff --git a/src/current/v19.1/architecture/replication-layer.md b/src/current/v19.1/architecture/replication-layer.md deleted file mode 100644 index 5bf02f973db..00000000000 --- a/src/current/v19.1/architecture/replication-layer.md +++ /dev/null @@ -1,154 +0,0 @@ ---- -title: Replication Layer -summary: The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between copies. -toc: true ---- - -The replication layer of CockroachDB's architecture copies data between nodes and ensures consistency between these copies by implementing our consensus algorithm. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -High availability requires that your database can tolerate nodes going offline without interrupting service to your application. This means replicating data between nodes to ensure the data remains accessible. - -Ensuring consistency with nodes offline, though, is a challenge many databases fail to meet. To solve this problem, CockroachDB uses a consensus algorithm to require that a quorum of replicas agrees on any changes to a range before those changes are committed. Because 3 is the smallest number that can achieve quorum (i.e., 2 out of 3), CockroachDB's high availability (known as multi-active availability) requires 3 nodes. - -The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. 
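That arithmetic is simple enough to state directly as a sketch (illustrative code only, not anything from CockroachDB):

~~~ go
package main

import "fmt"

// maxFailures returns how many replicas can fail while a quorum
// (a strict majority) of replicas remains available.
func maxFailures(replicationFactor int) int {
	return (replicationFactor - 1) / 2
}

func main() {
	for _, rf := range []int{3, 5, 7} {
		fmt.Printf("replication factor %d tolerates %d failure(s)\n", rf, maxFailures(rf))
	}
}
~~~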
You can control the replication factor at the cluster, database, and table level using [replication zones](../configure-replication-zones.html). - -When failures happen, though, CockroachDB automatically detects that nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed. - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the replication layer: - -- Receives requests from and sends responses to the distribution layer. -- Writes accepted requests to the storage layer. - -## Components - -### Raft - -Raft is a consensus protocol––an algorithm which makes sure that your data is safely stored on multiple machines, and that those machines agree on the current state even if some of them are temporarily disconnected. - -Raft organizes all nodes that contain a replica of a range into a group––unsurprisingly called a Raft group. Each replica in a Raft group is either a "leader" or a "follower". The leader, which is elected by Raft and long-lived, coordinates all writes to the Raft group. It heartbeats followers periodically and keeps their logs replicated. In the absence of heartbeats, followers become candidates after randomized election timeouts and proceed to hold new leader elections. - -Once a node receives a `BatchRequest` for a range it contains, it converts those KV operations into Raft commands. Those commands are proposed to the Raft group leader––which is what makes it ideal for the [leaseholder](#leases) and the Raft leader to be one and the same––and written to the Raft log. - -For a great overview of Raft, we recommend [The Secret Lives of Data](http://thesecretlivesofdata.com/raft/). - -#### Raft logs - -When writes achieve a quorum and are committed by the Raft group leader, they're appended to the Raft log. This provides an ordered set of commands that the replicas agreed on and is essentially the source of truth for consistent replication. - -Because this log is treated as serializable, it can be replayed to bring a node from a past state to its current state. This log also lets nodes that temporarily went offline be "caught up" to the current state without needing to receive a copy of the existing data in the form of a snapshot. - -### Snapshots - -Each replica can be "snapshotted", which copies all of its data as of a specific timestamp (available because of [MVCC](storage-layer.html#mvcc)). This snapshot can be sent to other nodes during a rebalance event to expedite replication. - -After loading the snapshot, the node gets up to date by replaying all actions from the Raft group's log that have occurred since the snapshot was taken. - -### Leases - -A single node in the Raft group acts as the leaseholder, which is the only node that can serve reads or propose writes to the Raft group leader (both actions are received as `BatchRequests` from [`DistSender`](distribution-layer.html#distsender)). - -When serving reads, leaseholders bypass Raft; for the leaseholder's writes to have been committed in the first place, they must have already achieved consensus, so a second consensus on the same data is unnecessary. This has the benefit of not incurring the network round trips required by Raft and greatly increases the speed of reads (without sacrificing consistency). 
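A toy sketch of that read path is below. The types are hypothetical, and the expiration check is only a stand-in for lease validity (epoch-based leases, described later, are not timer-based); the point is that a valid leaseholder answers from local storage with no Raft round trip, while any other replica redirects the caller:

~~~ go
package main

import (
	"errors"
	"fmt"
	"time"
)

// replica is a toy stand-in for a range replica.
type replica struct {
	isLeaseholder   bool
	leaseExpiration time.Time
	data            map[string]string // stand-in for the local storage engine
	lastLeaseholder string            // last known leaseholder address
}

// serveRead answers directly from local storage when this replica holds a
// valid lease (no Raft round trip); otherwise it redirects the caller to
// the last known leaseholder.
func (r *replica) serveRead(key string, now time.Time) (string, error) {
	if !r.isLeaseholder || now.After(r.leaseExpiration) {
		return "", errors.New("not leaseholder; try " + r.lastLeaseholder)
	}
	return r.data[key], nil
}

func main() {
	r := &replica{
		isLeaseholder:   true,
		leaseExpiration: time.Now().Add(9 * time.Second),
		data:            map[string]string{"apricot": "v1"},
		lastLeaseholder: "node1:26257",
	}
	fmt.Println(r.serveRead("apricot", time.Now()))
}
~~~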
- -CockroachDB attempts to make the leaseholder also the Raft group leader, which can also optimize the speed of writes. - -If there is no leaseholder, any node receiving a request will attempt to become the leaseholder for the range. To prevent two nodes from acquiring the lease, the requester includes a copy of the last valid lease it had; if another node became the leaseholder, its request is ignored. - -#### Co-location with Raft leadership - -The range lease is completely separate from Raft leadership, and so without further efforts, Raft leadership and the range lease might not be held by the same replica. However, we can optimize query performance by making the same node both Raft leader and the leaseholder; it reduces network round trips if the leaseholder receiving the requests can simply propose the Raft commands to itself, rather than communicating them to another node. - -To achieve this, each lease renewal or transfer also attempts to collocate them. In practice, that means that the mismatch is rare and self-corrects quickly. - -#### Epoch-based leases (table data) - -To manage leases for table data, CockroachDB implements a notion of "epochs," defined as the period between a node joining a cluster and disconnecting from it. To extend its leases, each node must periodically update its liveness record, which is stored on a system range key. When a node disconnects, it stops updating the liveness record, and the epoch is considered changed. This causes the node to immediately lose all of its leases. - -Because leases do not expire until a node disconnects from a cluster, leaseholders do not have to individually renew their own leases. Tying lease lifetimes to node liveness in this way lets us eliminate a substantial amount of traffic and Raft processing we would otherwise incur, while still tracking leases for every range. - -#### Expiration-based leases (meta and system ranges) - -A table's meta and system ranges (detailed in the [distribution layer](distribution-layer.html#meta-ranges)) are treated as normal key-value data, and therefore have leases just like table data. - -However, unlike table data, system ranges cannot use epoch-based leases because that would create a circular dependency: system ranges are already being used to implement epoch-based leases for table data. Therefore, system ranges use expiration-based leases instead. Expiration-based leases expire at a particular timestamp (typically after a few seconds). However, as long as a node continues proposing Raft commands, it continues to extend the expiration of its leases. If it doesn't, the next node containing a replica of the range that tries to read from or write to the range will become the leaseholder. - -#### Leaseholder rebalancing - -Because CockroachDB serves reads from a range's leaseholder, it benefits your cluster's performance if the replica closest to the primary geographic source of traffic holds the lease. However, as traffic to your cluster shifts throughout the course of the day, you might want to dynamically shift which nodes hold leases. - -{{site.data.alerts.callout_info}} - -This feature is also called [Follow-the-Workload](../demo-follow-the-workload.html) in our documentation. 
- -{{site.data.alerts.end}} - -Periodically (every 10 minutes by default in large clusters, but more frequently in small clusters), each leaseholder considers whether it should transfer the lease to another replica by considering the following inputs: - -- Number of requests from each locality -- Number of leases on each node -- Latency between localities - -##### Intra-locality - -If all the replicas are in the same locality, the decision is made entirely on the basis of the number of leases on each node that contains a replica, trying to achieve a roughly equitable distribution of leases across all of them. This means the distribution isn't perfectly equal; it intentionally tolerates small deviations between nodes to prevent thrashing (i.e., excessive adjustments trying to reach an equilibrium). - -##### Inter-locality - -If replicas are in different localities, CockroachDB attempts to calculate which replica would make the best leaseholder, i.e., provide the lowest latency. - -To enable dynamic leaseholder rebalancing, a range's current leaseholder tracks how many requests it receives from each locality as an exponentially weighted moving average. This calculation results in the locality that has recently requested the range most often being assigned the greatest weight. If another locality then begins requesting the range very frequently, this calculation would shift to assign the second region the greatest weight. - -When checking for leaseholder rebalancing opportunities, the leaseholder correlates each requesting locality's weight (i.e., the proportion of recent requests) to the locality of each replica by checking how similar the localities are. For example, if the leaseholder received requests from gateway nodes in locality `country=us,region=central`, CockroachDB would assign the following weights to replicas in the following localities: - -Replica locality | Replica rebalancing weight ------------------|------------------- -`country=us,region=central` | 100% because it is an exact match -`country=us,region=east` | 50% because only the first locality matches -`country=aus,region=central` | 0% because the first locality does not match - -The leaseholder then evaluates its own weight and latency versus the other replicas to determine an adjustment factor. The greater the disparity between weights and the larger the latency between localities, the more CockroachDB favors the node from the locality with the larger weight. - -When checking for leaseholder rebalancing opportunities, the current leaseholder evaluates each replica's rebalancing weight and adjustment factor for the localities with the greatest weights. If moving the leaseholder is both beneficial and viable, the current leaseholder will transfer the lease to the best replica. - -##### Controlling leaseholder rebalancing - -You can control leaseholder rebalancing through the `kv.allocator.load_based_lease_rebalancing.enabled` and `kv.allocator.lease_rebalancing_aggressiveness` [cluster settings](../cluster-settings.html). Note that depending on the needs of your deployment, you can exercise additional control over the location of leases and replicas by [configuring replication zones](../configure-replication-zones.html). - -### Membership changes: rebalance/repair - -Whenever there are changes to a cluster's number of nodes, the members of Raft groups change and, to ensure optimal survivability and performance, replicas need to be rebalanced. 
What that looks like varies depending on whether the membership change is nodes being added or going offline. - -- **Nodes added**: The new node communicates information about itself to other nodes, indicating that it has space available. The cluster then rebalances some replicas onto the new node. - -- **Nodes going offline**: If a member of a Raft group ceases to respond for 5 minutes, the cluster begins to rebalance by replicating the data the downed node held onto other nodes. - -Rebalancing is achieved by using a snapshot of a replica from the leaseholder, and then sending the data to another node over [gRPC](distribution-layer.html#grpc). After the transfer has been completed, the node with the new replica joins that range's Raft group; it then detects that its latest timestamp is behind the most recent entries in the Raft log and replays all of the actions in the Raft log on itself. - -#### Load-based replica rebalancing - -In addition to the rebalancing that occurs when nodes join or leave a cluster, replicas are also rebalanced automatically based on the relative load across the nodes within a cluster. For more information, see the `kv.allocator.load_based_rebalancing` and `kv.allocator.qps_rebalance_threshold` [cluster settings](../cluster-settings.html). Note that depending on the needs of your deployment, you can exercise additional control over the location of leases and replicas by [configuring replication zones](../configure-replication-zones.html). - -## Interactions with other layers - -### Replication and distribution layers - -The replication layer receives requests from its own and other nodes' `DistSender`s. If this node is the leaseholder for the range, it accepts the requests; if it isn't, it returns an error with a pointer to which node it believes *is* the leaseholder. These KV requests are then turned into Raft commands. - -The replication layer sends `BatchResponses` back to the distribution layer's `DistSender`. - -### Replication and storage layers - -Committed Raft commands are written to the Raft log and ultimately stored on disk through the storage layer. - -The leaseholder serves reads from its RocksDB instance, which is in the storage layer. - -## What's next? - -Learn how CockroachDB reads and writes data from disk in the [storage layer](storage-layer.html). diff --git a/src/current/v19.1/architecture/sql-layer.md b/src/current/v19.1/architecture/sql-layer.md deleted file mode 100644 index 07e51a16a20..00000000000 --- a/src/current/v19.1/architecture/sql-layer.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: SQL Layer -summary: The SQL layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations. -toc: true ---- - -The SQL layer of CockroachDB's architecture exposes its SQL API to developers and converts SQL statements into key-value operations used by the rest of the database. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -Once CockroachDB has been deployed, developers need nothing more than a connection string to the cluster and SQL statements to start working. - -Because CockroachDB's nodes all behave symmetrically, developers can send requests to any node (which means CockroachDB works well with load balancers). Whichever node receives the request acts as the "gateway node," as other layers process the request. 
- -When developers send requests to the cluster, they arrive as SQL statements, but data is ultimately written to and read from the storage layer as key-value (KV) pairs. To handle this, the SQL layer converts SQL statements into a plan of KV operations, which it passes along to the transaction layer. - -### Interactions with other layers - -In relationship to other layers in CockroachDB, the SQL layer: - -- Sends requests to the transaction layer. - -## Components - -### Relational structure - -Developers experience data stored in CockroachDB in a relational structure, i.e., rows and columns. Sets of rows and columns are organized into tables. Collections of tables are organized into databases. Your cluster can contain many databases. - -Because of this structure, CockroachDB provides typical relational features like constraints (e.g., foreign keys). This lets application developers trust that the database will ensure consistent structuring of the application's data; data validation doesn't need to be built into the application logic separately. - -### SQL API - -CockroachDB implements a large portion of the ANSI SQL standard to manifest its relational structure. You can view [all of the SQL features CockroachDB supports here](../sql-feature-support.html). - -Importantly, through the SQL API, we also let developers use ACID-semantic transactions like they would through any SQL database (`BEGIN`, `END`, `COMMIT`, etc.). - -### PostgreSQL wire protocol - -SQL queries reach your cluster through the PostgreSQL wire protocol. This makes connecting your application to the cluster simple by supporting most PostgreSQL-compatible drivers, as well as many PostgreSQL ORMs, such as GORM (Go) and Hibernate (Java). - -### SQL parser, planner, executor - -After your node ultimately receives a SQL request from a client, CockroachDB parses the statement, [creates a query plan](../cost-based-optimizer.html), and then executes the plan. - -#### Parsing - -Received queries are parsed against our `yacc` file (which describes our supported syntax); this converts the string version of each query into an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree) (AST). - -#### Logical planning - -The AST is subsequently transformed into a query plan in three phases: - -1. The AST is transformed into a high-level logical query plan. During this transformation, CockroachDB also performs [semantic analysis](https://en.wikipedia.org/wiki/Semantic_analysis_(compilers)), which includes checking whether the query is valid, resolving names, eliminating unneeded intermediate computations, and finalizing which data types to use for intermediate results. - -2. The logical plan is *simplified* using transformation optimizations that are always valid. - -3. The logical plan is *optimized* using a [search algorithm](../cost-based-optimizer.html) that evaluates many possible ways to execute a query and selects an execution plan with the lowest cost. - -The result of the optimization phase is an optimized logical plan. This can be observed with [`EXPLAIN`](../explain.html). - -#### Physical planning - -The physical planning phase decides which nodes will participate in the execution of the query, based on range locality information. This is where CockroachDB decides to distribute a query to perform some computations close to where the data is stored. - -The result of physical planning is a physical plan, which can be observed with [`EXPLAIN (DISTSQL)`](../explain.html). 
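One way to look at both plans from a client is simply to run the `EXPLAIN` variants yourself. The sketch below assumes a locally reachable cluster and an existing table `kv` with columns `k` and `v` (both assumptions for illustration); it prints whatever plan rows the server returns rather than assuming a particular output shape:

~~~ go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq"
)

// showPlan runs an EXPLAIN variant and prints whatever rows come back,
// without assuming a particular set of output columns.
func showPlan(db *sql.DB, stmt string) {
	rows, err := db.Query(stmt)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	vals := make([]interface{}, len(cols))
	ptrs := make([]interface{}, len(cols))
	for i := range vals {
		ptrs[i] = &vals[i]
	}

	fmt.Println("--", stmt)
	for rows.Next() {
		if err := rows.Scan(ptrs...); err != nil {
			log.Fatal(err)
		}
		for _, v := range vals {
			fmt.Printf("%s ", v)
		}
		fmt.Println()
	}
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	showPlan(db, "EXPLAIN SELECT k, v FROM kv WHERE k > 10")           // logical plan
	showPlan(db, "EXPLAIN (DISTSQL) SELECT k, v FROM kv WHERE k > 10") // physical plan
}
~~~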
- -#### Query execution - -Components of the physical plan are sent to one or more nodes for execution. On each node, CockroachDB spawns a *logical processor* to compute a part of the query. Logical processors inside or across nodes communicate with each other over a *logical flow* of data. The combined results of the query are sent back to the gateway node (the first node to receive the query), which returns them to the SQL client. - -Each processor uses an encoded form for the scalar values manipulated by the query: a binary form different from that used in SQL. Values listed in the SQL query must therefore be encoded, and data communicated between logical processors or read from disk must be decoded, before results are sent back to the SQL client. - -### Encoding - -Though SQL queries are written in parsable strings, lower layers of CockroachDB deal primarily in bytes. This means at the SQL layer, in query execution, CockroachDB must convert row data from their SQL representation as strings into bytes, and convert bytes returned from lower layers into SQL data that can be passed back to the client. - -It's also important––for indexed columns––that this byte encoding preserve the same sort order as the data type it represents. This is because of the way CockroachDB ultimately stores data in a sorted key-value map; storing bytes in the same order as the data it represents lets us efficiently scan KV data. - -However, for non-indexed columns (e.g., non-`PRIMARY KEY` columns), CockroachDB instead uses an encoding (known as "value encoding") which consumes less space but does not preserve ordering. - -You can find more exhaustive detail in the [Encoding Tech Note](https://github.com/cockroachdb/cockroach/blob/master/docs/tech-notes/encoding.md). - -### DistSQL - -Because CockroachDB is a distributed database, we've developed a Distributed SQL (DistSQL) optimization tool for some queries, which can dramatically speed up queries that involve many ranges. Though DistSQL's architecture is worthy of its own documentation, this cursory explanation can provide some insight into how it works. - -In non-distributed queries, the coordinating node receives all of the rows that match its query, and then performs any computations on the entire data set. - -However, for DistSQL-compatible queries, each node does computations on the rows it contains, and then sends the results (instead of the entire rows) to the coordinating node, as sketched below. The coordinating node then aggregates the results from each node, and finally returns a single response to the client. - -This dramatically reduces the amount of data brought to the coordinating node, and leverages the well-proven concept of parallel computing, ultimately reducing the time it takes for complex queries to complete. In addition, this processes data on the node that already stores it, which lets CockroachDB handle row-sets that are larger than an individual node's storage. - -To run SQL statements in a distributed fashion, we introduce a couple of concepts: - -- **Logical plan**: Similar to the AST/`planNode` tree described above, it represents the abstract (non-distributed) data flow through computation stages. -- **Physical plan**: A physical plan is conceptually a mapping of the logical plan nodes to physical machines running `cockroach`. Logical plan nodes are replicated and specialized depending on the cluster topology. Like `planNodes` above, these components of the physical plan are scheduled and run on the cluster. 
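The shape of that optimization is easy to sketch outside the database. This is purely illustrative (the data and node layout are made up): each "node" aggregates its own rows locally and returns only a partial result, which the coordinator combines instead of collecting raw rows:

~~~ go
package main

import "fmt"

// partialSum is the work a logical processor does on the node that
// already stores the rows: aggregate locally, return only the result.
func partialSum(rows []int) int {
	sum := 0
	for _, r := range rows {
		sum += r
	}
	return sum
}

func main() {
	// Rows as they might be distributed across three nodes.
	nodes := [][]int{
		{1, 2, 3},       // node 1's rows
		{10, 20},        // node 2's rows
		{100, 200, 300}, // node 3's rows
	}

	// Each node sends back one partial sum instead of all of its rows;
	// the coordinating (gateway) node combines the partial results.
	total := 0
	for _, rows := range nodes {
		total += partialSum(rows)
	}
	fmt.Println("SELECT sum(...) result:", total) // 636
}
~~~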
- -You can find much greater detail in the [DistSQL RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20160421_distributed_sql.md). - -## Technical interactions with other layers - -### SQL and transaction layer - -KV operations from executed `planNodes` are sent to the transaction layer. - -## What's next? - -Learn how CockroachDB handles concurrent requests in the [transaction layer](transaction-layer.html). diff --git a/src/current/v19.1/architecture/storage-layer.md b/src/current/v19.1/architecture/storage-layer.md deleted file mode 100644 index 4af45f4f4e3..00000000000 --- a/src/current/v19.1/architecture/storage-layer.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: Storage Layer -summary: The storage layer of CockroachDB's architecture reads and writes data to disk. -toc: true ---- - -The storage layer of CockroachDB's architecture reads and writes data to disk. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - - -## Overview - -Each CockroachDB node contains at least one `store`, specified when the node starts, which is where the `cockroach` process reads and writes its data on disk. - -This data is stored as key-value pairs on disk using RocksDB, which is treated primarily as a black-box API. Internally, each store contains two instances of RocksDB: - -- One for storing temporary distributed SQL data -- One for all other data on the node - -There is also a block cache shared among all of the stores on a node. These stores in turn have a collection of range replicas. More than one replica for a range will never be placed on the same store or even the same node. - -### Interactions with other layers - -In relation to other layers in CockroachDB, the storage layer: - -- Serves successful reads and writes from the replication layer. - -## Components - -### RocksDB - -CockroachDB uses RocksDB––an embedded key-value store––to read and write data to disk. You can find more information about it on the [RocksDB Basics GitHub page](https://github.com/facebook/rocksdb/wiki/RocksDB-Basics). - -RocksDB integrates well with CockroachDB for a number of reasons: - -- Key-value store, which makes mapping to our key-value layer simple -- Atomic write batches and snapshots, which give us a subset of transactions - -Efficient storage for the keys is guaranteed by the underlying RocksDB engine by means of prefix compression. - -### MVCC - -CockroachDB relies heavily on [multi-version concurrency control (MVCC)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) to process concurrent requests and guarantee consistency. Much of this work is done by using [hybrid logical clock (HLC) timestamps](transaction-layer.html#time-and-hybrid-logical-clocks) to differentiate between versions of data, track commit timestamps, and identify a value's garbage collection expiration. All of this MVCC data is then stored in RocksDB. - -Despite being implemented in the storage layer, MVCC values are widely used to enforce consistency in the [transaction layer](transaction-layer.html). For example, CockroachDB maintains a [timestamp cache](transaction-layer.html#timestamp-cache), which stores the timestamp of the last time that the key was read. If a write operation occurs at a lower timestamp than the largest value in the read timestamp cache, it signifies there’s a potential anomaly and the transaction must be restarted at a later timestamp.
- -#### Time-travel - -As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC). - -To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB uses the schema information as of that time to formulate the queries. - -Using these tools, you can get consistent data from your database as far back as your garbage collection period. - -### Garbage collection - -CockroachDB regularly garbage collects MVCC values to reduce the size of data stored on disk. To do this, we compact old MVCC values when there is a newer MVCC value with a timestamp that's older than the garbage collection period; at that point, no query within the garbage collection window can ever read the older value, so it is safe to remove. The garbage collection period can be set at the cluster, database, or table level by configuring the [`gc.ttlseconds` replication zone variable](../configure-replication-zones.html#gc-ttlseconds). For more information about replication zones, see [Configure Replication Zones](../configure-replication-zones.html). - -## Interactions with other layers - -### Storage and replication layers - -The storage layer commits writes from the Raft log to disk and returns requested data (i.e., reads) to the replication layer. - -## What's next? - -Now that you've learned about our architecture, [start a local cluster](../install-cockroachdb.html) and start [building an app with CockroachDB](../build-an-app-with-cockroachdb.html). diff --git a/src/current/v19.1/architecture/transaction-layer.md b/src/current/v19.1/architecture/transaction-layer.md deleted file mode 100644 index f495230f90c..00000000000 --- a/src/current/v19.1/architecture/transaction-layer.md +++ /dev/null @@ -1,254 +0,0 @@ ---- -title: Transaction Layer -summary: The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. -toc: true ---- - -The transaction layer of CockroachDB's architecture implements support for ACID transactions by coordinating concurrent operations. - -{{site.data.alerts.callout_info}} -If you haven't already, we recommend reading the [Architecture Overview](overview.html). -{{site.data.alerts.end}} - -## Overview - -Above all else, CockroachDB believes consistency is the most important feature of a database––without it, developers cannot build reliable tools, and businesses suffer from potentially subtle and hard-to-detect anomalies. - -To provide consistency, CockroachDB implements full support for ACID transaction semantics in the transaction layer. However, it's important to realize that *all* statements are handled as transactions, including single statements––this is sometimes referred to as "autocommit mode" because it behaves as if every statement is followed by a `COMMIT`. - -For code samples that use transactions in CockroachDB, see our documentation on [transactions](../transactions.html#sql-statements). - -Because CockroachDB enables transactions that can span your entire cluster (including cross-range and cross-table transactions), it ensures correctness through a two-phase transaction protocol with asynchronous cleanup. - -### Writes and reads (phase 1) - -#### Writing - -When the transaction layer executes write operations, it doesn't directly write values to disk.
Instead, it creates two things that help it mediate a distributed transaction: - -- A **transaction record** stored in the range where the first write occurs, which includes the transaction's current state (which is either `PENDING`, `COMMITTED`, or `ABORTED`). - -- **Write intents** for all of a transaction’s writes, which represent a provisional, uncommitted state. These are essentially the same as standard [multi-version concurrency control (MVCC)](storage-layer.html#mvcc) values but also contain a pointer to the transaction record stored on the cluster. - -As write intents are created, CockroachDB checks for newer committed values. If newer committed values exist, the transaction may be restarted. If existing write intents for the same keys exist, the situation is resolved as a [transaction conflict](#transaction-conflicts). - -If a transaction fails for other reasons, such as failing to pass a SQL constraint, it is aborted. - -#### Reading - -If the transaction has not been aborted, the transaction layer begins executing read operations. If a read only encounters standard MVCC values, everything is fine. However, if it encounters any write intents, the operation must be resolved as a [transaction conflict](#transaction-conflicts). - -### Commits (phase 2) - -CockroachDB checks the running transaction's record to see if it's been `ABORTED`; if it has, it restarts the transaction. - -If the transaction passes these checks, its record is moved to `COMMITTED`, and the node responds to the client with the transaction's success. At this point, the client is free to begin sending more requests to the cluster. - -### Cleanup (asynchronous phase 3) - -After the transaction has been resolved, all of its write intents should be resolved. To do this, the coordinating node––which kept track of all of the keys it wrote––revisits those keys and either: - -- Resolves their write intents to MVCC values by removing the pointer to the transaction record. -- Deletes the write intents. - -This is simply an optimization, though. If future operations encounter write intents, they check the intents' transaction records––any operation can resolve or remove write intents by checking the transaction record's status. - -### Interactions with other layers - -In relation to other layers in CockroachDB, the transaction layer: - -- Receives KV operations from the SQL layer. -- Controls the flow of KV operations sent to the distribution layer. - -## Technical details and components - -### Time and hybrid logical clocks - -In distributed systems, ordering and causality are difficult problems to solve. While it's possible to rely entirely on Raft consensus to maintain serializability, it would be inefficient for reading data. To optimize the performance of reads, CockroachDB implements hybrid-logical clocks (HLC), which are composed of a physical component (always close to local wall time) and a logical component (used to distinguish between events with the same physical component). This means that HLC time is always greater than or equal to the wall time. You can find more detail in the [HLC paper](http://www.cse.buffalo.edu/tech-reports/2014-04.pdf). - -In terms of transactions, the gateway node picks a timestamp for the transaction using HLC time. Whenever a transaction's timestamp is mentioned, it's an HLC value. This timestamp is used both to track versions of values (through [multi-version concurrency control](storage-layer.html#mvcc)) and to provide our transactional isolation guarantees.
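- -For a quick sense of what these timestamps look like, you can ask your current transaction for its HLC timestamp from SQL (a minimal illustration; the integer part of the returned decimal is the wall time in nanoseconds, and the fractional part is the logical counter): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT cluster_logical_timestamp(); -~~~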
- -When nodes send requests to other nodes, they include the timestamp generated by their local HLCs (which includes both physical and logical components). When nodes receive requests, they inform their local HLC of the timestamp supplied with the event by the sender. This is useful in guaranteeing that all data read/written on a node is at a timestamp less than the next HLC time. - -This then lets the node primarily responsible for the range (i.e., the leaseholder) serve reads for data it stores by ensuring the transaction reading the data is at an HLC time greater than the MVCC value it's reading (i.e., the read always happens "after" the write). - -#### Max clock offset enforcement - -CockroachDB requires moderate levels of clock synchronization to preserve data consistency. For this reason, when a node detects that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default), **it crashes immediately**. - -While [serializable consistency](https://en.wikipedia.org/wiki/Serializability) is maintained regardless of clock skew, skew outside the configured clock offset bounds can result in violations of single-key linearizability between causally dependent transactions. It's therefore important to prevent clocks from drifting too far by running [NTP](http://www.ntp.org/) or other clock synchronization software on each node. - -For more detail about the risks that large clock offsets can cause, see [What happens when node clocks are not properly synchronized?](../operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized) - -### Timestamp cache - -As part of providing serializability, whenever an operation reads a value, we store the operation's timestamp in a timestamp cache, which records the high-water mark of read timestamps. - -Whenever a write occurs, its timestamp is checked against the timestamp cache. If the timestamp is less than the timestamp cache's latest value, we attempt to push the timestamp for its transaction forward to a later time. Pushing the timestamp might cause the transaction to restart in the second phase of the transaction (see [read refreshing](#read-refreshing)). - -### client.Txn and TxnCoordSender - -As we mentioned in the SQL layer's architectural overview, CockroachDB converts all SQL statements into key-value (KV) operations, which is how data is ultimately stored and accessed. - -All of the KV operations generated from the SQL layer use `client.Txn`, which is the transactional interface for the CockroachDB KV layer––but, as we discussed above, all statements are treated as transactions, so all statements use this interface. - -However, `client.Txn` is actually just a wrapper around `TxnCoordSender`, which plays a crucial role in our code base by: - -- Dealing with transactions' state. After a transaction is started, `TxnCoordSender` starts asynchronously sending heartbeat messages to that transaction's transaction record, which signals that it should be kept alive. If the `TxnCoordSender`'s heartbeating stops, the transaction record is moved to the `ABORTED` status. -- Tracking each written key or key range over the course of the transaction. -- Clearing the accumulated write intents for the transaction when it's committed or aborted. All requests being performed as part of a transaction have to go through the same `TxnCoordSender` to account for all of its write intents, which optimizes the cleanup process.
- -After setting up this bookkeeping, the request is passed to the `DistSender` in the distribution layer. - -### Latch manager - -As write operations occur for a range, the range's leaseholder serializes them; that is, they are placed into a consistent order. - -To enforce this serialization, the leaseholder creates a "latch" for the keys being written, providing uncontested access to those keys. If other operations come into the leaseholder for the same set of keys, they must wait for the latch to be released before they can proceed. - -Of note, only write operations generate a latch for the keys. Read operations do not block other operations from executing. - -Another way to think of a latch is like a mutex, which is only needed for the duration of a low-level operation. To coordinate longer-running, higher-level operations (i.e., client transactions), we use a durable system of [write intents](#write-intents). - -### Transaction records - -To track the status of a transaction's execution, we write a value called a transaction record to our key-value store. All of a transaction's write intents point back to this record, which lets any transaction check the status of any write intents it encounters. This kind of canonical record is crucial for supporting concurrency in a distributed environment. - -Transaction records are always written to the same range as the first key in the transaction, which is known by the `TxnCoordSender`. However, the transaction record itself isn't created until one of the following conditions occurs: - -- The write operation commits -- The `TxnCoordSender` heartbeats the transaction -- An operation forces the transaction to abort - -Given this mechanism, the transaction record uses the following states: - -- `PENDING`: Indicates that the write intent's transaction is still in progress. -- `COMMITTED`: Once a transaction has completed, this status indicates that write intents can be treated as committed values. -- `ABORTED`: Indicates that the transaction was aborted and its values should be discarded. -- _Record does not exist_: If a transaction encounters a write intent whose transaction record doesn't exist, it uses the write intent's timestamp to determine how to proceed. If the write intent's timestamp is within the transaction liveness threshold, the write intent's transaction is treated as if it is `PENDING`; otherwise, it's treated as if the transaction is `ABORTED`. - -The transaction record for a committed transaction remains until all its write intents are converted to MVCC values. - -### Write intents - -Values in CockroachDB are not written directly to the storage layer; instead, everything is written in a provisional state known as a "write intent." These are essentially MVCC records with an additional value added to them that identifies the transaction record to which the value belongs. - -Whenever an operation encounters a write intent (instead of an MVCC value), it looks up the status of the transaction record to understand how it should treat the write intent value. If the transaction record is missing, the operation checks the write intent's timestamp and evaluates whether or not it is considered expired.
- -#### Resolving write intents - -Whenever an operation encounters a write intent for a key, it attempts to "resolve" it; the result depends on the write intent's transaction record: - -- `COMMITTED`: The operation reads the write intent and converts it to an MVCC value by removing the write intent's pointer to the transaction record. -- `ABORTED`: The write intent is ignored and deleted. -- `PENDING`: This signals there is a [transaction conflict](#transaction-conflicts), which must be resolved. -- _Record does not exist_: If the write intent was created within the transaction liveness threshold, it's treated the same as `PENDING`; otherwise, it's treated as `ABORTED`. - -### Isolation levels - -Isolation is an element of [ACID transactions](https://en.wikipedia.org/wiki/ACID), which determines how concurrency is controlled, and ultimately guarantees consistency. - -CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other isolation levels that clients can request (e.g., `SNAPSHOT`, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks. - -CockroachDB now only supports `SERIALIZABLE` isolation. In previous versions of CockroachDB, you could set transactions to `SNAPSHOT` isolation, but that feature has been removed. - -`SERIALIZABLE` isolation does not allow any anomalies in your data, and is enforced by requiring the client to retry transactions if serializability violations are possible. - -### Transaction conflicts - -CockroachDB's transactions can encounter the following types of conflicts that involve running into an intent: - -- **Write/write**, where two `PENDING` transactions create write intents for the same key. -- **Write/read**, when a read encounters an existing write intent with a timestamp less than its own. - -To make this simpler to understand, we'll call the first transaction `TxnA` and the transaction that encounters its write intents `TxnB`. - -CockroachDB proceeds through the following steps: - -1. If a transaction has an explicit priority set (e.g., `HIGH` or `LOW`), the transaction with the lower priority is aborted (in the write/write case) or has its timestamp pushed (in the write/read case). - -2. If the encountered transaction is expired, it's `ABORTED` and conflict resolution succeeds. We consider a write intent expired if: - - It doesn't have a transaction record and its timestamp is outside of the transaction liveness threshold. - - Its transaction record hasn't been heartbeated within the transaction liveness threshold. - -3. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete. - -Additionally, the following types of conflicts that do not involve running into intents can arise: - -- **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [timestamp cache](#timestamp-cache). -- **Read within uncertainty window**, when a read encounters a value with a higher timestamp but it's ambiguous whether the value should be considered to be in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that, if the transaction has to be retried, reads will never encounter uncertainty issues on any node that was previously visited, and that there's never any uncertainty on values read from the transaction's gateway node. - -### TxnWaitQueue - -The `TxnWaitQueue` tracks all transactions that could not push a transaction whose writes they encountered, and must wait for the blocking transaction to complete before they can proceed. - -The `TxnWaitQueue`'s structure is a map of blocking transaction IDs to those they're blocking. For example: - -~~~ -txnA -> txn1, txn2 -txnB -> txn3, txn4, txn5 -~~~ - -Importantly, all of this activity happens on a single node, which is the leader of the Raft group for the range that contains the transaction record. - -Once the transaction does resolve––by committing or aborting––a signal is sent to the `TxnWaitQueue`, which lets all transactions that were blocked by the resolved transaction begin executing. - -Blocked transactions also check the status of their own transaction to ensure they're still active. If the blocked transaction was aborted, it's simply removed. - -If there is a deadlock between transactions (i.e., they're each blocked by each other's write intents), one of the transactions is randomly aborted. For example, this would happen if `TxnA` blocked `TxnB` on `key1` and `TxnB` blocked `TxnA` on `key2`. - -### Read refreshing - -Whenever a transaction's timestamp has been pushed, additional checks are required before allowing it to commit at the pushed timestamp: any values that the transaction previously read must be checked to verify that no writes have subsequently occurred between the original transaction timestamp and the pushed transaction timestamp. This check prevents serializability violations. It is performed by keeping track of all of the transaction's reads using a dedicated `RefreshRequest`. If the check succeeds, the transaction is allowed to commit. (Transactions perform this check at commit time if they've been pushed by a different transaction or by the timestamp cache; they perform the check immediately, before continuing, whenever they encounter a `ReadWithinUncertaintyIntervalError`.) If the refresh is unsuccessful, the transaction must be retried at the pushed timestamp. - -### Transaction pipelining - -Transactional writes are pipelined when being replicated and when being written to disk, dramatically reducing the latency of transactions that perform multiple writes. For example, consider the following transaction: - -{% include copy-clipboard.html %} -~~~ sql --- CREATE TABLE kv (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), key VARCHAR, value VARCHAR); -> BEGIN; - SAVEPOINT cockroach_restart; - INSERT INTO kv (key, value) VALUES ('apple', 'red'); - INSERT INTO kv (key, value) VALUES ('banana', 'yellow'); - INSERT INTO kv (key, value) VALUES ('orange', 'orange'); - RELEASE SAVEPOINT cockroach_restart; - COMMIT; -~~~ - -In versions prior to 2.1, for each `INSERT` statement above, the transaction gateway node would have to wait for write intents to propagate to each leaseholder, resulting in higher cumulative latency. - -In versions 2.1 and later, write intents are propagated to leaseholders in parallel, so the waiting all happens at the end, at transaction commit time. - -At a high level, transaction pipelining works as follows: - -1. For each statement, the transaction gateway node communicates with the leaseholders (*L*1, *L*2, *L*3, ..., *L*i) for the ranges it wants to write to. Since the primary keys in the table above are UUIDs, the ranges are probably split across multiple leaseholders (this is a good thing, as it decreases [transaction conflicts](#transaction-conflicts)). - -2. Each leaseholder *L*i receives the communication from the transaction gateway node and does the following in parallel: - - Creates write intents and sends them to its follower nodes. - - Responds to the transaction gateway node that the write intents have been sent. Note that replication of the intents is still in-flight at this stage. - -3. When attempting to commit, the transaction gateway node then waits for the write intents to be replicated in parallel to all of the leaseholders' followers. When it receives responses from the leaseholders that the write intents have propagated, it commits the transaction. - -In terms of the SQL snippet shown above, all of the waiting for write intents to propagate and be committed happens once, at the very end of the transaction, rather than for each individual write, which was the prior behavior. This changes the cost of multiple writes from `O(n)` in the number of SQL DML statements to `O(1)`. - -## Technical interactions with other layers - -### Transaction and SQL layer - -The transaction layer receives KV operations from `planNodes` executed in the SQL layer. - -### Transaction and distribution layer - -The `TxnCoordSender` sends its KV requests to `DistSender` in the distribution layer. - -## What's next? - -Learn how CockroachDB presents a unified view of your cluster's data in the [distribution layer](distribution-layer.html). - - - -[storage]: storage-layer.html -[sql]: sql-layer.html diff --git a/src/current/v19.1/array.md b/src/current/v19.1/array.md deleted file mode 100644 index d673f317f4d..00000000000 --- a/src/current/v19.1/array.md +++ /dev/null @@ -1,243 +0,0 @@ ---- -title: ARRAY -summary: The ARRAY data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array data type. -toc: true ---- - -The `ARRAY` data type stores one-dimensional, 1-indexed, homogeneous arrays of any non-array [data type](data-types.html). - -The `ARRAY` data type is useful for ensuring compatibility with ORMs and other tools. However, if such compatibility is not a concern, it's more flexible to design your schema with normalized tables. - - -{{site.data.alerts.callout_info}} -CockroachDB does not support nested arrays, creating database indexes on arrays, or ordering by arrays. -{{site.data.alerts.end}} - -## Syntax - -A value of data type `ARRAY` can be expressed in the following ways: - -- Appending square brackets (`[]`) to any non-array [data type](data-types.html). -- Adding the term `ARRAY` to any non-array [data type](data-types.html). - -## Size - -The size of an `ARRAY` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## Examples - -{{site.data.alerts.callout_success}} -For a complete list of array functions built into CockroachDB, see the [documentation on array functions](functions-and-operators.html#array-functions).
-{{site.data.alerts.end}} - -### Creating an array column by appending square brackets - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE a (b STRING[]); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO a VALUES (ARRAY['sky', 'road', 'car']); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM a; -~~~ - -~~~ -+----------------------+ -|          b           | -+----------------------+ -| {"sky","road","car"} | -+----------------------+ -(1 row) -~~~ - -### Creating an array column by adding the term `ARRAY` - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE c (d INT ARRAY); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO c VALUES (ARRAY[10,20,30]); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -|     d      | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -### Accessing an array element using array index - -{{site.data.alerts.callout_info}} -Arrays in CockroachDB are 1-indexed. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -|     d      | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT d[2] FROM c; -~~~ - -~~~ -+------+ -| d[2] | -+------+ -|   20 | -+------+ -(1 row) -~~~ - -### Appending an element to an array - -#### Using the `array_append` function - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------+ -|     d      | -+------------+ -| {10,20,30} | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = array_append(d, 40) WHERE d[3] = 30; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+---------------+ -|       d       | -+---------------+ -| {10,20,30,40} | -+---------------+ -(1 row) -~~~ - -#### Using the append (`||`) operator - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+---------------+ -|       d       | -+---------------+ -| {10,20,30,40} | -+---------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE c SET d = d || 50 WHERE d[4] = 40; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM c; -~~~ - -~~~ -+------------------+ -|        d         | -+------------------+ -| {10,20,30,40,50} | -+------------------+ -(1 row) -~~~ - -## Supported casting and conversion - -[Casting](data-types.html#data-type-conversions-and-casts) between `ARRAY` values is supported when the data types of the arrays support casting.
For example, it is possible to cast from a `BOOL` array to an `INT` array but not from a `BOOL` array to a `TIMESTAMP` array: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::INT[]; -~~~ - -~~~ -  array -+---------+ -  {1,0,1} -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[true,false,true]::TIMESTAMP[]; -~~~ - -~~~ -pq: invalid cast: bool[] -> TIMESTAMP[] -~~~ - -You can cast an array to a `STRING` value, for compatibility with PostgreSQL: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[1,NULL,3]::string; -~~~ - -~~~ -    array -+------------+ -  {1,NULL,3} -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT ARRAY[(1,'a b'),(2,'c"d')]::string; -~~~ - -~~~ -               array -+----------------------------------+ -  {"(1,\"a b\")","(2,\"c\"\"d\")"} -(1 row) -~~~ - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/as-of-system-time.md b/src/current/v19.1/as-of-system-time.md deleted file mode 100644 index d83230cd3db..00000000000 --- a/src/current/v19.1/as-of-system-time.md +++ /dev/null @@ -1,190 +0,0 @@ ---- -title: AS OF SYSTEM TIME -summary: The AS OF SYSTEM TIME clause executes a statement as of a specified time. -toc: true ---- - -The `AS OF SYSTEM TIME timestamp` clause causes statements to execute using the database contents "as of" a specified time in the past. - -This clause can be used to read historical data (also known as "[time travel queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/)") and can also be advantageous for performance as it decreases transaction conflicts. For more details, see [SQL Performance Best Practices](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries). - -{{site.data.alerts.callout_info}} -Historical data is available only within the garbage collection window, which is determined by the `ttlseconds` field in the [replication zone configuration](configure-replication-zones.html). -{{site.data.alerts.end}} - -## Synopsis - -The `AS OF SYSTEM TIME` clause is supported in multiple SQL contexts, including but not limited to: - -- In [`SELECT` clauses](select-clause.html), at the very end of the `FROM` sub-clause. -- In [`BACKUP`](backup.html), after the parameters of the `TO` sub-clause. -- In [`RESTORE`](restore.html), after the parameters of the `FROM` sub-clause. -- New in v19.1: In [`BEGIN`](begin-transaction.html), after the `BEGIN` keyword. -- New in v19.1: In [`SET`](set-transaction.html), after the `SET TRANSACTION` keyword. - -## Parameters - -The `timestamp` argument supports the following formats: - -Format | Notes ----|--- -[`INT`](int.html) | Nanoseconds since the Unix epoch. -negative [`INTERVAL`](interval.html) | Added to `statement_timestamp()`, and thus must be negative. -[`STRING`](string.html) | A [`TIMESTAMP`](timestamp.html), [`INT`](int.html) of nanoseconds, or negative [`INTERVAL`](interval.html). -`experimental_follower_read_timestamp()` | A [function](functions-and-operators.html) that runs your queries at a time as close as possible to the present time while remaining safe for [follower reads](follower-reads.html#what-are-follower-reads).
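- -For example, to run a query at a time as close to the present as is safe for follower reads (a sketch, using the hypothetical table `t` from the examples below; this requires an [Enterprise license](enterprise-licensing.html)): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME experimental_follower_read_timestamp(); -~~~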
- -## Examples - -### Select historical data (time-travel) - -Imagine this example represents the database's current data: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name, balance -  FROM accounts -  WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name        | balance | -+-------------+---------+ -| Edna Barath |     750 | -| Edna Barath |    2200 | -+-------------+---------+ -~~~ - -We could instead retrieve the values as they were on October 3, 2016 at 12:45 UTC: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name, balance -  FROM accounts -  AS OF SYSTEM TIME '2016-10-03 12:45:00' -  WHERE name = 'Edna Barath'; -~~~ -~~~ -+-------------+---------+ -| name        | balance | -+-------------+---------+ -| Edna Barath |     450 | -| Edna Barath |    2000 | -+-------------+---------+ -~~~ - - -### Using different timestamp formats - -Assuming the following statements are run at `2016-01-01 12:00:00`, they would execute as of `2016-01-01 08:00:00`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '2016-01-01 08:00:00' -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME 1451635200000000000 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '1451635200000000000' -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME '-4h' -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t AS OF SYSTEM TIME INTERVAL '-4h' -~~~ - -### Selecting from multiple tables - -{{site.data.alerts.callout_info}} -It is not yet possible to select from multiple tables at different timestamps. The entire query runs at the specified time in the past. -{{site.data.alerts.end}} - -When selecting over multiple tables in a single `FROM` clause, the `AS OF SYSTEM TIME` clause must appear at the very end and applies to the entire `SELECT` clause. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t, u, v AS OF SYSTEM TIME '-4h'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t JOIN u ON t.x = u.y AS OF SYSTEM TIME '-4h'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM (SELECT * FROM t), (SELECT * FROM u) AS OF SYSTEM TIME '-4h'; -~~~ - -### Using `AS OF SYSTEM TIME` in subqueries - -To enable time travel, the `AS OF SYSTEM TIME` clause must appear in the top-level statement itself. It is not valid to use it only in a [subquery](subqueries.html). - -For example, the following is invalid: - -~~~ -SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '-4h'), u -~~~ - -To facilitate the composition of larger queries from simpler queries, CockroachDB allows `AS OF SYSTEM TIME` in subqueries under the following conditions: - -- The top-level query also specifies `AS OF SYSTEM TIME`. -- All the `AS OF SYSTEM TIME` clauses specify the same timestamp. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM (SELECT * FROM t AS OF SYSTEM TIME '-4h') tp -  JOIN u ON tp.x = u.y -  AS OF SYSTEM TIME '-4h' -- same timestamp as above - OK. -  WHERE x < 123; -~~~ - -### Using `AS OF SYSTEM TIME` in transactions - -You can use the [`BEGIN`](begin-transaction.html) statement to execute the transaction using the database contents "as of" a specified time in the past.
- -{% include {{ page.version.version }}/sql/begin-transaction-as-of-system-time-example.md %} - -Alternatively, you can use the [`SET`](set-transaction.html) statement to execute the transaction using the database contents "as of" a specified time in the past. - -{% include {{ page.version.version }}/sql/set-transaction-as-of-system-time-example.md %} - -## See also - -- [Select Historical Data](select-clause.html#select-historical-data-time-travel) -- [Time-Travel Queries](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/) - -## Tech note - -{{site.data.alerts.callout_info}} -Although the following format is supported, it is not intended to be used by most users. -{{site.data.alerts.end}} - -HLC timestamps can be specified using a [`DECIMAL`](decimal.html). The integer part is the wall time in nanoseconds. The fractional part is the logical counter, a 10-digit integer. This is the same format as produced by the `cluster_logical_timestamp()` function. diff --git a/src/current/v19.1/authentication.md b/src/current/v19.1/authentication.md deleted file mode 100644 index 7b9408d68cf..00000000000 --- a/src/current/v19.1/authentication.md +++ /dev/null @@ -1,267 +0,0 @@ ---- -title: Authentication -summary: Learn about the authentication features for secure CockroachDB clusters. -toc: true ---- - -Authentication refers to the act of verifying the identity of the other party in communication. CockroachDB requires TLS 1.2 digital certificates for inter-node and client-node authentication, which require a Certificate Authority (CA) as well as keys and certificates for nodes, clients, and (optionally) the Admin UI. This document discusses how CockroachDB uses digital certificates and also gives a [conceptual overview](#background-on-public-key-cryptography-and-digital-certificates) of public key cryptography and digital certificates. - -- If you are familiar with public key cryptography and digital certificates, then reading the [Using digital certificates with CockroachDB](#using-digital-certificates-with-cockroachdb) section should be enough. -- If you are unfamiliar with public key cryptography and digital certificates, you might want to skip over to the [conceptual overview](#background-on-public-key-cryptography-and-digital-certificates) first and then come back to the [Using digital certificates with CockroachDB](#using-digital-certificates-with-cockroachdb) section. -- If you want to know how to create CockroachDB security certificates, see [Create Security Certificates](create-security-certificates.html). - -## Using digital certificates with CockroachDB - -CockroachDB uses both TLS 1.2 server and client certificates. Each CockroachDB node in a secure cluster must have a **node certificate**, which is a TLS 1.2 server certificate. Note that the node certificate is multi-functional, which means that the same certificate is presented irrespective of whether the node is acting as a server or a client. The nodes use these certificates to establish secure connections with clients and with other nodes.
Node certificates have the following requirements: - -- The hostname or address (IP address or DNS name) used to reach a node, either directly or through a load balancer, must be listed in the **Common Name** or **Subject Alternative Names** fields of the certificate: - - - The values specified in [`--listen-addr`](start-a-node.html#networking) and [`--advertise-addr`](start-a-node.html#networking) flags, or the node hostname and fully qualified hostname if not specified - - Any host addresses/names used to reach a specific node - - Any load balancer addresses/names or DNS aliases through which the node could be reached - - `localhost` and local address if connections are made through the loopback device on the same host - -- CockroachDB must be configured to trust the certificate authority that signed the certificate. - -Based on your security setup, you can use the [`cockroach cert` commands](create-security-certificates.html), [`openssl` commands](create-security-certificates-openssl.html), or a [custom CA](create-security-certificates-custom-ca.html) to generate all the keys and certificates. - -A CockroachDB cluster consists of multiple nodes and clients. The nodes can communicate with each other, with SQL clients, and with the Admin UI. In client-node SQL communication and client-UI communication, the node acts as a server, but in inter-node communication, a node may act as a server or a client. Hence authentication in CockroachDB involves: - -- Node authentication using [TLS 1.2](https://en.wikipedia.org/wiki/Transport_Layer_Security) digital certificates. -- Client authentication using TLS digital certificates, passwords, or [GSSAPI authentication](gssapi_authentication.html) (for Enterprise users). - -### Node authentication - -To set up a secure cluster without using an existing certificate authority, you'll need to generate the following files: - -- CA certificate -- Node certificate and key -- (Optional) UI certificate and key - -### Client authentication - -CockroachDB offers three methods for client authentication: - -- **Client certificate and key authentication**, which is available to all users. To ensure the highest level of security, we recommend only using client certificate and key authentication. - - Example: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --user=jpointsman - ~~~ - -- **Password authentication**, which is available to non-`root` users for whom you've created passwords. Password creation is supported only in secure clusters. - - Example: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs --user=jpointsman - ~~~ - - Note that the client still needs the CA certificate to validate the nodes' certificates. - -- [**GSSAPI authentication**](gssapi_authentication.html), which is available to [Enterprise users](enterprise-licensing.html). - -### Using `cockroach cert` or `openssl` commands - -You can use the [`cockroach cert` commands](create-security-certificates.html) or [`openssl` commands](create-security-certificates-openssl.html) to create the CA certificate and key, and node and client certificates and keys. - -Note that the node certificate created using `cockroach cert` or `openssl` is multi-functional, which means that the same certificate is presented irrespective of whether the node is acting as a server or a client. Thus all nodes must have the following: - -- `CN=node` for the special user `node` when the node acts as a client.
-- All IP addresses and DNS names for the node must be listed in the `Subject Alternative Name` field for when the node acts as a server. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate). - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`node.crt` | Server certificate created using the `cockroach cert` command.<br><br>`node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).<br><br>Must be signed by `ca.crt`. -`node.key` | Server key created using the `cockroach cert` command. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).<br><br>Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).<br><br>Must be signed by `ca.crt`. -`client.<username>.key` | Client key created using the `cockroach cert` command. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using a custom CA - -In the previous section, we discussed the scenario where the node and client certificates are signed by the CA created using the `cockroach cert` command. But what if you want to use an external CA, like your organizational CA or a public CA? In that case, our certificates might need some modification. Here’s why: - -As mentioned earlier, the node certificate is multi-functional, in that the same certificate is presented irrespective of whether the node is acting as a server or a client. To make the certificate multi-functional, the `node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the `Subject Alternative Names` field. - -But some CAs will not sign a certificate containing a `CN` that is not an IP address or domain name. Here's why: TLS client certificates are used to authenticate the client connecting to a server. Because most client certificates authenticate a user instead of a device, the certificates contain usernames instead of hostnames. This makes it difficult for public CAs to verify the client's identity, and hence most public CAs will not sign a client certificate. - -To get around this issue, we can split the node key and certificate into two: - -- `node.crt` and `node.key`: The node certificate to be presented when the node acts as a server, and the corresponding key. `node.crt` must have the list of IP addresses and DNS names listed in `Subject Alternative Names`. -- `client.node.crt` and `client.node.key`: The node certificate to be presented when the node acts as a client for another node, and the corresponding key. `client.node.crt` must have `CN=node`. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`node.crt` | Node certificate for when the node acts as a server.<br><br>All IP addresses and DNS names for the node must be listed in `Subject Alternative Name`. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).<br><br>Must be signed by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Node certificate for when the node acts as a client.<br><br>Must have `CN=node`.<br><br>Must be signed by `ca.crt`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the Admin UI, you need to place the certificate and key (`ui.crt` and `ui.key`, respectively) in the directory specified by the `--certs-dir` flag. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).<br><br>Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).<br><br>Must be signed by `ca.crt`. -`client.<username>.key` | Client key corresponding to `client.<username>.crt`. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using a public CA certificate to access the Admin UI for a secure cluster - -One of the limitations of using `cockroach cert` or `openssl` is that the browsers used to access the CockroachDB Admin UI do not trust the node certificates presented to them. Web browsers come preloaded with CA certificates from well-established entities (e.g., GlobalSign and DigiTrust). The CA certificate generated using `cockroach cert` or `openssl` is not preloaded in the browser, so on accessing the Admin UI for a secure cluster, you get the “Unsafe page” warning. You could add the CA certificate to the browser to avoid the warning, but that is not a recommended practice. Instead, you can use an established CA (for example, Let’s Encrypt) to create a certificate and key to access the Admin UI. - -Once you have the UI certificate and key, add them to the certificates directory specified by the `--certs-dir` flag in the `cockroach cert` command. The next time the browser tries to access the UI, the node will present the UI certificate instead of the node certificate, and you will no longer see the “Unsafe page” warning. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`node.crt` | Server certificate created using the `cockroach cert` command.<br><br>`node.crt` must have `CN=node` and the list of IP addresses and DNS names listed in the `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).<br><br>Must be signed by `ca.crt`. -`node.key` | Server key created using the `cockroach cert` command. -`ui.crt` | UI certificate signed by the public CA. `ui.crt` must have the IP addresses and DNS names used to reach the Admin UI listed in `Subject Alternative Name`. -`ui.key` | UI key corresponding to `ui.crt`. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate created using the `cockroach cert` command. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).<br><br>Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).<br><br>Must be signed by `ca.crt`. -`client.<username>.key` | Client key created using the `cockroach cert` command. - -Alternatively, you can use [password authentication](#client-authentication). Remember, the client still needs `ca.crt` for node authentication. - -### Using split CA certificates - -{{site.data.alerts.callout_danger}} -We do not recommend you use split CA certificates unless your organizational security practices require you to do so. -{{site.data.alerts.end}} - -You might encounter situations where you need separate CAs to sign and verify node and client certificates. In that case, you would need two CAs and their respective certificates and keys: `ca.crt` and `ca-client.crt`. - -**Node key and certificates** - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate to verify node certificates. -`ca-client.crt` | CA certificate to verify client certificates. -`node.crt` | Node certificate for when the node acts as a server.<br><br>All IP addresses and DNS names for the node must be listed in `Subject Alternative Name`. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate).<br><br>Must be signed by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Node certificate for when the node acts as a client. This certificate must be signed using `ca-client.crt`.<br><br>Must have `CN=node`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the Admin UI, you need to place the certificate and key (`ui.crt` and `ui.key`, respectively) in the directory specified by the `--certs-dir` flag. - -**Client key and certificates** - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).<br><br>Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).<br><br>Must be signed by `ca-client.crt`. -`client.<username>.key` | Client key corresponding to `client.<username>.crt`. - -## Authentication for cloud storage - -See [Backup file URLs](backup.html#backup-file-urls). - -## Authentication best practice - -As a security best practice, we recommend that you rotate the node, client, or CA certificates in the following scenarios: - -- The node, client, or CA certificates are expiring soon. -- Your organization's compliance policy requires periodic certificate rotation. -- The key (for a node, client, or CA) is compromised. -- You need to modify the contents of a certificate, for example, to add another DNS name or the IP address of a load balancer through which a node can be reached. In this case, you would need to rotate only the node certificates. - -For details about when and how to change security certificates without restarting nodes, see [Rotate Security Certificates](rotate-certificates.html). - -## Background on public key cryptography and digital certificates - -As mentioned above, CockroachDB uses the [TLS 1.2](https://en.wikipedia.org/wiki/Transport_Layer_Security) security protocol, which takes advantage of both symmetric encryption (to encrypt data in flight) and asymmetric encryption (to establish a secure channel and to **authenticate** the communicating parties). - -Authentication refers to the act of verifying the identity of the other party in communication. CockroachDB uses TLS 1.2 digital certificates for inter-node and client-node authentication, which require a Certificate Authority (CA) as well as keys and certificates for nodes, clients, and (optionally) the Admin UI. - -To understand how CockroachDB uses digital certificates, let's first understand what each of these terms means. - -Consider two people: Amy and Rosa, who want to communicate securely over an insecure computer network. The traditional solution is symmetric encryption, which involves encrypting and decrypting a plaintext message using a shared key. Amy encrypts her message using the key and sends the encrypted message across the insecure channel. Rosa decrypts the message using the same key and reads the message. This seems like a logical solution until you realize that you need a secure communication channel to send the encryption key. - -To solve this problem, cryptographers came up with **asymmetric encryption** to set up a secure communication channel over which an encryption key can be shared. - -### Asymmetric encryption - -Asymmetric encryption involves a pair of keys instead of a single key. The two keys are called the **public key** and the **private key**. The keys consist of very long numbers linked mathematically in such a way that a message encrypted using a public key can only be decrypted using the private key, and vice versa. The message cannot be decrypted using the same key that was used to encrypt it. - -So going back to our example, Amy and Rosa both have their own public-private key pairs. They keep their private keys safe with themselves and publicly distribute their public keys. Now when Amy wants to send a message to Rosa, she requests Rosa's public key, encrypts the message using Rosa’s public key, and sends the encrypted message. Rosa uses her own private key to decrypt the message. - -But what if a malicious imposter intercepts the communication? The imposter might pose as Rosa and send their public key instead of Rosa’s.
There's no way for Amy to know that the public key she received isn’t Rosa’s, so she would end up using the imposter's public key to encrypt the message and send it to the imposter. The imposter can use their own private key and decrypt and read the message, thus compromising the secure communication channel between Amy and Rosa. - -To prevent this security risk, Amy needs to be sure that the public key she received was indeed Rosa’s. That’s where the Certificate Authority (CA) comes into the picture. - -### Certificate authority - -Certificate authorities are established entities with their own public and private key pairs. They act as a root of trust and verify the identities of the communicating parties and validate their public keys. CAs can be public and paid entities (e.g., GeoTrust and Comodo), or public and free CAs (e.g., Let’s Encrypt), or your own organizational CA (e.g., CockroachDB CA). The CAs' public keys are typically widely distributed (e.g., your browser comes preloaded with certs from popular CAs like DigiCert, GeoTrust, and so on). - -Think of the CA as the passport authority of a country. When you want to get your passport as your identity proof, you submit an application to your country's passport authority. The application contains important identifying information about you: your name, address, nationality, date of birth, and so on. The passport authority verifies the information they received and validates your identity. They then issue a document - the passport - that can be presented anywhere in the world to verify your identity. For example, the TSA agent at the airport does not know you and has no reason to trust you are who you say you are. However, they trust the passport authority and thus accept your identity as presented on your passport because it has been verified and issued by the passport authority. - -Going back to our example and assuming that we trust the CA, Rosa needs to get her public key verified by the CA. She sends a CSR (Certificate Signing Request) to the CA that contains her public key and relevant identifying information. The CA will verify that it is indeed Rosa’s public key and information, _sign_ the CSR using the CA's own private key, and generate a digital document called the **digital certificate**. In our passport analogy, this is Rosa's passport containing verified identifying information about her and trusted by everyone who trusts the CA. The next time Rosa wants to establish her identity, she will present her digital certificate. - -### Digital certificate - -A public key is shared using a digital certificate signed by a CA using the CA's private key. The digital certificate contains: - -- The certificate owner’s public key -- Information about the certificate owner -- The CA's digital signature - -### Digital signature - -The CA's digital signature works as follows: The certificate contents are put through a mathematical function to create a **hash value**. This hash value is encrypted using the CA's private key to generate the **digital signature**. The digital signature is added to the digital certificate. In our example, the CA adds their digital signature to Rosa's certificate validating her identity and her public key. - -As discussed [earlier](#certificate-authority), the CA's public key is widely distributed. In our example, Amy already has the CA's public key. 
Now when Rosa presents her digital certificate containing her public key, Amy uses the CA's public key to decrypt the digital signature on Rosa's certificate and gets the hash value encoded in the digital signature. Amy also generates the hash value for the certificate on her own. If the hash values match, then Amy can be sure that the certificate, and hence the public key it contains, indeed belongs to Rosa; otherwise, she can determine that the communication channel has been compromised and refuse further contact. - -### How it all works together - -Let's see how the digital certificate is used in client-server communication: the client (e.g., a web browser) has the CA certificate (containing the CA's public key). When the client receives a server's certificate signed by the same CA, it can use the CA certificate to verify the server's certificate, thus validating the server's identity, and securely connect to the server. The important thing here is that the client needs to have the CA certificate. If you use your own organizational CA instead of a publicly established CA, you need to make sure you distribute the CA certificate to all the clients. - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](secure-a-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/authorization.md b/src/current/v19.1/authorization.md deleted file mode 100644 index 520bf1a8219..00000000000 --- a/src/current/v19.1/authorization.md +++ /dev/null @@ -1,308 +0,0 @@ ---- -title: Authorization -summary: Learn about the authorization features for secure CockroachDB clusters. -toc: true ---- - -User authorization is the act of defining access policies for authenticated CockroachDB users. CockroachDB allows you to create, manage, and remove your cluster's [users](#create-and-manage-users) and assign SQL-level [privileges](#assign-privileges) to the users. Additionally, if you have an [Enterprise license](get-started-with-enterprise-trial.html), you can use [role-based access control (RBAC)](#create-and-manage-roles) for simplified user management. - -## Create and manage users - -You can use either of the following methods to create and manage users: - -- Use the [`CREATE USER`](create-user.html) and [`DROP USER`](drop-user.html) statements to create and remove users. -- Use the [`cockroach user` command](create-and-manage-users.html) with appropriate flags. - -## Create and manage roles - -Roles are SQL groups that contain any number of users and roles as members. - -### Terminology - -Term | Description -----|------------ -Role | A group containing any number of [users](create-and-manage-users.html) or other roles.

      Note: All users belong to the `public` role, to which you can [grant](grant.html) and [revoke](revoke.html) privileges. -Role admin | A member of the role that's allowed to modify role membership. To create a role admin, use [`WITH ADMIN OPTION`](grant-roles.html#grant-the-admin-option). -Superuser / Admin | A member of the `admin` role. Only superusers can [`CREATE ROLE`](create-role.html) or [`DROP ROLE`](drop-role.html). The `admin` role is created by default and cannot be dropped. -`root` | A user that exists by default as a member of the `admin` role. The `root` user must always be a member of the `admin` role. -Inherit | The behavior that grants a role's privileges to its members. -Direct member | A user or role that is an immediate member of the role.

      Example: `A` is a member of `B`. -Indirect member | A user or role that is a member of the role by association.

Example: `A` is a member of `C` ... is a member of `B` where "..." is an arbitrary number of memberships. - -To create and manage your cluster's roles, use the following statements: - -- [`CREATE ROLE` (Enterprise)](create-role.html) -- [`DROP ROLE` (Enterprise)](drop-role.html) -- [`GRANT <roles>`](grant-roles.html) -- [`REVOKE <roles>`](revoke-roles.html) -- [`GRANT <privileges>`](grant.html) -- [`REVOKE <privileges>`](revoke.html) -- [`SHOW ROLES`](show-roles.html) -- [`SHOW GRANTS`](show-grants.html) - -## Assign privileges - -In CockroachDB, privileges are granted to [users](create-and-manage-users.html) and [roles](#create-and-manage-roles) at the database and table levels. They are not yet supported for other granularities such as columns or rows. - -When a user connects to a database, either via the [built-in SQL client](use-the-built-in-sql-client.html) or a [client driver](install-client-drivers.html), CockroachDB checks the user and role's privileges for each statement executed. If the user does not have sufficient privileges for a statement, CockroachDB returns an error. - -For the privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html). - -### Supported privileges - -For a full list of supported privileges, see the [`GRANT`](grant.html) documentation. - -### Granting privileges - -To grant privileges to a role or user, use the [`GRANT`](grant.html) statement, for example: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT SELECT, INSERT ON bank.accounts TO maxroach; -~~~ - -### Showing privileges - -To show privileges granted to roles or users, use the [`SHOW GRANTS`](show-grants.html) statement, for example: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON DATABASE bank FOR maxroach; -~~~ - -### Revoking privileges - -To revoke privileges from roles or users, use the [`REVOKE`](revoke.html) statement, for example: - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE INSERT ON bank.accounts FROM maxroach; -~~~ - -## Example - -{{site.data.alerts.callout_info}} -The [`CREATE ROLE`](create-role.html) command used in this example is an enterprise-only feature. To request a 30-day trial license, see [Get CockroachDB](https://www.cockroachlabs.com/get-cockroachdb/enterprise/). - -Note that [`GRANT <roles>`](grant-roles.html) does not require an enterprise license. -{{site.data.alerts.end}} - -For the purpose of this example, you need an [enterprise license](enterprise-licensing.html) and one CockroachDB node running in insecure mode: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=roles \ ---listen-addr=localhost:26257 -~~~ - -1. As the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach user set maxroach --insecure - ~~~ - -2. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -3. Create a database and set it as the default: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE test_roles; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET DATABASE = test_roles; - ~~~ - -4. 
[Create a role](create-role.html) and then [list all roles](show-roles.html) in your database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE ROLE system_ops; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW ROLES; - ~~~ - - ~~~ - +------------+ - | rolename | - +------------+ - | admin | - | system_ops | - +------------+ - ~~~ - -5. Grant privileges to the `system_ops` role you created: - - {% include copy-clipboard.html %} - ~~~ sql - > GRANT CREATE, SELECT ON DATABASE test_roles TO system_ops; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON DATABASE test_roles; - ~~~ - - ~~~ - +------------+--------------------+------------+------------+ - | Database | Schema | User | Privileges | - +------------+--------------------+------------+------------+ - | test_roles | crdb_internal | admin | ALL | - | test_roles | crdb_internal | root | ALL | - | test_roles | crdb_internal | system_ops | CREATE | - | test_roles | crdb_internal | system_ops | SELECT | - | test_roles | information_schema | admin | ALL | - | test_roles | information_schema | root | ALL | - | test_roles | information_schema | system_ops | CREATE | - | test_roles | information_schema | system_ops | SELECT | - | test_roles | pg_catalog | admin | ALL | - | test_roles | pg_catalog | root | ALL | - | test_roles | pg_catalog | system_ops | CREATE | - | test_roles | pg_catalog | system_ops | SELECT | - | test_roles | public | admin | ALL | - | test_roles | public | root | ALL | - | test_roles | public | system_ops | CREATE | - | test_roles | public | system_ops | SELECT | - +------------+--------------------+------------+------------+ - ~~~ - -6. Add the `maxroach` user to the `system_ops` role: - - {% include copy-clipboard.html %} - ~~~ sql - > GRANT system_ops TO maxroach; - ~~~ - -7. To test the privileges you just added to the `system_ops` role, use `\q` or `ctrl-d` to exit the interactive shell, and then open the shell again as the `maxroach` user (who is a member of the `system_ops` role): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --user=maxroach --database=test_roles --insecure - ~~~ - -8. As the `maxroach` user, create a table: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE employees ( - id UUID DEFAULT uuid_v4()::UUID PRIMARY KEY, - profile JSONB - ); - ~~~ - - We were able to create the table because `maxroach` has `CREATE` privileges. - -9. As the `maxroach` user, try to drop the table: - - {% include copy-clipboard.html %} - ~~~ sql - > DROP TABLE employees; - ~~~ - - ~~~ - pq: user maxroach does not have DROP privilege on relation employees - ~~~ - - You cannot drop the table because your current user (`maxroach`) is a member of the `system_ops` role, which doesn't have `DROP` privileges. - -10. `maxroach` has `CREATE` and `SELECT` privileges, so try a `SHOW` statement: - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON TABLE employees; - ~~~ - - ~~~ - +------------+--------+-----------+------------+------------+ - | Database | Schema | Table | User | Privileges | - +------------+--------+-----------+------------+------------+ - | test_roles | public | employees | admin | ALL | - | test_roles | public | employees | root | ALL | - | test_roles | public | employees | system_ops | CREATE | - | test_roles | public | employees | system_ops | SELECT | - +------------+--------+-----------+------------+------------+ - ~~~ - -11. Now switch back to the `root` user to test more of the SQL statements related to roles. 
Use `\q` or `ctrl-d` to exit the interactive shell, and then open the shell again as the `root` user: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -12. As the `root` user, revoke privileges and then drop the `system_ops` role: - - {% include copy-clipboard.html %} - ~~~ sql - > REVOKE ALL ON DATABASE test_roles FROM system_ops; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON DATABASE test_roles; - ~~~ - ~~~ - +------------+--------------------+-------+------------+ - | Database | Schema | User | Privileges | - +------------+--------------------+-------+------------+ - | test_roles | crdb_internal | admin | ALL | - | test_roles | crdb_internal | root | ALL | - | test_roles | information_schema | admin | ALL | - | test_roles | information_schema | root | ALL | - | test_roles | pg_catalog | admin | ALL | - | test_roles | pg_catalog | root | ALL | - | test_roles | public | admin | ALL | - | test_roles | public | root | ALL | - +------------+--------------------+-------+------------+ - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > REVOKE ALL ON TABLE test_roles.* FROM system_ops; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW GRANTS ON TABLE test_roles.*; - ~~~ - ~~~ - +------------+--------+-----------+-------+------------+ - | Database | Schema | Table | User | Privileges | - +------------+--------+-----------+-------+------------+ - | test_roles | public | employees | admin | ALL | - | test_roles | public | employees | root | ALL | - +------------+--------+-----------+-------+------------+ - ~~~ - - {{site.data.alerts.callout_info}}All of a role or user's privileges must be revoked before it can be dropped.{{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ sql - > DROP ROLE system_ops; - ~~~ - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [SQL Statements](sql-statements.html) -- [`CREATE ROLE`](create-role.html) -- [`DROP ROLE`](drop-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT <privileges>`](grant.html) -- [`GRANT <roles>`](grant-roles.html) -- [`REVOKE <privileges>`](revoke.html) -- [`REVOKE <roles>`](revoke-roles.html) -- [`SHOW GRANTS`](show-grants.html) diff --git a/src/current/v19.1/backup-and-restore.md b/src/current/v19.1/backup-and-restore.md deleted file mode 100644 index f6049c88ba0..00000000000 --- a/src/current/v19.1/backup-and-restore.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Back up and Restore Data -summary: Learn how to back up and restore a CockroachDB database. -toc: true ---- - -Because CockroachDB is designed with high fault tolerance, backups are primarily needed for disaster recovery (i.e., if your cluster loses a majority of its nodes). Isolated issues (such as small-scale node outages) do not require any intervention. However, as an operational best practice, we recommend taking regular backups of your data. - -Based on your [license type](https://www.cockroachlabs.com/pricing/), CockroachDB offers two methods to back up and restore your cluster's data: Enterprise and Core. - -## Perform Enterprise backup and restore - -If you have an [Enterprise license](enterprise-licensing.html), you can use the [`BACKUP`](backup.html) statement to efficiently back up your cluster's schemas and data to popular cloud services such as AWS S3, Google Cloud Storage, or NFS, and the [`RESTORE`](restore.html) statement to efficiently restore schema and data as necessary.
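For example, a full backup of a hypothetical `bank` database to an S3 bucket might look like the following sketch; the bucket name and credential values are placeholders, and the available URL parameters are described in [Backup file URLs](backup.html#backup-file-urls):

{% include copy-clipboard.html %}
~~~ sql
> BACKUP DATABASE bank \
TO 's3://acme-co-backup/bank-weekly?AWS_ACCESS_KEY_ID=<key_id>&AWS_SECRET_ACCESS_KEY=<secret>' \
AS OF SYSTEM TIME '-10s';
~~~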
- -### Manual full backups - -In most cases, it's recommended to use the [`BACKUP`](backup.html) command to take full nightly backups of each database in your cluster: - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE <database_name> TO '<full_backup_location>'; -~~~ - -If it's ever necessary, you can then use the [`RESTORE`](restore.html) command to restore a database: - -{% include copy-clipboard.html %} -~~~ sql -> RESTORE DATABASE <database_name> FROM '<full_backup_location>'; -~~~ - -### Manual full and incremental backups - -If a database increases to a size where it is no longer feasible to take nightly full backups, you might want to consider taking periodic full backups (e.g., weekly) with nightly incremental backups. Incremental backups are storage efficient and faster than full backups for larger databases. - -Periodically run the [`BACKUP`](backup.html) command to take a full backup of your database: - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE <database_name> TO '<full_backup_location>'; -~~~ - -Then create nightly incremental backups based on the full backups you've already created. - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE <database_name> TO '<incremental_backup_location>' -INCREMENTAL FROM '<full_backup_location>', '<previous_incremental_backup_location>'; -~~~ - -If it's ever necessary, you can then use the [`RESTORE`](restore.html) command to restore a database: - -{% include copy-clipboard.html %} -~~~ sql -> RESTORE <database_name> FROM '<full_backup_location>', '<previous_incremental_backup_location>'; -~~~ - -{{site.data.alerts.callout_success}} -[Restoring from incremental backups](restore.html#restore-from-incremental-backups) requires previous full and incremental backups. -{{site.data.alerts.end}} - -### Automated full and incremental backups - -You can automate your backups using scripts and your preferred method of automation, such as cron jobs. - -For your reference, we have created this [sample backup script](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/backup.sh) that you can customize to automate your backups. - -In the sample script, configure the day of the week for which you want to create full backups. Running the script daily will create a full backup on the configured day, and on other days, it'll create incremental backups. The script tracks the recently created backups in a separate file titled `recent_backups.txt` and uses this file as a base for the subsequent incremental backups. - -1. Download the [sample backup script](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/backup.sh): - - {% include copy-clipboard.html %} - ~~~ shell - $ wget -qO- https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{ page.version.version }}/prod-deployment/backup.sh - ~~~ - - Alternatively, you can create the file yourself and copy the script into it: - - {% include copy-clipboard.html %} - ~~~ shell - #!/bin/bash - - set -euo pipefail - - # This script creates full backups when run on the configured - # day of the week and incremental backups when run on other days, and tracks - # recently created backups in a file to pass as the base for incremental backups. - - full_day="<day_of_the_week>" # Must match (including case) the output of `LC_ALL=C date +%A`. - what="DATABASE <database_name>" # The name of the database you want to back up. - base="/backups" # The URL where you want to store the backup. - extra="" # Any additional parameters that need to be appended to the BACKUP URI (e.g., AWS key params). - recent=recent_backups.txt # File in which recent backups are tracked.
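    # Example customization (hypothetical values; storage parameters go in `extra` as a query string):
    #   full_day="Monday"
    #   what="DATABASE bank"
    #   extra="?AWS_ACCESS_KEY_ID=<key_id>&AWS_SECRET_ACCESS_KEY=<secret>"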
- backup_parameters= # e.g., "WITH revision_history" - - # Customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, and additional flags as needed to connect to the SQL client. - runsql() { cockroach sql --insecure -e "$1"; } - - destination="${base}/$(date +"%Y%m%d-%H%M")${extra}" - - prev= - while read -r line; do - [[ "$prev" ]] && prev+=", " - prev+="'$line'" - done < "$recent" - - if [[ "$(LC_ALL=C date +%A)" = "$full_day" || ! "$prev" ]]; then - runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' $backup_parameters" - echo "$destination" > "$recent" - else - destination="${base}/$(date +"%Y%m%d-%H%M")-inc${extra}" - runsql "BACKUP $what TO '$destination' AS OF SYSTEM TIME '-1m' INCREMENTAL FROM $prev $backup_parameters" - echo "$destination" >> "$recent" - fi - - echo "backed up to ${destination}" - ~~~ - -2. In the sample backup script, customize the values for the following variables: - - Variable | Description - -----|------------ - `full_day` | The day of the week on which you want to take a full backup. - `what` | The name of the database you want to back up (i.e., create backups of all tables and views in the database). - `base` | The URL where you want to store the backup.

      URL format: `[scheme]://[host]/[path]`

      For information about the components of the URL, see [Backup File URLs](backup.html#backup-file-urls). - `extra`| The parameters required for the storage.

      Parameters format: `?[parameters]`

For information about the storage parameters, see [Backup File URLs](backup.html#backup-file-urls). -`backup_parameters` | Additional [backup parameters](backup.html#parameters) you might want to specify. - - Also customize the `cockroach sql` command with `--host`, `--certs-dir` or `--insecure`, and [additional flags](use-the-built-in-sql-client.html#flags) as required. - -3. Change the file permissions to make the script executable: - - {% include copy-clipboard.html %} - ~~~ shell - $ chmod +x backup.sh - ~~~ - -4. Run the backup script: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./backup.sh - ~~~ - -{{site.data.alerts.callout_info}} -If you miss an incremental backup, delete the `recent_backups.txt` file and run the script. It'll take a full backup for that day and incremental backups for subsequent days. -{{site.data.alerts.end}} - -## Perform Core backup and restore - -If you do not have an Enterprise license, you can perform a Core backup. Run the [`cockroach dump`](sql-dump.html) command to dump all the tables in the database to a new file (`backup.sql` in the following example): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump <database_name> > backup.sql -~~~ - -To restore a database from a Core backup, [use the `cockroach sql` command to execute the statements in the backup file](sql-dump.html#restore-a-table-from-a-backup-file): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --database=[database name] < backup.sql -~~~ - -{{site.data.alerts.callout_success}} -If you created a backup from another database and want to import it into CockroachDB, see [Import data](migration-overview.html). -{{site.data.alerts.end}} - -## See also - -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`SQL DUMP`](sql-dump.html) -- [`IMPORT`](migration-overview.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/backup.md b/src/current/v19.1/backup.md deleted file mode 100644 index 53617812ae8..00000000000 --- a/src/current/v19.1/backup.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: BACKUP -summary: Back up your CockroachDB cluster to cloud storage services such as AWS S3, Google Cloud Storage, or NFS. -toc: true ---- - -{{site.data.alerts.callout_danger}} -The `BACKUP` feature is only available to [enterprise](https://www.cockroachlabs.com/product/cockroachdb/) users. For non-enterprise backups, see [`cockroach dump`](sql-dump.html). -{{site.data.alerts.end}} - -CockroachDB's `BACKUP` [statement](sql-statements.html) allows you to create full or incremental backups of your cluster's schema and data that are consistent as of a given timestamp. Backups can be with or without [revision history](backup.html#backups-with-revision-history). - -Because CockroachDB is designed with high fault tolerance, these backups are designed primarily for disaster recovery (i.e., if your cluster loses a majority of its nodes) through [`RESTORE`](restore.html). Isolated issues (such as small-scale node outages) do not require any intervention. - - -## Functional details - -### Backup targets - -You can back up entire tables (which automatically includes their indexes) or [views](views.html). Backing up a database simply backs up all of its tables and views. - -{{site.data.alerts.callout_info}} -`BACKUP` only offers table-level granularity; it _does not_ support backing up subsets of a table.
-{{site.data.alerts.end}} - -### Object dependencies - -Dependent objects must be backed up at the same time as the objects they depend on. - -Object | Depends On -------|----------- -Table with [foreign key](foreign-key.html) constraints | The table it `REFERENCES`; however, this dependency can be [removed during the restore](restore.html#skip_missing_foreign_keys). -Table with a [sequence](create-sequence.html) | The sequence it uses; however, this dependency can be [removed during the restore](restore.html#skip_missing_sequences). -[Views](views.html) | The tables used in the view's `SELECT` statement. -[Interleaved tables](interleave-in-parent.html) | The parent table in the [interleaved hierarchy](interleave-in-parent.html#interleaved-hierarchy). - -### Users and privileges - -The `system.users` table stores your users and their passwords. To restore your users, you must first back up the `system.users` table, and then use [this procedure](restore.html#restoring-users-from-system-users-backup). - -Restored tables inherit privilege grants from the target database; they do not preserve privilege grants from the backed up table because the restoring cluster may have different users. - -Table-level privileges must be [granted to users](grant.html) after the restore is complete. - -### Backup types - -CockroachDB offers two types of backups: full and incremental. - -#### Full backups - -Full backups contain an unreplicated copy of your data and can always be used to restore your cluster. These files are roughly the size of your data and require greater resources to produce than incremental backups. You can take full backups as of a given timestamp and (optionally) include the available [revision history](backup.html#backups-with-revision-history). - -#### Incremental backups - -Incremental backups are smaller and faster to produce than full backups because they contain only the data that has changed since a base set of backups you specify (which must include one full backup, and can include many incremental backups). You can take incremental backups either as of a given timestamp or with full [revision history](backup.html#backups-with-revision-history). - -**Note the following restriction:** Incremental backups can only be created within the garbage collection period of the base backup's most recent timestamp. This is because incremental backups are created by finding which data has been created or modified since the most recent timestamp in the base backup, and that timestamp data is deleted by the garbage collection process. - -You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html). - -### Backups with revision history - -{% include {{ page.version.version }}/misc/beta-warning.md %} - -You can create full or incremental backups with revision history: - -- Taking full backups with revision history allows you to back up every change made within the garbage collection period leading up to and including the given timestamp. -- Taking incremental backups with revision history allows you to back up every change made since the last backup and within the garbage collection period leading up to and including the given timestamp. You can take incremental backups with revision history even when your previous full or incremental backups were taken without revision history. - -You can configure garbage collection periods using the `ttlseconds` [replication zone setting](configure-replication-zones.html). Taking backups with revision history allows for point-in-time restores within the revision history.
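To widen that window, you can raise the garbage collection period for the relevant table. A sketch, assuming the v19.1 `CONFIGURE ZONE` syntax and a hypothetical `bank.accounts` table (the TTL value is a placeholder):

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE bank.accounts CONFIGURE ZONE USING gc.ttlseconds = 100000;
~~~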
## Performance - -The `BACKUP` process minimizes its impact on the cluster's performance by distributing work to all nodes. Each node backs up only a specific subset of the data it stores (those for which it serves writes; more details about this architectural concept forthcoming), with no two nodes backing up the same data. - -For best performance, we also recommend always starting backups with a specific [timestamp](timestamp.html) at least 10 seconds in the past. For example: - -~~~ sql -> BACKUP...AS OF SYSTEM TIME '-10s'; -~~~ - -This improves performance by decreasing the likelihood that the `BACKUP` will be [retried because it contends with other statements/transactions](transactions.html#transaction-retries). However, because `AS OF SYSTEM TIME` returns historical data, your reads might be stale. - -## Automating backups - -We recommend automating daily backups of your cluster. - -To automate backups, you must have a client send the `BACKUP` statement to the cluster. - -Once the backup is complete, your client will receive a `BACKUP` response. - -## Viewing and controlling backup jobs - -After CockroachDB successfully initiates a backup, it registers the backup as a job, which you can view with [`SHOW JOBS`](show-jobs.html). - -After the backup has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html). - -{{site.data.alerts.callout_info}} -If initiated correctly, the statement returns when the backup is finished or if it encounters an error. In some cases, the backup can continue after an error has been returned (the error message will tell you that the backup has resumed in the background). -{{site.data.alerts.end}} - -## Synopsis - -<div>
      -{% include {{ page.version.version }}/sql/diagrams/backup.html %} -
      - -{{site.data.alerts.callout_info}} -The `BACKUP` statement cannot be used within a [transaction](transactions.html). -{{site.data.alerts.end}} - -## Required privileges - -Only members of the `admin` role can run `BACKUP`. By default, the `root` user belongs to the `admin` role. - -## Parameters - -| Parameter | Description | -|-----------|-------------| -| `table_pattern` | The table or [view](views.html) you want to back up. | -| `name` | The name of the database you want to back up (i.e., create backups of all tables and views in the database).| -| `destination` | The URL where you want to store the backup.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls). | -| `AS OF SYSTEM TIME timestamp` | Back up data as it existed as of [`timestamp`](as-of-system-time.html). The `timestamp` must be more recent than your cluster's last garbage collection (which defaults to occur every 25 hours, but is [configurable per table](configure-replication-zones.html#replication-zone-variables)). | -| `WITH revision_history` | Create a backup with full [revision history](backup.html#backups-with-revision-history) that records every change made to the cluster within the garbage collection period leading up to and including the given timestamp. | -| `INCREMENTAL FROM full_backup_location` | Create an incremental backup using the full backup stored at the URL `full_backup_location` as its base. For information about this URL structure, see [Backup File URLs](#backup-file-urls).

      **Note:** It is not possible to create an incremental backup if one or more tables were [created](create-table.html), [dropped](drop-table.html), or [truncated](truncate.html) after the full backup. In this case, you must create a new [full backup](#full-backups). | -| `incremental_backup_location` | Create an incremental backup that includes all backups listed at the provided URLs.

      Lists of incremental backups must be sorted from oldest to newest. The newest incremental backup's timestamp must be within the table's garbage collection period.

      For information about this URL structure, see [Backup File URLs](#backup-file-urls).

      For more information about garbage collection, see [Configure Replication Zones](configure-replication-zones.html#replication-zone-variables). | - -### Backup file URLs - -We will use the URL provided to construct a secure API call to the service you specify. The path to each backup must be unique, and the URL for your backup's destination/locations must use the following format: - -{% include {{ page.version.version }}/misc/external-urls.md %} - -## Examples - -Per our guidance in the [Performance](#performance) section, we recommend starting backups from a time at least 10 seconds in the past using [`AS OF SYSTEM TIME`](as-of-system-time.html). - -### Backup a single table or view - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup multiple tables - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP bank.customers, bank.accounts \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup an entire database - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '-10s'; -~~~ - -### Backup with revision history - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-27-weekly' \ -AS OF SYSTEM TIME '-10s' WITH revision_history; -~~~ - -### Create incremental backups - -Incremental backups must be based off of full backups you've already created. - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/db/bank/2017-03-29-nightly' \ -AS OF SYSTEM TIME '-10s' \ -INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly'; -~~~ - -### Create incremental backups with revision history - -{% include copy-clipboard.html %} -~~~ sql -> BACKUP DATABASE bank \ -TO 'gs://acme-co-backup/database-bank-2017-03-29-nightly' \ -AS OF SYSTEM TIME '-10s' \ -INCREMENTAL FROM 'gs://acme-co-backup/database-bank-2017-03-27-weekly', 'gs://acme-co-backup/database-bank-2017-03-28-nightly' WITH revision_history; -~~~ - -## See also - -- [`RESTORE`](restore.html) -- [Configure Replication Zones](configure-replication-zones.html) diff --git a/src/current/v19.1/begin-transaction.md b/src/current/v19.1/begin-transaction.md deleted file mode 100644 index f6474b6a99b..00000000000 --- a/src/current/v19.1/begin-transaction.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -title: BEGIN -summary: Initiate a SQL transaction with the BEGIN statement in CockroachDB. -toc: true ---- - -The `BEGIN` [statement](sql-statements.html) initiates a [transaction](transactions.html), which either successfully executes all of the statements it contains or none at all. - -{{site.data.alerts.callout_danger}} -When using transactions, your application should include logic to [retry transactions](transactions.html#transaction-retries) that are aborted to break a dependency cycle between concurrent transactions. -{{site.data.alerts.end}} - - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/begin_transaction.html %} -
      - -## Required privileges - -No [privileges](authorization.html#assign-privileges) are required to initiate a transaction. However, privileges are required for each statement within a transaction. - -## Aliases - -In CockroachDB, the following are aliases for the `BEGIN` statement: - -- `BEGIN TRANSACTION` -- `START TRANSACTION` - -## Parameters - - Parameter | Description ------------|------------- -`PRIORITY` | If you do not want the transaction to run with `NORMAL` priority, you can set it to `LOW` or `HIGH`.

      Transactions with higher priority are less likely to need to be retried.

      For more information, see [Transactions: Priorities](transactions.html#transaction-priorities).

      **Default**: `NORMAL` -`READ` | Set the transaction access mode to `READ ONLY` or `READ WRITE`. The current transaction access mode is also exposed as the [session variable](show-vars.html) `transaction_read_only`.

      **Default**: `READ WRITE` -`AS OF SYSTEM TIME` | New in v19.1 Execute the transaction using the database contents "as of" a specified time in the past.

      The `AS OF SYSTEM TIME` clause can be used only when the transaction is read-only. If the transaction contains any writes, or if the `READ WRITE` mode is specified, an error will be returned.

      For more information, see [AS OF SYSTEM TIME](as-of-system-time.html).

- - CockroachDB now only supports `SERIALIZABLE` isolation, so transactions can no longer be meaningfully set to any other `ISOLATION LEVEL`. In previous versions of CockroachDB, you could set transactions to `SNAPSHOT` isolation, but that feature has been removed. - -## Examples - -### Begin a transaction - -#### Use default settings - -Without modifying the `BEGIN` statement, the transaction uses `SERIALIZABLE` isolation and `NORMAL` priority. - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> RELEASE SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ - -{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}} - -#### Change priority - -You can set a transaction's priority to `LOW` or `HIGH`. - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN PRIORITY HIGH; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE products SET inventory = 0 WHERE sku = '8675309'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> RELEASE SAVEPOINT cockroach_restart; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> COMMIT; -~~~ - -You can also set a transaction's priority with [`SET TRANSACTION`](set-transaction.html). - -{{site.data.alerts.callout_danger}} -This example assumes you're using [client-side intervention to handle transaction retries](transactions.html#client-side-intervention). -{{site.data.alerts.end}} - -### Use the `AS OF SYSTEM TIME` option - -You can execute the transaction using the database contents "as of" a specified time in the past. - -{% include {{ page.version.version }}/sql/begin-transaction-as-of-system-time-example.md %} - -### Begin a transaction with automatic retries - -CockroachDB will [automatically retry](transactions.html#transaction-retries) all transactions that contain both `BEGIN` and `COMMIT` in the same batch. Batching is controlled by your driver or client's behavior, and means that CockroachDB receives all of the statements as a single unit, instead of as a number of separate requests. - -From the perspective of CockroachDB, a transaction sent as a batch looks like this: - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; - -> DELETE FROM customers WHERE id = 1; - -> DELETE FROM orders WHERE customer = 1; - -> COMMIT; -~~~ - -However, in your application's code, batched transactions are often just multiple statements sent at once. For example, in Go, this transaction would be sent as a single batch (and automatically retried): - -~~~ go -db.Exec( - "BEGIN; - - DELETE FROM customers WHERE id = 1; - - DELETE FROM orders WHERE customer = 1; - - COMMIT;" -) -~~~ - -Issuing statements this way signals to CockroachDB that you do not need to change any of the statement's values if the transaction doesn't immediately succeed, so it can continually retry the transaction until it's accepted.
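When a transaction cannot be sent as a single batch, the client must handle retryable errors itself. The following is a minimal sketch of the client-side retry protocol described in [Transactions](transactions.html#client-side-intervention); it assumes retryable errors reach the client with SQLSTATE `40001`:

~~~ sql
> BEGIN;

> SAVEPOINT cockroach_restart;

-- Issue the transaction's statements. If any statement returns a
-- retryable error (SQLSTATE 40001), roll back to the savepoint and
-- reissue the statements:

> ROLLBACK TO SAVEPOINT cockroach_restart;

-- Once all statements have succeeded, release the savepoint and commit:

> RELEASE SAVEPOINT cockroach_restart;

> COMMIT;
~~~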
- -## See also - -- [Transactions](transactions.html) -- [`COMMIT`](commit-transaction.html) -- [`SAVEPOINT`](savepoint.html) -- [`RELEASE SAVEPOINT`](release-savepoint.html) -- [`ROLLBACK`](rollback-transaction.html) diff --git a/src/current/v19.1/bit.md b/src/current/v19.1/bit.md deleted file mode 100644 index 7ab5d654d94..00000000000 --- a/src/current/v19.1/bit.md +++ /dev/null @@ -1,124 +0,0 @@ ---- -title: BIT -summary: The BIT and BIT VARYING data types store bit arrays. -toc: true ---- - -The `BIT` and `VARBIT` [data types](data-types.html) store bit arrays. -With `BIT`, the length is fixed; with `VARBIT`, the length can be variable. - -## Aliases - -The name `BIT VARYING` is an alias for `VARBIT`. - -## Syntax - -Bit array constants are expressed as literals. For example, `B'100101'` denotes an array of 6 bits. - -For more information about bit array constants, see the [constants documentation on bit array literals](sql-constants.html#bit-array-literals). - -For usage, see the [Example](#example) below. - -## Size - -The number of bits in a `BIT` value is determined as follows: - -| Type declaration | Logical size | |------------------|-----------------------------------| | BIT | 1 bit | | BIT(N) | N bits | | VARBIT | variable with no maximum | | VARBIT(N) | variable with a maximum of N bits | - -The effective size of a `BIT` value is larger than its logical number of bits by a bounded constant factor. Internally, CockroachDB stores bit arrays in increments of 64 bits plus an extra integer value to encode the length. For example, under this scheme a 130-bit value occupies three 64-bit words plus the length integer. - -The total size of a `BIT` value can be arbitrarily large, but it is recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
- -## Example - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE b (x BIT, y BIT(3), z VARBIT, w VARBIT(3)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM b; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -+-------------+-----------+-------------+----------------+-----------------------+-----------+-----------+ - x | BIT | true | NULL | | {} | false - y | BIT(3) | true | NULL | | {} | false - z | VARBIT | true | NULL | | {} | false - w | VARBIT(3) | true | NULL | | {} | false - rowid | INT | false | unique_rowid() | | {primary} | true -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO b(x, y, z, w) VALUES (B'1', B'101', B'1', B'1'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM b; -~~~ - -~~~ - x | y | z | w -+---+-----+---+---+ - 1 | 101 | 1 | 1 -~~~ - -For type `BIT`, the value must exactly match the specified size: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO b(x) VALUES (B'101'); -~~~ - -~~~ -pq: bit string length 3 does not match type BIT -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO b(y) VALUES (B'10'); -~~~ - -~~~ -pq: bit string length 2 does not match type BIT(3) -~~~ - -For type `VARBIT`, the value must not be larger than the specified maximum size: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO b(w) VALUES (B'1010'); -~~~ - -~~~ -pq: bit string length 4 too large for type VARBIT(3) -~~~ - -## Supported casting and conversion - -`BIT` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details -----|--------- -`INT` | Converts the bit array to the corresponding numeric value, interpreting the bits as if the value was encoded using [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement). If the bit array is larger than the integer type, excess bits on the left are ignored. For example, `B'1010'::INT` equals 10. -`STRING` | Prints out the binary digits as a string. This recovers the literal representation. For example, `B'1010'::STRING` equals `'1010'`. - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/bool.md b/src/current/v19.1/bool.md deleted file mode 100644 index fbd5e72f4fa..00000000000 --- a/src/current/v19.1/bool.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: BOOL -summary: The BOOL data type stores Boolean values of false or true. -toc: true ---- - -The `BOOL` [data type](data-types.html) stores a Boolean value of `false` or `true`. - - -## Aliases - -In CockroachDB, `BOOLEAN` is an alias for `BOOL`. - -## Syntax - -There are two predefined [named constants](sql-constants.html#named-constants) for `BOOL`: `TRUE` and `FALSE` (the names are case-insensitive). - -Alternatively, a boolean value can be obtained by coercing a numeric value: zero is coerced to `FALSE`, and any non-zero value to `TRUE`. - -- `CAST(0 AS BOOL)` (false) -- `CAST(123 AS BOOL)` (true) - -## Size - -A `BOOL` value is 1 byte in width, but the total storage size is likely to be larger due to CockroachDB metadata.
- -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bool (a INT PRIMARY KEY, b BOOL, c BOOLEAN); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM bool; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -| a | INT | false | NULL | | {"primary"} | -| b | BOOL | true | NULL | | {} | -| c | BOOL | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bool VALUES (12345, true, CAST(0 AS BOOL)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bool; -~~~ - -~~~ -+-------+------+-------+ -| a | b | c | -+-------+------+-------+ -| 12345 | true | false | -+-------+------+-------+ -~~~ - -## Supported casting and conversion - -`BOOL` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details -----|-------- -`INT` | Converts `true` to `1`, `false` to `0` -`DECIMAL` | Converts `true` to `1`, `false` to `0` -`FLOAT` | Converts `true` to `1`, `false` to `0` -`STRING` | –– - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/build-a-c++-app-with-cockroachdb.md b/src/current/v19.1/build-a-c++-app-with-cockroachdb.md deleted file mode 100644 index cfc98ec59bc..00000000000 --- a/src/current/v19.1/build-a-c++-app-with-cockroachdb.md +++ /dev/null @@ -1,201 +0,0 @@ ---- -title: Build a C++ App with CockroachDB -summary: Learn how to use CockroachDB from a simple C++ application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple C++ application with CockroachDB using a PostgreSQL-compatible driver. - -We have tested the [C++ libpqxx driver](https://github.com/jtv/libpqxx) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the libpqxx driver - -Install the C++ libpqxx driver as described in the [official documentation](https://github.com/jtv/libpqxx). - -{{site.data.alerts.callout_info}} -If you are running macOS, you need to install version 4.0.1 or higher of the libpqxx driver. -{{site.data.alerts.end}} - -<div>
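If you are unsure which version of libpqxx you have, one way to check is with `pkg-config` (this assumes libpqxx installed its `libpqxx.pc` file where `pkg-config` can find it):

{% include copy-clipboard.html %}
~~~ shell
$ pkg-config --modversion libpqxx
~~~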
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the C++ code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/basic-sample.cpp %} -~~~ - -To build the `basic-sample.cpp` source code to an executable file named `basic-sample`, run the following command from the directory that contains the code: - -{% include copy-clipboard.html %} -``` shell -$ g++ -std=c++11 basic-sample.cpp -lpq -lpqxx -o basic-sample -``` - -Then run the `basic-sample` file from that directory: - -{% include copy-clipboard.html %} -``` shell -$ ./basic-sample -``` - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -Download the txn-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/txn-sample.cpp %} -~~~ - -To build the `txn-sample.cpp` source code to an executable file named `txn-sample`, run the following command from the directory that contains the code: - -{% include copy-clipboard.html %} -``` shell -$ g++ -std=c++11 txn-sample.cpp -lpq -lpqxx -o txn-sample -``` - -Then run the `txn-sample` file from that directory: - -{% include copy-clipboard.html %} -``` shell -$ ./txn-sample -``` - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -id | balance -+----+---------+ - 1 | 900 - 2 | 350 -(2 rows) -~~~ - - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the C++ code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/insecure/basic-sample.cpp %} -~~~ - -To build the `basic-sample.cpp` source code to an executable file named `basic-sample`, run the following command from the directory that contains the code: - -{% include copy-clipboard.html %} -``` shell -$ g++ -std=c++11 basic-sample.cpp -lpq -lpqxx -o basic-sample -``` - -Then run the `basic-sample` file from that directory: - -{% include copy-clipboard.html %} -``` shell -$ ./basic-sample -``` - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -Download the txn-sample.cpp file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ cpp -{% include {{ page.version.version }}/app/insecure/txn-sample.cpp %} -~~~ - -To build the `txn-sample.cpp` source code to an executable file named `txn-sample`, run the following command from the directory that contains the code: - -{% include copy-clipboard.html %} -``` shell -$ g++ -std=c++11 txn-sample.cpp -lpq -lpqxx -o txn-sample -``` - -Then run the `txn-sample` file from that directory: - -{% include copy-clipboard.html %} -``` shell -$ ./txn-sample -``` - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -id | balance -+----+---------+ - 1 | 900 - 2 | 350 -(2 rows) -~~~ - -
- -## What's next? - -Read more about using the [C++ libpqxx driver](https://github.com/jtv/libpqxx). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-clojure-app-with-cockroachdb.md b/src/current/v19.1/build-a-clojure-app-with-cockroachdb.md deleted file mode 100644 index be2717f655b..00000000000 --- a/src/current/v19.1/build-a-clojure-app-with-cockroachdb.md +++ /dev/null @@ -1,223 +0,0 @@ ---- -title: Build a Clojure App with CockroachDB -summary: Learn how to use CockroachDB from a simple Clojure application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple Clojure application with CockroachDB using [leiningen](https://leiningen.org/) and a PostgreSQL-compatible driver. - -We have tested the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/) in conjunction with the [PostgreSQL JDBC driver](https://jdbc.postgresql.org/) enough to claim **beta-level** support, so that combination is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install `leiningen` - -Install the Clojure `lein` utility as described in its [official documentation](https://leiningen.org/). - -<div>
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -New in v19.1: Pass the [`--also-generate-pkcs8-key` flag](create-security-certificates.html#flag-pkcs8) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.maxroach.key.pk8`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -## Step 4. Create a table in the new database - -As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql \ ---certs-dir=certs \ ---database=bank \ ---user=maxroach \ --e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)' -~~~ - -## Step 5. Run the Clojure code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Create a basic Clojure/JDBC project - -1. Create a new directory `myapp`. -2. Create a file `myapp/project.clj` and populate it with the following code, or download it directly. - - {% include copy-clipboard.html %} - ~~~ clojure - {% include {{ page.version.version }}/app/project.clj %} - ~~~ - -3. Create a file `myapp/src/test/util.clj` and populate it with the code from this file. Be sure to place the file in the subdirectory `src/test` in your project. - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Create a file `myapp/src/test/test.clj` and copy the code below to it, or download it directly. Be sure to rename this file to `test.clj` in the subdirectory `src/test` in your project. - -{% include copy-clipboard.html %} -~~~ clojure -{% include {{ page.version.version }}/app/basic-sample.clj %} -~~~ - -Run with: - -{% include copy-clipboard.html %} -~~~ shell -$ lein run -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Copy the code below to `myapp/src/test/test.clj` or -download it directly. Again, preserve the file name `test.clj`. - -{{site.data.alerts.callout_info}} -CockroachDB may require the -[client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. 
-{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ clojure -{% include {{ page.version.version }}/app/txn-sample.clj %} -~~~ - -Run with: - -{% include copy-clipboard.html %} -~~~ shell -$ lein run -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -id | balance -+----+---------+ - 1 | 900 - 2 | 350 -(2 rows) -~~~ - -
      - -
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Create a table in the new database
-
-As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure \
---database=bank \
---user=maxroach \
--e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)'
-~~~
-
-## Step 4. Run the Clojure code
-
-Now that you have a database, a user, and a table, you'll run code to insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-### Create a basic Clojure/JDBC project
-
-1. Create a new directory `myapp`.
-2. Create a file `myapp/project.clj` and populate it with the following code, or download it directly.
-
-    {% include copy-clipboard.html %}
-    ~~~ clojure
-    {% include {{ page.version.version }}/app/project.clj %}
-    ~~~
-
-3. Create a file `myapp/src/test/util.clj` and populate it with the code from this file. Be sure to place the file in the subdirectory `src/test` in your project.
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and then reading and printing them.
-
-Create a file `myapp/src/test/test.clj` and copy the code below to it, or download it directly. If you download the file, be sure to save it as `test.clj` in the subdirectory `src/test` in your project.
-
-{% include copy-clipboard.html %}
-~~~ clojure
-{% include {{ page.version.version }}/app/insecure/basic-sample.clj %}
-~~~
-
-Run with:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ lein run
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user, but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Copy the code below to `myapp/src/test/test.clj` or
-download it directly. Again, preserve the file name `test.clj`.
-
-{{site.data.alerts.callout_info}}
-CockroachDB may require the
-[client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code.
-{{site.data.alerts.end}}
-
-{% include copy-clipboard.html %}
-~~~ clojure
-{% include {{ page.version.version }}/app/insecure/txn-sample.clj %}
-~~~
-
-Run with:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ lein run
-~~~
-
-After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-  id | balance
-+----+---------+
-   1 |     900
-   2 |     350
-(2 rows)
-~~~
-
-
-## What's next?
-
-Read more about using the [Clojure java.jdbc driver](https://clojure-doc.org/articles/ecosystem/java_jdbc/home/).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-csharp-app-with-cockroachdb.md b/src/current/v19.1/build-a-csharp-app-with-cockroachdb.md
deleted file mode 100644
index bea5b586207..00000000000
--- a/src/current/v19.1/build-a-csharp-app-with-cockroachdb.md
+++ /dev/null
@@ -1,235 +0,0 @@
----
-title: Build a C# (.NET) App with CockroachDB
-summary: Learn how to use CockroachDB from a simple C# (.NET) application with a low-level client driver.
-toc: true
-twitter: true
----
-
-This tutorial shows you how to build a simple C# (.NET) application with CockroachDB using a PostgreSQL-compatible driver.
-
-We have tested the [.NET Npgsql driver](http://www.npgsql.org/) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Create a .NET project
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet new console -o cockroachdb-test-app
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cd cockroachdb-test-app
-~~~
-
-The `dotnet` command creates a new app of type `console`. The `-o` parameter creates a directory named `cockroachdb-test-app` where your app will be stored and populates it with the required files. The `cd cockroachdb-test-app` command puts you into the newly created app directory.
-
-## Step 2. Install the Npgsql driver
-
-Install the latest version of the [Npgsql driver](https://www.nuget.org/packages/Npgsql/) into the .NET project using the built-in NuGet package manager:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet add package Npgsql
-~~~
-
-
-## Step 3. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 4. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 5. Convert the key file for use by C# programs
-
-The private key generated for user `maxroach` by CockroachDB is [PEM encoded](https://tools.ietf.org/html/rfc1421). To read the key in a C# application, you will need to convert it into PKCS#12 format.
-
-To convert the key to PKCS#12 format, run the following OpenSSL command on the `maxroach` user's key file in the directory where you stored your certificates:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ openssl pkcs12 -inkey client.maxroach.key -password pass: -in client.maxroach.crt -export -out client.maxroach.pfx
-~~~
-
-As of December 2018, you need to provide a password for this to work on macOS.
-
-## Step 6. Run the C# code
-
-Now that you have created a database and set up encryption keys, in this section you will:
-
-- [Create a table and insert some rows](#basic-example)
-- [Execute a batch of statements as a transaction](#transaction-example-with-retry-logic)
-
-### Basic example
-
-Replace the contents of `cockroachdb-test-app/Program.cs` with the following code:
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/basic-sample.cs %}
-~~~
-
-Then, run the code to connect as the `maxroach` user and execute some basic SQL statements: creating a table, inserting rows, and reading and printing the rows:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-~~~
-
-### Transaction example (with retry logic)
-
-Open `cockroachdb-test-app/Program.cs` again and replace the contents with the code shown below.
-
-{% include {{page.version.version}}/client-transaction-retry.md %}
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/txn-sample.cs %}
-~~~
-
-Then, run the code to connect as the `maxroach` user. This time, execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-Final balances:
- account 1: 900
- account 2: 350
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs --database=bank -e 'SELECT id, balance FROM accounts'
-~~~
-
-~~~
- id | balance
-+----+---------+
-  1 |     900
-  2 |     350
-(2 rows)
-~~~
-
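-As one more sanity check, note that an atomic transfer never changes the total across the accounts involved. Assuming only the two sample accounts exist, the following should still return the initial total of `1250`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT SUM(balance) FROM accounts;
-~~~
-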
      - -
-
-## Step 3. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 4. Run the C# code
-
-Now that you have created a database and a user, in this section you will:
-
-- [Create a table and insert some rows](#basic2)
-- [Execute a batch of statements as a transaction](#transaction2)
-
-
-
-### Basic example
-
-Replace the contents of `cockroachdb-test-app/Program.cs` with the following code:
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/insecure/basic-sample.cs %}
-~~~
-
-Then, run the code to connect as the `maxroach` user and execute some basic SQL statements: creating a table, inserting rows, and reading and printing the rows:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-~~~
-
-
-
-### Transaction example (with retry logic)
-
-Open `cockroachdb-test-app/Program.cs` again and replace the contents with the code shown below.
-
-{% include {{page.version.version}}/client-transaction-retry.md %}
-
-{% include copy-clipboard.html %}
-~~~ csharp
-{% include {{ page.version.version }}/app/insecure/txn-sample.cs %}
-~~~
-
-Then, run the code to connect as the `maxroach` user. This time, execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dotnet run
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
- account 1: 1000
- account 2: 250
-Final balances:
- account 1: 900
- account 2: 350
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank -e 'SELECT id, balance FROM accounts'
-~~~
-
-~~~
- id | balance
-+----+---------+
-  1 |     900
-  2 |     350
-(2 rows)
-~~~
-
-
-## What's next?
-
-Read more about using the [.NET Npgsql driver](http://www.npgsql.org/).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-go-app-with-cockroachdb-gorm.md b/src/current/v19.1/build-a-go-app-with-cockroachdb-gorm.md
deleted file mode 100644
index 2fbd13a19db..00000000000
--- a/src/current/v19.1/build-a-go-app-with-cockroachdb-gorm.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Build a Go App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Go application with the GORM ORM.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For another use of GORM with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the GORM ORM
-
-To install [GORM](http://gorm.io), run the following commands:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/lib/pq # dependency
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/jinzhu/gorm
-~~~
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Go code - -The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically: - -- `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model. -- `db.Create(&Account{})` inserts rows into the table. -- `db.Find(&accounts)` selects from the table so that balances can be printed. -- The funds transfer occurs in `transferFunds()`. To ensure that we [handle retry errors](transactions.html#client-side-intervention), we write an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another error, it sleeps again for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). - -Copy the code or -download it directly. - -{{site.data.alerts.callout_success}} -To clone a version of the code below that connects to insecure clusters, run the command below. Note that you will need to edit the connection string to use the certificates that you generated when you set up your secure cluster. -`git clone https://github.com/cockroachlabs/hello-world-go-gorm` -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/gorm-sample.go %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run gorm-sample.go -~~~ - -The output should show the account balances before and after the funds transfer: - -~~~ shell -Balance at '2019-08-06 13:37:19.311423 -0400 EDT m=+0.034072606': -1 1000 -2 250 -Balance at '2019-08-06 13:37:19.325654 -0400 EDT m=+0.048303286': -1 900 -2 350 -~~~ - -
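-Since `db.AutoMigrate(&Account{})` created the `accounts` table for you, it can be instructive to inspect the schema GORM generated. In the [built-in SQL client](use-the-built-in-sql-client.html), run the following; the exact columns and types you see depend on how your `Account` model is defined:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM bank.accounts;
-~~~
-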
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Go code - -The following code uses the [GORM](http://gorm.io) ORM to map Go-specific objects to SQL operations. Specifically: - -- `db.AutoMigrate(&Account{})` creates an `accounts` table based on the Account model. -- `db.Create(&Account{})` inserts rows into the table. -- `db.Find(&accounts)` selects from the table so that balances can be printed. -- The funds transfer occurs in `transferFunds()`. To ensure that we [handle retry errors](transactions.html#client-side-intervention), we write an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another error, it sleeps again for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). - -To get the code below, clone the `hello-world-go-gorm` repo to your machine: - -{% include copy-clipboard.html %} -~~~ shell -git clone https://github.com/cockroachlabs/hello-world-go-gorm -~~~ - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/app/insecure/gorm-sample.go %} -~~~ - -Change to the directory where you cloned the repo and run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ go run main.go -~~~ - -The output should show the account balances before and after the funds transfer: - -~~~ shell -Balance at '2019-07-15 13:34:22.536363 -0400 EDT m=+0.019918599': -1 1000 -2 250 -Balance at '2019-07-15 13:34:22.540037 -0400 EDT m=+0.023592845': -1 900 -2 350 -~~~ - -
-
-## What's next?
-
-Read more about using the [GORM ORM](http://gorm.io), or check out a more realistic implementation of GORM with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-go-app-with-cockroachdb.md b/src/current/v19.1/build-a-go-app-with-cockroachdb.md
deleted file mode 100644
index 4dcb25d83b2..00000000000
--- a/src/current/v19.1/build-a-go-app-with-cockroachdb.md
+++ /dev/null
@@ -1,239 +0,0 @@
----
-title: Build a Go App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Go application with the Go pq driver.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Go application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Go pq driver](https://godoc.org/github.com/lib/pq) and the [GORM ORM](http://gorm.io) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Go pq driver
-
-To install the [Go pq driver](https://godoc.org/github.com/lib/pq), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -u github.com/lib/pq
-~~~
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Go code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-{{site.data.alerts.callout_success}}
-To clone a version of the code below that connects to insecure clusters, run the command below. Note that you will need to edit the connection string to use the certificates that you generated when you set up your secure cluster.
-
-`git clone https://github.com/cockroachlabs/hello-world-go-pq/`
-{{site.data.alerts.end}}
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the basic-sample.go file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user, but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.go file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/txn-sample.go %}
-~~~
-
-CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client. To install it, clone the library into your `$GOPATH` as follows:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ mkdir -p $GOPATH/src/github.com/cockroachdb
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cd $GOPATH/src/github.com/cockroachdb
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ git clone git@github.com:cockroachdb/cockroach-go.git
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run txn-sample.go
-~~~
-
-The output should be:
-
-~~~ shell
-Success
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
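-Under the hood, the `crdb` retry function drives CockroachDB's [client-side retry protocol](transactions.html#client-side-intervention). The following is a minimal SQL sketch of that protocol, shown for illustration only; the `crdb` package issues these statements for you, and the account IDs and amount are assumed to match the sample:
-
-~~~ sql
-> BEGIN;
-> SAVEPOINT cockroach_restart;
-> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-> UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-> RELEASE SAVEPOINT cockroach_restart;
-> COMMIT;
-~~~
-
-If any statement inside the transaction fails with a retryable error (SQLSTATE `40001`), the client issues `ROLLBACK TO SAVEPOINT cockroach_restart` and retries the statements rather than aborting the whole transaction.
-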
      - -
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Go code
-
-Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html).
-
-{{site.data.alerts.callout_success}}
-To clone a version of the code below that connects to insecure clusters, run the command below.
-
-`git clone https://github.com/cockroachlabs/hello-world-go-pq/`
-{{site.data.alerts.end}}
-
-### Basic statements
-
-First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows.
-
-Download the basic-sample.go file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/insecure/basic-sample.go %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run basic-sample.go
-~~~
-
-The output should be:
-
-~~~
-Initial balances:
-1 1000
-2 250
-~~~
-
-### Transaction (with retry logic)
-
-Next, use the following code to again connect as the `maxroach` user, but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted.
-
-Download the txn-sample.go file, or create the file yourself and copy the code into it.
-
-{% include copy-clipboard.html %}
-~~~ go
-{% include {{ page.version.version }}/app/insecure/txn-sample.go %}
-~~~
-
-CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. For Go, the CockroachDB retry function is in the `crdb` package of the CockroachDB Go client.
-
-To install the [CockroachDB Go client](https://github.com/cockroachdb/cockroach-go), run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go get -d github.com/cockroachdb/cockroach-go
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ go run txn-sample.go
-~~~
-
-The output should be:
-
-~~~ shell
-Success
-~~~
-
-To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |     900 |
-|  2 |     350 |
-+----+---------+
-(2 rows)
-~~~
-
-
-## What's next?
-
-Read more about using the [Go pq driver](https://godoc.org/github.com/lib/pq).
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-java-app-with-cockroachdb-hibernate.md b/src/current/v19.1/build-a-java-app-with-cockroachdb-hibernate.md
deleted file mode 100644
index 57d812c6171..00000000000
--- a/src/current/v19.1/build-a-java-app-with-cockroachdb-hibernate.md
+++ /dev/null
@@ -1,277 +0,0 @@
----
-title: Build a Java App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Java application with the Hibernate ORM.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For another use of Hibernate with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-{{site.data.alerts.callout_danger}}
-The examples on this page assume you are using a Java version <= 9. They do not work with Java 10.
-{{site.data.alerts.end}}
-
-## Step 1. Install the Gradle build tool
-
-This tutorial uses the [Gradle build tool](https://gradle.org/) to get all dependencies for your application, including Hibernate.
-
-To install Gradle on Mac, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ brew install gradle
-~~~
-
-To install Gradle on a Debian-based Linux distribution like Ubuntu:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ apt-get install gradle
-~~~
-
-To install Gradle on a Red Hat-based Linux distribution like Fedora:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ dnf install gradle
-~~~
-
-For other ways to install Gradle, see [its official documentation](https://gradle.org/install).
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-You can pass the [`--also-generate-pkcs8-key` flag](create-security-certificates.html#flag-pkcs8) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.maxroach.key.pk8`.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key
-~~~
-
-## Step 4. Run the Java code
-
-The code below uses Hibernate to map Java methods to SQL operations. It performs the following steps, which roughly correspond to method calls in the `Sample` class:
-
-1. Creates an `accounts` table in the `bank` database as specified by the Hibernate `Account` class.
-2. Inserts rows into the table using `session.save(new Account(int id, int balance))` (see `Sample.addAccounts()`).
-3. Transfers money from one account to another, printing out account balances before and after the transfer (see `transferFunds(long fromId, long toId, long amount)`).
-4. Prints out account balances before and after the transfer (see `Sample.getAccountBalance(long id)`).
-
-In addition, the code shows a pattern for automatically handling [transaction retries](transactions.html#client-side-intervention-example) by wrapping transactions in a higher-order function `Sample.runTransaction()`. It also includes a method for testing the retry handling logic (`Sample.forceRetryLogic()`), which will be run if you set the `FORCE_RETRY` variable to `true`.
-
-It does all of the above using the practices we recommend for using Hibernate (and the underlying JDBC connection) with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below.
-
-To run it:
-
-1. Download and extract [hibernate-basic-sample.tgz](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/hibernate-basic-sample/hibernate-basic-sample.tgz). The settings in [`hibernate.cfg.xml`](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/hibernate-basic-sample/hibernate.cfg.xml) specify how to connect to the database.
-2. Compile and run the code using [`build.gradle`](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/hibernate-basic-sample/build.gradle), which will also download the dependencies.
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gradle run
-    ~~~
-
-{{site.data.alerts.callout_success}}
-To clone a version of the code below that connects to insecure clusters, run the command below. Note that you will need to edit the connection string to use the certificates that you generated when you set up your secure cluster.
- -`git clone https://github.com/cockroachlabs/example-app-java-hibernate/` -{{site.data.alerts.end}} - -The contents of [`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{page.version.version}}/app/hibernate-basic-sample/Sample.java): - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/hibernate-basic-sample/Sample.java %} -~~~ - -Toward the end of the output, you should see: - -~~~ -APP: BEGIN; -APP: addAccounts() --> 1 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(1) --> 1000 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 250 -APP: COMMIT; -APP: getAccountBalance(1) --> 1000 -APP: getAccountBalance(2) --> 250 -APP: BEGIN; -APP: transferFunds(1, 2, 100) --> 100 -APP: COMMIT; -APP: transferFunds(1, 2, 100) --> 100 -APP: BEGIN; -APP: getAccountBalance(1) --> 900 -APP: COMMIT; -APP: BEGIN; -APP: getAccountBalance(2) --> 350 -APP: COMMIT; -APP: getAccountBalance(1) --> 900 -APP: getAccountBalance(2) --> 350 -~~~ - -To verify that the account balances were updated successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -SELECT id, balance FROM accounts; -~~~ - -~~~ - id | balance -+----+---------+ - 1 | 900 - 2 | 350 - 3 | 314159 -(3 rows) -~~~ - -
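-The retry logic in `Sample.runTransaction()` can also be exercised from SQL. CockroachDB includes a testing-only helper function that forces a retryable error; inside an open transaction in the built-in SQL client, the following fails with SQLSTATE `40001`, the same error code the retry handler reacts to. Use this only to test retry handling, never in production, and issue `ROLLBACK` afterward to abandon the test transaction:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> SELECT crdb_internal.force_retry('1s':::INTERVAL);
-~~~
-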
      - -
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Run the Java code
-
-The code below uses Hibernate to map Java methods to SQL operations. It performs the following steps, which roughly correspond to method calls in the `Sample` class:
-
-1. Creates an `accounts` table in the `bank` database as specified by the Hibernate `Account` class.
-2. Inserts rows into the table using `session.save(new Account(int id, int balance))` (see `Sample.addAccounts()`).
-3. Transfers money from one account to another, printing out account balances before and after the transfer (see `transferFunds(long fromId, long toId, long amount)`).
-4. Prints out account balances before and after the transfer (see `Sample.getAccountBalance(long id)`).
-
-In addition, the code shows a pattern for automatically handling [transaction retries](transactions.html#client-side-intervention-example) by wrapping transactions in a higher-order function `Sample.runTransaction()`. It also includes a method for testing the retry handling logic (`Sample.forceRetryLogic()`), which will be run if you set the `FORCE_RETRY` variable to `true`.
-
-It does all of the above using the practices we recommend for using Hibernate (and the underlying JDBC connection) with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below.
-
-To run it:
-
-1. Clone the `example-app-java-hibernate` repo to your machine:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    git clone https://github.com/cockroachlabs/example-app-java-hibernate/
-    ~~~
-
-2. Compile and run the code using [`build.gradle`](https://github.com/cockroachdb/docs/raw/master/_includes/{{ page.version.version }}/app/insecure/hibernate-basic-sample/build.gradle), which will also download the dependencies.
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ gradle run
-    ~~~
-
-The contents of [`Sample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/{{page.version.version}}/app/insecure/hibernate-basic-sample/Sample.java):
-
-{% include copy-clipboard.html %}
-~~~ java
-{% include {{page.version.version}}/app/insecure/hibernate-basic-sample/Sample.java %}
-~~~
-
-Toward the end of the output, you should see:
-
-~~~
-APP: BEGIN;
-APP: addAccounts() --> 1
-APP: COMMIT;
-APP: BEGIN;
-APP: getAccountBalance(1) --> 1000
-APP: COMMIT;
-APP: BEGIN;
-APP: getAccountBalance(2) --> 250
-APP: COMMIT;
-APP: getAccountBalance(1) --> 1000
-APP: getAccountBalance(2) --> 250
-APP: BEGIN;
-APP: transferFunds(1, 2, 100) --> 100
-APP: COMMIT;
-APP: transferFunds(1, 2, 100) --> 100
-APP: BEGIN;
-APP: getAccountBalance(1) --> 900
-APP: COMMIT;
-APP: BEGIN;
-APP: getAccountBalance(2) --> 350
-APP: COMMIT;
-APP: getAccountBalance(1) --> 900
-APP: getAccountBalance(2) --> 350
-~~~
-
-To verify that the account balances were updated successfully, start the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --database=bank
-~~~
-
-To check the account balances, issue the following statement:
-
-{% include copy-clipboard.html %}
-~~~ sql
-SELECT id, balance FROM accounts;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  1 |     900
-  2 |     350
-  3 |  314159
-(3 rows)
-~~~
-
      - -## Recommended Practices - -### Use `IMPORT` to read in large data sets - -If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html). It bypasses the [SQL layer](architecture/sql-layer.html) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database. - -For more information about importing data from Postgres, see [Migrate from Postgres](migrate-from-postgres.html). - -For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html). - -### Use `rewriteBatchedInserts` for increased speed - -We strongly recommend setting `rewriteBatchedInserts=true`; we have seen 2-3x performance improvements with it enabled. From [the JDBC connection parameters documentation](https://jdbc.postgresql.org/documentation/use/#connection-parameters): - -> This will change batch inserts from `insert into foo (col1, col2, col3) values (1,2,3)` into `insert into foo (col1, col2, col3) values (1,2,3), (4,5,6)` this provides 2-3x performance improvement - -## What's next? - -Read more about using the [Hibernate ORM](http://hibernate.org/orm/), or check out a more realistic implementation of Hibernate with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-java-app-with-cockroachdb.md b/src/current/v19.1/build-a-java-app-with-cockroachdb.md deleted file mode 100644 index 4c00ad8376a..00000000000 --- a/src/current/v19.1/build-a-java-app-with-cockroachdb.md +++ /dev/null @@ -1,263 +0,0 @@ ---- -title: Build a Java App with CockroachDB -summary: Learn how to use CockroachDB from a simple Java application with the JDBC driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Java application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Java JDBC driver](https://jdbc.postgresql.org/) and the [Hibernate ORM](http://hibernate.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -{{site.data.alerts.callout_danger}} -The examples on this page assume you are using a Java version <= 9. They do not work with Java 10. -{{site.data.alerts.end}} - -## Step 1. Install the Java JDBC driver - -Download and set up the Java JDBC driver as described in the [official documentation](https://jdbc.postgresql.org/documentation/setup/). - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -New in v19.1: You can pass the [`--also-generate-pkcs8-key` flag](create-security-certificates.html#flag-pkcs8) to generate a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format in Java. In this case, the generated PKCS8 key will be named `client.maxroach.key.pk8`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key --also-generate-pkcs8-key -~~~ - -## Step 4. Run the Java code - -The code below uses JDBC and the [Data Access Object (DAO)](https://en.wikipedia.org/wiki/Data_access_object) pattern to map Java methods to SQL operations. It consists of two classes: - -1. `BasicExample`, which is where the application logic lives. -2. `BasicExampleDAO`, which is used by the application to access the data store (in this case CockroachDB). This class has logic to handle [client-side transaction retries](transactions.html#client-side-intervention) (see the `BasicExampleDAO.runSQL()` method). - -It performs the following steps which roughly correspond to method calls in the `BasicExample` class. - -| Step | Method | -|------------------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------| -| 1. Create an `accounts` table in the `bank` database | `BasicExampleDAO.createAccounts()` | -| 2. Insert account data using a `Map` that corresponds to the input to `INSERT` on the backend | `BasicExampleDAO.updateAccounts(Map balance)` | -| 3. Transfer money from one account to another, printing out account balances before and after the transfer | `BasicExampleDAO.transferFunds(int from, int to, int amount)` | -| 4. Insert random account data using JDBC's bulk insertion support | `BasicExampleDAO.bulkInsertRandomAccountData()` | -| 5. Print out some account data | `BasicExampleDAO.readAccounts(int limit)` | -| 6. Drop the `accounts` table and perform any other necessary cleanup | `BasicExampleDAO.tearDown()` (This cleanup step means you can run this program more than once.) | - -It does all of the above using the practices we recommend for using JDBC with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below. - -To run it: - -1. Download [`BasicExample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/BasicExample.java), or create the file yourself and copy the code below. -2. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar BasicExample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar BasicExample - ~~~ - -The contents of [`BasicExample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/BasicExample.java): - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/BasicExample.java %} -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Java code - -The code below uses JDBC and the [Data Access Object (DAO)](https://en.wikipedia.org/wiki/Data_access_object) pattern to map Java methods to SQL operations. It consists of two classes: - -1. `BasicExample`, which is where the application logic lives. -2. `BasicExampleDAO`, which is used by the application to access the data store (in this case CockroachDB). This class has logic to handle [client-side transaction retries](transactions.html#client-side-intervention) (see the `BasicExampleDAO.runSQL()` method). - -It performs the following steps which roughly correspond to method calls in the `BasicExample` class. - -1. Create an `accounts` table in the `bank` database (`BasicExampleDAO.createAccounts()`). -2. Insert account data using a `Map` that corresponds to the input to `INSERT` on the backend (`BasicExampleDAO.updateAccounts(Map balance)`). -3. Transfer money from one account to another, printing out account balances before and after the transfer (`BasicExampleDAO.transferFunds(int from, int to, int amount)`). -4. Insert random account data using JDBC's bulk insertion support (`BasicExampleDAO.bulkInsertRandomAccountData()`). -5. Print out (some) account data (`BasicExampleDAO.readAccounts(int limit)`). -6. Drop the `accounts` table and perform any other necessary cleanup (`BasicExampleDAO.tearDown()`). (Note that you can run this program as many times as you want due to this cleanup step.) - -It does all of the above using the practices we recommend for using JDBC with CockroachDB, which are listed in the [Recommended Practices](#recommended-practices) section below. - -To run it: - -1. Download [the PostgreSQL JDBC driver](https://jdbc.postgresql.org/download/). -1. Compile and run the code (adding the PostgreSQL JDBC driver to your classpath): - - {% include copy-clipboard.html %} - ~~~ shell - $ javac -classpath .:/path/to/postgresql.jar BasicExample.java - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ java -classpath .:/path/to/postgresql.jar BasicExample - ~~~ - -The contents of [`BasicExample.java`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/insecure/BasicExample.java): - -{% include copy-clipboard.html %} -~~~ java -{% include {{page.version.version}}/app/insecure/BasicExample.java %} -~~~ - -
      - -The output will look like the following: - -~~~ -BasicExampleDAO.createAccounts: - 'CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT, CONSTRAINT balance_gt_0 CHECK (balance >= 0))' - -BasicExampleDAO.updateAccounts: - 'INSERT INTO accounts (id, balance) VALUES (1, 1000)' - -BasicExampleDAO.updateAccounts: - 'INSERT INTO accounts (id, balance) VALUES (2, 250)' -BasicExampleDAO.updateAccounts: - => 2 total updated accounts -main: - => Account balances at time '11:54:06.904': - ID 1 => $1000 - ID 2 => $250 - -BasicExampleDAO.transferFunds: - 'UPSERT INTO accounts (id, balance) VALUES(1, ((SELECT balance FROM accounts WHERE id = 1) - 100)),(2, ((SELECT balance FROM accounts WHERE id = 2) + 100))' -BasicExampleDAO.transferFunds: - => $100 transferred between accounts 1 and 2, 2 rows updated -main: - => Account balances at time '11:54:06.985': - ID 1 => $900 - ID 2 => $350 - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES (354685257, 158423397)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES (206179866, 950590234)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES (708995411, 892928833)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - 'INSERT INTO accounts (id, balance) VALUES (500817884, 189050420)' - => 128 row(s) updated in this batch - -BasicExampleDAO.bulkInsertRandomAccountData: - => finished, 512 total rows inserted - -BasicExampleDAO.readAccounts: - 'SELECT id, balance FROM accounts LIMIT 10' - id => 1 - balance => 900 - id => 2 - balance => 350 - id => 190756 - balance => 966414958 - id => 1002343 - balance => 243354081 - id => 1159751 - balance => 59745201 - id => 2193125 - balance => 346719279 - id => 2659707 - balance => 770266587 - id => 6819325 - balance => 511618834 - id => 9985390 - balance => 905049643 - id => 12256472 - balance => 913034434 - -BasicExampleDAO.tearDown: - 'DROP TABLE accounts' -~~~ - -## Recommended Practices - -### Use `IMPORT` to read in large data sets - -If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code altogether and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html). It bypasses the [SQL layer](architecture/sql-layer.html) altogether and writes directly to the [storage layer](architecture/storage-layer.html) of the database. - -For more information about importing data from Postgres, see [Migrate from Postgres](migrate-from-postgres.html). - -For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html). - -### Use `rewriteBatchedInserts` for increased speed - -We strongly recommend setting `rewriteBatchedInserts=true`; we have seen 2-3x performance improvements with it enabled. 
From [the JDBC connection parameters documentation](https://jdbc.postgresql.org/documentation/use/#connection-parameters):
-
-> This will change batch inserts from `insert into foo (col1, col2, col3) values (1,2,3)` into `insert into foo (col1, col2, col3) values (1,2,3), (4,5,6)` this provides 2-3x performance improvement
-
-### Use a batch size of 128
-
-PGJDBC's batching support only works with [powers of two](https://github.com/pgjdbc/pgjdbc/blob/7b52b0c9e5b9aa9a9c655bb68f23bf4ec57fd51c/pgjdbc/src/main/java/org/postgresql/jdbc/PgPreparedStatement.java#L1597), and will split batches of other sizes up into multiple sub-batches. This means that a batch of size 128 can be 6x faster than a batch of size 250.
-
-The code snippet below shows a pattern for using a batch size of 128, and is taken from the longer example above (specifically, the `BasicExampleDAO.bulkInsertRandomAccountData()` method).
-
-Specifically, it does the following:
-
-1. Turn off auto-commit so you can manage the transaction lifecycle and thus the size of the batch inserts.
-2. Given an overall update size of 500 rows (for example), split it into batches of size 128 and execute each batch in turn.
-3. Finally, commit the batches of statements you've just executed.
-
-~~~ java
-int BATCH_SIZE = 128;
-connection.setAutoCommit(false);
-
-try (PreparedStatement pstmt = connection.prepareStatement("INSERT INTO accounts (id, balance) VALUES (?, ?)")) {
-    for (int i=0; i<=(500/BATCH_SIZE);i++) {
-        for (int j=0; j<BATCH_SIZE; j++) {
-            // Add a row of random account data to the current batch.
-            int id = random.nextInt(1000000000);
-            int balance = random.nextInt(1000000000);
-            pstmt.setInt(1, id);
-            pstmt.setInt(2, balance);
-            pstmt.addBatch();
-        }
-        // Execute the accumulated batch of INSERTs.
-        int[] count = pstmt.executeBatch();
-        System.out.printf("\n    => %s row(s) updated in this batch\n", count.length); // Verifying 128 rows in the batch
-    }
-    connection.commit();
-}
-~~~
-
-## What's next?
-
-Read more about using the [Java JDBC driver](https://jdbc.postgresql.org/).
-
-{% include {{page.version.version}}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-nodejs-app-with-cockroachdb-sequelize.md b/src/current/v19.1/build-a-nodejs-app-with-cockroachdb-sequelize.md
deleted file mode 100644
index 17191e9abb3..00000000000
--- a/src/current/v19.1/build-a-nodejs-app-with-cockroachdb-sequelize.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: Build a Node.js App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Node.js application with the Sequelize ORM.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-{{site.data.alerts.callout_success}}
-For a more realistic use of Sequelize with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-{{site.data.alerts.end}}
-
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install the Sequelize ORM
-
-To install Sequelize, as well as a [CockroachDB Node.js package](https://github.com/cockroachdb/sequelize-cockroachdb) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install sequelize sequelize-cockroachdb
-~~~
-
-
-## Step 2. Create the `maxroach` user and `bank` database
-
-{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %}
-
-## Step 3. Generate a certificate for the `maxroach` user
-
-Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key
-~~~
-
-## Step 4. Run the Node.js code
-
-The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed.
-
-Copy the code or
-download it directly.
-
-{% include copy-clipboard.html %}
-~~~ js
-{% include {{ page.version.version }}/app/sequelize-basic-sample.js %}
-~~~
-
-Then run the code:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ node sequelize-basic-sample.js
-~~~
-
-The output should be:
-
-~~~ shell
-1 1000
-2 250
-~~~
-
-To verify that the table and rows were created successfully, use the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank
-~~~
-
-~~~
-+----+---------+
-| id | balance |
-+----+---------+
-|  1 |    1000 |
-|  2 |     250 |
-+----+---------+
-(2 rows)
-~~~
-
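-For reference, the `Account.sync({force: true})` call described above amounts to DDL along these lines. This is only a sketch; the exact column types Sequelize emits depend on how the Account model defines them:
-
-~~~ sql
-> DROP TABLE IF EXISTS accounts;
-> CREATE TABLE accounts (id INT PRIMARY KEY, balance INT);
-~~~
-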
      - - - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Node.js code - -The following code uses the [Sequelize](https://sequelize.org/) ORM to map Node.js-specific objects to SQL operations. Specifically, `Account.sync({force: true})` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.bulkCreate([...])` inserts rows into the table, and `Account.findAll()` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ js -{% include {{ page.version.version }}/app/insecure/sequelize-basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node sequelize-basic-sample.js -~~~ - -The output should be: - -~~~ shell -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, you can again use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SHOW TABLES' --database=bank -~~~ - -~~~ -+------------+ -| table_name | -+------------+ -| accounts | -+------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
-
-## What's next?
-
-Read more about using the [Sequelize ORM](https://sequelize.org/), or check out a more realistic implementation of Sequelize with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository.
-
-{% include {{ page.version.version }}/app/see-also-links.md %}
diff --git a/src/current/v19.1/build-a-nodejs-app-with-cockroachdb.md b/src/current/v19.1/build-a-nodejs-app-with-cockroachdb.md
deleted file mode 100644
index de313356e64..00000000000
--- a/src/current/v19.1/build-a-nodejs-app-with-cockroachdb.md
+++ /dev/null
@@ -1,231 +0,0 @@
----
-title: Build a Node.js App with CockroachDB
-summary: Learn how to use CockroachDB from a simple Node.js application with the Node.js pg driver.
-toc: true
-twitter: false
----
-
-
-
-This tutorial shows you how to build a simple Node.js application with CockroachDB using a PostgreSQL-compatible driver or ORM.
-
-We have tested the [Node.js pg driver](https://www.npmjs.com/package/pg) and the [Sequelize ORM](https://sequelize.org/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support.
-
-
-## Before you begin
-
-{% include {{page.version.version}}/app/before-you-begin.md %}
-
-## Step 1. Install Node.js packages
-
-To let your application communicate with CockroachDB, install the [Node.js pg driver](https://www.npmjs.com/package/pg):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install pg
-~~~
-
-The example app on this page also requires [`async`](https://www.npmjs.com/package/async):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ npm install async
-~~~
-
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Node.js code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/basic-sample.js) file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node basic-sample.js -~~~ - -The output should be: - -~~~ -Initial balances: -{ id: '1', balance: '1000' } -{ id: '2', balance: '250' } -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted. - -Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/txn-sample.js) file, or create the file yourself and copy the code into it. - -{% include v19.1/client-transaction-retry.md %} - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/txn-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node txn-sample.js -~~~ - -The output should be: - -~~~ -Balances after transfer: -{ id: '1', balance: '900' } -{ id: '2', balance: '350' } -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
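-The retry callout above mentions read/write contention. For reads that can tolerate slightly stale data, one way to avoid contending with in-flight transfers is a historical read, which CockroachDB serves from a past timestamp without blocking writers. For example, in the SQL client (the 10-second offset here is arbitrary):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, balance FROM accounts AS OF SYSTEM TIME '-10s';
-~~~
-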
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Node.js code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, creating a table, inserting rows, and reading and printing the rows. - -Download the [`basic-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/insecure/basic-sample.js) file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/insecure/basic-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node basic-sample.js -~~~ - -The output should be: - -~~~ -Initial balances: -{ id: '1', balance: '1000' } -{ id: '2', balance: '250' } -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another and then read the updated values, where all included statements are either committed or aborted. - -Download the [`txn-sample.js`](https://raw.githubusercontent.com/cockroachdb/docs/master/_includes/v19.1/app/insecure/txn-sample.js) file, or create the file yourself and copy the code into it. - -{% include v19.1/client-transaction-retry.md %} - -{% include copy-clipboard.html %} -~~~ js -{% include {{page.version.version}}/app/insecure/txn-sample.js %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ node txn-sample.js -~~~ - -The output should be: - -~~~ -Balances after transfer: -{ id: '1', balance: '900' } -{ id: '2', balance: '350' } -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
- -## What's next? - -Read more about using the [Node.js pg driver](https://www.npmjs.com/package/pg). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-php-app-with-cockroachdb.md b/src/current/v19.1/build-a-php-app-with-cockroachdb.md deleted file mode 100644 index fd96f43e0df..00000000000 --- a/src/current/v19.1/build-a-php-app-with-cockroachdb.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Build a PHP App with CockroachDB -summary: Learn how to use CockroachDB from a simple PHP application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple PHP application with CockroachDB using a PostgreSQL-compatible driver. - -We have tested the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php) enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the php-pgsql driver - -Install the php-pgsql driver as described in the [official documentation](https://www.php.net/manual/en/book.pgsql.php). - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the PHP code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.php file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/basic-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances: -1: 1000 -2: 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.php file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/txn-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances after transfer: -1: 900 -2: 350 -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the PHP code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.php file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/insecure/basic-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances: -1: 1000 -2: 250 -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.php file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic **retry function** that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ php -{% include {{ page.version.version }}/app/insecure/txn-sample.php %} -~~~ - -The output should be: - -~~~ shell -Account balances after transfer: -1: 900 -2: 350 -~~~ - -To verify that funds were transferred from one account to another, use the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
- -## What's next? - -Read more about using the [php-pgsql driver](https://www.php.net/manual/en/book.pgsql.php). - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-python-app-with-cockroachdb-sqlalchemy.md b/src/current/v19.1/build-a-python-app-with-cockroachdb-sqlalchemy.md deleted file mode 100644 index 53f4536c14e..00000000000 --- a/src/current/v19.1/build-a-python-app-with-cockroachdb-sqlalchemy.md +++ /dev/null @@ -1,318 +0,0 @@ ---- -title: Build a Python App with CockroachDB -summary: Learn how to use CockroachDB from a simple Python application with SQLAlchemy. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Python application with CockroachDB using [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/). - -We have tested the [psycopg2 driver](http://initd.org/psycopg/docs/) and [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -{{site.data.alerts.callout_info}} -The example code on this page uses Python 3. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -SQLAlchemy relies on the existence of [foreign keys](foreign-key.html) to generate [`JOIN` expressions](joins.html) from your application code. If you remove foreign keys from your schema, SQLAlchemy will not generate joins for you. As a workaround, you can [create a "custom foreign condition" by adding a `relationship` field to your table objects](https://stackoverflow.com/questions/37806625/sqlalchemy-create-relations-but-without-foreign-key-constraint-in-db), as sketched at the end of this step, or do the equivalent work in your application. -{{site.data.alerts.end}} - -## Step 1. Install SQLAlchemy - -To install SQLAlchemy, as well as a [CockroachDB Python package](https://github.com/cockroachdb/sqlalchemy-cockroachdb) that accounts for some differences between CockroachDB and PostgreSQL, run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ pip install sqlalchemy sqlalchemy-cockroachdb psycopg2 -~~~ - -{{site.data.alerts.callout_success}} -You can substitute psycopg2 for other alternatives that include the psycopg Python package. -{{site.data.alerts.end}} - -For other ways to install SQLAlchemy, see the [official documentation](http://docs.sqlalchemy.org/en/latest/intro.html#installation-guide). - -
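Here is a minimal, hypothetical sketch of the foreign-key workaround mentioned above. The `Customer` and `Order` models are illustrative only (they are not part of the sample app): the `relationship` is given an explicit `primaryjoin`, and `foreign()` marks the column that plays the foreign-key role, so SQLAlchemy can still generate joins even though the schema declares no `FOREIGN KEY` constraint.

{% include copy-clipboard.html %}
~~~ python
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = Column(Integer, primary_key=True)

class Order(Base):
    __tablename__ = 'orders'
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer)  # deliberately *not* a ForeignKey column

    # The primaryjoin string spells out the join condition, and foreign()
    # marks customer_id as the "foreign" side, since the schema itself
    # declares no FOREIGN KEY constraint for SQLAlchemy to discover.
    customer = relationship(
        'Customer',
        primaryjoin='foreign(Order.customer_id) == Customer.id',
        viewonly=True,
    )
~~~

`viewonly=True` keeps the relationship read-only, which avoids surprising flush behavior when no real constraint backs it.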
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Python code - -The code below uses [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) to map Python objects and methods to SQL operations. - -You can run this script as many times as you want; on each run, the script will create some new accounts and shuffle money around between randomly selected accounts. - -Specifically, the script: - -1. Reads in existing account IDs (if any) from the `bank` database. -2. Creates additional accounts with randomly generated IDs. Then, it adds a bit of money to each new account. -3. Chooses two accounts at random and takes half of the money from the first and deposits it into the second. - -It does all of the above using the practices we recommend for using SQLAlchemy with CockroachDB, which are listed in the [Best practices](#best-practices) section below. - -{{site.data.alerts.callout_info}} -You must use the `cockroachdb://` prefix in the URL passed to [`sqlalchemy.create_engine`](https://docs.sqlalchemy.org/en/latest/core/engines.html?highlight=create_engine#sqlalchemy.create_engine) to make sure the [`cockroachdb`](https://github.com/cockroachdb/sqlalchemy-cockroachdb) dialect is used. Using the `postgres://` URL prefix to connect to your CockroachDB cluster will not work. -{{site.data.alerts.end}} - -Copy the code below or -download it directly. 
- -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/sqlalchemy-basic-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python3 sqlalchemy-basic-sample.py -~~~ - -The output should look something like the following: - -~~~ shell -2018-12-06 15:59:58,999 INFO sqlalchemy.engine.base.Engine select current_schema() -2018-12-06 15:59:58,999 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1 -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1 -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,002 INFO sqlalchemy.engine.base.Engine select version() -2018-12-06 15:59:59,002 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,003 INFO sqlalchemy.engine.base.Engine SELECT table_name FROM information_schema.tables WHERE table_schema=%s -2018-12-06 15:59:59,004 INFO sqlalchemy.engine.base.Engine ('public',) -2018-12-06 15:59:59,005 INFO sqlalchemy.engine.base.Engine SELECT id from accounts; -2018-12-06 15:59:59,005 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine BEGIN (implicit) -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine SAVEPOINT cockroach_restart -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,083 INFO sqlalchemy.engine.base.Engine INSERT INTO accounts (id, balance) VALUES (%(id)s, %(balance)s) -2018-12-06 15:59:59,083 INFO sqlalchemy.engine.base.Engine ({'id': 298865, 'balance': 208217}, {'id': 506738, 'balance': 962549}, {'id': 514698, 'balance': 986327}, {'id': 587747, 'balance': 210406}, {'id': 50148, 'balance': 347976}, {'id': 854295, 'balance': 420086}, {'id': 785757, 'balance': 364836}, {'id': 406247, 'balance': 787016} ... displaying 10 of 100 total bound parameter sets ... 
{'id': 591336, 'balance': 542066}, {'id': 33728, 'balance': 526531}) -2018-12-06 15:59:59,201 INFO sqlalchemy.engine.base.Engine RELEASE SAVEPOINT cockroach_restart -2018-12-06 15:59:59,201 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,205 INFO sqlalchemy.engine.base.Engine COMMIT -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine BEGIN (implicit) -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine SAVEPOINT cockroach_restart -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,207 INFO sqlalchemy.engine.base.Engine SELECT accounts.id AS accounts_id, accounts.balance AS accounts_balance -FROM accounts -WHERE accounts.id = %(id_1)s -2018-12-06 15:59:59,207 INFO sqlalchemy.engine.base.Engine {'id_1': 769626} -2018-12-06 15:59:59,209 INFO sqlalchemy.engine.base.Engine UPDATE accounts SET balance=%(balance)s WHERE accounts.id = %(accounts_id)s -2018-12-06 15:59:59,209 INFO sqlalchemy.engine.base.Engine {'balance': 470580, 'accounts_id': 769626} -2018-12-06 15:59:59,212 INFO sqlalchemy.engine.base.Engine UPDATE accounts SET balance=(accounts.balance + %(balance_1)s) WHERE accounts.id = %(id_1)s -2018-12-06 15:59:59,247 INFO sqlalchemy.engine.base.Engine {'balance_1': 470580, 'id_1': 158447} -2018-12-06 15:59:59,249 INFO sqlalchemy.engine.base.Engine RELEASE SAVEPOINT cockroach_restart -2018-12-06 15:59:59,250 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,251 INFO sqlalchemy.engine.base.Engine COMMIT -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT COUNT(*) FROM accounts; -~~~ - -~~~ - count -------- - 100 -(1 row) -~~~ - -
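As a quick illustration of the connection-string note above, here is a minimal sketch of building the `Engine` with the `cockroachdb://` prefix. The certificate paths match the ones generated in Step 3; the host, port, and flags are assumptions to adjust for your environment.

{% include copy-clipboard.html %}
~~~ python
from sqlalchemy import create_engine

# The cockroachdb:// prefix selects the CockroachDB dialect; a postgres://
# URL would bypass it and will not work with this adapter.
engine = create_engine(
    'cockroachdb://maxroach@localhost:26257/bank',
    connect_args={
        'sslmode': 'require',
        'sslrootcert': 'certs/ca.crt',
        'sslkey': 'certs/client.maxroach.key',
        'sslcert': 'certs/client.maxroach.crt',
    },
    echo=True,  # log the generated SQL, as in the output shown above
)
~~~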
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Python code - -The code below uses [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) to map Python objects and methods to SQL operations. - -You can run this script as many times as you want; on each run, it will create some new accounts and shuffle money around between randomly selected accounts. - -Specifically, it: - -1. Reads in existing account IDs (if any) from the `bank` database. -2. Creates additional accounts with randomly generated IDs. Then, it adds a bit of money to each new account. -3. Chooses two accounts at random and takes half of the money from the first and deposits it into the second. - -It does all of the above using the practices we recommend for using SQLAlchemy with CockroachDB, which are listed in the [Best practices](#best-practices) section below. - -{{site.data.alerts.callout_info}} -You must use the `cockroachdb://` prefix in the URL passed to [`sqlalchemy.create_engine`](https://docs.sqlalchemy.org/en/latest/core/engines.html?highlight=create_engine#sqlalchemy.create_engine) to make sure the [`cockroachdb`](https://github.com/cockroachdb/sqlalchemy-cockroachdb) dialect is used. Using the `postgres://` URL prefix to connect to your CockroachDB cluster will not work. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/sqlalchemy-basic-sample.py %} -~~~ - -Run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python3 example.py -~~~ - -The output should look something like the following: - -~~~ shell -2018-12-06 15:59:58,999 INFO sqlalchemy.engine.base.Engine select current_schema() -2018-12-06 15:59:58,999 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1 -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1 -2018-12-06 15:59:59,001 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,002 INFO sqlalchemy.engine.base.Engine select version() -2018-12-06 15:59:59,002 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,003 INFO sqlalchemy.engine.base.Engine SELECT table_name FROM information_schema.tables WHERE table_schema=%s -2018-12-06 15:59:59,004 INFO sqlalchemy.engine.base.Engine ('public',) -2018-12-06 15:59:59,005 INFO sqlalchemy.engine.base.Engine SELECT id from accounts; -2018-12-06 15:59:59,005 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine BEGIN (implicit) -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine SAVEPOINT cockroach_restart -2018-12-06 15:59:59,008 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,083 INFO sqlalchemy.engine.base.Engine INSERT INTO accounts (id, balance) VALUES (%(id)s, %(balance)s) -2018-12-06 15:59:59,083 INFO sqlalchemy.engine.base.Engine ({'id': 298865, 'balance': 208217}, {'id': 506738, 'balance': 962549}, {'id': 514698, 'balance': 986327}, {'id': 587747, 'balance': 210406}, {'id': 50148, 'balance': 347976}, {'id': 854295, 'balance': 420086}, {'id': 785757, 'balance': 364836}, {'id': 406247, 'balance': 787016} ... displaying 10 of 100 total bound parameter sets ... 
{'id': 591336, 'balance': 542066}, {'id': 33728, 'balance': 526531}) -2018-12-06 15:59:59,201 INFO sqlalchemy.engine.base.Engine RELEASE SAVEPOINT cockroach_restart -2018-12-06 15:59:59,201 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,205 INFO sqlalchemy.engine.base.Engine COMMIT -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine BEGIN (implicit) -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine SAVEPOINT cockroach_restart -2018-12-06 15:59:59,206 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,207 INFO sqlalchemy.engine.base.Engine SELECT accounts.id AS accounts_id, accounts.balance AS accounts_balance -FROM accounts -WHERE accounts.id = %(id_1)s -2018-12-06 15:59:59,207 INFO sqlalchemy.engine.base.Engine {'id_1': 769626} -2018-12-06 15:59:59,209 INFO sqlalchemy.engine.base.Engine UPDATE accounts SET balance=%(balance)s WHERE accounts.id = %(accounts_id)s -2018-12-06 15:59:59,209 INFO sqlalchemy.engine.base.Engine {'balance': 470580, 'accounts_id': 769626} -2018-12-06 15:59:59,212 INFO sqlalchemy.engine.base.Engine UPDATE accounts SET balance=(accounts.balance + %(balance_1)s) WHERE accounts.id = %(id_1)s -2018-12-06 15:59:59,247 INFO sqlalchemy.engine.base.Engine {'balance_1': 470580, 'id_1': 158447} -2018-12-06 15:59:59,249 INFO sqlalchemy.engine.base.Engine RELEASE SAVEPOINT cockroach_restart -2018-12-06 15:59:59,250 INFO sqlalchemy.engine.base.Engine {} -2018-12-06 15:59:59,251 INFO sqlalchemy.engine.base.Engine COMMIT -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT COUNT(*) FROM accounts; -~~~ - -~~~ - count -------- - 100 -(1 row) -~~~ - -
      - -## Best practices - -### Use the `run_transaction` function - -We strongly recommend using the [`sqlalchemy_cockroachdb.run_transaction()`](https://github.com/cockroachdb/sqlalchemy-cockroachdb/blob/master/sqlalchemy_cockroachdb/transaction.py) function as shown in the code samples on this page. This abstracts the details of [transaction retries](transactions.html#transaction-retries) away from your application code. Transaction retries are more frequent in CockroachDB than in some other databases because we use [optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) rather than locking. Because of this, a CockroachDB transaction may have to be tried more than once before it can commit. This is part of how we ensure that our transaction ordering guarantees meet the ANSI [SERIALIZABLE](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) isolation level. - -In addition to the above, using `run_transaction` has the following benefits: - -- Because it must be passed a [sqlalchemy.orm.session.sessionmaker](https://docs.sqlalchemy.org/en/latest/orm/session_api.html#session-and-sessionmaker) object (*not* a [session][session]), it ensures that a new session is created exclusively for use by the callback, which protects you from accidentally reusing objects via any sessions created outside the transaction. -- It abstracts away the [client-side transaction retry logic](transactions.html#client-side-intervention) from your application, which keeps your application code portable across different databases. For example, the sample code given on this page works identically when run against Postgres (modulo changes to the prefix and port number in the connection string). - - -For more information about how transactions (and retries) work, see [Transactions](transactions.html). - -### Avoid mutations of session and/or transaction state inside `run_transaction()` - -In general, this is in line with the recommendations of the [SQLAlchemy FAQs](https://docs.sqlalchemy.org/en/latest/orm/session_basics.html#session-frequently-asked-questions), which state (with emphasis added by the original author) that - -> As a general rule, the application should manage the lifecycle of the session *externally* to functions that deal with specific data. This is a fundamental separation of concerns which keeps data-specific operations agnostic of the context in which they access and manipulate that data. - -and - -> Keep the lifecycle of the session (and usually the transaction) **separate and external**. - -In keeping with the above recommendations from the official docs, we **strongly recommend** avoiding any explicit mutations of the transaction state inside the callback passed to `run_transaction`, since that will lead to breakage. Specifically, do not make calls to the following functions from inside `run_transaction`: - -- [`sqlalchemy.orm.Session.commit()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=commit#sqlalchemy.orm.session.Session.commit) (or other variants of `commit()`): This is not necessary because `cockroachdb.sqlalchemy.run_transaction` handles the savepoint/commit logic for you. -- [`sqlalchemy.orm.Session.rollback()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=rollback#sqlalchemy.orm.session.Session.rollback) (or other variants of `rollback()`): This is not necessary because `cockroachdb.sqlalchemy.run_transaction` handles the commit/rollback logic for you. 
-- [`Session.flush()`][session.flush]: This will not work as expected with CockroachDB because CockroachDB does not support nested transactions, which are necessary for `Session.flush()` to work properly. If the call to `Session.flush()` encounters an error and aborts, it will try to roll back. This will not be allowed by the currently-executing CockroachDB transaction created by `run_transaction()`, and will result in an error message like the following: `sqlalchemy.orm.exc.DetachedInstanceError: Instance is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)`. - -### Break up large transactions into smaller units of work - -If you see an error message like `transaction is too large to complete; try splitting into pieces`, you are trying to commit too much data in a single transaction. As described in our [Cluster Settings](cluster-settings.html) docs, the size limit for transactions is defined by the `kv.transaction.max_intents_bytes` setting, which defaults to 256 KiB. Although this setting can be changed by an admin, we strongly recommend against changing it in most cases. - -Instead, we recommend breaking your transaction into smaller units of work (or "chunks"). A pattern that works for inserting large numbers of objects using `run_transaction` to handle retries automatically for you is shown below. - -{% include_cached copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/sqlalchemy-large-txns.py %} -~~~ - -### Use `IMPORT` to read in large data sets - -If you are trying to get a large data set into CockroachDB all at once (a bulk import), avoid writing client-side code that uses an ORM and use the [`IMPORT`](import.html) statement instead. It is much faster and more efficient than making a series of [`INSERT`s](insert.html) and [`UPDATE`s](update.html) like those generated by calls to [`session.bulk_save_objects()`](https://docs.sqlalchemy.org/en/latest/orm/session_api.html?highlight=bulk_save_object#sqlalchemy.orm.session.Session.bulk_save_objects). - -For more information about importing data from Postgres, see [Migrate from Postgres](migrate-from-postgres.html). - -For more information about importing data from MySQL, see [Migrate from MySQL](migrate-from-mysql.html). - -### Prefer the query builder - -In general, we recommend using the query-builder APIs of SQLAlchemy (e.g., [`Engine.execute()`](https://docs.sqlalchemy.org/en/latest/core/connections.html?highlight=execute#sqlalchemy.engine.Engine.execute)) in your application over the [Session][session]/ORM APIs if at all possible. That way, you know exactly what SQL is being generated and sent to CockroachDB, which has the following benefits: - -- It's easier to debug your SQL queries and make sure they are working as expected. -- You can more easily tune SQL query performance by issuing different statements, creating and/or using different indexes, etc. For more information, see [SQL Performance Best Practices](performance-best-practices-overview.html). 
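Putting these practices together, here is a minimal sketch of the recommended `run_transaction` pattern, assuming the v19.1-era `cockroachdb` Python package (imported as `cockroachdb.sqlalchemy`, as referenced above) and an `accounts` table like the one in this tutorial; the `transfer` callback is hypothetical.

{% include copy-clipboard.html %}
~~~ python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from cockroachdb.sqlalchemy import run_transaction

Base = declarative_base()

class Account(Base):
    __tablename__ = 'accounts'
    id = Column(Integer, primary_key=True)
    balance = Column(Integer)

engine = create_engine('cockroachdb://maxroach@localhost:26257/bank')

def transfer(session, frm, to, amount):
    # No commit() or rollback() here: run_transaction owns the transaction
    # lifecycle and re-invokes this callback on retryable (40001) errors.
    source = session.query(Account).filter_by(id=frm).one()
    source.balance -= amount
    dest = session.query(Account).filter_by(id=to).one()
    dest.balance += amount

# run_transaction must be passed a sessionmaker, not a Session.
run_transaction(sessionmaker(bind=engine),
                lambda session: transfer(session, 1, 2, 100))
~~~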
- -## See also - -- The [SQLAlchemy](https://docs.sqlalchemy.org/en/latest/) docs -- [Transactions](transactions.html) - -{% include {{page.version.version}}/app/see-also-links.md %} - - - -[session.flush]: https://docs.sqlalchemy.org/en/latest/orm/session_api.html#sqlalchemy.orm.session.Session.flush -[session]: https://docs.sqlalchemy.org/en/latest/orm/session.html diff --git a/src/current/v19.1/build-a-python-app-with-cockroachdb.md b/src/current/v19.1/build-a-python-app-with-cockroachdb.md deleted file mode 100644 index cb4d3702571..00000000000 --- a/src/current/v19.1/build-a-python-app-with-cockroachdb.md +++ /dev/null @@ -1,138 +0,0 @@ ---- -title: Build a Python App with CockroachDB -summary: Learn how to use CockroachDB from a simple Python application with the psycopg2 driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Python application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Python psycopg2 driver](http://initd.org/psycopg/docs/) and the [SQLAlchemy ORM](https://docs.sqlalchemy.org/en/latest/) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the psycopg2 driver - -To install the Python psycopg2 driver, run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ pip install psycopg2 -~~~ - -For other ways to install psycopg2, see the [official documentation](http://initd.org/psycopg/docs/install.html). - -
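As a quick sanity check that the driver is installed and can reach a cluster, here is a minimal connection sketch. It assumes a secure local cluster and the certificate paths generated in Step 3 below; it is not part of the sample app itself.

{% include copy-clipboard.html %}
~~~ python
import psycopg2

conn = psycopg2.connect(
    database='bank',
    user='maxroach',
    host='localhost',
    port=26257,
    sslmode='require',
    sslrootcert='certs/ca.crt',
    sslkey='certs/client.maxroach.key',
    sslcert='certs/client.maxroach.crt',
)
conn.set_session(autocommit=True)  # each statement commits on its own

with conn.cursor() as cur:
    cur.execute("SELECT now()")  # trivial round trip to the cluster
    print(cur.fetchone())
~~~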
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Python code - -Now that you have a database and a user, you'll run the code shown below to: - -- Create an `accounts` table and insert some rows. -- Transfer funds between two accounts inside a [transaction](transactions.html). To ensure that we [handle transaction retry errors](transactions.html#client-side-intervention), we write an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another retry error, it sleeps for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). -- Finally, we delete the accounts from the table before exiting so we can re-run the example code. - -{{site.data.alerts.callout_success}} -To clone a version of the code below that connects to insecure clusters, run the command below. Note that you will need to edit the connection string to use the certificates that you generated when you set up your secure cluster. - -`git clone https://github.com/cockroachlabs/hello-world-python-psycopg2/` -{{site.data.alerts.end}} - -Copy the code or download it directly. - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/basic-sample.py %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python basic-sample.py -~~~ - -The output should show the account balances before and after the funds transfer: - -~~~ -Balances at Wed Aug 7 12:11:23 2019 -['1', '1000'] -['2', '250'] -Balances at Wed Aug 7 12:11:23 2019 -['1', '900'] -['2', '350'] -~~~ - -
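The retry loop described above can be sketched as follows. This is a minimal, hypothetical version (the full logic lives in the sample file): `op` stands in for the funds transfer, and the check against `40001` relies on the standard PostgreSQL serialization-failure code that CockroachDB returns for retryable transaction errors.

{% include copy-clipboard.html %}
~~~ python
import time
import psycopg2

def run_with_retries(conn, op, max_retries=10):
    for retry in range(1, max_retries + 1):
        try:
            op(conn)  # op performs the transaction, including the commit
            return
        except psycopg2.Error as e:
            if e.pgcode != '40001':  # not a retryable error: re-raise
                raise
            conn.rollback()
            # Exponential backoff: sleep a bit longer after each failure.
            time.sleep(0.1 * (2 ** retry))
    raise RuntimeError('transaction did not succeed after retries')
~~~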
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Python code - -Now that you have a database and a user, you'll run the code shown below to: - -- Create an `accounts` table and insert some rows. -- Transfer funds between two accounts inside a [transaction](transactions.html). To ensure that we [handle transaction retry errors](transactions.html#client-side-intervention), we write an application-level retry loop that, in case of error, sleeps before trying the funds transfer again. If it encounters another retry error, it sleeps for a longer interval, implementing [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff). -- Finally, we delete the accounts from the table before exiting so we can re-run the example code. - -To get the code below, clone the `hello-world-python-psycopg2` repo to your machine: - -{% include copy-clipboard.html %} -~~~ shell -git clone https://github.com/cockroachlabs/hello-world-python-psycopg2/ -~~~ - -{% include copy-clipboard.html %} -~~~ python -{% include {{page.version.version}}/app/insecure/basic-sample.py %} -~~~ - -Change to the directory where you cloned the repo and run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ python example.py -~~~ - -The output should show the account balances before and after the funds transfer: - -~~~ -Balances at Wed Jul 24 15:58:40 2019 -['1', '1000'] -['2', '250'] -Balances at Wed Jul 24 15:58:40 2019 -['1', '900'] -['2', '350'] -~~~ - -
- -## What's next? - -Read more about using the [Python psycopg2 driver](http://initd.org/psycopg/docs/). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-ruby-app-with-cockroachdb-activerecord.md b/src/current/v19.1/build-a-ruby-app-with-cockroachdb-activerecord.md deleted file mode 100644 index 5956a5f9561..00000000000 --- a/src/current/v19.1/build-a-ruby-app-with-cockroachdb-activerecord.md +++ /dev/null @@ -1,171 +0,0 @@ ---- -title: Build a Ruby App with CockroachDB -summary: Learn how to use CockroachDB from a simple Ruby application with the ActiveRecord ORM. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -{{site.data.alerts.callout_success}} -For a more realistic use of ActiveRecord with CockroachDB, see our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. -{{site.data.alerts.end}} - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the ActiveRecord ORM - -To install ActiveRecord as well as the [pg driver](https://rubygems.org/gems/pg) and a [CockroachDB Ruby package](https://github.com/cockroachdb/activerecord-cockroachdb-adapter) that accounts for some minor differences between CockroachDB and PostgreSQL, run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ gem install activerecord pg activerecord-cockroachdb-adapter -~~~ - -{{site.data.alerts.callout_info}} -The exact command above will vary depending on the desired version of ActiveRecord. Specifically, version 4.2.x of ActiveRecord requires version 0.1.x of the adapter; version 5.1.x of ActiveRecord requires version 0.2.x of the adapter. -{{site.data.alerts.end}} - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Ruby code - -The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/activerecord-basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby activerecord-basic-sample.rb -~~~ - -The output should be: - -~~~ shell --- create_table(:accounts, {:force=>true}) - -> 0.0361s -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Ruby code - -The following code uses the [ActiveRecord](http://guides.rubyonrails.org/active_record_basics.html) ORM to map Ruby-specific objects to SQL operations. Specifically, `Schema.new.change()` creates an `accounts` table based on the Account model (or drops and recreates the table if it already exists), `Account.create()` inserts rows into the table, and `Account.all` selects from the table so that balances can be printed. - -Copy the code or -download it directly. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/activerecord-basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby activerecord-basic-sample.rb -~~~ - -The output should be: - -~~~ shell --- create_table(:accounts, {:force=>true}) - -> 0.0361s -1 1000 -2 250 -~~~ - -To verify that the table and rows were created successfully, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -Then, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000 | -| 2 | 250 | -+----+---------+ -(2 rows) -~~~ - -
- -## What's next? - -Read more about using the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html), or check out a more realistic implementation of ActiveRecord with CockroachDB in our [`examples-orms`](https://github.com/cockroachdb/examples-orms) repository. - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-ruby-app-with-cockroachdb.md b/src/current/v19.1/build-a-ruby-app-with-cockroachdb.md deleted file mode 100644 index 6008d3971ba..00000000000 --- a/src/current/v19.1/build-a-ruby-app-with-cockroachdb.md +++ /dev/null @@ -1,207 +0,0 @@ ---- -title: Build a Ruby App with CockroachDB -summary: Learn how to use CockroachDB from a simple Ruby application with the pg client driver. -toc: true -twitter: false ---- - - - -This tutorial shows you how to build a simple Ruby application with CockroachDB using a PostgreSQL-compatible driver or ORM. - -We have tested the [Ruby pg driver](https://rubygems.org/gems/pg) and the [ActiveRecord ORM](http://guides.rubyonrails.org/active_record_basics.html) enough to claim **beta-level** support, so those are featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the Ruby pg driver - -To install the [Ruby pg driver](https://rubygems.org/gems/pg), run the following command: - -{% include copy-clipboard.html %} -~~~ shell -$ gem install pg -~~~ - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Ruby code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.rb file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby basic-sample.rb -~~~ - -The output should be: - -~~~ -Initial balances: -{"id"=>"1", "balance"=>"1000"} -{"id"=>"2", "balance"=>"250"} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rb file, or create the file yourself and copy the code into it. - -{% include v19.1/client-transaction-retry.md %} - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/txn-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby txn-sample.rb -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
      - -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Run the Ruby code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -The following code connects as the `maxroach` user and executes some basic SQL statements: creating a table, inserting rows, and reading and printing the rows. - -Download the basic-sample.rb file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/basic-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby basic-sample.rb -~~~ - -The output should be: - -~~~ -Initial balances: -{"id"=>"1", "balance"=>"1000"} -{"id"=>"2", "balance"=>"250"} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rb file, or create the file yourself and copy the code into it. - -{% include v19.1/client-transaction-retry.md %} - -{% include copy-clipboard.html %} -~~~ ruby -{% include {{page.version.version}}/app/insecure/txn-sample.rb %} -~~~ - -Then run the code: - -{% include copy-clipboard.html %} -~~~ shell -$ ruby txn-sample.rb -~~~ - -To verify that funds were transferred from one account to another, start the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=bank -~~~ - -To check the account balances, issue the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id, balance FROM accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
- -## What's next? - -Read more about using the [Ruby pg driver](https://rubygems.org/gems/pg). - -{% include {{page.version.version}}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-a-rust-app-with-cockroachdb.md b/src/current/v19.1/build-a-rust-app-with-cockroachdb.md deleted file mode 100644 index a6228b3210a..00000000000 --- a/src/current/v19.1/build-a-rust-app-with-cockroachdb.md +++ /dev/null @@ -1,155 +0,0 @@ ---- -title: Build a Rust App with CockroachDB -summary: Learn how to use CockroachDB from a simple Rust application with a low-level client driver. -toc: true -twitter: false ---- - -This tutorial shows you how to build a simple Rust application with CockroachDB using a PostgreSQL-compatible driver. - -We have tested the Rust Postgres driver enough to claim **beta-level** support, so that driver is featured here. If you encounter problems, please [open an issue](https://github.com/cockroachdb/cockroach/issues/new) with details to help us make progress toward full support. - -## Before you begin - -{% include {{page.version.version}}/app/before-you-begin.md %} - -## Step 1. Install the Rust Postgres driver - -Install the Rust Postgres driver as described in the official documentation. - -
- -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/create-maxroach-user-and-bank-database.md %} - -## Step 3. Generate a certificate for the `maxroach` user - -Create a certificate and key for the `maxroach` user by running the following command. The code samples will run as this user. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key -~~~ - -## Step 4. Run the Rust code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.rs file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ rust -{% include {{ page.version.version }}/app/basic-sample.rs %} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rs file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ rust -{% include {{ page.version.version }}/app/txn-sample.rs %} -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --certs-dir=certs -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -
- -## Step 2. Create the `maxroach` user and `bank` database - -{% include {{page.version.version}}/app/insecure/create-maxroach-user-and-bank-database.md %} - -## Step 3. Create a table in the new database - -As the `maxroach` user, use the [built-in SQL client](use-the-built-in-sql-client.html) to create an `accounts` table in the new database. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---database=bank \ ---user=maxroach \ --e 'CREATE TABLE accounts (id INT PRIMARY KEY, balance INT)' -~~~ - -## Step 4. Run the Rust code - -Now that you have a database and a user, you'll run code to create a table and insert some rows, and then you'll run code to read and update values as an atomic [transaction](transactions.html). - -### Basic statements - -First, use the following code to connect as the `maxroach` user and execute some basic SQL statements, inserting rows and reading and printing the rows. - -Download the basic-sample.rs file, or create the file yourself and copy the code into it. - -{% include copy-clipboard.html %} -~~~ rust -{% include {{ page.version.version }}/app/insecure/basic-sample.rs %} -~~~ - -### Transaction (with retry logic) - -Next, use the following code to again connect as the `maxroach` user but this time execute a batch of statements as an atomic transaction to transfer funds from one account to another, where all included statements are either committed or aborted. - -Download the txn-sample.rs file, or create the file yourself and copy the code into it. - -{{site.data.alerts.callout_info}} -CockroachDB may require the [client to retry a transaction](transactions.html#transaction-retries) in case of read/write contention. CockroachDB provides a generic retry function that runs inside a transaction and retries it as needed. You can copy and paste the retry function from here into your code. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ rust -{% include {{ page.version.version }}/app/insecure/txn-sample.rs %} -~~~ - -After running the code, use the [built-in SQL client](use-the-built-in-sql-client.html) to verify that funds were transferred from one account to another: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e 'SELECT id, balance FROM accounts' --database=bank -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 900 | -| 2 | 350 | -+----+---------+ -(2 rows) -~~~ - -
      - -## What's next? - -Read more about using the Rust Postgres driver. - -{% include {{ page.version.version }}/app/see-also-links.md %} diff --git a/src/current/v19.1/build-an-app-with-cockroachdb.md b/src/current/v19.1/build-an-app-with-cockroachdb.md deleted file mode 100644 index 7549d7e5561..00000000000 --- a/src/current/v19.1/build-an-app-with-cockroachdb.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -title: Build an App with CockroachDB -summary: The tutorials in this section show you how to build a simple application with CockroachDB, using PostgreSQL-compatible client drivers and ORMs. -tags: golang, python, java -toc: true -twitter: false ---- - -The tutorials in this section show you how to build a simple application with CockroachDB using PostgreSQL-compatible client drivers and ORMs. - -## Tutorials - -{% include {{page.version.version}}/misc/drivers.md %} - -## See also - -- [Client drivers](install-client-drivers.html) -- [Third party database tools](third-party-database-tools.html) -- [Connection parameters](connection-parameters.html) -- [Transactions](transactions.html) -- [Performance best practices](performance-best-practices-overview.html) diff --git a/src/current/v19.1/bytes.md b/src/current/v19.1/bytes.md deleted file mode 100644 index 88de70a795e..00000000000 --- a/src/current/v19.1/bytes.md +++ /dev/null @@ -1,126 +0,0 @@ ---- -title: BYTES -summary: The BYTES data type stores binary strings of variable length. -toc: true ---- - -The `BYTES` [data type](data-types.html) stores binary strings of variable length. - - -## Aliases - -In CockroachDB, the following are aliases for `BYTES`: - -- `BYTEA` -- `BLOB` - -## Syntax - -To express a byte array constant, see the section on -[byte array literals](sql-constants.html#byte-array-literals) for more -details. For example, the following three are equivalent literals for the same -byte array: `b'abc'`, `b'\141\142\143'`, `b'\x61\x62\x63'`. - -In addition to this syntax, CockroachDB also supports using -[string literals](sql-constants.html#string-literals), including the -syntax `'...'`, `e'...'` and `x'....'` in contexts where a byte array -is otherwise expected. - -## Size - -The size of a `BYTES` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## Example - -~~~ sql -> CREATE TABLE bytes (a INT PRIMARY KEY, b BYTES); - -> -- explicitly typed BYTES literals -> INSERT INTO bytes VALUES (1, b'\141\142\143'), (2, b'\x61\x62\x63'), (3, b'\141\x62\c'); - -> -- string literal implicitly typed as BYTES -> INSERT INTO bytes VALUES (4, 'abc'); - - -> SELECT * FROM bytes; -~~~ -~~~ -+---+-----+ -| a | b | -+---+-----+ -| 1 | abc | -| 2 | abc | -| 3 | abc | -| 4 | abc | -+---+-----+ -(4 rows) -~~~ - -## Supported conversions - -`BYTES` values can be -[cast](data-types.html#data-type-conversions-and-casts) explicitly to -[`STRING`](string.html). This conversion always succeeds. Two -conversion modes are supported, controlled by the -[session variable](set-vars.html#supported-variables) `bytea_output`: - -- `hex` (default): The output of the conversion starts with the two - characters `\`, `x` and the rest of the string is composed by the - hexadecimal encoding of each byte in the input. For example, - `x'48AA'::STRING` produces `'\x48AA'`. 
- -- `escape`: The output of the conversion contains each byte in the - input, as-is if it is an ASCII character, or encoded using the octal - escape format `\NNN` otherwise. For example, `x'48AA'::STRING` - produces `'H\252'`. - -`STRING` values can be cast explicitly to `BYTES`. This conversion -will fail if the hexadecimal digits are not valid, or if there is an -odd number of them. Two conversion modes are supported: - -- If the string starts with the two special characters `\` and `x` - (e.g., `\xAABB`), the rest of the string is interpreted as a sequence - of hexadecimal digits. The string is then converted to a byte array - where each pair of hexadecimal digits is converted to one byte. - -- Otherwise, the string is converted to a byte array that contains its - UTF-8 encoding. - -### `STRING` vs. `BYTES` - -While both `STRING` and `BYTES` can appear to have similar behavior in many situations, one should understand the nuances of each before casting one to the other. - -`STRING` treats all of its data as characters, or more specifically, Unicode code points. `BYTES` treats all of its data as a byte string. This difference in implementation can lead to dramatically different behavior. For example, let's take a complex Unicode character such as ☃ ([the snowman emoji](https://emojipedia.org/snowman/)): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT length('☃'::string); -~~~ - -~~~ - length -+--------+ - 1 -~~~ - -~~~ sql -> SELECT length('☃'::bytes); -~~~ -~~~ - length -+--------+ - 3 -~~~ - -In this case, [`LENGTH(string)`](functions-and-operators.html#string-and-byte-functions) measures the number of Unicode code points present in the string, whereas [`LENGTH(bytes)`](functions-and-operators.html#string-and-byte-functions) measures the number of bytes required to store that value. Each character (or Unicode code point) can be encoded using multiple bytes, hence the difference in output between the two. - -#### Translating literals to `STRING` vs. `BYTES` - -A literal entered through a SQL client will be translated into a different value based on the type: - -+ `BYTES` gives a special meaning to the pair `\x` at the beginning and translates the rest by substituting each pair of hexadecimal digits with a single byte. For example, `\xff` is equivalent to a single byte with the value of 255. For more information, see [SQL Constants: String literals with character escapes](sql-constants.html#string-literals-with-character-escapes). -+ `STRING` does not give a special meaning to `\x`, so all characters are treated as distinct Unicode code points. For example, `\xff` is treated as a `STRING` with length 4 (`\`, `x`, `f`, and `f`). - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/cancel-job.md b/src/current/v19.1/cancel-job.md deleted file mode 100644 index 314a39fd919..00000000000 --- a/src/current/v19.1/cancel-job.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: CANCEL JOB -summary: The CANCEL JOB statement stops long-running jobs such as imports, backups, and schema changes. -toc: true ---- - -The `CANCEL JOB` [statement](sql-statements.html) lets you stop long-running jobs, which include [`IMPORT`](import.html) jobs, enterprise [`BACKUP`](backup.html) and [`RESTORE`](restore.html) jobs, schema changes, [user-created table statistics](create-statistics.html) jobs, [automatic table statistics](cost-based-optimizer.html#table-statistics) jobs, and [changefeeds](change-data-capture.html). 
- - -## Limitations - -When an enterprise [`RESTORE`](restore.html) is canceled, partially restored data is properly cleaned up. This can have a minor, temporary impact on cluster performance. - -## Required privileges - -Only members of the `admin` role can cancel a job. By default, the `root` user belongs to the `admin` role. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/cancel_job.html %} -
- -## Parameters - -Parameter | Description -----------|------------ -`job_id` | The ID of the job you want to cancel, which can be found with [`SHOW JOBS`](show-jobs.html). -`select_stmt` | A [selection query](selection-queries.html) that returns `job_id`(s) to cancel. - -## Examples - -### Cancel a single job - -~~~ sql -> SHOW JOBS; -~~~ -~~~ -+----------------+---------+-------------------------------------------+... -| id | type | description |... -+----------------+---------+-------------------------------------------+... -| 27536791415282 | RESTORE | RESTORE db.* FROM 'azure://backup/db/tbl' |... -+----------------+---------+-------------------------------------------+... -~~~ -~~~ sql -> CANCEL JOB 27536791415282; -~~~ - -### Cancel multiple jobs - -To cancel multiple jobs, nest a [`SELECT` clause](select-clause.html) that retrieves `job_id`(s) inside the `CANCEL JOBS` statement: - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL JOBS (SELECT job_id FROM [SHOW JOBS] - WHERE user_name = 'maxroach'); -~~~ - -All jobs created by `maxroach` will be canceled. - -### Cancel automatic table statistics jobs - -Canceling an automatic table statistics job is not useful since the system will automatically restart the job immediately. To permanently disable automatic table statistics jobs, disable the `sql.stats.automatic_collection.enabled` [cluster setting](cluster-settings.html): - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; -~~~ - -## See also - -- [`SHOW JOBS`](show-jobs.html) -- [`BACKUP`](backup.html) -- [`RESTORE`](restore.html) -- [`IMPORT`](import.html) -- [`CREATE CHANGEFEED`](create-changefeed.html) diff --git a/src/current/v19.1/cancel-query.md b/src/current/v19.1/cancel-query.md deleted file mode 100644 index c8f4a84d3c3..00000000000 --- a/src/current/v19.1/cancel-query.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: CANCEL QUERY -summary: The CANCEL QUERY statement cancels a running SQL query. -toc: true ---- - -The `CANCEL QUERY` [statement](sql-statements.html) cancels a running SQL query. - - -## Considerations - -- Schema changes are treated differently than other SQL queries. You can use [`SHOW JOBS`](show-jobs.html) to monitor the progress of schema changes and [`CANCEL JOB`](cancel-job.html) to cancel schema changes that are taking longer than expected. -- In rare cases where a query is close to completion when a cancellation request is issued, the query may run to completion. - -## Required privileges - -Members of the `admin` role (including `root`, which belongs to `admin` by default) can cancel any currently active queries. Users who are not members of the `admin` role can cancel only their own currently active queries. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/cancel_query.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`query_id` | A [scalar expression](scalar-expressions.html) that produces the ID of the query to cancel.

 `CANCEL QUERY` accepts a single query ID. If a subquery is used and returns multiple IDs, the `CANCEL QUERY` statement will fail. To cancel multiple queries, use `CANCEL QUERIES`. -`select_stmt` | A [selection query](selection-queries.html) that returns the ID(s) of the queries to cancel. - -## Response - -When a query is successfully canceled, CockroachDB sends a `query execution canceled` error to the client that issued the query. - -- If the canceled query was a single, stand-alone statement, no further action is required by the client. -- If the canceled query was part of a larger, multi-statement [transaction](transactions.html), the client should then issue a [`ROLLBACK`](rollback-transaction.html) statement. - -## Examples - -### Cancel a query via the query ID - -In this example, we use the [`SHOW QUERIES`](show-queries.html) statement to get the ID of a query and then pass the ID into the `CANCEL QUERY` statement: - -~~~ sql -> SHOW QUERIES; -~~~ - -~~~ -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| query_id | node_id | username | start | query | client_address | application_name | distributed | phase | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| 14dacc1f9a781e3d0000000000000001 | 2 | mroach | 2017-08-10 14:08:22.878113+00:00 | SELECT * FROM test.kv ORDER BY k | 192.168.0.72:56194 | test_app | false | executing | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -| 14dacc206c47a9690000000000000002 | 2 | root | 2017-08-14 19:11:05.309119+00:00 | SHOW CLUSTER QUERIES | 127.0.0.1:50921 | | NULL | preparing | -+----------------------------------+---------+----------+----------------------------------+----------------------------------+--------------------+------------------+-------------+-----------+ -~~~ - -~~~ sql -> CANCEL QUERY '14dacc1f9a781e3d0000000000000001'; -~~~ - -### Cancel a query via a subquery - -In this example, we nest a [`SELECT` clause](select-clause.html) that retrieves the ID of a query inside the `CANCEL QUERY` statement: - -~~~ sql -> CANCEL QUERY (SELECT query_id FROM [SHOW CLUSTER QUERIES] - WHERE client_address = '192.168.0.72:56194' - AND username = 'mroach' - AND query = 'SELECT * FROM test.kv ORDER BY k'); -~~~ - -{{site.data.alerts.callout_info}}CANCEL QUERY accepts a single query ID. If a subquery is used and returns multiple IDs, the CANCEL QUERY statement will fail. To cancel multiple queries, use CANCEL QUERIES.{{site.data.alerts.end}} - -## See also - -- [Manage Long-Running Queries](manage-long-running-queries.html) -- [`SHOW QUERIES`](show-queries.html) -- [`CANCEL SESSION`](cancel-session.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/cancel-session.md b/src/current/v19.1/cancel-session.md deleted file mode 100644 index c50142f1ca9..00000000000 --- a/src/current/v19.1/cancel-session.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: CANCEL SESSION -summary: The CANCEL SESSION statement stops long-running sessions. -toc: true ---- - -The `CANCEL SESSION` [statement](sql-statements.html) lets you stop long-running sessions. 
`CANCEL SESSION` will attempt to cancel the currently active query and end the session. - - -## Required privileges - -Only members of the `admin` role and the user that the session belongs to can cancel a session. By default, the `root` user belongs to the `admin` role. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/cancel_session.html %}
      - -## Parameters - -Parameter | Description -----------|------------ -`session_id` | The ID of the session you want to cancel, which can be found with [`SHOW SESSIONS`](show-sessions.html).

 `CANCEL SESSION` accepts a single session ID. If a subquery is used and returns multiple IDs, the `CANCEL SESSION` statement will fail. To cancel multiple sessions, use `CANCEL SESSIONS`. -`select_stmt` | A [selection query](selection-queries.html) that returns `session_id`(s) to cancel. - -## Examples - -### Cancel a single session - -In this example, we use the [`SHOW SESSIONS`](show-sessions.html) statement to get the ID of a session and then pass the ID into the `CANCEL SESSION` statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW SESSIONS; -~~~ -~~~ -+---------+----------------------------------+-----------+... -| node_id | session_id | user_name |... -+---------+----------------------------------+-----------+... -| 1 | 1530c309b1d8d5f00000000000000001 | root |... -+---------+----------------------------------+-----------+... -| 1 | 1530fe0e46d2692e0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL SESSION '1530fe0e46d2692e0000000000000001'; -~~~ - -You can also cancel a session using a subquery that returns a single session ID: - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL SESSIONS (SELECT session_id FROM [SHOW SESSIONS] - WHERE user_name = 'root'); -~~~ - -### Cancel multiple sessions - -Use the [`SHOW SESSIONS`](show-sessions.html) statement to view all active sessions: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW SESSIONS; -~~~ -~~~ -+---------+----------------------------------+-----------+... -| node_id | session_id | user_name |... -+---------+----------------------------------+-----------+... -| 1 | 1530c309b1d8d5f00000000000000001 | root |... -+---------+----------------------------------+-----------+... -| 1 | 1530fe0e46d2692e0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -| 1 | 15310cc79671fc6a0000000000000001 | maxroach |... -+---------+----------------------------------+-----------+... -~~~ - -To cancel multiple sessions, nest a [`SELECT` clause](select-clause.html) that retrieves `session_id`(s) inside the `CANCEL SESSIONS` statement: - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL SESSIONS (SELECT session_id FROM [SHOW SESSIONS] - WHERE user_name = 'maxroach'); -~~~ - -All sessions created by `maxroach` will be canceled. - -## See also - -- [`SHOW SESSIONS`](show-sessions.html) -- [`SET` (session variable)](set-vars.html) -- [`SHOW` (session variable)](show-vars.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/change-data-capture.md b/src/current/v19.1/change-data-capture.md deleted file mode 100644 index 85ab0e95cc9..00000000000 --- a/src/current/v19.1/change-data-capture.md +++ /dev/null @@ -1,700 +0,0 @@ ---- -title: Change Data Capture (CDC) -summary: Change data capture (CDC) provides efficient, distributed, row-level change subscriptions. -toc: true ---- - -Change data capture (CDC) provides efficient, distributed, row-level change feeds into a configurable sink for downstream processing such as reporting, caching, or full-text indexing. - -## What is change data capture? - -While CockroachDB is an excellent system of record, it also needs to coexist with other systems. For example, you might want to keep your data mirrored in full-text indexes, analytics engines, or big data pipelines. - -The main feature of CDC is the changefeed, which targets an allowlist of tables; the rows in these tables are called the "watched rows". 
There are two implementations of changefeeds: - -- [Core changefeeds](#create-a-core-changefeed), which stream row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. -- [Enterprise changefeeds](#configure-a-changefeed-enterprise), where every change to a watched row is emitted as a record in a configurable format (`JSON` or Avro) to a configurable sink ([Kafka](https://kafka.apache.org/)). - -## Ordering guarantees - -- In most cases, each version of a row will be emitted once. However, some infrequent conditions (e.g., node failures, network partitions) will cause them to be repeated. This gives our changefeeds an **at-least-once delivery guarantee**. - -- Once a row has been emitted with some timestamp, no previously unseen versions of that row will be emitted with a lower timestamp. That is, you will never see a _new_ change for that row at an earlier timestamp. - - For example, if you ran the following: - - ~~~ sql - > CREATE TABLE foo (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); - > CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://localhost:9092' WITH UPDATED; - > INSERT INTO foo VALUES (1, 'Carl'); - > UPDATE foo SET name = 'Petee' WHERE id = 1; - ~~~ - - You'd expect the changefeed to emit: - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - ~~~ - - It is also possible that the changefeed emits an out of order duplicate of an earlier value that you already saw: - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - ~~~ - - However, you will **never** see an output like the following (i.e., an out of order row that you've never seen before): - - ~~~ shell - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Petee"} - [1] {"__crdb__": {"updated": }, "id": 1, "name": "Carl"} - ~~~ - -- If a row is modified more than once in the same transaction, only the last change will be emitted. - -- Rows are sharded between Kafka partitions by the row’s [primary key](primary-key.html). - -- The `UPDATED` option adds an "updated" timestamp to each emitted row. You can also use the `RESOLVED` option to emit periodic "resolved" timestamp messages to each Kafka partition. A "resolved" timestamp is a guarantee that no (previously unseen) rows with a lower update timestamp will be emitted on that partition. - - For example: - - ~~~ shell - {"__crdb__": {"updated": "1532377312562986715.0000000000"}, "id": 1, "name": "Petee H"} - {"__crdb__": {"updated": "1532377306108205142.0000000000"}, "id": 2, "name": "Carl"} - {"__crdb__": {"updated": "1532377358501715562.0000000000"}, "id": 3, "name": "Ernie"} - {"__crdb__":{"resolved":"1532379887442299001.0000000000"}} - {"__crdb__":{"resolved":"1532379888444290910.0000000000"}} - {"__crdb__":{"resolved":"1532379889448662988.0000000000"}} - ... - {"__crdb__":{"resolved":"1532379922512859361.0000000000"}} - {"__crdb__": {"updated": "1532379923319195777.0000000000"}, "id": 4, "name": "Lucky"} - ~~~ - -- With duplicates removed, an individual row is emitted in the same order as the transactions that updated it. However, this is not true for updates to two different rows, even two rows in the same table. - - To compare two different rows for [happens-before](https://en.wikipedia.org/wiki/Happened-before), compare the "updated" timestamp. 
This works across anything in the same cluster (e.g., tables, nodes, etc.). - - Resolved timestamp notifications on every Kafka partition can be used to provide strong ordering and global consistency guarantees by buffering records in between timestamp closures. Use the "resolved" timestamp to see every row that changed at a certain time. - - The complexity with timestamps is necessary because CockroachDB supports transactions that can affect any part of the cluster, and it is not possible to horizontally divide the transaction log into independent changefeeds. For more information about this, [read our blog post on CDC](https://www.cockroachlabs.com/blog/change-data-capture/). - -## Avro schema changes - -To ensure that the Avro schemas that CockroachDB publishes will work with the schema compatibility rules used by the Confluent schema registry, CockroachDB emits all fields in Avro as nullable unions. This ensures that Avro and Confluent consider the schemas to be both backward- and forward-compatible, since the Confluent Schema Registry has a different set of rules than Avro for schemas to be backward- and forward-compatible. - -Note that the original CockroachDB column definition is also included in the schema as a doc field, so it's still possible to distinguish between a `NOT NULL` CockroachDB column and a `NULL` CockroachDB column. - -## Schema changes with column backfill - -When schema changes with column backfill (e.g., adding a column with a default, adding a computed column, adding a `NOT NULL` column, dropping a column) are made to watched rows, the changefeed will emit some duplicates during the backfill. When it finishes, CockroachDB outputs all watched rows using the new schema. When using Avro, rows that have been backfilled by a schema change are always re-emitted. - -For an example of a schema change with column backfill, start with the changefeed created in the [example below](#create-a-changefeed-connected-to-kafka): - -~~~ shell -[1] {"id": 1, "name": "Petee H"} -[2] {"id": 2, "name": "Carl"} -[3] {"id": 3, "name": "Ernie"} -~~~ - -Add a column to the watched table: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE office_dogs ADD COLUMN likes_treats BOOL DEFAULT TRUE; -~~~ - -The changefeed emits duplicate records 1, 2, and 3 before outputting the records using the new schema: - -~~~ shell -[1] {"id": 1, "name": "Petee H"} -[2] {"id": 2, "name": "Carl"} -[3] {"id": 3, "name": "Ernie"} -[1] {"id": 1, "name": "Petee H"} # Duplicate -[2] {"id": 2, "name": "Carl"} # Duplicate -[3] {"id": 3, "name": "Ernie"} # Duplicate -[1] {"id": 1, "likes_treats": true, "name": "Petee H"} -[2] {"id": 2, "likes_treats": true, "name": "Carl"} -[3] {"id": 3, "likes_treats": true, "name": "Ernie"} -~~~ - -## Enable rangefeeds to reduce latency - -New in v19.1: Previously created changefeeds collect changes by periodically sending a request for any recent changes. Newly created changefeeds now behave differently: they connect a long-lived request (i.e., a rangefeed), which pushes changes as they happen. This reduces the latency of row changes, as well as reduces transaction restarts on tables being watched by a changefeed for some workloads. - -To enable rangefeeds, set the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html) to `true`. Any created changefeed will error until this setting is enabled. Note that enabling rangefeeds currently has a small performance cost (about a 5-10% increase in latencies), whether or not the rangefeed is being used in a changefeed. 
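 - -For example, to enable rangefeeds for the whole cluster (this is the same statement used in the [usage examples](#usage-examples) below): - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.rangefeed.enabled = true; -~~~ 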
- -If you are experiencing an issue, you can revert back to the previous behavior by setting `changefeed.push.enabled` to `false`. Note that this setting will be removed in a future release; if you have to use the fallback, please [file a Github issue](file-an-issue.html). - -{{site.data.alerts.callout_info}} -To enable rangefeeds for an existing changefeed, you must also restart the changefeed. For an enterprise changefeed, [pause](#pause) and [resume](#resume) the changefeed. For a core changefeed, cut the connection (**CTRL+C**) and reconnect using the `cursor` option. -{{site.data.alerts.end}} - -The `kv.closed_timestamp.target_duration` [cluster setting](cluster-settings.html) can be used with push changefeeds. Resolved timestamps will always be behind by at least this setting's duration; however, decreasing the duration leads to more transaction restarts in your cluster, which can affect performance. - -## Create a changefeed (Core) - -New in v19.1: To create a core changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> EXPERIMENTAL CHANGEFEED FOR name; -~~~ - -For more information, see [`CHANGEFEED FOR`](changefeed-for.html). - -## Configure a changefeed (Enterprise) - -An enterprise changefeed streams row-level changes in a configurable format to a configurable sink (i.e., Kafka or a cloud storage sink). You can [create](#create), [pause](#pause), [resume](#resume), [cancel](#cancel), [monitor](#monitor-a-changefeed), and [debug](#debug-a-changefeed) an enterprise changefeed. - -### Create - -To create an enterprise changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name INTO 'scheme://host:port'; -~~~ - -For more information, see [`CREATE CHANGEFEED`](create-changefeed.html). - -### Pause - -To pause an enterprise changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> PAUSE JOB job_id; -~~~ - -For more information, see [`PAUSE JOB`](pause-job.html). - -### Resume - -To resume a paused enterprise changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> RESUME JOB job_id; -~~~ - -For more information, see [`RESUME JOB`](resume-job.html). - -### Cancel - -To cancel an enterprise changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL JOB job_id; -~~~ - -For more information, see [`CANCEL JOB`](cancel-job.html). - -## Monitor a changefeed - -{{site.data.alerts.callout_info}} -Monitoring is only available for enterprise changefeeds. -{{site.data.alerts.end}} - -Changefeed progress is exposed as a high-water timestamp that advances as the changefeed progresses. This is a guarantee that all changes before or at the timestamp have been emitted. You can monitor a changefeed: - -- On the [Changefeed Dashboard](admin-ui-cdc-dashboard.html) of the Admin UI. -- On the [Jobs page](admin-ui-jobs-page.html) of the Admin UI. Hover over the high-water timestamp to view the [system time](as-of-system-time.html). -- Using `crdb_internal.jobs`: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM crdb_internal.jobs WHERE job_id = ; - ~~~ - ~~~ - job_id | job_type | description | ... | high_water_timestamp | error | coordinator_id - +--------------------+------------+------------------------------------------------------------------------+ ... +--------------------------------+-------+----------------+ - 383870400694353921 | CHANGEFEED | CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092' | ... 
| 1537279405671006870.0000000000 | | 1 - (1 row) - ~~~ - -- Setting up an alert on the `changefeed.max_behind_nanos` metric to track when a changefeed's high-water mark timestamp is at risk of falling behind the cluster's [garbage collection window](configure-replication-zones.html#replication-zone-variables). For more information, see [Monitoring and Alerting](monitoring-and-alerting.html#changefeed-is-experiencing-high-latency). - -{{site.data.alerts.callout_info}} -You can use the high-water timestamp to [start a new changefeed where another ended](create-changefeed.html#start-a-new-changefeed-where-another-ended). -{{site.data.alerts.end}} - -## Debug a changefeed - -For enterprise changefeeds connected to Kafka, [use log information](debug-and-error-logs.html) to debug connection issues (i.e., `kafka: client has run out of available brokers to talk to (Is your cluster reachable?)`). Debug by looking for lines in the logs with `[kafka-producer]` in them: - -~~~ -I190312 18:56:53.535646 585 vendor/github.com/Shopify/sarama/client.go:123 [kafka-producer] Initializing new client -I190312 18:56:53.535714 585 vendor/github.com/Shopify/sarama/client.go:724 [kafka-producer] client/metadata fetching metadata for all topics from broker localhost:9092 -I190312 18:56:53.536730 569 vendor/github.com/Shopify/sarama/broker.go:148 [kafka-producer] Connected to broker at localhost:9092 (unregistered) -I190312 18:56:53.537661 585 vendor/github.com/Shopify/sarama/client.go:500 [kafka-producer] client/brokers registered new broker #0 at 172.16.94.87:9092 -I190312 18:56:53.537686 585 vendor/github.com/Shopify/sarama/client.go:170 [kafka-producer] Successfully initialized new client -~~~ - -## Usage examples - -### Create a core changefeed - -New in v19.1: {% include {{ page.version.version }}/cdc/create-core-changefeed.md %} - -### Create a core changefeed using Avro - -New in v19.1: {% include {{ page.version.version }}/cdc/create-core-changefeed-avro.md %} - -### Create a changefeed connected to Kafka - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [enterprise-only](enterprise-licensing.html) feature. For the core version, see [the `CHANGEFEED FOR` example above](#create-a-core-changefeed). -{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink. - -1. If you do not already have one, [request a trial enterprise license](enterprise-licensing.html). - -2. In a terminal window, start `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure --listen-addr=localhost --background - ~~~ - -3. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/) (which includes Kafka). - -4. Move into the extracted `confluent-` directory and start Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent start - ~~~ - - Only `zookeeper` and `kafka` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives). - -5. Create a Kafka topic: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic office_dogs - ~~~ - - {{site.data.alerts.callout_info}} - You are expected to create any Kafka topics with the necessary number of replications and partitions. 
[Topics can be created manually](https://kafka.apache.org/documentation/#basic_ops_add_topic) or [Kafka brokers can be configured to automatically create topics](https://kafka.apache.org/documentation/#topicconfigs) with a default partition count and replication factor. - {{site.data.alerts.end}} - -6. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -7. Set your organization name and [enterprise license](enterprise-licensing.html) key that you received via email: - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -8. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -9. Create a database called `cdc_demo`: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE cdc_demo; - ~~~ - -10. Set the database as the default: - - {% include copy-clipboard.html %} - ~~~ sql - > SET DATABASE = cdc_demo; - ~~~ - -11. Create a table and add data: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES - (1, 'Petee'), - (2, 'Carl'); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1; - ~~~ - -12. Start the changefeed: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092'; - ~~~ - ~~~ - - job_id - +--------------------+ - 360645287206223873 - (1 row) - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to Kafka. - -13. In a new terminal, move into the extracted `confluent-` directory and start watching the Kafka topic: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/kafka-console-consumer \ - --bootstrap-server=localhost:9092 \ - --property print.key=true \ - --from-beginning \ - --topic=office_dogs - ~~~ - - ~~~ shell - [1] {"id": 1, "name": "Petee H"} - [2] {"id": 2, "name": "Carl"} - ~~~ - - Note that the initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of `"Petee"` is omitted). - -14. Back in the SQL client, insert more data: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES (3, 'Ernie'); - ~~~ - -15. Back in the terminal where you're watching the Kafka topic, the following output has appeared: - - ~~~ shell - [3] {"id": 3, "name": "Ernie"} - ~~~ - -16. When you are done, exit the SQL shell (`\q`). - -17. To stop `cockroach`, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ - -18. To stop Kafka, move into the extracted `confluent-` directory and stop Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent stop - ~~~ - -### Create a changefeed connected to Kafka using Avro - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [enterprise-only](enterprise-licensing.html) feature. For the core version, see [the `CHANGEFEED FOR` example above](#create-a-core-changefeed-using-avro). 
-{{site.data.alerts.end}} - -In this example, you'll set up a changefeed for a single-node cluster that is connected to a Kafka sink and emits [Avro](https://avro.apache.org/docs/1.8.2/spec.html) records. - -1. If you do not already have one, [request a trial enterprise license](enterprise-licensing.html). - -2. In a terminal window, start `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure --listen-addr=localhost --background - ~~~ - -3. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/) (which includes Kafka). - -4. Move into the extracted `confluent-` directory and start Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent start - ~~~ - - Only `zookeeper`, `kafka`, and `schema-registry` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives). - -5. Create a Kafka topic: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/kafka-topics \ - --create \ - --zookeeper localhost:2181 \ - --replication-factor 1 \ - --partitions 1 \ - --topic office_dogs - ~~~ - - {{site.data.alerts.callout_info}} - You are expected to create any Kafka topics with the necessary number of replications and partitions. [Topics can be created manually](https://kafka.apache.org/documentation/#basic_ops_add_topic) or [Kafka brokers can be configured to automatically create topics](https://kafka.apache.org/documentation/#topicconfigs) with a default partition count and replication factor. - {{site.data.alerts.end}} - -6. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -7. Set your organization name and [enterprise license](enterprise-licensing.html) key that you received via email: - - {% include copy-clipboard.html %} - ~~~ shell - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -8. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -9. Create a database called `cdc_demo`: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE cdc_demo; - ~~~ - -10. Set the database as the default: - - {% include copy-clipboard.html %} - ~~~ sql - > SET DATABASE = cdc_demo; - ~~~ - -11. Create a table and add data: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES - (1, 'Petee'), - (2, 'Carl'); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1; - ~~~ - -12. Start the changefeed: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092' WITH format = experimental_avro, confluent_schema_registry = 'http://localhost:8081'; - ~~~ - - ~~~ - job_id - +--------------------+ - 360645287206223873 - (1 row) - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to Kafka. - -13. 
In a new terminal, move into the extracted `confluent-` directory and start watching the Kafka topic: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/kafka-avro-console-consumer \ - --bootstrap-server=localhost:9092 \ - --property print.key=true \ - --from-beginning \ - --topic=office_dogs - ~~~ - - ~~~ shell - {"id":{"long":1}} {"after":{"office_dogs":{"id":{"long":1},"name":{"string":"Petee H"}}}} - {"id":{"long":2}} {"after":{"office_dogs":{"id":{"long":2},"name":{"string":"Carl"}}}} - ~~~ - - Note that the initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of `"Petee"` is omitted). - -14. Back in the SQL client, insert more data: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES (3, 'Ernie'); - ~~~ - -15. Back in the terminal where you're watching the Kafka topic, the following output has appeared: - - ~~~ shell - {"id":{"long":3}} {"after":{"office_dogs":{"id":{"long":3},"name":{"string":"Ernie"}}}} - ~~~ - -16. When you are done, exit the SQL shell (`\q`). - -17. To stop `cockroach`, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ - -18. To stop Kafka, move into the extracted `confluent-` directory and stop Confluent: - - {% include copy-clipboard.html %} - ~~~ shell - $ ./bin/confluent stop - ~~~ - -### Create a changefeed connected to a cloud storage sink - -{{site.data.alerts.callout_info}} -[`CREATE CHANGEFEED`](create-changefeed.html) is an [enterprise-only](enterprise-licensing.html) feature. For the core version, see [the `CHANGEFEED FOR` example above](#create-a-core-changefeed). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/cdc/correctness-warning.md %} - -New in v19.1: In this example, you'll set up a changefeed for a single-node cluster that is connected to an AWS S3 sink. Note that you can set up changefeeds for any of [these cloud storage providers](create-changefeed.html#cloud-storage-sink). - -1. If you do not already have one, [request a trial enterprise license](enterprise-licensing.html). - -2. In a terminal window, start `cockroach`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure --listen-addr=localhost --background - ~~~ - -3. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - -4. Set your organization name and [enterprise license](enterprise-licensing.html) key that you received via email: - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -5. Enable the `kv.rangefeed.enabled` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING kv.rangefeed.enabled = true; - ~~~ - -6. Create a database called `cdc_demo`: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE cdc_demo; - ~~~ - -7. Set the database as the default: - - {% include copy-clipboard.html %} - ~~~ sql - > SET DATABASE = cdc_demo; - ~~~ - -8. 
Create a table and add data: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE office_dogs ( - id INT PRIMARY KEY, - name STRING); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO office_dogs VALUES - (1, 'Petee'), - (2, 'Carl'); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE office_dogs SET name = 'Petee H' WHERE id = 1; - ~~~ - -9. Start the changefeed: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE CHANGEFEED FOR TABLE office_dogs INTO 'experimental-s3://example-bucket-name/test?AWS_ACCESS_KEY_ID=enter_key-here&AWS_SECRET_ACCESS_KEY=enter_key_here' with updated, resolved='10s'; - ~~~ - - ~~~ - job_id - +--------------------+ - 360645287206223873 - (1 row) - ~~~ - - This will start up the changefeed in the background and return the `job_id`. The changefeed writes to AWS. - -10. Monitor your changefeed on the Admin UI. For more information, see [Changefeeds Dashboard](admin-ui-cdc-dashboard.html). - -11. When you are done, exit the SQL shell (`\q`). - -12. To stop `cockroach`, run: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach quit --insecure - ~~~ - -## Known limitations - -{% include {{ page.version.version }}/known-limitations/cdc.md %} - -## See also - -- [`CREATE CHANGEFEED`](create-changefeed.html) -- [`CHANGEFEED FOR`](changefeed-for.html) -- [`PAUSE JOB`](pause-job.html) -- [`CANCEL JOB`](cancel-job.html) -- [Other SQL Statements](sql-statements.html) -- [Changefeed Dashboard](admin-ui-cdc-dashboard.html) diff --git a/src/current/v19.1/changefeed-for.md b/src/current/v19.1/changefeed-for.md deleted file mode 100644 index a484ef03831..00000000000 --- a/src/current/v19.1/changefeed-for.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: EXPERIMENTAL CHANGEFEED FOR -summary: The EXPERIMENTAL CHANGEFEED FOR statement creates a new core changefeed, which streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. -toc: true ---- - -{{site.data.alerts.callout_info}} -`EXPERIMENTAL CHANGEFEED FOR` is the core implementation of changefeeds. For the [enterprise-only](enterprise-licensing.html) version, see [`CREATE CHANGEFEED`](create-changefeed.html). -{{site.data.alerts.end}} - -New in v19.1: The `EXPERIMENTAL CHANGEFEED FOR` [statement](sql-statements.html) creates a new core changefeed, which streams row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled. - -For more information, see [Change Data Capture](change-data-capture.html). - -{% include {{ page.version.version }}/misc/experimental-warning.md %} - -## Required privileges - -Changefeeds can only be created by superusers, i.e., [members of the `admin` role](create-and-manage-users.html). The admin role exists by default with `root` as the member. - -## Considerations - -Because core changefeeds return results differently than other SQL statements, they require a dedicated database connection with specific settings around result buffering. In normal operation, CockroachDB improves performance by buffering results server-side before returning them to a client; however, result buffering is automatically turned off for core changefeeds. Core changefeeds also have different cancellation behavior than other queries: they can only be canceled by closing the underlying connection or issuing a [`CANCEL QUERY`](cancel-query.html) statement on a separate connection. 
Combined, these attributes of changefeeds mean that applications should explicitly create dedicated connections to consume changefeed data, instead of using a connection pool as most client drivers do by default. - -This cancellation behavior also extends to client driver usage; in particular, when a client driver calls `Rows.Close()` after encountering errors for a stream of rows. The pgwire protocol requires that the rows be consumed before the connection is again usable, but in the case of a core changefeed, the rows are never consumed. It is therefore critical that you close the connection, otherwise the application will be blocked forever on `Rows.Close()`. - -## Synopsis - -~~~ -> EXPERIMENTAL CHANGEFEED FOR table_name [ WITH (option [= value] [, ...]) ]; -~~~ - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table (or tables in a comma separated list) to create a changefeed for. -`option` / `value` | For a list of available options and their values, see [Options](#options) below. - - - -### Options - -Option | Value | Description --------|-------|------------ -`updated` | N/A | Include updated timestamps with each row. -`resolved` | [`INTERVAL`](interval.html) | Periodically emit resolved timestamps to the changefeed. Optionally, set a minimum duration between emitting resolved timestamps. If unspecified, all resolved timestamps are emitted.

      Example: `resolved='10s'` -`envelope` | `key_only` / `row` | Use `key_only` to emit only the key and no value, which is faster if you only want to know when the key changes.

      Default: `envelope=row` -`cursor` | [Timestamp](as-of-system-time.html#parameters) | Emits any changes after the given timestamp, but does not output the current state of the table first. If `cursor` is not specified, the changefeed starts by doing a consistent scan of all the watched rows and emits the current value, then moves to emitting any changes that happen after the scan.

      `cursor` can be used to start a new changefeed where a previous changefeed ended.

      Example: `CURSOR=1536242855577149065.0000000000` -`format` | `json` / `experimental_avro` | Format of the emitted record. Currently, support for [Avro is limited and experimental](#avro-limitations).

      Default: `format=json`. -`confluent_schema_registry` | Schema Registry address | The [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html#sr) address is required to use `experimental_avro`. - -#### Avro limitations - -Currently, support for Avro is limited and experimental. Below is a list of unsupported SQL types and values for Avro changefeeds: - -- [Decimals](decimal.html) must have precision specified. -- [Decimals](decimal.html) with `NaN` or infinite values cannot be written in Avro. - - {{site.data.alerts.callout_info}} - To avoid `NaN` or infinite values, add a [`CHECK` constraint](check.html) to prevent these values from being inserted into decimal columns. - {{site.data.alerts.end}} - -- [`time`, `date`, `interval`](https://github.com/cockroachdb/cockroach/issues/32472), [`uuid`, `inet`](https://github.com/cockroachdb/cockroach/issues/34417), [`array`](https://github.com/cockroachdb/cockroach/issues/34420), and [`jsonb`](https://github.com/cockroachdb/cockroach/issues/34421) are not supported in Avro yet. - -## Examples - -### Create a changefeed - -{% include {{ page.version.version }}/cdc/create-core-changefeed.md %} - -### Create a changefeed with Avro - -{% include {{ page.version.version }}/cdc/create-core-changefeed-avro.md %} - - - -## See also - -- [Change Data Capture](change-data-capture.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/check.md b/src/current/v19.1/check.md deleted file mode 100644 index 5e7ee4c59d7..00000000000 --- a/src/current/v19.1/check.md +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: CHECK Constraint -summary: The CHECK constraint specifies that values for the column in INSERT or UPDATE statements must satisfy a Boolean expression. -toc: true ---- - -The `CHECK` [constraint](constraints.html) specifies that values for the column in [`INSERT`](insert.html) or [`UPDATE`](update.html) statements must return `TRUE` or `NULL` for a Boolean expression. If any values return `FALSE`, the entire statement is rejected. - -## Details - -- New in v19.1: If you add a `CHECK` constraint to an existing table, CockroachDB will run a background job to validate existing table data in the process of adding the constraint. If a row is found that violates the constraint during the validation step, the [`ADD CONSTRAINT`](add-constraint.html) statement will fail. This differs from previous versions of CockroachDB, which allowed you to add a check constraint that was enforced for writes but could be violated by rows that existed prior to adding the constraint. -- New in v19.1: Check constraints can be added to columns that were created earlier in the same transaction. For an example, see [Add the `CHECK` constraint](add-constraint.html#add-the-check-constraint). -- `CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled consistently. -- You can have multiple `CHECK` constraints on a single column but ideally, for performance optimization, these should be combined using the logical operators. For example: - - ~~~ sql - warranty_period INT CHECK (warranty_period >= 0) CHECK (warranty_period <= 24) - ~~~ - - should be specified as: - - ~~~ sql - warranty_period INT CHECK (warranty_period BETWEEN 0 AND 24) - ~~~ -- When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped. 
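 - -As noted above, adding a `CHECK` constraint to an existing table validates existing rows as part of the schema change. A minimal sketch, assuming the `inventories` table from the examples below already exists without the constraint (the constraint name `ok_quantity` is illustrative): - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE inventories ADD CONSTRAINT ok_quantity CHECK (quantity_on_hand > 0); -~~~ - -If any existing row violates the expression, the [`ADD CONSTRAINT`](add-constraint.html) statement fails and the constraint is not added. 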
- -## Syntax - -`CHECK` constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -{{site.data.alerts.callout_info}}You can also add the CHECK constraint to existing tables through ADD CONSTRAINT.{{site.data.alerts.end}} - -### Column level - -
      - {% include {{ page.version.version }}/sql/diagrams/check_column_level.html %} -
      - - Parameter | Description ------------|------------- - `table_name` | The name of the table you're creating. - `column_name` | The name of the constrained column. - `column_type` | The constrained column's [data type](data-types.html). - `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted. - `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. - `column_def` | Definitions for any other columns in the table. - `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. - -**Example** - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); -~~~ - -### Table level - -
      - {% include {{ page.version.version }}/sql/diagrams/check_table_level.html %} -
      - - Parameter | Description ------------|------------- - `table_name` | The name of the table you're creating. - `column_def` | Definitions for any other columns in the table. - `name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). - `check_expr` | An expression that returns a Boolean value; if the expression evaluates to `FALSE`, the value cannot be inserted. - `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. - -**Example** - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL, - PRIMARY KEY (product_id, warehouse_id), - CONSTRAINT ok_to_supply CHECK (quantity_on_hand > 0 AND warehouse_id BETWEEN 100 AND 200) - ); -~~~ - -## Usage example - -`CHECK` constraints may be specified at the column or table level and can reference other columns within the table. Internally, all column-level `CHECK` constraints are converted to table-level constraints so they can be handled in a consistent fashion. - -~~~ sql -> CREATE TABLE inventories ( - product_id INT NOT NULL, - warehouse_id INT NOT NULL, - quantity_on_hand INT NOT NULL CHECK (quantity_on_hand > 0), - PRIMARY KEY (product_id, warehouse_id) - ); - -> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (1, 2, 0); -~~~ -~~~ -pq: failed to satisfy CHECK constraint (quantity_on_hand > 0) -~~~ - -## See also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`DEFAULT` constraint](default-value.html) -- [`REFERENCES` constraint (Foreign Key)](foreign-key.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY KEY` constraint](primary-key.html) -- [`UNIQUE` constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v19.1/cluster-settings.md b/src/current/v19.1/cluster-settings.md deleted file mode 100644 index 6fe6f7de45a..00000000000 --- a/src/current/v19.1/cluster-settings.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Cluster Settings -summary: Learn about cluster settings that apply to all nodes of a CockroachDB cluster. -toc: false ---- - -Cluster settings apply to all nodes of a CockroachDB cluster and control, for example, whether or not to share diagnostic details with Cockroach Labs as well as advanced options for debugging and cluster tuning. - -They can be updated anytime after a cluster has been started, but only by a member of the `admin` role, to which the `root` user belongs by default. - -{{site.data.alerts.callout_info}} -In contrast to cluster-wide settings, node-level settings apply to a single node. They are defined by flags passed to the `cockroach start` command when starting a node and cannot be changed without stopping and restarting the node. For more details, see [Start a Node](start-a-node.html). -{{site.data.alerts.end}} - -## Settings - -{{site.data.alerts.callout_danger}} -Many cluster settings are intended for tuning CockroachDB internals. Before changing these settings, we strongly encourage you to discuss your goals with Cockroach Labs; otherwise, you use them at your own risk. -{{site.data.alerts.end}} - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/settings/settings.html %} - -## View current cluster settings - -Use the [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement. 
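 - -For example, to inspect a single setting or list them all (`kv.rangefeed.enabled` is used here only as an illustration): - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING kv.rangefeed.enabled; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ALL CLUSTER SETTINGS; -~~~ 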
 - -## Change a cluster setting - -Use the [`SET CLUSTER SETTING`](set-cluster-setting.html) statement. - -Before changing a cluster setting, please note the following: - -- Changing a cluster setting is not instantaneous, as the change must be propagated to other nodes in the cluster. - -- Do not change cluster settings while [upgrading to a new version of CockroachDB](upgrade-cockroach-version.html). Wait until all nodes have been upgraded before you make the change. - -## See also - -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW CLUSTER SETTING`](show-cluster-setting.html) -- [Diagnostics Reporting](diagnostics-reporting.html) -- [Start a Node](start-a-node.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) diff --git a/src/current/v19.1/cluster-setup-troubleshooting.md b/src/current/v19.1/cluster-setup-troubleshooting.md deleted file mode 100644 index 5794b6eba65..00000000000 --- a/src/current/v19.1/cluster-setup-troubleshooting.md +++ /dev/null @@ -1,442 +0,0 @@ ---- -title: Troubleshoot Cluster Setup -summary: Learn how to troubleshoot issues with starting CockroachDB clusters -toc: true ---- - -If you're having trouble starting or scaling your cluster, this page will help you troubleshoot the issue. - -To use this guide, it's important to understand some of CockroachDB's terminology: - - - A **Cluster** acts as a single logical database, but is actually made up of many cooperating nodes. - - **Nodes** are single instances of the `cockroach` binary running on a machine. It's possible (though atypical) to have multiple nodes running on a single machine. - -## Cannot run a single-node CockroachDB cluster - -Try running: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --logtostderr -~~~ - -If the process exits prematurely, check for the following: - -### An existing storage directory - -When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit `cockroach`, and then tried to start another cluster using the same directory. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot start. - -**Solution:** Disassociate the node from the existing directory where you've stored CockroachDB data. For example, you can do either of the following: - -- Choose a different directory to store the CockroachDB data: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --store=<new directory> --insecure - ~~~ -- Remove the existing directory and start the node again: - {% include copy-clipboard.html %} - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --insecure --logtostderr - ~~~ - -### Toolchain incompatibility - -The components of the toolchain might have some incompatibilities that need to be resolved. For example, there was previously an incompatibility between Xcode 8.3 and Go 1.8 that caused any Go binaries created with that toolchain combination to crash immediately. - -### Incompatible CPU - -If the `cockroach` process had exit status `132 (SIGILL)`, it attempted to use an instruction that is not supported by your CPU. Non-release builds of CockroachDB may not be able to run on older hardware platforms than the one used to build them. Release builds should run on any x86-64 CPU. 
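 - -If you are unsure how a `cockroach` process exited, a minimal check on a POSIX shell is to inspect its exit status after the crash (`132` is `128 + 4`, where `4` is `SIGILL`): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --logtostderr -$ echo $? -~~~ 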
 - -### Default ports already in use - -Other services may be running on port 26257 or 8080 (CockroachDB's default `--listen-addr` port and `--http-addr` port respectively). You can either stop those services or start your node with different ports, specified in the [`--listen-addr` and `--http-addr` flags](start-a-node.html#networking). - - If you change the port, you will need to include the `--port=<specified port>` flag in each subsequent `cockroach` command or change the `COCKROACH_PORT` environment variable. - -### Networking issues - -Networking issues might prevent the node from communicating with itself on its hostname. You can control the hostname CockroachDB uses with the [`--listen-addr` flag](start-a-node.html#networking). - - If you change the host, you will need to include `--host=<specified host>` in each subsequent `cockroach` command. - -### CockroachDB process hangs when trying to start a node in the background - -See [Why is my process hanging when I try to start it in the background?](operational-faqs.html#why-is-my-process-hanging-when-i-try-to-start-it-in-the-background) - -## Cannot run SQL statements using built-in SQL client - -If the CockroachDB node appeared to [start successfully](start-a-local-cluster.html), in a separate terminal run: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure -e "show databases" -~~~ - -You should see a list of the built-in databases: - -~~~ - database_name -+---------------+ - defaultdb - postgres - system -(3 rows) -~~~ - -If you’re not seeing the output above, check for the following: - -- `connection refused` error, which indicates you have not included some flag that you used to start the node. We have additional troubleshooting steps for this error [here](common-errors.html#connection-refused). -- The node crashed. To ascertain if the node crashed, run `ps | grep cockroach` to look for the `cockroach` process. If you cannot locate the `cockroach` process (i.e., it crashed), [file an issue](file-an-issue.html), including the logs from your node and any errors you received. - -## Cannot run a multi-node CockroachDB cluster on the same machine - -{{site.data.alerts.callout_info}} -Running multiple nodes on a single host is useful for testing out CockroachDB, but it's not recommended for production deployments. To run a physically distributed cluster in production, see [Manual Deployment](manual-deployment.html) or [Orchestrated Deployment](orchestration.html). Also be sure to review the [Production Checklist](recommended-production-settings.html). -{{site.data.alerts.end}} - -If you are trying to run all nodes on the same machine, you might get the following errors: - -### Store directory already exists - -~~~ -ERROR: could not cleanup temporary directories from record file: could not lock temporary directory /Users/amruta/go/src/github.com/cockroachdb/cockroach/cockroach-data/cockroach-temp301343769, may still be in use: IO error: While lock file: /Users/amruta/go/src/github.com/cockroachdb/cockroach/cockroach-data/cockroach-temp301343769/TEMP_DIR.LOCK: Resource temporarily unavailable -~~~ - -**Explanation:** When starting a new node on the same machine, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server and then tried to start another cluster using the same directory. - -**Solution:** Choose a different directory to store the CockroachDB data. 
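 - -A minimal sketch of starting a second node on the same machine with its own store directory and ports, assuming the first node is using the defaults (`node2-data`, `26258`, and `8081` are illustrative): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --store=node2-data --listen-addr=localhost:26258 --http-addr=localhost:8081 --join=localhost:26257 -~~~ 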
 - -### Port already in use - -~~~ -ERROR: cockroach server exited with error: consider changing the port via --listen-addr: listen tcp 127.0.0.1:26257: bind: address already in use -~~~ - -**Solution:** Change the `--listen-addr` and `--http-addr` flags for each new node that you want to run on the same machine. - -## Cannot join a node to an existing CockroachDB cluster - -### Store directory already exists - -When joining a node to a cluster, you might receive one of the following errors: - -~~~ -no resolvers found; use --join to specify a connected node - -node belongs to cluster {"cluster hash"} but is attempting to connect to a gossip network for cluster {"another cluster hash"} -~~~ - -**Explanation:** When starting a node, the directory you choose to store the data in also contains metadata identifying the cluster the data came from. This causes conflicts when you've already started a node on the server, have quit the `cockroach` process, and then tried to join another cluster. Because the existing directory's cluster ID doesn't match the new cluster ID, the node cannot join it. - -**Solution:** Disassociate the node from the existing directory where you've stored CockroachDB data. For example, you can do either of the following: - -- Choose a different directory to store the CockroachDB data: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --store=<new directory> --join=<cluster host> - ~~~ -- Remove the existing directory and start a node joining the cluster again: - {% include copy-clipboard.html %} - ~~~ shell - $ rm -r cockroach-data/ - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start --join=<cluster host>:26257 - ~~~ - -### Incorrect `--join` address - -If you try to add another node to the cluster, but the `--join` address is not pointing at any of the existing nodes, then the process will never complete, and you'll see a continuous stream of warnings like this: - -~~~ -W180817 17:01:56.506968 886 vendor/google.golang.org/grpc/clientconn.go:942 Failed to dial localhost:20000: grpc: the connection is closing; please retry. -W180817 17:01:56.510430 914 vendor/google.golang.org/grpc/clientconn.go:1293 grpc: addrConn.createTransport failed to connect to {localhost:20000 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp [::1]:20000: connect: connection refused". Reconnecting… -~~~ - -**Explanation:** These warnings tell you that the node cannot establish a connection with the address specified in the `--join` flag. Without a connection to the cluster, the node cannot join. - -**Solution:** To successfully join the node to the cluster, start the node again, but this time include a correct `--join` address. - -### Missing `--join` address - -If you try to add another node to the cluster, but the `--join` address is missing entirely, then the new node will initialize itself as a new cluster instead of joining the existing cluster. 
You can see this in the `status` field printed to `stdout`:
-
-~~~
-CockroachDB node starting at 2018-02-08 16:30:26.690638 +0000 UTC (took 0.2s)
-build: CCL v2.2.0-alpha.20181217 @ 2018/01/08 17:30:06 (go1.8.3)
-admin: https://localhost:8085
-sql: postgresql://root@localhost:26262?sslcert=certs%2Fclient.root.crt&sslkey=certs%2Fclient.root.key&sslmode=verify-full&sslrootcert=certs%2Fca.crt
-logs: /Users/jesseseldess/cockroachdb-training/cockroach-v2.2.0-alpha.20181217.darwin-10.9-amd64/node6/logs
-store[0]: path=/Users/jesseseldess/cockroachdb-training/cockroach-v2.2.0-alpha.20181217.darwin-10.9-amd64/node6
-status: initialized new cluster
-clusterID: cfcd80ee-9005-4975-9ae9-9c36d9aaa57e
-nodeID: 1
-~~~
-
-If you then stop the node and start it again with a correct `--join` address, the startup process will fail because the cluster will notice that the node's cluster ID does not match the cluster ID of the nodes it is trying to join:
-
-~~~
-W180815 17:21:00.316845 237 gossip/client.go:123 [n1] failed to start gossip client to localhost:26258: initial connection heartbeat failed: rpc error: code = Unknown desc = client cluster ID "9a6ed934-50e8-472a-9d55-c6ecf9130984" doesn't match server cluster ID "ab6960bb-bb61-4e6f-9190-992f219102c6"
-~~~
-
-**Solution:** To successfully join the node to the cluster, you need to remove the node's data directory, which is where its incorrect cluster ID is stored, and start the node again.
-
-## Client connection issues
-
-If a client cannot connect to the cluster, check basic network connectivity (`ping`), port connectivity (`telnet`), and certificate validity.
-
-### Networking issues
-
-Most networking-related issues are caused by one of two problems:
-
-- Firewall rules, which require your network administrator to investigate
-- Inaccessible hostnames on your nodes, which can be controlled with the `--listen-addr` and `--advertise-addr` flags on [`cockroach start`](start-a-node.html#networking)
-
-**Solution:**
-
-To check your networking setup:
-
-1. Use `ping`. Every machine you are using as a CockroachDB node should be able to ping every other machine, using the hostnames or IP addresses used in the `--join` flags (and the `--advertise-addr` flag if you are using it).
-
-2. If the machines are all pingable, check if you can connect to the appropriate ports. With your CockroachDB nodes still running, log in to each node and use `telnet` or `nc` to verify machine-to-machine connectivity on the desired port. For instance, if you are running CockroachDB on the default port of 26257, run either:
-    - `telnet <other machine> 26257`
-    - `nc <other machine> 26257`
-
-    Both `telnet` and `nc` will exit immediately if a connection cannot be established. If you are running in a firewalled environment, the firewall might be blocking traffic to the desired ports even though it is letting ping packets through.
-
-To efficiently troubleshoot the issue, it's important to understand where and why it's occurring. We recommend checking the following network-related issues:
-
-- By default, CockroachDB advertises itself to other nodes using its hostname. If your environment doesn't support DNS or the hostname is not resolvable, your nodes cannot connect to one another.
In these cases, you can:
-    - Change the hostname each node uses to advertise itself with `--advertise-addr`.
-    - Set `--listen-addr=<node's IP address>` if the IP is a valid interface on the machine.
-- Every node in the cluster should be able to ping each other node on the hostnames or IP addresses you use in the `--join`, `--listen-addr`, or `--advertise-addr` flags.
-- Every node should be able to connect to other nodes on the port you're using for CockroachDB (26257 by default) through `telnet` or `nc`:
-    - `telnet <other machine> 26257`
-    - `nc <other machine> 26257`
-
-Again, firewalls or hostname issues can cause any of these steps to fail.
-
-### Network partition
-
-If the Admin UI lists nodes that you expect to be live in the **Dead Nodes** table, then you might have a network partition.
-
-**Explanation:** A network partition indicates that the nodes cannot communicate with each other in one or both directions because of a configuration problem with the network itself. A symmetric partition is one where communication is broken in both directions. An asymmetric partition means the connection works in one direction but not the other. An example of a scenario that can cause a network partition is when specific IP addresses or hostnames are allowed by the firewall, and then those addresses or names change after tearing down and rebuilding a node.
-
-The effect of a network partition depends on which nodes are partitioned and where the ranges are located, and to a large extent on whether localities have been defined.
-
-A partition is a lot like an outage in which all nodes in the smaller partition are down. If you don't provide localities, a partition that cuts off more than (n-1)/2 nodes will cause data unavailability.
-
-**Solution:**
-
-To identify a network partition:
-
-1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui).
-2. Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page.
-3. Click **Network Latency**.
-4. In the **Latencies** table, check if any cells are marked as "X". If so, those pairs of nodes cannot communicate, which might indicate a network partition.
-
-## CockroachDB authentication issues
-
-### Missing certificate
-
-If you try to add a node to a secure cluster without providing the node's security certificate, you will get the following error:
-
-~~~
-problem with CA certificate: not found
-*
-* ERROR: cannot load certificates.
-* Check your certificate settings, set --certs-dir, or use --insecure for insecure clusters.
-*
-* problem with CA certificate: not found
-*
-Failed running "start"
-~~~
-
-**Explanation:** The error tells you that because the cluster is secure, it requires the new node to provide its security certificate in order to join.
-
-**Solution:** To successfully join the node to the cluster, start the node again, but this time include the `--certs-dir` flag.
-
-### Certificate expiration
-
-If you're running a secure cluster, be sure to monitor your certificate expiration. If one of the inter-node certificates expires, nodes will no longer be able to communicate, which can look like a network partition.
-
-To check the certificate expiration date:
-
-1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui).
-2. Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page.
-3. Scroll down to the **Even More Advanced Debugging** section. Click **All Nodes**. The **Node Diagnostics** page appears.
Click the certificates for each node and check the expiration date for each certificate in the **Valid Until** field.
-
-### Client password not set
-
-While connecting to a secure cluster as a user, CockroachDB first checks if the client certificate exists in the `cert` directory. If the client certificate doesn't exist, it prompts for a password. If a password is not set and you press Enter, the connection attempt fails, and the following error is printed to `stderr`:
-
-~~~
-Error: pq: invalid password
-Failed running "sql"
-~~~
-
-**Solution:** To successfully connect to the cluster, you must first either generate a client certificate or create a password for the user.
-
-## Clock sync issues
-
-### Node clocks are not properly synchronized
-
-See the following FAQs:
-
-- [What happens when node clocks are not properly synchronized](operational-faqs.html#what-happens-when-node-clocks-are-not-properly-synchronized)
-- [How can I tell how well node clocks are synchronized](operational-faqs.html#how-can-i-tell-how-well-node-clocks-are-synchronized)
-
-## Capacity planning issues
-
-The following are some of the issues you might run into when planning capacity for your cluster:
-
-- Running the CPU at close to 100% utilization with a high run queue results in poor performance.
-- Running RAM at close to 100% utilization triggers Linux OOM kills and/or swapping, which results in poor performance or stability issues.
-- Running storage at 100% capacity causes writes to fail, which in turn can cause various processes to stop.
-- Running storage at 100% read/write utilization causes poor service times.
-- Running the network at 100% utilization causes poor response times between the database and clients.
-
-**Solution:** [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and navigate to the **Metrics > Hardware** dashboard to monitor the following metrics:
-
-First, check that adequate capacity was available for the following components during the incident:
-
-Type | Time Series | What to look for
---------|--------|--------|
-RAM capacity | Memory Usage | Any non-zero value
-CPU capacity | CPU Percent | Consistent non-zero values
-Network capacity | Network Bytes Received
      Network Bytes Sent | Any non-zero value
-
-Next, check for near-out-of-capacity conditions:
-
-Type | Time Series | What to look for
---------|--------|--------|
-RAM capacity | Memory Usage | Consistently more than 80%
-CPU capacity | CPU Percent | Consistently less than 20% idle (i.e., 80% busy)
-Network capacity | Network Bytes Received
      Network Bytes Sent | Consistently more than 50% capacity for both
-
-## Storage issues
-
-### Disks filling up
-
-Like any database system, CockroachDB will no longer be able to accept writes if you run out of disk space. Additionally, a CockroachDB node needs a small amount of disk space (a few GB to be safe) to perform basic maintenance functionality. For more information about this issue, see:
-
-- [Why is memory usage increasing despite lack of traffic?](operational-faqs.html#why-is-memory-usage-increasing-despite-lack-of-traffic)
-- [Why is disk usage increasing despite lack of writes?](operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes)
-- [Can I reduce or disable the storage of timeseries data?](operational-faqs.html#can-i-reduce-or-disable-the-storage-of-timeseries-data)
-
-## Memory issues
-
-### Suspected memory leak
-
-A CockroachDB node will grow to consume all of the memory allocated for its `cache`. The default size for the cache is ¼ of physical memory, which can be substantial depending on your machine configuration. Because of the internal metrics that a CockroachDB cluster tracks, this growth will occur even if your cluster is otherwise idle. See the `--cache` flag in [`cockroach start`](start-a-node.html#general).
-
-CockroachDB memory usage has three components:
-
-- **Go allocated memory**: Memory allocated by the Go runtime to support query processing and various caches maintained in Go by CockroachDB. These caches are generally small in comparison to the RocksDB cache size. If Go allocated memory is larger than a few hundred megabytes, something concerning is going on.
-- **CGo allocated memory**: Memory allocated by the C/C++ libraries linked into CockroachDB, primarily RocksDB and the RocksDB block cache. This is the `cache` mentioned above. The size of CGo allocated memory is usually very close to the configured cache size.
-- **Overhead**: The process resident set size minus Go/CGo allocated memory.
-
-If Go allocated memory is larger than a few hundred megabytes, you might have encountered a memory leak. Go comes with a built-in heap profiler, which is already enabled on your CockroachDB process. See this [excellent blog post](https://blog.golang.org/profiling-go-programs) on profiling Go programs.
-
-**Solution:** To determine Go/CGo allocated memory:
-
-1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui).
-
-2. Navigate to the **Metrics > Runtime** dashboard, and check the **Memory Usage** graph.
-
-3. When you hover over the graph, the values for the following metrics are displayed:
-
-    Metric | Description
-    --------|----
-    RSS | Total memory in use by CockroachDB.
-    Go Allocated | Memory allocated by the Go layer.
-    Go Total | Total memory managed by the Go layer.
-    CGo Allocated | Memory allocated by the C layer.
-    CGo Total | Total memory managed by the C layer.
-
-    - If CGo allocated memory is larger than the configured `cache` size, [file an issue](file-an-issue.html).
-    - If the resident set size (RSS) minus Go/CGo total memory is larger than 100 megabytes, [file an issue](file-an-issue.html).
-
-### Node crashes because of insufficient memory
-
-When a node exits without a trace or without logging any form of error message, the cause is often the operating system stopping the process suddenly due to low memory. So if you see node crashes where the logs just end abruptly, it's probably because the node ran out of memory.
On most Unix systems, you can verify whether the `cockroach` process was stopped because the node ran out of memory by running:
-
-~~~ shell
-$ sudo dmesg | grep -iC 3 "cockroach"
-~~~
-
-If the command returns a message like the following, then you know the node crashed due to insufficient memory:
-
-~~~
-host kernel: Out of Memory: Killed process <process ID> (cockroach).
-~~~
-
-To rectify the issue, you can either run the `cockroach` process on another machine with sufficient memory, or [reduce the CockroachDB memory usage](start-a-node.html#flags).
-
-## Decommissioning issues
-
-### Decommissioning process hangs indefinitely
-
-**Explanation:** Before decommissioning a node, you need to make sure other nodes are available to take over the range replicas from the node. If no other nodes are available, the decommissioning process will hang indefinitely.
-
-**Solution:** Confirm that there are enough nodes with sufficient storage space to take over the replicas from the node you want to remove.
-
-### Decommissioned nodes displayed in UI forever
-
-By design, decommissioned nodes are displayed in the Admin UI forever. We retain the list of decommissioned nodes for the following reasons:
-
-- Decommissioning is not entirely free, so showing those decommissioned nodes in the UI reminds you of the baggage your cluster will have to carry around forever.
-- It also explains to future administrators why your node IDs have gaps (e.g., why the nodes are numbered n1, n2, and n8).
-
-You can follow the discussion here: [https://github.com/cockroachdb/cockroach/issues/24636](https://github.com/cockroachdb/cockroach/issues/24636)
-
-## Replication issues
-
-### Admin UI shows under-replicated/unavailable ranges
-
-When a CockroachDB node dies (or is partitioned), the under-replicated range count will briefly spike while the system recovers.
-
-**Explanation:** CockroachDB uses consensus replication and requires a quorum of the replicas to be available in order to allow both writes and reads to the range. The number of failures that can be tolerated is equal to (replication factor - 1)/2; a majority of the replicas must remain available for the range to achieve quorum. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on.
-
-- Under-replicated Ranges: When a cluster is first initialized, the few default starting ranges will only have a single replica, but as soon as other nodes are available, they will replicate to them until they've reached their desired replication factor. If a range does not have enough replicas, the range is said to be "under-replicated".
-
-- Unavailable Ranges: If a majority of a range's replicas are on nodes that are unavailable, then the entire range is unavailable and will be unable to process queries.
-
-**Solution:**
-
-To identify under-replicated/unavailable ranges:
-
-1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui).
-
-2. On the **Cluster Overview** page, check the **Replication Status**. If the **Under-replicated ranges** or **Unavailable ranges** count is non-zero, then you have under-replicated or unavailable ranges in your cluster.
-
-3. Check for a network partition: Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page. On the Advanced Debugging page, click **Network Latency**. In the **Latencies** table, check if any cells are marked as "X". If so, those pairs of nodes cannot communicate, which might indicate a network partition.
If there's no partition, and there's still no up-replication after 5 minutes, then [file an issue](file-an-issue.html).
-
-**Add nodes to the cluster:**
-
-On the Admin UI's Cluster Overview page, check if any nodes are down. If the number of down nodes is less than (n-1)/2, then that is most probably the cause of the under-replicated/unavailable ranges. Add nodes to the cluster such that the cluster has the required number of nodes to replicate ranges properly.
-
-If you still see under-replicated/unavailable ranges on the Cluster Overview page, investigate further:
-
-1. [Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui).
-2. Click the gear icon on the left-hand navigation bar to access the **Advanced Debugging** page.
-3. Click **Problem Ranges**.
-4. In the **Connections** table, identify the node with the under-replicated/unavailable ranges and click the node ID in the Node column.
-5. To view the **Range Report** for a range, click on the range number in the **Under-replicated (or slow)** table or **Unavailable** table.
-6. On the Range Report page, scroll down to the **Simulated Allocator Output** section. The table contains an error message which explains the reason for the under-replicated range. Follow the guidance in the message to resolve the issue. If you need help understanding the error or the guidance, [file an issue](file-an-issue.html). Be sure to include the full range report and error message when you submit the issue.
-
-## Something else?
-
-Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including:
-
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
diff --git a/src/current/v19.1/cockroach-commands.md b/src/current/v19.1/cockroach-commands.md
deleted file mode 100644
index 8f2f2f3bdb7..00000000000
--- a/src/current/v19.1/cockroach-commands.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Cockroach Commands
-summary: Learn the commands for configuring, starting, and managing a CockroachDB cluster.
-toc: true
----
-
-This page introduces the `cockroach` commands for configuring, starting, and managing a CockroachDB cluster, as well as environment variables that can be used in place of certain flags.
-
-You can run `cockroach help` in your shell to get similar guidance.
-
-
-## Commands
-
-Command | Usage
---------|----
-[`cockroach start`](start-a-node.html) | Start a node.
-[`cockroach init`](initialize-a-cluster.html) | Initialize a cluster.
-[`cockroach cert`](create-security-certificates.html) | Create CA, node, and client certificates.
-[`cockroach quit`](stop-a-node.html) | Temporarily stop a node or permanently remove a node.
-[`cockroach sql`](use-the-built-in-sql-client.html) | Use the built-in SQL client.
-[`cockroach sqlfmt`](use-the-query-formatter.html) | Reformat SQL queries for enhanced clarity.
-[`cockroach user`](create-and-manage-users.html) | Get, set, list, and remove users.
-[`cockroach zone`](configure-replication-zones.html) | **Deprecated.** To configure the number and location of replicas for specific sets of data, use [`ALTER ... CONFIGURE ZONE`](configure-zone.html) and [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html).
-[`cockroach node`](view-node-details.html) | List node IDs, show their status, decommission nodes for removal, or recommission nodes. -[`cockroach dump`](sql-dump.html) | Back up a table by outputting the SQL statements required to recreate the table and all its rows. -[`cockroach demo`](cockroach-demo.html) | Start a temporary, in-memory, single-node CockroachDB cluster, and open an interactive SQL shell to it. -[`cockroach gen`](generate-cockroachdb-resources.html) | Generate manpages, a bash completion file, example SQL data, or an HAProxy configuration file for a running cluster. -[`cockroach version`](view-version-details.html) | Output CockroachDB version details. -[`cockroach debug ballast`](debug-ballast.html) | Create a large, unused file in a node's storage directory that you can delete if the node runs out of disk space. -[`cockroach debug encryption-active-key`](debug-encryption-active-key.html) | View the encryption algorithm and store key. -[`cockroach debug zip`](debug-zip.html) | Generate a `.zip` file that can help Cockroach Labs troubleshoot issues with your cluster. -[`cockroach debug merge-logs`](debug-merge-logs.html) | Merge multiple log files from different machines into a single stream. -[`cockroach workload`](cockroach-workload.html) | Run a built-in load generator against a cluster. - -## Environment variables - -For many common `cockroach` flags, such as `--port` and `--user`, you can set environment variables once instead of manually passing the flags each time you execute commands. - -- To find out which flags support environment variables, see the documentation for each [command](#commands). -- To output the current configuration of CockroachDB and other environment variables, run `env`. -- When a node uses environment variables on [startup](start-a-node.html), the variable names are printed to the node's logs; however, the variable values are not. - -CockroachDB prioritizes command flags, environment variables, and defaults as follows: - -1. If a flag is set for a command, CockroachDB uses it. -2. If a flag is not set for a command, CockroachDB uses the corresponding environment variable. -3. If neither the flag nor environment variable is set, CockroachDB uses the default for the flag. -4. If there's no flag default, CockroachDB gives an error. - -For more details, see [Client Connection Parameters](connection-parameters.html). diff --git a/src/current/v19.1/cockroach-demo.md b/src/current/v19.1/cockroach-demo.md deleted file mode 100644 index 675e400e149..00000000000 --- a/src/current/v19.1/cockroach-demo.md +++ /dev/null @@ -1,209 +0,0 @@ ---- -title: Open a SQL Shell to a Temporary Cluster -summary: Use cockroach demo to open a SQL shell to a temporary, in-memory, single-node CockroachDB cluster. -toc: true ---- - -The `cockroach demo` [command](cockroach-commands.html) starts a temporary, in-memory, single-node CockroachDB cluster, optionally with a pre-loaded dataset, and opens an [interactive SQL shell](use-the-built-in-sql-client.html) to the cluster. - -The in-memory cluster persists only as long as the SQL shell is open. As soon as the shell is exited, the cluster and all its data are permanently destroyed. This command is therefore recommended only as an easy way to experiment with the CockroachDB SQL dialect. 
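-
-As a minimal illustration of this behavior (the table name here is arbitrary), objects created in one `cockroach demo` session are gone once the shell exits:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE scratch (x INT);
-> \q
-~~~
-
-Running `cockroach demo` again and issuing `SHOW TABLES;` will no longer list `scratch`, because each invocation creates a fresh in-memory cluster.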
-
-## Synopsis
-
-Start an interactive SQL shell:
-
-~~~ shell
-$ cockroach demo
-~~~
-
-Load a sample dataset and start an interactive SQL shell:
-
-~~~ shell
-$ cockroach demo <dataset>
-~~~
-
-Execute SQL from the command line:
-
-~~~ shell
-$ cockroach demo --execute="<sql statement>;<sql statement>" --execute="<sql statement>"
-~~~
-
-Exit the interactive SQL shell:
-
-~~~ shell
-$ \q
-ctrl-d
-~~~
-
-View help:
-
-~~~ shell
-$ cockroach demo --help
-~~~
-
-## Datasets
-
-Workload | Description
----------|------------
-`bank` | A `bank` database, with one `bank` table containing account details.
-`intro` | An `intro` database, with one table, `mytable`, with a hidden message.
-`startrek` | A `startrek` database, with two tables, `episodes` and `quotes`.
-`tpcc` | A `tpcc` database, with a rich schema of multiple tables.
-
-## Flags
-
-The `demo` command supports the following [general-use](#general) and [logging](#logging) flags.
-
-### General
-
-Flag | Description
------|------------
-`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility.

      This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](use-the-built-in-sql-client.html#commands). -`--execute`
      `-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semi-colons. If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options). -`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.

      **Default:** `table` for sessions that [output on a terminal](use-the-built-in-sql-client.html#session-and-output-types); `tsv` otherwise

      This flag corresponds to the `display_format` [client-side option](use-the-built-in-sql-client.html#client-side-options) for use in interactive sessions. -`--safe-updates` | Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

      **Default:** `true` for [interactive sessions](use-the-built-in-sql-client.html#session-and-output-types); `false` otherwise

      Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). -`--set` | Set a [client-side option](use-the-built-in-sql-client.html#client-side-options) before starting the SQL shell or executing SQL statements from the command line via `--execute`. This flag may be specified multiple times, once per option.

      After starting the SQL shell, the `\set` and `\unset` commands can be used to enable and disable client-side options as well.
-
-### Logging
-
-By default, the `demo` command logs errors to `stderr`.
-
-If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html).
-
-## SQL shell
-
-All [SQL shell commands, client-side options, help, and shortcuts](use-the-built-in-sql-client.html#sql-shell) supported by the `cockroach sql` command are also supported by the `cockroach demo` command.
-
-## Web UI
-
-When the SQL shell connects to the in-memory cluster, it prints a welcome message with some tips, as well as CockroachDB version and cluster details. Most of these details resemble the [welcome text](use-the-built-in-sql-client.html#welcome-message) that gets printed when connecting `cockroach sql` to a permanent cluster. However, one unique detail to note is the **Web UI** link. For the duration of the cluster, you can open the Web UI for the cluster at this link.
-
-~~~ shell
-#
-# Welcome to the CockroachDB demo database!
-#
-# You are connected to a temporary, in-memory CockroachDB
-# instance. Your changes will not be saved!
-#
-# Web UI: http://127.0.0.1:60105
-#
-# Server version: CockroachDB CCL v2.1.0-alpha.20180702-281-g07a11b8e8c-dirty (x86_64-apple-darwin17.6.0, built 2018/07/08 14:00:29, go1.10.1) (same version as client)
-# Cluster ID: 61b41af6-fb2c-4d9a-8a91-0a31933b3d31
-#
-# Enter \? for a brief introduction.
-#
-root@127.0.0.1:60104/defaultdb>
-~~~
-
-## Example
-
-In these examples, we demonstrate how to start a shell with `cockroach demo`. For more SQL shell features, see the [`cockroach sql` examples](use-the-built-in-sql-client.html#examples).
-
-### Start an interactive SQL shell
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach demo
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO t1 (name) VALUES ('Tom Thumb');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM t1;
-~~~
-
-~~~
-+--------------------------------------+-----------+
-|                  id                  |   name    |
-+--------------------------------------+-----------+
-| 5d2e6faa-a78f-4ef3-845f-6e174bbb41fa | Tom Thumb |
-+--------------------------------------+-----------+
-(1 row)
-
-Time: 9.539973ms
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-### Load a sample dataset and start an interactive SQL shell
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach demo startrek
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM startrek;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM startrek.episodes WHERE stardate > 5500;
-~~~
-
-~~~
-  id | season | num |               title               | stardate
-+----+--------+-----+-----------------------------------+----------+
-  60 |      3 |   5 | Is There in Truth No Beauty?      |   5630.7
-  62 |      3 |   7 | Day of the Dove                   |   5630.3
-  64 |      3 |   9 | The Tholian Web                   |   5693.2
-  65 |      3 |  10 | Plato's Stepchildren              |   5784.2
-  66 |      3 |  11 | Wink of an Eye                    |   5710.5
-  69 |      3 |  14 | Whom Gods Destroy                 |   5718.3
-  70 |      3 |  15 | Let That Be Your Last Battlefield |   5730.2
-  73 |      3 |  18 | The Lights of Zetar               |   5725.3
-  74 |      3 |  19 | Requiem for Methuselah            |   5843.7
-  75 |      3 |  20 | The Way to Eden                   |   5832.3
-  76 |      3 |  21 | The Cloud Minders                 |   5818.4
-  77 |      3 |  22 | The Savage Curtain                |   5906.4
-  78 |      3 |  23 | All Our Yesterdays                |   5943.7
-  79 |      3 |  24 | Turnabout Intruder                |   5928.5
-(14 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
-
-### Execute SQL from the command line
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach demo \
---execute="CREATE TABLE t1 (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);" \
---execute="INSERT INTO t1 (name) VALUES ('Tom Thumb');" \
---execute="SELECT * FROM t1;"
-~~~
-
-~~~
-CREATE TABLE
-INSERT 1
-+--------------------------------------+-----------+
-|                  id                  |   name    |
-+--------------------------------------+-----------+
-| 53476f43-d737-4506-ad83-4469c977f77c | Tom Thumb |
-+--------------------------------------+-----------+
-(1 row)
-~~~
-
-## See also
-
-- [`cockroach sql`](use-the-built-in-sql-client.html)
-- [`cockroach workload`](cockroach-workload.html)
-- [Other Cockroach Commands](cockroach-commands.html)
-- [SQL Statements](sql-statements.html)
-- [Learn CockroachDB SQL](learn-cockroachdb-sql.html)
diff --git a/src/current/v19.1/cockroach-workload.md b/src/current/v19.1/cockroach-workload.md
deleted file mode 100644
index 3dbee64af7d..00000000000
--- a/src/current/v19.1/cockroach-workload.md
+++ /dev/null
@@ -1,489 +0,0 @@
----
-title: Run a Sample Workload
-summary: Use cockroach workload to run a load generator against a CockroachDB cluster.
-toc: true
----
-
-CockroachDB comes with built-in load generators for simulating different types of client workloads, printing out per-operation statistics every second and totals after a specific duration or maximum number of operations. To run one of these load generators, use the `cockroach workload` [command](cockroach-commands.html) as described below.
-
-{{site.data.alerts.callout_danger}}
-The `cockroach workload` command is experimental. The interface and output are subject to change.
-{{site.data.alerts.end}}
-
-## Synopsis
-
-Create the schema for a workload:
-
-~~~ shell
-$ cockroach workload init <workload> '<connection string>'
-~~~
-
-Run a workload:
-
-~~~ shell
-$ cockroach workload run <workload> '<connection string>'
-~~~
-
-View help:
-
-~~~ shell
-$ cockroach workload --help
-~~~
-~~~ shell
-$ cockroach workload init --help
-~~~
-~~~ shell
-$ cockroach workload init <workload> --help
-~~~
-~~~ shell
-$ cockroach workload run --help
-~~~
-~~~ shell
-$ cockroach workload run <workload> --help
-~~~
-
-## Subcommands
-
-Command | Usage
---------|------
-`init` | Load the schema for the workload. You run this command once for a given schema.
-`run` | Run a workload. You can run this command multiple times from different machines to increase concurrency. See [Concurrency](#concurrency) for more details.
-
-## Concurrency
-
-There are two ways to increase the concurrency of a workload:
-
-- **Increase the concurrency of a single workload instance** by running `cockroach workload run <workload>` with the `--concurrency` flag set to a value higher than the default (see the sketch after this list).
-- **Run multiple instances of a workload in parallel** by running `cockroach workload run <workload>` multiple times from different machines.
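-
-For example, here is a sketch of the first approach, assuming a locally running insecure cluster and the `kv` workload; the concurrency value, duration, and connection string are illustrative:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Run the kv workload with 64 concurrent workers instead of the default.
-$ cockroach workload run kv \
---concurrency=64 \
---duration=1m \
-'postgresql://root@localhost:26257?sslmode=disable'
-~~~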
- -## Workloads - -Workload | Description ----------|------------ -`bank` | Models a set of accounts with currency balances.

      For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -`intro` | Loads an `intro` database, with one table, `mytable`, with a hidden message.

      For this workload, you run only `workload init` to load the data. The `workload run` subcommand is not applicable. -`kv` | Reads and writes to keys spread (by default, uniformly at random) across the cluster.

      For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -`startrek` | Loads a `startrek` database, with two tables, `episodes` and `quotes`.

      For this workload, you run only `workload init` to load the data. The `workload run` subcommand is not applicable. -`tpcc` | Simulates a transaction processing workload using a rich schema of multiple tables.

      For this workload, you run `workload init` to load the schema and then `workload run` to generate data. -`ycsb` | Simulates a high-scale key value workload, either read-heavy, write-heavy, or scan-based, with additional customizations.

      For this workload, you run `workload init` to load the schema and then `workload run` to generate data. - -## Flags - -{{site.data.alerts.callout_info}} -The `cockroach workload` command does not support connection or security flags like other [`cockroach` commands](cockroach-commands.html). Instead, you must use a [connection string](connection-parameters.html) at the end of the command. -{{site.data.alerts.end}} - -### `bank` workload - -Flag | Description ------|------------ -`--concurrency` | The number of concurrent workers.

      **Applicable commands:** `init` or `run`
      **Default:** 2 * number of CPUs -`--db` | The SQL database to use.

      **Applicable commands:** `init` or `run`
      **Default:** `bank` -`--drop` | Drop the existing database, if it exists.

      **Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid time units are `ns`, `us`, `ms`, `s`, `m`, and `h`.

      **Applicable commands:** `init` or `run`
      **Default:** `0`, which means run forever. -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

      **Applicable command:** `run` -`--init` | Automatically run the `init` command.

      **Applicable command:** `run` -`--max-ops` | The maximum number of operations to run.

      **Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

      **Applicable command:** `run`
      **Default:** `0`, which means unlimited. -`--payload-bytes` | The size of the payload field in each initial row.

      **Applicable commands:** `init` or `run`
      **Default:** `100` -`--ramp` | The duration over which to ramp up load.

      **Applicable command:** `run` -`--ranges` | The initial number of ranges in the `bank` table.

      **Applicable commands:** `init` or `run`
      **Default:** `10` -`--rows` | The initial number of accounts in the `bank` table.

      **Applicable commands:** `init` or `run`
      **Default:** `1000` -`--seed` | The key hash seed.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--tolerate-errors` | Keep running on error.

      **Applicable command:** `run` - -### `intro` and `startrek` workloads - -{{site.data.alerts.callout_info}} -These workloads generate data but do not offer the ability to run continuous load. Thus, only the `init` subcommand is supported. -{{site.data.alerts.end}} - -Flag | Description ------|------------ -`--drop` | Drop the existing database, if it exists, before loading the dataset. - -### `kv` workload - -Flag | Description ------|------------ -`--batch` | The number of blocks to insert in a single SQL statement.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--concurrency` | The number of concurrent workers.

      **Applicable commands:** `init` or `run`
      **Default:** `8`
-`--cycle-length` | The number of keys repeatedly accessed by each writer.

      **Applicable commands:** `init` or `run`
      **Default:** `9223372036854775807` -`--db` | The SQL database to use.

      **Applicable commands:** `init` or `run`
      **Default:** `kv` -`--drop` | Drop the existing database, if it exists.

      **Applicable commands:** `init` or `run` -`--duration` | The duration to run, with a required time unit suffix. Valid time units are `ns`, `us`, `ms`, `s`, `m`, and `h`.

      **Applicable command:** `run`
      **Default:** `0`, which means run forever. -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

      **Applicable command:** `run` -`--init` | Automatically run the `init` command.

      **Applicable command:** `run` -`--max-block-bytes` | The maximum amount of raw data written with each insertion.

      **Applicable commands:** `init` or `run`
      **Default:** `2` -`--max-ops` | The maximum number of operations to run.

      **Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

      **Applicable command:** `run`
      **Default:** `0`, which means unlimited. -`--min-block-bytes` | The minimum amount of raw data written with each insertion.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--ramp` | The duration over which to ramp up load.

      **Applicable command:** `run` -`--read-percent` | The percent (0-100) of operations that are reads of existing keys.

      **Applicable commands:** `init` or `run` -`--seed` | The key hash seed.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--sequential` | Pick keys sequentially instead of randomly.

      **Applicable commands:** `init` or `run` -`--splits` | The number of splits to perform before starting normal operations.

      **Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

      **Applicable command:** `run`
-`--use-opt` | Use the [cost-based optimizer](cost-based-optimizer.html).

      **Applicable commands:** `init` or `run`
      **Default:** `true` -`--write-seq` | Initial write sequence value.

      **Applicable commands:** `init` or `run` - -### `tpcc` workload - -Flag | Description ------|------------ -`--active-warehouses` | Run the load generator against a specific number of warehouses.

      **Applicable commands:** `init` or `run`
      **Defaults:** Value of `--warehouses` -`--db` | The SQL database to use.

      **Applicable commands:** `init` or `run`
      **Default:** `tpcc` -`--drop` | Drop the existing database, if it exists.

      **Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid time units are `ns`, `us`, `ms`, `s`, `m`, and `h`.

      **Applicable command:** `run`
      **Default:** `0`, which means run forever. -`--fks` | Add foreign keys.

      **Applicable commands:** `init` or `run`
      **Default:** `true` -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

      **Applicable command:** `run` -`--init` | Automatically run the `init` command.

      **Applicable command:** `run` -`--interleaved` | Use [interleaved tables](interleave-in-parent.html).

      **Applicable commands:** `init` or `run` -`--max-ops` | The maximum number of operations to run.

      **Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

      **Applicable command:** `run`
      **Default:** `0`, which means unlimited. -`--mix` | Weights for the transaction mix.

      **Applicable commands:** `init` or `run`
      **Default:** `newOrder=10,payment=10,orderStatus=1,delivery=1,stockLevel=1`, which matches the [TPC-C specification](http://tpc.org/tpc_documents_current_versions/current_specifications5.asp). -`--partition-affinity` | Run the load generator against a specific partition. This flag must be used in conjunction with `--partitions`.

      **Applicable commands:** `init` or `run`
      **Default:** `-1` -`--partitions` | Partition tables. This flag must be used in conjunction with `--split`.

      **Applicable commands:** `init` or `run` -`--ramp` | The duration over which to ramp up load.

      **Applicable command:** `run` -`--scatter` | Scatter ranges.

      **Applicable commands:** `init` or `run` -`--seed` | The random number generator seed.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--serializable` | Force serializable mode. CockroachDB only supports `SERIALIZABLE` isolation, so this flag is not necessary.

      **Applicable command:** `init` -`--split` | [Split tables](split-at.html).

      **Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

      **Applicable command:** `run` -`--wait` | Run in wait mode, i.e., include think/keying sleeps.

      **Applicable commands:** `init` or `run`
      **Default:** `true` -`--warehouses` | The number of warehouses for loading initial data, at approximately 200 MB per warehouse.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--workers` | The number of concurrent workers.

      **Applicable commands:** `init` or `run`
      **Default:** `--warehouses` * 10 -`--zones` | The number of [replication zones](configure-replication-zones.html) for partitioning. This number should match the number of `--partitions` and the zones used to start the cluster.

      **Applicable command:** `init` - -### `ycsb` workload - -Flag | Description ------|------------ -`--concurrency` | The number of concurrent workers.

      **Applicable commands:** `init` or `run`
      **Default:** `8` -`--db` | The SQL database to use.

      **Applicable commands:** `init` or `run`
      **Default:** `ycsb` -`--drop` | Drop the existing database, if it exists.

      **Applicable commands:** `init` or `run`. For the `run` command, this flag must be used in conjunction with `--init`. -`--duration` | The duration to run, with a required time unit suffix. Valid time units are `ns`, `us`, `ms`, `s`, `m`, and `h`.

      **Applicable command:** `run`
      **Default:** `0`, which means run forever. -`--families` | Place each column in its own [column family](column-families.html).

      **Applicable commands:** `init` or `run` -`--histograms` | The file to write per-op incremental and cumulative histogram data to.

      **Applicable command:** `run` -`--init` | Automatically run the `init` command.

      **Applicable command:** `run` -`--initial-rows` | Initial number of rows to sequentially insert before beginning random number generation.

      **Applicable commands:** `init` or `run`
      **Default:** `10000` -`--json` | Use JSONB rather than relational data.

      **Applicable commands:** `init` or `run` -`--max-ops` | The maximum number of operations to run.

      **Applicable command:** `run` -`--max-rate` | The maximum frequency of operations (reads/writes).

      **Applicable command:** `run`
      **Default:** `0`, which means unlimited. -`--method` | The SQL issue method (`prepare`, `noprepare`, `simple`).

      **Applicable commands:** `init` or `run`
      **Default:** `prepare` -`--ramp` | The duration over which to ramp up load.

      **Applicable command:** `run` -`--request-distribution` | Distribution for the random number generator (`zipfian`, `uniform`).

      **Applicable commands:** `init` or `run`.
      **Default:** `zipfian` -`--seed` | The random number generator seed.

      **Applicable commands:** `init` or `run`
      **Default:** `1` -`--splits` | Number of [splits](split-at.html) to perform before starting normal operations.

      **Applicable commands:** `init` or `run` -`--tolerate-errors` | Keep running on error.

      **Applicable command:** `run` -`--workload` | The type of workload to run (`A`, `B`, `C`, `D`, or `F`). For details about these workloads, see [YCSB Workloads](https://github.com/brianfrankcooper/YCSB/wiki/Core-Workloads).

      **Applicable commands:** `init` or `run`
      **Default:** `B` - -### Logging - -By default, the `cockroach workload` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -These examples assume that you have already [started an insecure cluster locally](start-a-local-cluster.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---listen-addr=localhost -~~~ - -### Run the `bank` workload - -1. Load the initial schema: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init bank \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 1 minute: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload run bank \ - --duration=1m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1608.6 1702.2 4.5 7.3 12.6 65.0 transfer - 2s 0 1725.3 1713.8 4.5 7.9 13.1 19.9 transfer - 3s 0 1721.1 1716.2 4.5 7.3 11.5 21.0 transfer - 4s 0 1328.7 1619.2 5.5 10.5 17.8 39.8 transfer - 5s 0 1389.3 1573.3 5.2 11.5 16.3 23.1 transfer - 6s 0 1640.0 1584.4 5.0 7.9 12.1 16.3 transfer - 7s 0 1594.0 1585.8 5.0 7.9 10.5 15.7 transfer - 8s 0 1652.8 1594.2 4.7 7.9 11.5 29.4 transfer - 9s 0 1451.9 1578.4 5.2 10.0 15.2 26.2 transfer - 10s 0 1653.3 1585.9 5.0 7.6 10.0 18.9 transfer - ... - ~~~ - - After the specified duration (1 minute in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 60.0s 0 84457 1407.6 5.7 5.5 10.0 15.2 167.8 - ~~~ - -### Run the `kv` workload - -1. Load the initial schema: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init kv \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 1 minute: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload run kv \ - --duration=1m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 5095.8 5123.7 1.5 2.5 3.3 7.3 write - 2s 0 4795.4 4959.6 1.6 2.8 3.5 8.9 write - 3s 0 3456.5 4458.5 2.0 4.5 7.3 24.1 write - 4s 0 2787.9 4040.8 2.4 6.3 12.6 30.4 write - 5s 0 3558.7 3944.4 2.0 4.2 6.8 11.5 write - 6s 0 3733.8 3909.3 1.9 4.2 6.0 12.6 write - 7s 0 3565.6 3860.1 2.0 4.7 7.9 25.2 write - 8s 0 3469.3 3811.4 2.0 5.0 6.8 22.0 write - 9s 0 3937.6 3825.4 1.8 3.7 7.3 29.4 write - 10s 0 3822.9 3825.1 1.8 4.7 8.9 37.7 write - ... - ~~~ - - After the specified duration (1 minute in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 60.0s 0 276067 4601.0 1.7 1.6 3.1 5.2 96.5 - ~~~ - -### Load the `intro` dataset - -1. Load the dataset: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init intro \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Launch the built-in SQL client to view it: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW TABLES FROM intro; - ~~~ - - ~~~ - table_name - +------------+ - mytable - (1 row) - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ SELECT * FROM intro.mytable WHERE (l % 2) = 0; - ~~~ - - ~~~ - l | v - +----+------------------------------------------------------+ - 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, - 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ - 4 | ! "?##mW##?"- - 6 | ! C O N G R A T S _am#Z??A#ma, Y - 8 | ! _ummY" "9#ma, A - 10 | ! vm#Z( )Xmms Y - 12 | ! .j####mmm#####mm#m##6. - 14 | ! W O W ! jmm###mm######m#mmm##6 - 16 | ! ]#me*Xm#m#mm##m#m##SX##c - 18 | ! dm#||+*$##m#mm#m#Svvn##m - 20 | ! :mmE=|+||S##m##m#1nvnnX##; A - 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M - 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A - 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z - 28 | ! U D 4##c|+|+|]m#kvnvnno##P E - 30 | ! I 4#ma+|++]mmhvnnvq##P` ! - 32 | ! D I ?$#q%+|dmmmvnnm##! - 34 | ! T -4##wu#mm#pw##7' - 36 | ! -?$##m####Y' - 38 | ! !! "Y##Y"- - 40 | ! - (21 rows) - ~~~ - -### Load the `startrek` dataset - -1. Load the dataset: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init startrek \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Launch the built-in SQL client to view it: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW TABLES FROM startrek; - ~~~ - - ~~~ - table_name - +------------+ - episodes - quotes - (2 rows) - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM startrek.episodes WHERE stardate > 5500; - ~~~ - - ~~~ - id | season | num | title | stardate - +----+--------+-----+-----------------------------------+----------+ - 60 | 3 | 5 | Is There in Truth No Beauty? | 5630.7 - 62 | 3 | 7 | Day of the Dove | 5630.3 - 64 | 3 | 9 | The Tholian Web | 5693.2 - 65 | 3 | 10 | Plato's Stepchildren | 5784.2 - 66 | 3 | 11 | Wink of an Eye | 5710.5 - 69 | 3 | 14 | Whom Gods Destroy | 5718.3 - 70 | 3 | 15 | Let That Be Your Last Battlefield | 5730.2 - 73 | 3 | 18 | The Lights of Zetar | 5725.3 - 74 | 3 | 19 | Requiem for Methuselah | 5843.7 - 75 | 3 | 20 | The Way to Eden | 5832.3 - 76 | 3 | 21 | The Cloud Minders | 5818.4 - 77 | 3 | 22 | The Savage Curtain | 5906.4 - 78 | 3 | 23 | All Our Yesterdays | 5943.7 - 79 | 3 | 24 | Turnabout Intruder | 5928.5 - (14 rows) - ~~~ - -### Run the `tpcc` workload - -1. Load the initial schema and data: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. 
Run the workload for 10 minutes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload run tpcc \ - --duration=10m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 1443.4 1494.8 4.7 9.4 27.3 67.1 transfer - 2s 0 1686.5 1590.9 4.7 8.1 15.2 28.3 transfer - 3s 0 1735.7 1639.0 4.7 7.3 11.5 28.3 transfer - 4s 0 1542.6 1614.9 5.0 8.9 12.1 21.0 transfer - 5s 0 1695.9 1631.1 4.7 7.3 11.5 22.0 transfer - 6s 0 1569.2 1620.8 5.0 8.4 11.5 15.7 transfer - 7s 0 1614.6 1619.9 4.7 8.1 12.1 16.8 transfer - 8s 0 1344.4 1585.6 5.8 10.0 15.2 31.5 transfer - 9s 0 1351.9 1559.5 5.8 10.0 16.8 54.5 transfer - 10s 0 1514.8 1555.0 5.2 8.1 12.1 16.8 transfer - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 823902 1373.2 5.8 5.5 10.0 15.2 209.7 - ~~~ - -### Run the `ycsb` workload - -1. Load the initial schema and data: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init ycsb \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - -2. Run the workload for 10 minutes: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload run ycsb \ - --duration=10m \ - 'postgresql://root@localhost:26257?sslmode=disable' - ~~~ - - You'll see per-operation statistics print to standard output every second: - - ~~~ - _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms) - 1s 0 9258.1 9666.6 0.7 1.3 2.0 8.9 read - 1s 0 470.1 490.9 1.7 2.9 4.1 5.0 update - 2s 0 10244.6 9955.6 0.7 1.2 2.0 6.6 read - 2s 0 559.0 525.0 1.6 3.1 6.0 7.3 update - 3s 0 9870.8 9927.4 0.7 1.4 2.4 10.0 read - 3s 0 500.0 516.6 1.6 4.2 7.9 15.2 update - 4s 0 9847.2 9907.3 0.7 1.4 2.4 23.1 read - 4s 0 506.8 514.2 1.6 3.7 7.6 17.8 update - 5s 0 10084.4 9942.6 0.7 1.3 2.1 7.1 read - 5s 0 537.2 518.8 1.5 3.5 10.0 15.2 update - ... - ~~~ - - After the specified duration (10 minutes in this case), the workload will stop and you'll see totals printed to standard output: - - ~~~ - _elapsed___errors_____ops(total)___ops/sec(cum)__avg(ms)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)__result - 600.0s 0 4728286 7880.2 1.0 0.9 2.2 5.2 268.4 - ~~~ - -## See also - -- [`cockroach demo`](cockroach-demo.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Performance Benchmarking with TPC-C](performance-benchmarking-with-tpc-c.html) diff --git a/src/current/v19.1/cockroachdb-in-comparison.md b/src/current/v19.1/cockroachdb-in-comparison.md deleted file mode 100644 index d519261094a..00000000000 --- a/src/current/v19.1/cockroachdb-in-comparison.md +++ /dev/null @@ -1,351 +0,0 @@ ---- -title: CockroachDB in Comparison -summary: Learn how CockroachDB compares to other popular databases like PostgreSQL, Cassandra, MongoDB, Google Cloud Spanner, and more. -tags: mongodb, mysql, dynamodb -toc: false -comparison: true ---- - -This page shows you how the key features of CockroachDB stack up against other databases. Hover over the features for their intended meanings, and click CockroachDB answers to view related documentation. - -
-Feature | CockroachDB
---------|------------
-Database horizontal scale | Node based, automated for both reads and writes
-Database load balancing (internal) | Detailed options to optimize storage, compute and latency
-Failover | Fully automated for both reads and writes
-Automated repair and RPO (Recovery Point Objective) | Automated repair, RPO = 0 sec
-Distributed reads | Yes
-Distributed transactions | Yes
-Database isolation levels | Guaranteed consistent; default: Serializable; highest: Serializable
-Potential data issues (default) | None
-SQL | Yes, wire compatible with PostgreSQL
-Database schema change | Online, Active and Dynamic
-Cost based optimization | Yes
-Data Geo-partitioning | Yes, row level
-Upgrade method | Online, rolling
-Multi-region | Yes, for both reads and writes
-Multi-cloud | Yes
diff --git a/src/current/v19.1/collate.md b/src/current/v19.1/collate.md
deleted file mode 100644
index d410b7e92de..00000000000
--- a/src/current/v19.1/collate.md
+++ /dev/null
@@ -1,133 +0,0 @@
---
title: COLLATE
summary: The COLLATE feature lets you sort strings according to language- and country-specific rules.
toc: true
---

The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations.

Collated strings are important because different languages have [different rules for alphabetic order](https://en.wikipedia.org/wiki/Alphabetical_order#Language-specific_conventions), especially with respect to accented letters. For example, in German accented letters are sorted with their unaccented counterparts, while in Swedish they are placed at the end of the alphabet. A collation is a set of rules used for ordering and usually corresponds to a language, though some languages have multiple collations with different rules for sorting; for example, Portuguese has separate collations for Brazilian and European dialects (`pt-BR` and `pt-PT`, respectively).

## Details

- Operations on collated strings cannot involve strings with a different collation or strings with no collation. However, it is possible to add or overwrite a collation on the fly.

- Use the collation feature only when you need to sort strings by a specific collation. Every time a collated string is constructed or loaded into memory, CockroachDB computes its collation key, whose size grows linearly with the length of the string, and this requires additional resources.

- Collated strings can be considerably larger than the corresponding uncollated strings, depending on the language and the string content. For example, strings containing the character `é` produce larger collation keys in the French locale than in Chinese.

- Collated strings that are indexed require additional disk space compared to uncollated strings: the collation keys must be stored in addition to the strings they are derived from, creating a constant-factor overhead.

## Supported collations

CockroachDB supports the collations provided by Go's [language package](https://godoc.org/golang.org/x/text/language#Tag). The `<collation>` argument is the BCP 47 language tag at the end of each line, immediately preceded by `//`. For example, Afrikaans is supported as the `af` collation.

## SQL syntax

Collated strings are used as normal strings in SQL, but have a `COLLATE` clause appended to them.

- **Column syntax**: `STRING COLLATE <collation>`. For example:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE TABLE foo (a STRING COLLATE en PRIMARY KEY);
    ~~~

    {{site.data.alerts.callout_info}}You can also use any of the aliases for STRING.{{site.data.alerts.end}}

- **Value syntax**: `<STRING value> COLLATE <collation>`. For example:

    {% include copy-clipboard.html %}
    ~~~ sql
    > INSERT INTO foo VALUES ('dog' COLLATE en);
    ~~~

## Examples

### Specify collation for a column

You can set a default collation for all values in a `STRING` column.
- -For example, you can set a column's default collation to German (`de`): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE de_names (name STRING COLLATE de PRIMARY KEY); -~~~ - -When inserting values into this column, you must specify the collation for every value: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO de_names VALUES ('Backhaus' COLLATE de), ('Bär' COLLATE de), ('Baz' COLLATE de); -~~~ - -The sort will now honor the `de` collation that treats *ä* as *a* in alphabetic sorting: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM de_names ORDER BY name; -~~~ -~~~ -+----------+ -| name | -+----------+ -| Backhaus | -| Bär | -| Baz | -+----------+ -~~~ - -### Order by non-default collation - -You can sort a column using a specific collation instead of its default. - -For example, you receive different results if you order results by German (`de`) and Swedish (`sv`) collations: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM de_names ORDER BY name COLLATE sv; -~~~ -~~~ -+----------+ -| name | -+----------+ -| Backhaus | -| Baz | -| Bär | -+----------+ -~~~ - -### Ad-hoc collation casting - -You can cast any string into a collation on the fly. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT 'A' COLLATE de < 'Ä' COLLATE de; -~~~ -~~~ -true -~~~ - -However, you cannot compare values with different collations: - -{% include copy-clipboard.html %} -~~~ sql -SELECT 'Ä' COLLATE sv < 'Ä' COLLATE de; -~~~ -~~~ -pq: unsupported comparison operator: < -~~~ - -You can also use casting to remove collations from values. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT CAST(name AS STRING) FROM de_names ORDER BY name; -~~~ - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/column-families.md b/src/current/v19.1/column-families.md deleted file mode 100644 index f5d713a577b..00000000000 --- a/src/current/v19.1/column-families.md +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Column Families -summary: A column family is a group of columns in a table that are stored as a single key-value pair in the underlying key-value store. -toc: true ---- - -A column family is a group of columns in a table that are stored as a single key-value pair in the [underlying key-value store](architecture/storage-layer.html). Column families reduce the number of keys stored in the key-value store, resulting in improved performance during [`INSERT`](insert.html), [`UPDATE`](update.html), and [`DELETE`](delete.html) operations. - -This page explains how CockroachDB organizes columns into families as well as cases in which you might want to manually override the default behavior. - -{{site.data.alerts.callout_info}} -[Secondary indexes](indexes.html) do not respect column families. All secondary indexes store values in a single column family. -{{site.data.alerts.end}} - -## Default behavior - -When a table is created, all columns are stored as a single column family. - -This default approach ensures efficient key-value storage and performance in most cases. However, when frequently updated columns are grouped with seldom updated columns, the seldom updated columns are nonetheless rewritten on every update. Especially when the seldom updated columns are large, it's more performant to split them into a distinct family. - -## Manual override - -### Assign column families on table creation - -To manually assign a column family on [table creation](create-table.html), use the `FAMILY` keyword. 
For example, let's say we want to create a table to store an immutable blob of data (`data BYTES`) with a last accessed timestamp (`last_accessed TIMESTAMP`). Because we know that the blob of data will never get updated, we use the `FAMILY` keyword to break it into a separate column family:

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE test (
    id INT PRIMARY KEY,
    last_accessed TIMESTAMP,
    data BYTES,
    FAMILY f1 (id, last_accessed),
    FAMILY f2 (data)
);
~~~

{% include copy-clipboard.html %}
~~~ sql
> SHOW CREATE test;
~~~

~~~
+-------+---------------------------------------------+
| Table |                 CreateTable                 |
+-------+---------------------------------------------+
| test  | CREATE TABLE test (                         |
|       |     id INT NOT NULL,                        |
|       |     last_accessed TIMESTAMP NULL,           |
|       |     data BYTES NULL,                        |
|       |     CONSTRAINT "primary" PRIMARY KEY (id),  |
|       |     FAMILY f1 (id, last_accessed),          |
|       |     FAMILY f2 (data)                        |
|       | )                                           |
+-------+---------------------------------------------+
(1 row)
~~~

{{site.data.alerts.callout_info}}Columns that are part of the primary index are always assigned to the first column family. If you manually assign primary index columns to a family, it must therefore be the first family listed in the CREATE TABLE statement.{{site.data.alerts.end}}

### Assign column families when adding columns

When using the [`ALTER TABLE .. ADD COLUMN`](add-column.html) statement to add a column to a table, you can assign the column to a new or existing column family.

- Use the `CREATE FAMILY` keyword to assign a new column to a **new family**. For example, the following would add a `data2 BYTES` column to the `test` table above and assign it to a new column family:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER TABLE test ADD COLUMN data2 BYTES CREATE FAMILY f3;
    ~~~

- Use the `FAMILY` keyword to assign a new column to an **existing family**. For example, the following would add a `name STRING` column to the `test` table above and assign it to family `f1`:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER TABLE test ADD COLUMN name STRING FAMILY f1;
    ~~~

- Use the `CREATE IF NOT EXISTS FAMILY` keyword to assign a new column to an **existing family or, if the family doesn't exist, to a new family**. For example, the following would assign the new column to the existing `f1` family; if that family didn't exist, it would create a new family and assign the column to it:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER TABLE test ADD COLUMN name STRING CREATE IF NOT EXISTS FAMILY f1;
    ~~~

- If a column is added to a table and the family is not specified, it will be added to the first column family. For example, the following would add the new column to the `f1` family, since that is the first column family:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER TABLE test ADD COLUMN last_name STRING;
    ~~~

## See also

- [`CREATE TABLE`](create-table.html)
- [`ADD COLUMN`](add-column.html)
- [Other SQL Statements](sql-statements.html)

diff --git a/src/current/v19.1/comment-on.md b/src/current/v19.1/comment-on.md
deleted file mode 100644
index 3e4a368c96f..00000000000
--- a/src/current/v19.1/comment-on.md
+++ /dev/null
@@ -1,96 +0,0 @@
---
title: COMMENT ON
summary: The COMMENT ON statement associates comments with databases, tables, or columns.
toc: true
---

New in v19.1: The `COMMENT ON` [statement](sql-statements.html) associates comments with [databases](create-database.html), [tables](create-table.html), or [columns](add-column.html).

{{site.data.alerts.callout_success}}
Currently, `COMMENT ON` is best suited for use with database GUI navigation tools (e.g., [dBeaver](dbeaver.html)).
{{site.data.alerts.end}}

## Required privileges

The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the object they are commenting on.

## Synopsis
      {% include {{ page.version.version }}/sql/diagrams/comment.html %}
## Parameters

Parameter | Description
----------|------------
`database_name` | The name of the database you are commenting on.
`table_name` | The name of the table you are commenting on.
`column_name` | The name of the column you are commenting on.
`comment_text` | The comment ([`STRING`](string.html)) you are associating with the object.

## Examples

### Add a comment to a database

To add a comment to a database:

{% include copy-clipboard.html %}
~~~ sql
> COMMENT ON DATABASE customers IS 'This is a sample comment';
~~~

~~~
COMMENT ON DATABASE
~~~

To view database comments, use a database GUI navigation tool (e.g., [dBeaver](dbeaver.html)).

### Add a comment to a table

To add a comment to a table:

{% include copy-clipboard.html %}
~~~ sql
> COMMENT ON TABLE dogs IS 'This is a sample comment';
~~~

~~~
COMMENT ON TABLE
~~~

To view table comments, use [`SHOW TABLES`](show-tables.html):

{% include copy-clipboard.html %}
~~~ sql
> SHOW TABLES FROM customers WITH COMMENT;
~~~

~~~
  table_name |         comment
+------------+--------------------------+
  dogs       | This is a sample comment
(1 row)
~~~

### Add a comment to a column

To add a comment to a column:

{% include copy-clipboard.html %}
~~~ sql
> COMMENT ON COLUMN dogs.name IS 'This is a sample comment';
~~~

~~~
COMMENT ON COLUMN
~~~

To view column comments, use a database GUI navigation tool (e.g., [dBeaver](dbeaver.html)).

## See also

- [`CREATE DATABASE`](create-database.html)
- [`CREATE TABLE`](create-table.html)
- [`ADD COLUMN`](add-column.html)
- [`SHOW TABLES`](show-tables.html)
- [Other SQL Statements](sql-statements.html)
- [dBeaver](dbeaver.html)

diff --git a/src/current/v19.1/commit-transaction.md b/src/current/v19.1/commit-transaction.md
deleted file mode 100644
index 14952b07b6c..00000000000
--- a/src/current/v19.1/commit-transaction.md
+++ /dev/null
@@ -1,83 +0,0 @@
---
title: COMMIT
summary: Commit a transaction with the COMMIT statement in CockroachDB.
toc: true
---

The `COMMIT` [statement](sql-statements.html) commits the current [transaction](transactions.html) or, when using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), clears the connection to allow new transactions to begin.

When using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), statements issued after [`SAVEPOINT`](savepoint.html) are committed when [`RELEASE SAVEPOINT`](release-savepoint.html) is issued instead of `COMMIT`. However, you must still issue a `COMMIT` statement to clear the connection for the next transaction.

For non-retryable transactions, if statements in the transaction [generated any errors](transactions.html#error-handling), `COMMIT` is equivalent to `ROLLBACK`, which aborts the transaction and discards *all* updates made by its statements.

## Synopsis
      {% include {{ page.version.version }}/sql/diagrams/commit_transaction.html %}
## Required privileges

No [privileges](authorization.html#assign-privileges) are required to commit a transaction. However, privileges are required for each statement within a transaction.

## Aliases

In CockroachDB, `END` is an alias for the `COMMIT` statement.

## Example

### Commit a transaction

How you commit transactions depends on how your application handles [transaction retries](transactions.html#transaction-retries).

#### Client-side retryable transactions

When using [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), statements are committed by [`RELEASE SAVEPOINT`](release-savepoint.html). `COMMIT` itself only clears the connection for the next transaction.

{% include copy-clipboard.html %}
~~~ sql
> BEGIN;
~~~

{% include copy-clipboard.html %}
~~~ sql
> SAVEPOINT cockroach_restart;
~~~

{% include copy-clipboard.html %}
~~~ sql
> UPDATE products SET inventory = 0 WHERE sku = '8675309';
~~~

{% include copy-clipboard.html %}
~~~ sql
> INSERT INTO orders (customer, sku, status) VALUES (1001, '8675309', 'new');
~~~

{% include copy-clipboard.html %}
~~~ sql
> RELEASE SAVEPOINT cockroach_restart;
~~~

{% include copy-clipboard.html %}
~~~ sql
> COMMIT;
~~~

{{site.data.alerts.callout_danger}}This example assumes you're using client-side intervention to handle transaction retries.{{site.data.alerts.end}}

#### Automatically retried transactions

If you are using transactions that CockroachDB will [automatically retry](transactions.html#automatic-retries) (i.e., all statements sent in a single batch), commit the transaction with `COMMIT`.

{% include copy-clipboard.html %}
~~~ sql
> BEGIN; UPDATE products SET inventory = 100 WHERE sku = '8675309'; UPDATE products SET inventory = 100 WHERE sku = '8675310'; COMMIT;
~~~

## See also

- [Transactions](transactions.html)
- [`BEGIN`](begin-transaction.html)
- [`RELEASE SAVEPOINT`](release-savepoint.html)
- [`ROLLBACK`](rollback-transaction.html)
- [`SAVEPOINT`](savepoint.html)

diff --git a/src/current/v19.1/common-errors.md b/src/current/v19.1/common-errors.md
deleted file mode 100644
index c194bd1885e..00000000000
--- a/src/current/v19.1/common-errors.md
+++ /dev/null
@@ -1,259 +0,0 @@
---
title: Common Errors
summary: Understand and resolve common error messages written to stderr or logs.
toc: false
---

This page helps you understand and resolve error messages written to `stderr` or your [logs](debug-and-error-logs.html).
| Topic | Message |
|-------|---------|
| Client connection | [`connection refused`](#connection-refused) |
| Client connection | [`node is running secure mode, SSL connection required`](#node-is-running-secure-mode-ssl-connection-required) |
| Transaction retries | [`restart transaction`](#restart-transaction) |
| Node startup | [`node belongs to cluster <cluster ID> but is attempting to connect to a gossip network for cluster <another cluster ID>`](#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) |
| Node configuration | [`clock synchronization error: this node is more than 500ms away from at least half of the known nodes`](#clock-synchronization-error-this-node-is-more-than-500ms-away-from-at-least-half-of-the-known-nodes) |
| Node configuration | [`open file descriptor limit of <number> is under the minimum required <number>`](#open-file-descriptor-limit-of-number-is-under-the-minimum-required-number) |
| Replication | [`replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"`](#replicas-failing-with-0-of-1-store-with-an-attribute-matching-likely-not-enough-nodes-in-cluster) |
| Split failed | [`split failed while applying backpressure`](#split-failed-while-applying-backpressure) |
| Deadline exceeded | [`context deadline exceeded`](#context-deadline-exceeded) |
| Ambiguous results | [`result is ambiguous`](#result-is-ambiguous) |
| Time zone data | [`invalid value for parameter "TimeZone"`](#invalid-value-for-parameter-timezone) |

## connection refused

This message indicates a client is trying to connect to a node that is either not running or is not listening on the specified interfaces (i.e., hostname or port).

To resolve this issue, do one of the following:

- If the node hasn't yet been started, [start the node](start-a-node.html).
- If you specified a [`--listen-addr` and/or a `--advertise-addr` flag](start-a-node.html#networking) when starting the node, you must include the specified IP address/hostname and port with all other [`cockroach` commands](cockroach-commands.html) or change the `COCKROACH_HOST` environment variable.

If you're not sure what the IP address/hostname and port values might have been, you can look in the node's [logs](debug-and-error-logs.html). If necessary, you can also terminate the `cockroach` process, and then restart the node:

{% include copy-clipboard.html %}
~~~ shell
$ pkill cockroach
~~~

{% include copy-clipboard.html %}
~~~ shell
$ cockroach start [flags]
~~~

## node is running secure mode, SSL connection required

This message indicates that the cluster is using TLS encryption to protect network communication, and the client is trying to open a connection without using the required TLS certificates.

To resolve this issue, use the [`cockroach cert create-client`](create-security-certificates.html) command to generate a client certificate and key for the user trying to connect. For a secure deployment walkthrough, including generating security certificates and connecting clients, see [Manual Deployment](manual-deployment.html).
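For example, assuming the CA key lives at `my-safe-directory/ca.key` and certificates in a `certs` directory (both paths illustrative), generating a certificate for a hypothetical user `maxroach` and then connecting as that user might look like this:

{% include copy-clipboard.html %}
~~~ shell
# Generate a client certificate and key for the user (paths are illustrative):
$ cockroach cert create-client maxroach --certs-dir=certs --ca-key=my-safe-directory/ca.key

# Connect to the secure cluster as that user:
$ cockroach sql --certs-dir=certs --user=maxroach
~~~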
- -## restart transaction - -Messages with the error code `40001` and the string `restart transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](transactions.html#client-side-intervention) for more details. - -Several different types of transaction retry errors are described below: - -- [`read within uncertainty interval`](#read-within-uncertainty-interval) -- [`transaction deadline exceeded`](#transaction-deadline-exceeded) - -{{site.data.alerts.callout_info}} -Your application's retry logic does not need to distinguish between these types of errors. They are listed here for reference. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -To understand how transactions work in CockroachDB, and why transaction retries are necessary to maintain serializable isolation in a distributed database, see: - -- [Transaction Layer](architecture/transaction-layer.html) -- [Life of a Distributed Transaction](architecture/life-of-a-distributed-transaction.html) -{{site.data.alerts.end}} - -### read within uncertainty interval - -(Error string includes: `ReadWithinUncertaintyIntervalError`) - -Uncertainty errors can occur when two transactions which start on different gateway nodes attempt to operate on the same data at close to the same time. The uncertainty comes from the fact that we cannot tell which one started first - the clocks on the two gateway nodes may not be perfectly in sync. - -For example, if the clock on node A is ahead of the clock on node B, a transaction started on node A may be able to commit a write with a timestamp that is still in the "future" from the perspective of node B. A later transaction that starts on node B should be able to see the earlier write from node A, even if B's clock has not caught up to A. The "read within uncertainty interval" occurs if we discover this situation in the middle of a transaction, when it is too late for the database to handle it automatically. When node B's transaction retries, it will unambiguously occur after the transaction from node A. - -Note that as long as the [client-side retry protocol](transactions.html#client-side-intervention) is followed, a transaction that has restarted once is much less likely to hit another uncertainty error, and the [`--max-offset` option](start-a-node.html#flags) provides an upper limit on how long a transaction can continue to restart due to uncertainty. - -When errors like this occur, the application has the following options: - -- Prefer consistent historical reads using [AS OF SYSTEM TIME](as-of-system-time.html) to reduce contention. -- Design the schema and queries to reduce contention. For information on how to avoid contention, see [Understanding and Avoiding Transaction Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention). -- Be prepared to retry on uncertainty (and other) errors. For more information, see [Transaction retries](transactions.html#transaction-retries). - -{{site.data.alerts.callout_info}} -Uncertainty errors are a form of transaction conflict. For more information about transaction conflicts, see [Transaction conflicts](architecture/transaction-layer.html#transaction-conflicts). 
{{site.data.alerts.end}}

### transaction deadline exceeded

New in v19.1: Errors that were previously reported to the client as opaque `TransactionStatusError`s are now transaction retry errors with the error message "transaction deadline exceeded" and error code `40001`.

This error can occur for long-running transactions (with execution time on the order of minutes) that also experience conflicts with other transactions and thus attempt to commit at a timestamp different from their original timestamp. If the timestamp at which the transaction attempts to commit is above a "deadline" imposed by the various schema elements that the transaction has used (i.e., table structures), then this error might get returned to the client.

When this error occurs, the application must retry the transaction. For more information about how to retry transactions, see [Transaction retries](transactions.html#transaction-retries).

{{site.data.alerts.callout_info}}
For more information about the mechanics of the transaction conflict resolution process described above, see [Life of a Distributed Transaction](architecture/life-of-a-distributed-transaction.html).
{{site.data.alerts.end}}

## node belongs to cluster \<cluster ID\> but is attempting to connect to a gossip network for cluster \<another cluster ID\>

This message usually indicates that a node tried to connect to a cluster, but the node is already a member of a different cluster. This is determined by metadata in the node's data directory. To resolve this issue, do one of the following:

- Choose a different directory to store the CockroachDB data:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach start [flags] --store=[new directory] --join=[cluster host]:26257
    ~~~

- Remove the existing directory and start a node joining the cluster again:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ rm -r cockroach-data/
    ~~~

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach start [flags] --join=[cluster host]:26257
    ~~~

This message can also occur in the following scenario:

1. The first node of a cluster is started without the `--join` flag.
2. Subsequent nodes are started with the `--join` flag pointing to the first node.
3. The first node is stopped and then restarted, either after its data directory is deleted or with a new directory. This causes the first node to initialize a new cluster.
4. The other nodes, still communicating with the first node, notice that their cluster ID and the first node's cluster ID do not match.

To avoid this scenario, update your scripts to use the new, recommended approach to initializing a cluster:

1. Start each initial node of the cluster with the `--join` flag set to addresses of 3 to 5 of the initial nodes.
2. Run the `cockroach init` command against any node to perform a one-time cluster initialization.
3. When adding more nodes, start them with the same `--join` flag as used for the initial nodes.

For more guidance, see this [example](start-a-node.html#start-a-multi-node-cluster).

## clock synchronization error: this node is more than 500ms away from at least half of the known nodes

This error indicates that a node has spontaneously shut down because it detected that its clock is out of sync with at least half of the other nodes in the cluster by 80% of the maximum offset allowed (500ms by default).
CockroachDB requires moderate levels of [clock synchronization](recommended-production-settings.html#clock-synchronization) to preserve data consistency, so the node shutting down in this way avoids the risk of consistency anomalies.

To prevent this from happening, you should run clock synchronization software on each node. For guidance on synchronizing clocks, see the tutorial for your deployment environment:

Environment | Recommended Approach
------------|---------------------
[Manual](deploy-cockroachdb-on-premises.html#step-1-synchronize-clocks) | Use NTP with Google's external NTP service.
[AWS](deploy-cockroachdb-on-aws.html#step-3-synchronize-clocks) | Use the Amazon Time Sync Service.
[Azure](deploy-cockroachdb-on-microsoft-azure.html#step-3-synchronize-clocks) | Disable Hyper-V time synchronization and use NTP with Google's external NTP service.
[Digital Ocean](deploy-cockroachdb-on-digital-ocean.html#step-2-synchronize-clocks) | Use NTP with Google's external NTP service.
[GCE](deploy-cockroachdb-on-google-cloud-platform.html#step-3-synchronize-clocks) | Use NTP with Google's internal NTP service.

## open file descriptor limit of \<number\> is under the minimum required \<number\>

CockroachDB can use a large number of open file descriptors, often more than is available by default. This message indicates that the machine on which a CockroachDB node is running is under CockroachDB's recommended limits.

For more details on CockroachDB's file descriptor limits and instructions on increasing the limit on various platforms, see [File Descriptors Limit](recommended-production-settings.html#file-descriptors-limit).

## replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"

### When running a single-node cluster

When running a single-node CockroachDB cluster, an error about replicas failing will eventually show up in the node's log files, for example:

~~~ shell
E160407 09:53:50.337328 storage/queue.go:511 [replicate] 7 replicas failing with "0 of 1 store with an attribute matching []; likely not enough nodes in cluster"
~~~

This happens because CockroachDB expects three nodes by default. If you do not intend to add additional nodes, you can stop this error by using [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) to update your default zone configuration to expect only one node:

{% include copy-clipboard.html %}
~~~ shell
# Insecure cluster:
$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=1;" --insecure
~~~

{% include copy-clipboard.html %}
~~~ shell
# Secure cluster:
$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=1;" --certs-dir=[path to certs directory]
~~~

The zone's replica count is reduced to 1. For more information, see [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) and [Configure Replication Zones](configure-replication-zones.html).

### When running a multi-node cluster

When running a multi-node CockroachDB cluster, if you see an error like the one above about replicas failing, some nodes might not be able to talk to each other. For recommended actions, see [Cluster Setup Troubleshooting](cluster-setup-troubleshooting.html#replication-issues).

## split failed while applying backpressure

In CockroachDB, a table row is stored on disk as a key-value pair.
Whenever the row is updated, CockroachDB also stores a distinct version of the key-value pair to enable concurrent request processing while guaranteeing consistency (see [multi-version concurrency control (MVCC)](architecture/storage-layer.html#mvcc)). All versions of a key-value pair belong to a larger ["range"](architecture/overview.html#terms) of the total key space, and the historical versions remain until the garbage collection period defined by the `gc.ttlseconds` variable in the applicable [zone configuration](configure-replication-zones.html#gc-ttlseconds) has passed (25 hours by default). Once a range reaches a size threshold (64 MiB by default), CockroachDB splits the range into two ranges. However, this message indicates that a range cannot be split as intended. - -One possible cause is that the range consists only of MVCC version data due to a row being repeatedly updated, and the range cannot be split because doing so would spread MVCC versions for a single row across multiple ranges. - -To resolve this issue, make sure you are not repeatedly updating a single row. If frequent updates of a row are necessary, consider one of the following: - -- Reduce the `gc.ttlseconds` variable in the applicable [zone configuration](configure-replication-zones.html#gc-ttlseconds) to reduce the garbage collection period and prevent such a large build-up of historical values. -- If a row contains large columns that are not being updated with other columns, put the large columns in separate [column families](column-families.html). - -## context deadline exceeded - -This message occurs when a component of CockroachDB gives up because it was relying on another component that has not behaved as expected, for example, another node dropped a network connection. To investigate further, look in the node's logs for the primary failure that is the root cause. - -## result is ambiguous - -In a distributed system, some errors can have ambiguous results. For -example, if you receive a `connection closed` error while processing a -`COMMIT` statement, you cannot tell whether the transaction -successfully committed or not. These errors are possible in any -database, but CockroachDB is somewhat more likely to produce them than -other databases because ambiguous results can be caused by failures -between the nodes of a cluster. These errors are reported with the -PostgreSQL error code `40003` (`statement_completion_unknown`) and the -message `result is ambiguous`. - -Ambiguous errors can be caused by nodes crashing, network failures, or -timeouts. If you experience a lot of these errors when things are -otherwise stable, look for performance issues. Note that ambiguity is -only possible for the last statement of a transaction (`COMMIT` or -`RELEASE SAVEPOINT`) or for statements outside a transaction. If a connection drops during a transaction that has not yet tried to commit, the transaction will definitely be aborted. - -In general, you should handle ambiguous errors the same way as -`connection closed` errors. If your transaction is -[idempotent](https://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning), -it is safe to retry it on ambiguous errors. `UPSERT` operations are -typically idempotent, and other transactions can be written to be -idempotent by verifying the expected state before performing any -writes. Increment operations such as `UPDATE my_table SET x=x+1 WHERE -id=$1` are typical examples of operations that cannot easily be made -idempotent. 
If your transaction is not idempotent, then you should -decide whether to retry or not based on whether it would be better for -your application to apply the transaction twice or return an error to -the user. - -## invalid value for parameter "TimeZone" - -This error means that the machine running the CockroachDB node is missing time zone data and therefore cannot resolve location-based time zone names. - -To resolve this issue on Linux, install the [`tzdata`](https://www.iana.org/time-zones) library (sometimes called `tz` or `zoneinfo`). - -To resolve this issue on Windows, download Go's official [zoneinfo.zip](https://github.com/golang/go/raw/master/lib/time/zoneinfo.zip) and set the `ZONEINFO` environment variable to point to the zip file. For step-by-step guidance on setting environment variables on Windows, see this [external article](https://www.techjunkie.com/environment-variables-windows-10/). - -It's important for all nodes to have the same version of this data, so make sure to do this across all nodes in the cluster. - -For details about other libraries the CockroachDB binary depends on, see [Dependencies](recommended-production-settings.html#dependencies). - -## Something else? - -Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including: - -- [CockroachDB Community Forum](https://forum.cockroachlabs.com) -- [CockroachDB Community Slack](https://cockroachdb.slack.com) -- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb) -- [CockroachDB Support Portal](https://support.cockroachlabs.com) diff --git a/src/current/v19.1/common-table-expressions.md b/src/current/v19.1/common-table-expressions.md deleted file mode 100644 index 3be1fef8c2c..00000000000 --- a/src/current/v19.1/common-table-expressions.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -title: Common Table Expressions -summary: Common Table Expressions (CTEs) simplify the definition and use of subqueries -toc: true -toc_not_nested: true ---- - -Common Table Expressions, or CTEs, provide a shorthand name to a -possibly complex [subquery](subqueries.html) before it is used in a -larger query context. This improves readability of the SQL code. - -CTEs can be used in combination with [`SELECT` -clauses](select-clause.html) and [`INSERT`](insert.html), -[`DELETE`](delete.html), [`UPDATE`](update.html) and -[`UPSERT`](upsert.html) statements. - - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/with_clause.html %}
      - -
      - -## Parameters - -Parameter | Description -----------|------------ -`table_alias_name` | The name to use to refer to the common table expression from the accompanying query or statement. -`name` | A name for one of the columns in the newly defined common table expression. -`preparable_stmt` | The statement or subquery to use as common table expression. - -## Overview - -A query or statement of the form `WITH x AS y IN z` creates the -temporary table name `x` for the results of the subquery `y`, to be -reused in the context of the query `z`. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112)) - SELECT * - FROM customers AS c, o - WHERE o.customer_id = c.id; -~~~ - -In this example, the `WITH` clause defines the temporary name `o` for -the subquery over `orders`, and that name becomes a valid table name -for use in any [table expression](table-expressions.html) of the -subsequent `SELECT` clause. - -This query is equivalent to, but arguably simpler to read than: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * - FROM customers AS c, (SELECT * FROM orders WHERE id IN (33, 542, 112)) AS o - WHERE o.customer_id = c.id; -~~~ - -It is also possible to define multiple common table expressions -simultaneously with a single `WITH` clause, separated by commas. Later -subqueries can refer to earlier subqueries by name. For example, the -following query is equivalent to the two examples above: - -{% include copy-clipboard.html %} -~~~ sql -> WITH o AS (SELECT * FROM orders WHERE id IN (33, 542, 112)), - results AS (SELECT * FROM customers AS c, o WHERE o.customer_id = c.id) - SELECT * FROM results; -~~~ - -In this example, the second CTE `results` refers to the first CTE `o` -by name. The final query refers to the CTE `results`. - -## Nested `WITH` clauses - -It is possible to use a `WITH` clause in a subquery, or even a `WITH` clause within another `WITH` clause. For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (SELECT * FROM (WITH b AS (SELECT * FROM c) - SELECT * FROM b)) - SELECT * FROM a; -~~~ - -When analyzing [table expressions](table-expressions.html) that -mention a CTE name, CockroachDB will choose the CTE definition that is -closest to the table expression. For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (TABLE x), - b AS (WITH a AS (TABLE y) - SELECT * FROM a) - SELECT * FROM b; -~~~ - -In this example, the inner subquery `SELECT * FROM a` will select from -table `y` (closest `WITH` clause), not from table `x`. - -## Data modifying statements - -It is possible to use a data-modifying statement (`INSERT`, `DELETE`, -etc.) as a common table expression. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x) - SELECT x+1 FROM v -~~~ - -However, the following restriction applies: only `WITH` sub-clauses at -the top level of a SQL statement can contain data-modifying -statements. The example above is valid, but the following is not: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT x+1 FROM - (WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x) - SELECT * FROM v); -~~~ - -This is not valid because the `WITH` clause that defines an `INSERT` -common table expression is not at the top level of the query. 
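For example, the invalid query above can be expressed validly by hoisting the `WITH` clause to the top level of the statement (a sketch using the same hypothetical table `t`):

{% include copy-clipboard.html %}
~~~ sql
> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
  SELECT x+1 FROM (SELECT * FROM v);
~~~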
- -{{site.data.alerts.callout_info}} -If a common table expression contains -a data-modifying statement (INSERT, DELETE, -etc.), the modifications are performed fully even if only part -of the results are used, e.g., with LIMIT. See Data -Writes in Subqueries for details. -{{site.data.alerts.end}} - -
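To illustrate the note above, the following sketch (again using the hypothetical table `t`) inserts all three rows even though `LIMIT` returns only one of them:

{% include copy-clipboard.html %}
~~~ sql
> WITH v AS (INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x)
  SELECT * FROM v LIMIT 1;
~~~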
      - -## Known limitations - -{{site.data.alerts.callout_info}} -The following limitations may be lifted -in a future version of CockroachDB. -{{site.data.alerts.end}} - -
### Referring to a CTE by name more than once

{% include {{ page.version.version }}/known-limitations/cte-by-name.md %}

## See also

- [Subqueries](subqueries.html)
- [Selection Queries](selection-queries.html)
- [Table Expressions](table-expressions.html)
- [`EXPLAIN`](explain.html)

diff --git a/src/current/v19.1/computed-columns.md b/src/current/v19.1/computed-columns.md
deleted file mode 100644
index 40c6d40aaf4..00000000000
--- a/src/current/v19.1/computed-columns.md
+++ /dev/null
@@ -1,78 +0,0 @@
---
title: Computed Columns
summary: A computed column stores data generated by an expression included in the column definition.
toc: true
---

A computed column stores data generated from other columns by a [scalar expression](scalar-expressions.html) included in the column definition.

## Why use computed columns?

Computed columns are especially useful when used with [partitioning](partitioning.html), [`JSONB`](jsonb.html) columns, or [secondary indexes](indexes.html).

- **Partitioning** requires that partitions are defined using columns that are a prefix of the [primary key](primary-key.html). In the case of geo-partitioning, some applications will want to collapse the number of possible values in this column, to make certain classes of queries more performant. For example, if a `users` table has `country` and `state` columns, you can make a stored computed column `locality` with a reduced domain for use in partitioning. For more information, see the [partitioning example](#create-a-table-with-geo-partitions-and-a-computed-column) below.

- **JSONB** columns are used for storing semi-structured `JSONB` data. When the table's primary information is stored in `JSONB`, it's useful to index a particular field of the `JSONB` document. In particular, computed columns allow for the following use case: a two-column table with a `PRIMARY KEY` column and a `payload` column, whose primary key is computed as some field from the `payload` column. This alleviates the need to manually separate your primary keys from your JSON blobs. For more information, see the [`JSONB` example](#create-a-table-with-a-jsonb-column-and-a-computed-column) below.

- **Secondary indexes** can be created on computed columns, which is especially useful when a table is frequently sorted. See the [secondary indexes example](#create-a-table-with-a-secondary-index-on-a-computed-column) below.

## Considerations

Computed columns:

- Cannot be used to generate other computed columns.
- Cannot be a [foreign key](foreign-key.html) reference.
- Behave like any other column, with the exception that they cannot be written to directly.
- Are mutually exclusive with [`DEFAULT`](default-value.html).

## Creation

To define a computed column, use the following syntax:

~~~
column_name <type> AS (<expr>) STORED
~~~

Parameter | Description
----------|------------
`column_name` | The [name/identifier](keywords-and-identifiers.html#identifiers) of the computed column.
`<type>` | The [data type](data-types.html) of the computed column.
`<expr>` | The pure [scalar expression](scalar-expressions.html) used to compute column values. Any functions marked as `impure`, such as `now()` or `nextval()`, cannot be used.
`STORED` | _(Required)_ The computed column is stored alongside other columns.
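For instance, here is a minimal sketch of the syntax above, using a hypothetical `user_locations` table:

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE user_locations (
    id INT PRIMARY KEY,
    city STRING,
    state STRING,
    -- locality is computed from city and state and stored like any other column:
    locality STRING AS (concat(city, '_', state)) STORED
);
~~~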
- -## Examples - -### Create a table with a computed column - -{% include {{ page.version.version }}/computed-columns/simple.md %} - -### Create a table with geo-partitions and a computed column - -{% include {{ page.version.version }}/computed-columns/partitioning.md %} The `locality` values can then be used for geo-partitioning. - -### Create a table with a `JSONB` column and a computed column - -{% include {{ page.version.version }}/computed-columns/jsonb.md %} - -### Create a table with a secondary index on a computed column - -{% include {{ page.version.version }}/computed-columns/secondary-index.md %} - -### Add a computed column to an existing table - -{% include {{ page.version.version }}/computed-columns/add-computed-column.md %} - -For more information, see [`ADD COLUMN`](add-column.html). - -### Convert a computed column into a regular column - -{% include {{ page.version.version }}/computed-columns/convert-computed-column.md %} - -## See also - -- [Scalar Expressions](scalar-expressions.html) -- [Information Schema](information-schema.html) -- [`CREATE TABLE`](create-table.html) -- [`JSONB`](jsonb.html) -- [Define Table Partitions (Enterprise)](partitioning.html) diff --git a/src/current/v19.1/configure-replication-zones.md b/src/current/v19.1/configure-replication-zones.md deleted file mode 100644 index 7f6ce11f07b..00000000000 --- a/src/current/v19.1/configure-replication-zones.md +++ /dev/null @@ -1,670 +0,0 @@ ---- -title: Configure Replication Zones -summary: In CockroachDB, you use replication zones to control the number and location of replicas for specific sets of data. -keywords: ttl, time to live, availability zone -toc: true ---- - -Replication zones give you the power to control what data goes where in your CockroachDB cluster. Specifically, they are used to control the number and location of replicas for data belonging to the following objects: - -- Databases -- Tables -- Rows ([enterprise-only](enterprise-licensing.html)) -- Indexes ([enterprise-only](enterprise-licensing.html)) -- All data in the cluster, including internal system data ([via the default replication zone](#view-the-default-replication-zone)) - -For each of the above objects you can control: - -- How many copies of each range to spread through the cluster. -- Which constraints are applied to which data, e.g., "table X's data can only be stored in the German datacenters". -- The maximum size of ranges (how big ranges get before they are split). -- How long old data is kept before being garbage collected. -- Where you would like the leaseholders for certain ranges to be located, e.g., "for ranges that are already constrained to have at least one replica in `region=us-west`, also try to put their leaseholders in `region=us-west`". - -This page explains how replication zones work and how to use the [`CONFIGURE ZONE`](configure-zone.html) statement to manage them. - -{{site.data.alerts.callout_info}} -Currently, only members of the `admin` role can configure replication zones. By default, the `root` user belongs to the `admin` role. -{{site.data.alerts.end}} - -## Overview - -Every [range](architecture/overview.html#glossary) in the cluster is part of a replication zone. Each range's zone configuration is taken into account as ranges are rebalanced across the cluster to ensure that any constraints are honored. - -When a cluster starts, there are two categories of replication zone: - -1. Pre-configured replication zones that apply to internal system data. -2. 
A single default replication zone that applies to the rest of the cluster. - -You can adjust these pre-configured zones as well as add zones for individual databases, tables, rows, and secondary indexes as needed. Note that adding zones for rows and secondary indexes is [enterprise-only](enterprise-licensing.html). - -For example, you might rely on the [default zone](#view-the-default-replication-zone) to spread most of a cluster's data across all of your datacenters, but [create a custom replication zone for a specific database](#create-a-replication-zone-for-a-database) to make sure its data is only stored in certain datacenters and/or geographies. - -## Replication zone levels - -There are five replication zone levels for [**table data**](architecture/distribution-layer.html#table-data) in a cluster, listed from least to most granular: - -Level | Description -------|------------ -Cluster | CockroachDB comes with a pre-configured `.default` replication zone that applies to all table data in the cluster not constrained by a database, table, or row-specific replication zone. This zone can be adjusted but not removed. See [View the Default Replication Zone](#view-the-default-replication-zone) and [Edit the Default Replication Zone](#edit-the-default-replication-zone) for more details. -Database | You can add replication zones for specific databases. See [Create a Replication Zone for a Database](#create-a-replication-zone-for-a-database) for more details. -Table | You can add replication zones for specific tables. See [Create a Replication Zone for a Table](#create-a-replication-zone-for-a-table). -Index ([Enterprise-only](enterprise-licensing.html)) | The [secondary indexes](indexes.html) on a table will automatically use the replication zone for the table. However, with an enterprise license, you can add distinct replication zones for secondary indexes. See [Create a Replication Zone for a Secondary Index](#create-a-replication-zone-for-a-secondary-index) for more details. -Row ([Enterprise-only](enterprise-licensing.html)) | You can add replication zones for specific rows in a table or secondary index by [defining table partitions](partitioning.html). See [Create a Replication Zone for a Table Partition](#create-a-replication-zone-for-a-table-or-secondary-index-partition) for more details. - -### For system data - -In addition, CockroachDB stores internal [**system data**](architecture/distribution-layer.html#monolithic-sorted-map-structure) in what are called system ranges. There are two replication zone levels for this internal system data, listed from least to most granular: - -Level | Description -------|------------ -Cluster | The `.default` replication zone mentioned above also applies to all system ranges not constrained by a more specific replication zone. -System Range | CockroachDB comes with pre-configured replication zones for important system ranges, such as the "meta" and "liveness" ranges. If necessary, you can add replication zones for the "timeseries" range and other system ranges as well. Editing replication zones for system ranges may override settings from `.default`. See [Create a Replication Zone for a System Range](#create-a-replication-zone-for-a-system-range) for more details.

      CockroachDB also comes with pre-configured replication zones for the internal `system` database and the `system.jobs` table, which stores metadata about long-running jobs such as schema changes and backups. - -### Level priorities - -When replicating data, whether table or system, CockroachDB always uses the most granular replication zone available. For example, for a piece of user data: - -1. If there's a replication zone for the row, CockroachDB uses it. -2. If there's no applicable row replication zone and the row is from a secondary index, CockroachDB uses the secondary index replication zone. -3. If the row isn't from a secondary index or there is no applicable secondary index replication zone, CockroachDB uses the table replication zone. -4. If there's no applicable table replication zone, CockroachDB uses the database replication zone. -5. If there's no applicable database replication zone, CockroachDB uses the `.default` cluster-wide replication zone. - -## Manage replication zones - -Use the [`CONFIGURE ZONE`](configure-zone.html) statement to [add](#create-a-replication-zone-for-a-system-range), [modify](#edit-the-default-replication-zone), [reset](#reset-a-replication-zone), and [remove](#remove-a-replication-zone) replication zones. - -### Replication zone variables - -Use the [`ALTER ... CONFIGURE ZONE`](configure-zone.html) [statement](sql-statements.html) to set a replication zone: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE t CONFIGURE ZONE USING range_min_bytes = 0, range_max_bytes = 90000, gc.ttlseconds = 89999, num_replicas = 5, constraints = '[-region=west]'; -~~~ - -{% include {{page.version.version}}/zone-configs/variables.md %} - -### Replication constraints - -The location of replicas, both when they are first added and when they are rebalanced to maintain cluster equilibrium, is based on the interplay between descriptive attributes assigned to nodes and constraints set in zone configurations. - -{{site.data.alerts.callout_success}}For demonstrations of how to set node attributes and replication constraints in different scenarios, see Scenario-based Examples below.{{site.data.alerts.end}} - -#### Descriptive attributes assigned to nodes - -When starting a node with the [`cockroach start`](start-a-node.html) command, you can assign the following types of descriptive attributes: - -Attribute Type | Description ----------------|------------ -**Node Locality** | Using the [`--locality`](start-a-node.html#locality) flag, you can assign arbitrary key-value pairs that describe the locality of the node. Locality might include country, region, datacenter, rack, etc. The key-value pairs should be ordered from most inclusive to least inclusive (e.g., country before datacenter before rack), and the keys and the order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. For example:

      `--locality=region=east,datacenter=us-east-1`
      `--locality=region=east,datacenter=us-east-2`
      `--locality=region=west,datacenter=us-west-1`

      CockroachDB attempts to spread replicas evenly across the cluster based on locality, with the order determining the priority. However, locality can be used to influence the location of data replicas in various ways using replication zones.

      When there is high latency between nodes, CockroachDB also uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance. See [Follow-the-workload](demo-follow-the-workload.html) for more details. -**Node Capability** | Using the `--attrs` flag, you can specify node capability, which might include specialized hardware or number of cores, for example:

      `--attrs=ram:64gb` -**Store Type/Capability** | Using the `attrs` field of the `--store` flag, you can specify disk type or capability, for example:

      `--store=path=/mnt/ssd01,attrs=ssd`
      `--store=path=/mnt/hda1,attrs=hdd:7200rpm` - -#### Types of constraints - -The node-level and store-level descriptive attributes mentioned above can be used as the following types of constraints in replication zones to influence the location of replicas. However, note the following general guidance: - -- When locality is the only consideration for replication, it's recommended to set locality on nodes without specifying any constraints in zone configurations. In the absence of constraints, CockroachDB attempts to spread replicas evenly across the cluster based on locality. -- Required and prohibited constraints are useful in special situations where, for example, data must or must not be stored in a specific country or on a specific type of machine. - -Constraint Type | Description | Syntax -----------------|-------------|------- -**Required** | When placing replicas, the cluster will consider only nodes/stores with matching attributes or localities. When there are no matching nodes/stores, new replicas will not be added. | `+ssd` -**Prohibited** | When placing replicas, the cluster will ignore nodes/stores with matching attributes or localities. When there are no alternate nodes/stores, new replicas will not be added. | `-ssd` - -#### Scope of constraints - -Constraints can be specified such that they apply to all replicas in a zone or such that different constraints apply to different replicas, meaning you can effectively pick the exact location of each replica. - -Constraint Scope | Description | Syntax ------------------|-------------|------- -**All Replicas** | Constraints specified using JSON array syntax apply to all replicas in every range that's part of the replication zone. | `constraints = '[+ssd, -region=west]'` -**Per-Replica** | Multiple lists of constraints can be provided in a JSON object, mapping each list of constraints to an integer number of replicas in each range that the constraints should apply to.

      The total number of replicas constrained cannot be greater than the total number of replicas for the zone (`num_replicas`). However, if the total number of replicas constrained is less than the total number of replicas for the zone, the non-constrained replicas will be allowed on any nodes/stores.

      Note that per-replica constraints must be "required" (e.g., `'{"+region=west": 1}'`); they cannot be "prohibited" (e.g., `'{"-region=west": 1}'`). Also, when defining per-replica constraints on a database or table, `num_replicas` must be specified as well, but not when defining per-replica constraints on an index or partition.

      See the [Per-replica constraints](#per-replica-constraints-to-specific-datacenters) example for more details. | `constraints = '{"+ssd,+region=west": 2, "+region=east": 1}', num_replicas = 3` - -### Node/replica recommendations - -See [Cluster Topography](recommended-production-settings.html#topology) recommendations for production deployments. - -## View replication zones - -Use the [`SHOW ZONE CONFIGURATIONS`](#view-all-replication-zones) statement to view details about existing replication zones. - -## Basic examples - -These examples focus on the basic approach and syntax for working with zone configuration. For examples demonstrating how to use constraints, see [Scenario-based examples](#scenario-based-examples). - -For more examples, see [`CONFIGURE ZONE`](configure-zone.html) and [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### View all replication zones - -{% include v19.1/zone-configs/view-all-replication-zones.md %} - -For more information, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### View the default replication zone - -{% include v19.1/zone-configs/view-the-default-replication-zone.md %} - -For more information, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -### Edit the default replication zone - -{% include v19.1/zone-configs/edit-the-default-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a system range - -{% include v19.1/zone-configs/create-a-replication-zone-for-a-system-range.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a database - -{% include v19.1/zone-configs/create-a-replication-zone-for-a-database.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a table - -{% include v19.1/zone-configs/create-a-replication-zone-for-a-table.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a secondary index - -{% include v19.1/zone-configs/create-a-replication-zone-for-a-secondary-index.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Create a replication zone for a table or secondary index partition - -{% include v19.1/zone-configs/create-a-replication-zone-for-a-table-partition.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Reset a replication zone - -{% include v19.1/zone-configs/reset-a-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Remove a replication zone - -{% include v19.1/zone-configs/remove-a-replication-zone.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -### Constrain leaseholders to specific datacenters - -{% include v19.1/zone-configs/constrain-leaseholders-to-specific-datacenters.md %} - -For more information, see [`CONFIGURE ZONE`](configure-zone.html). - -## Scenario-based examples - -### Even replication across datacenters - -**Scenario:** - -- You have 6 nodes across 3 datacenters, 2 nodes in each datacenter. -- You want data replicated 3 times, with replicas balanced evenly across all three datacenters. - -**Approach:** - -1. 
Start each node with its datacenter location specified in the [`--locality`](start-a-node.html#locality) flag:

    Datacenter 1:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node1 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node2 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

    Datacenter 2:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node3 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node4 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

    Datacenter 3:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node5 hostname> --locality=datacenter=us-3 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node6 hostname> --locality=datacenter=us-3 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

2. Initialize the cluster:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

There's no need to make zone configuration changes; by default, the cluster is configured to replicate data three times, and even without explicit constraints, the cluster will aim to diversify replicas across node localities.
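
If you want to confirm what the cluster will do before loading any data, you can inspect the `.default` zone. This is an optional sanity check, not a required step; it uses the same statement shown in the system-ranges example later on this page:

{% include copy-clipboard.html %}
~~~ sql
> SHOW ZONE CONFIGURATION FOR RANGE default;
~~~

The output should report `num_replicas = 3` with empty `constraints`, matching the behavior described above.
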
### Per-replica constraints to specific datacenters

**Scenario:**

- You have 5 nodes across 5 datacenters in 3 regions, 1 node in each datacenter.
- You want data replicated 3 times. For a database holding West Coast data, a quorum of the replicas should be on the West Coast; a database holding nation-wide data should be replicated across the entire country.

**Approach:**

1. Start each node with its region and datacenter location specified in the [`--locality`](start-a-node.html#locality) flag:

    Start the five nodes:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node1 hostname> --locality=region=us-west1,datacenter=us-west1-a \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>
    $ cockroach start --insecure --advertise-addr=<node2 hostname> --locality=region=us-west1,datacenter=us-west1-b \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>
    $ cockroach start --insecure --advertise-addr=<node3 hostname> --locality=region=us-central1,datacenter=us-central1-a \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>
    $ cockroach start --insecure --advertise-addr=<node4 hostname> --locality=region=us-east1,datacenter=us-east1-a \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>
    $ cockroach start --insecure --advertise-addr=<node5 hostname> --locality=region=us-east1,datacenter=us-east1-b \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>
    ~~~

    Initialize the cluster:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

2. On any node, open the [built-in SQL client](use-the-built-in-sql-client.html):

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure
    ~~~

3. Create the database for the West Coast application:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE DATABASE west_app_db;
    ~~~

4. Configure a replication zone for the database:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER DATABASE west_app_db \
    CONFIGURE ZONE USING constraints = '{"+region=us-west1": 2, "+region=us-central1": 1}', num_replicas = 3;
    ~~~

    ~~~
    CONFIGURE ZONE 1
    ~~~

5. View the replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW ZONE CONFIGURATION FOR DATABASE west_app_db;
    ~~~

    ~~~
      zone_name   |                            config_sql
    +-------------+--------------------------------------------------------------------+
      west_app_db | ALTER DATABASE west_app_db CONFIGURE ZONE USING
                  |     range_min_bytes = 1048576,
                  |     range_max_bytes = 67108864,
                  |     gc.ttlseconds = 90000,
                  |     num_replicas = 3,
                  |     constraints = '{+region=us-west1: 2, +region=us-central1: 1}',
                  |     lease_preferences = '[]'
    (1 row)
    ~~~

    Two of the database's three replicas will be put in `region=us-west1` and its remaining replica will be put in `region=us-central1`. This gives the application the resilience to survive the total failure of any one datacenter while providing low-latency reads and writes on the West Coast because a quorum of replicas are located there.

6. No configuration is needed for the nation-wide database. By default, the cluster is configured to replicate data 3 times and to spread the replicas as widely as possible. Because the first key-value pair specified in each node's locality is considered the most significant part of each node's locality, spreading data as widely as possible means putting one replica in each of the three different regions.
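
As an optional extension of this scenario (not part of the original steps), you could also nudge range leases toward the West Coast so that reads are served locally more often. A sketch, assuming the `lease_preferences` variable described in [Constrain leaseholders to specific datacenters](#constrain-leaseholders-to-specific-datacenters):

{% include copy-clipboard.html %}
~~~ sql
> ALTER DATABASE west_app_db \
CONFIGURE ZONE USING constraints = '{"+region=us-west1": 2, "+region=us-central1": 1}', num_replicas = 3, lease_preferences = '[[+region=us-west1]]';
~~~

Unlike constraints, lease preferences are best-effort, so the cluster can still move leases elsewhere if needed.
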
### Multiple applications writing to different databases

**Scenario:**

- You have 2 independent applications connected to the same CockroachDB cluster, each application using a distinct database.
- You have 6 nodes across 2 datacenters, 3 nodes in each datacenter.
- You want the data for application 1 to be replicated 5 times, with replicas evenly balanced across both datacenters.
- You want the data for application 2 to be replicated 3 times, with all replicas in a single datacenter.

**Approach:**

1. Start each node with its datacenter location specified in the [`--locality`](start-a-node.html#locality) flag:

    Three nodes in datacenter 1:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node1 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    $ cockroach start --insecure --advertise-addr=<node2 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    $ cockroach start --insecure --advertise-addr=<node3 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    ~~~

    Three nodes in datacenter 2:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node4 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    $ cockroach start --insecure --advertise-addr=<node5 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    $ cockroach start --insecure --advertise-addr=<node6 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>,<node4 hostname>,<node5 hostname>,<node6 hostname>
    ~~~

    Initialize the cluster:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

2. On any node, open the [built-in SQL client](use-the-built-in-sql-client.html):

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure
    ~~~

3. Create the database for application 1:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE DATABASE app1_db;
    ~~~

4. Configure a replication zone for the database used by application 1:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER DATABASE app1_db CONFIGURE ZONE USING num_replicas = 5;
    ~~~

    ~~~
    CONFIGURE ZONE 1
    ~~~

5. View the replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW ZONE CONFIGURATION FOR DATABASE app1_db;
    ~~~

    ~~~
      zone_name |                 config_sql
    +-----------+---------------------------------------------+
      app1_db   | ALTER DATABASE app1_db CONFIGURE ZONE USING
                |     range_min_bytes = 1048576,
                |     range_max_bytes = 67108864,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     constraints = '[]',
                |     lease_preferences = '[]'
    (1 row)
    ~~~

    Nothing else is necessary for application 1's data. Since all nodes specify their datacenter locality, the cluster will aim to balance the data in the database used by application 1 between datacenters 1 and 2.

6. Still in the SQL client, create a database for application 2:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE DATABASE app2_db;
    ~~~

7. Configure a replication zone for the database used by application 2:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER DATABASE app2_db CONFIGURE ZONE USING constraints = '[+datacenter=us-2]';
    ~~~

8. View the replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW ZONE CONFIGURATION FOR DATABASE app2_db;
    ~~~

    ~~~
      zone_name |                 config_sql
    +-----------+---------------------------------------------+
      app2_db   | ALTER DATABASE app2_db CONFIGURE ZONE USING
                |     range_min_bytes = 1048576,
                |     range_max_bytes = 67108864,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 3,
                |     constraints = '[+datacenter=us-2]',
                |     lease_preferences = '[]'
    (1 row)
    ~~~

    The required constraint will force application 2's data to be replicated only within the `us-2` datacenter.
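
If application 2 is ever retired, its database can be returned to the cluster-wide defaults. A hypothetical cleanup step, not part of the scenario, using the `DISCARD` form covered in [Remove a replication zone](#remove-a-replication-zone):

{% include copy-clipboard.html %}
~~~ sql
> ALTER DATABASE app2_db CONFIGURE ZONE DISCARD;
~~~

After this, `app2_db` inherits the `.default` zone again.
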
### Stricter replication for a table and its secondary indexes

**Scenario:**

- You have 7 nodes, 5 with SSD drives and 2 with HDD drives.
- You want data replicated 3 times by default.
- However, speed and availability are important for a specific table and its indexes, which are queried very frequently, so you want the data in the table and its secondary indexes to be replicated 5 times, preferably on nodes with SSD drives.

**Approach:**

1. Start each node with `ssd` or `hdd` specified as store attributes:

    5 nodes with SSD storage:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node1 hostname> --store=path=node1,attrs=ssd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node2 hostname> --store=path=node2,attrs=ssd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node3 hostname> --store=path=node3,attrs=ssd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node4 hostname> --store=path=node4,attrs=ssd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node5 hostname> --store=path=node5,attrs=ssd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

    2 nodes with HDD storage:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node6 hostname> --store=path=node6,attrs=hdd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node7 hostname> --store=path=node7,attrs=hdd \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

    Initialize the cluster:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

2. On any node, open the [built-in SQL client](use-the-built-in-sql-client.html):

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure
    ~~~

3. Create a database and table:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE DATABASE db;
    ~~~

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE TABLE db.important_table (id INT PRIMARY KEY, data STRING);
    ~~~

4. Configure a replication zone for the table that must be replicated more strictly:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER TABLE db.important_table CONFIGURE ZONE USING num_replicas = 5, constraints = '[+ssd]';
    ~~~

5. View the replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW ZONE CONFIGURATION FOR TABLE db.important_table;
    ~~~

    ~~~
          zone_name      |                      config_sql
    +--------------------+------------------------------------------------------+
      db.important_table | ALTER TABLE db.important_table CONFIGURE ZONE USING
                         |     range_min_bytes = 1048576,
                         |     range_max_bytes = 67108864,
                         |     gc.ttlseconds = 90000,
                         |     num_replicas = 5,
                         |     constraints = '[+ssd]',
                         |     lease_preferences = '[]'
    (1 row)
    ~~~

    The secondary indexes on the table will use the table's replication zone, so all data for the table will be replicated 5 times, and the required constraint will place the data on nodes with `ssd` drives.

### Tweaking the replication of system ranges

**Scenario:**

- You have nodes spread across 7 datacenters.
- You want data replicated 5 times by default.
- For better performance, you want a copy of the meta ranges in all of the datacenters.
- To save disk space, you want the internal timeseries data replicated only 3 times.

**Approach:**

1. Start each node with a different [locality](start-a-node.html#locality) attribute:

    ~~~ shell
    $ cockroach start --insecure --advertise-addr=<node1 hostname> --locality=datacenter=us-1 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node2 hostname> --locality=datacenter=us-2 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node3 hostname> --locality=datacenter=us-3 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node4 hostname> --locality=datacenter=us-4 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node5 hostname> --locality=datacenter=us-5 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node6 hostname> --locality=datacenter=us-6 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    $ cockroach start --insecure --advertise-addr=<node7 hostname> --locality=datacenter=us-7 \
    --join=<node1 hostname>,<node2 hostname>,<node3 hostname>
    ~~~

    Initialize the cluster:

    ~~~ shell
    $ cockroach init --insecure --host=<address of any node>
    ~~~

2. On any node, open the [built-in SQL client](use-the-built-in-sql-client.html):

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure
    ~~~

3. Configure the default replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
    ~~~

4. View the replication zone:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW ZONE CONFIGURATION FOR RANGE default;
    ~~~
    ~~~
      zone_name |                config_sql
    +-----------+------------------------------------------+
      .default  | ALTER RANGE default CONFIGURE ZONE USING
                |     range_min_bytes = 1048576,
                |     range_max_bytes = 67108864,
                |     gc.ttlseconds = 90000,
                |     num_replicas = 5,
                |     constraints = '[]',
                |     lease_preferences = '[]'
    (1 row)
    ~~~

    All data in the cluster will be replicated 5 times, including both SQL data and the internal system data.

5. 
Configure the `.meta` replication zone: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER RANGE meta CONFIGURE ZONE USING num_replicas = 7; - ~~~ - - ~~~ - zone_name | config_sql - +-----------+---------------------------------------+ - .meta | ALTER RANGE meta CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 3600, - | num_replicas = 7, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The `.meta` addressing ranges will be replicated such that one copy is in all 7 datacenters, while all other data will be replicated 5 times. - -6. Configure the `.timeseries` replication zone: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER RANGE timeseries CONFIGURE ZONE USING num_replicas = 3; - ~~~ - - ~~~ - zone_name | config_sql - +-------------+---------------------------------------------+ - .timeseries | ALTER RANGE timeseries CONFIGURE ZONE USING - | range_min_bytes = 1048576, - | range_max_bytes = 67108864, - | gc.ttlseconds = 90000, - | num_replicas = 3, - | constraints = '[]', - | lease_preferences = '[]' - (1 row) - ~~~ - - The timeseries data will only be replicated 3 times without affecting the configuration of all other data. - -## See also - -- [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html) -- [`CONFIGURE ZONE`](configure-zone.html) -- [SQL Statements](sql-statements.html) -- [Table Partitioning](partitioning.html) diff --git a/src/current/v19.1/configure-zone.md b/src/current/v19.1/configure-zone.md deleted file mode 100644 index 66d58d9d548..00000000000 --- a/src/current/v19.1/configure-zone.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -title: CONFIGURE ZONE -summary: Use the CONFIGURE ZONE statement to add, modify, reset, and remove replication zones. -toc: true ---- - -Use `CONFIGURE ZONE` to add, modify, reset, and remove [replication zones](configure-replication-zones.html). To view details about existing replication zones, see [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html). - -In CockroachDB, you can use **replication zones** to control the number and location of replicas for specific sets of data, both when replicas are first added and when they are rebalanced to maintain cluster equilibrium. - -{{site.data.alerts.callout_info}} -Adding replication zones for rows and secondary indexes is an [enterprise-only](enterprise-licensing.html) feature. -{{site.data.alerts.end}} - -## Synopsis - -**alter_zone_range_stmt ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/alter_zone_range.html %} -
      - -**alter_zone_database_stmt ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/alter_zone_database.html %} -
      - -**alter_zone_table_stmt ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/alter_zone_table.html %} -
      - -**alter_zone_index_stmt ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/alter_zone_index.html %} -
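
The diagrams above correspond to four variants of the same statement, one per level of the zone hierarchy. As a plain-text sketch (the names `db`, `t`, and `t_idx` are placeholders):

~~~ sql
> ALTER RANGE default CONFIGURE ZONE USING num_replicas = 5;
> ALTER DATABASE db CONFIGURE ZONE USING num_replicas = 5;
> ALTER TABLE db.t CONFIGURE ZONE USING num_replicas = 5;
> ALTER INDEX db.t@t_idx CONFIGURE ZONE USING num_replicas = 5;
~~~
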
## Required privileges

Currently, only the `root` user can configure replication zones.

## Parameters

 Parameter | Description
-----------+-------------
`range_name` | The name of the system [range](architecture/overview.html#glossary) for which to configure the [replication zone](configure-replication-zones.html).
`database_name` | The name of the [database](create-database.html) for which to configure the [replication zone](configure-replication-zones.html).
`table_name` | The name of the [table](create-table.html) for which to configure the [replication zone](configure-replication-zones.html).
`partition_name` | The name of the [partition](partitioning.html) for which to configure the [replication zone](configure-replication-zones.html).
`index_name` | The name of the [index](indexes.html) for which to configure the [replication zone](configure-replication-zones.html).
`variable` | The name of the [variable](#variables) to change.
`value` | The value of the variable to change.
`DISCARD` | Remove a replication zone.

### Variables

{% include {{ page.version.version }}/zone-configs/variables.md %}

## Viewing schema changes

{% include {{ page.version.version }}/misc/schema-change-view-job.md %}

## Examples

### Edit a replication zone

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE t CONFIGURE ZONE USING range_min_bytes = 0, range_max_bytes = 90000, gc.ttlseconds = 89999, num_replicas = 4, constraints = '[-region=west]';
~~~

~~~
CONFIGURE ZONE 1
~~~

### Edit the default replication zone

{% include {{ page.version.version }}/zone-configs/edit-the-default-replication-zone.md %}

### Create a replication zone for a database

{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-database.md %}

### Create a replication zone for a table

{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table.md %}

### Create a replication zone for a secondary index

{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-secondary-index.md %}

### Create a replication zone for a table or secondary index partition

{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-table-partition.md %}

### Create a replication zone for a system range

{% include {{ page.version.version }}/zone-configs/create-a-replication-zone-for-a-system-range.md %}

### Reset a replication zone

{% include {{ page.version.version }}/zone-configs/reset-a-replication-zone.md %}

### Remove a replication zone

{% include {{ page.version.version }}/zone-configs/remove-a-replication-zone.md %}

## See also

- [Configure Replication Zones](configure-replication-zones.html)
- [`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html)
- [`ALTER DATABASE`](alter-database.html)
- [`ALTER TABLE`](alter-table.html)
- [`ALTER INDEX`](alter-index.html)
- [`ALTER RANGE`](alter-range.html)
- [`SHOW JOBS`](show-jobs.html)
- [Table Partitioning](partitioning.html)
- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v19.1/connection-parameters.md b/src/current/v19.1/connection-parameters.md
deleted file mode 100644
index 6e595527a2e..00000000000
--- a/src/current/v19.1/connection-parameters.md
+++ /dev/null
@@ -1,233 +0,0 @@
---
title: Client Connection Parameters
summary: This page describes the parameters used to establish a client connection.
toc: true
---

Client applications, including [`cockroach` client commands](cockroach-commands.html), work by establishing a network connection to a CockroachDB cluster. The client connection parameters determine which CockroachDB cluster they connect to, and how to establish this network connection.

## Supported connection parameters

Most client apps, including `cockroach` client commands, determine which CockroachDB server to connect to using a [PostgreSQL connection URL](#connect-using-a-url). When using a URL, a client can also specify additional SQL-level parameters. This mode provides the most configuration flexibility.

In addition, all `cockroach` client commands also accept [discrete connection parameters](#connect-using-discrete-parameters) that can specify the connection parameters separately from a URL.

## When to use a URL and when to use discrete parameters

Specifying client parameters using a URL may be more convenient during experimentation, as it facilitates copy-pasting the connection parameters (the URL) between different tools: the output of `cockroach start`, other `cockroach` commands, GUI database visualizers, programming tools, etc.

Discrete parameters may be more convenient in automation, where the components of the configuration are filled in separately from different variables in a script or a service manager.

## Connect using a URL

A connection URL has the following format:

{% include_cached copy-clipboard.html %}
~~~
postgres://<username>:<password>@<host>:<port>/<database>?<parameters>
~~~

 Component | Description | Required
-----------|-------------|---------
`<username>` | The [SQL user](create-and-manage-users.html) that will own the client session. | ✗
`<password>` | The user's password. It is not recommended to pass the password in the URL directly.<br><br>[Find more detail about how CockroachDB handles passwords](authentication.html#client-authentication). | ✗
`<host>` | The host name or address of a CockroachDB node or load balancer. | Required by most client drivers.
`<port>` | The port number of the SQL interface of the CockroachDB node or load balancer. The default port number for CockroachDB is 26257. Use this value when in doubt. | Required by most client drivers.
`<database>` | A database name to use as [current database](sql-name-resolution.html#current-database). Defaults to `defaultdb`. | ✗
`<parameters>` | [Additional connection parameters](#additional-connection-parameters), including SSL/TLS certificate settings. | ✗

{{site.data.alerts.callout_info}}
For `cockroach` commands that accept a URL, you can specify the URL with the command-line flag `--url`. If `--url` is not specified but the environment variable `COCKROACH_URL` is defined, the environment variable is used. Otherwise, the `cockroach` command will use [discrete connection parameters](#connect-using-discrete-parameters) as described below.
{{site.data.alerts.end}}

{{site.data.alerts.callout_info}}
The `<database>` part is not used for [`cockroach` commands](cockroach-commands.html) other than [`cockroach sql`](use-the-built-in-sql-client.html). A warning is currently printed if it is mistakenly specified, and future versions of CockroachDB may return an error in that case.
{{site.data.alerts.end}}

### Additional connection parameters

The following additional parameters can be passed after the `?` character in the URL:

Parameter | Description | Default value
----------|-------------|---------------
`application_name` | An initial value for the [`application_name` session variable](set-vars.html).<br><br>Note: For [Java JDBC](build-a-java-app-with-cockroachdb.html), use `ApplicationName`. | Empty string.
`sslmode` | Which type of secure connection to use: `disable`, `allow`, `prefer`, `require`, `verify-ca` or `verify-full`. See [Secure Connections With URLs](#secure-connections-with-urls) for details. | `disable`
`sslrootcert` | Path to the [CA certificate](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.
`sslcert` | Path to the [client certificate](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.
`sslkey` | Path to the [client private key](create-security-certificates.html), when `sslmode` is not `disable`. | Empty string.

### Secure connections with URLs

The following values are supported for `sslmode`, although only the first and the last are recommended for use.

Parameter | Description | Recommended for use
----------|-------------|--------------------
`sslmode=disable` | Do not use an encrypted, secure connection at all. | Use during development.
`sslmode=allow` | Enable a secure connection only if the server requires it.<br><br>**Not supported in all clients.** |
`sslmode=prefer` | Try to establish a secure connection, but accept an insecure connection if the server does not support secure connections.<br><br>**Not supported in all clients.** |
`sslmode=require` | Force a secure connection. An error occurs if the secure connection cannot be established. |
`sslmode=verify-ca` | Force a secure connection and verify that the server certificate is signed by a known CA. |
`sslmode=verify-full` | Force a secure connection, verify that the server certificate is signed by a known CA, and verify that the server address matches the one specified in the certificate. | Use for [secure deployments](secure-a-cluster.html).

{{site.data.alerts.callout_danger}}
Some client drivers and the `cockroach` commands do not support `sslmode=allow` and `sslmode=prefer`. Check the documentation of your SQL driver to determine whether these options are supported.
{{site.data.alerts.end}}

### Example URL for an insecure connection

The following URL is suitable to connect to a CockroachDB node using an insecure connection:

{% include_cached copy-clipboard.html %}
~~~
postgres://root@servername:26257/mydb?sslmode=disable
~~~

This specifies a connection for the `root` user to server `servername` on port 26257 (the default CockroachDB SQL port), with `mydb` set as the current database. `sslmode=disable` makes the connection insecure.

### Example URL for a secure connection

The following URL is suitable to connect to a CockroachDB node using a secure connection:

{% include_cached copy-clipboard.html %}
~~~
postgres://root@servername:26257/mydb?sslmode=verify-full&sslrootcert=path/to/ca.crt&sslcert=path/to/client.username.crt&sslkey=path/to/client.username.key
~~~

This uses the following components:

- User `root`
- Host name `servername`, port number 26257 (the default CockroachDB SQL port)
- Current database `mydb`
- SSL/TLS mode `verify-full`:
  - Root CA certificate `path/to/ca.crt`
  - Client certificate `path/to/client.username.crt`
  - Client key `path/to/client.username.key`

For details about how to create and manage SSL/TLS certificates, see [Create Security Certificates](create-security-certificates.html) and [Rotate Certificates](rotate-certificates.html).

## Connect using discrete parameters

Most [`cockroach` commands](cockroach-commands.html) accept connection parameters as separate, discrete command-line flags, in addition to (or instead of) `--url`, which [specifies all parameters as a URL](#connect-using-a-url).

For each command-line flag that directs a connection parameter, CockroachDB also recognizes an environment variable. The environment variable is used when the command-line flag is not specified.

{% include {{ page.version.version }}/sql/connection-parameters.md %}

### Example command-line flags for an insecure connection

The following command-line flags establish an insecure connection:

{% include_cached copy-clipboard.html %}
~~~
--user=root \
--host=<servername>
--insecure
~~~

This specifies a connection for the `root` user to server `servername` on port 26257 (the default CockroachDB SQL port). `--insecure` makes the connection insecure.
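
As a concrete (hypothetical) invocation, the same flags can be passed to `cockroach sql`; `<servername>` is a placeholder for your node or load balancer address:

{% include_cached copy-clipboard.html %}
~~~ shell
$ cockroach sql --user=root --host=<servername> --insecure
~~~
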
### Example command-line flags for a secure connection

The following command-line flags establish a secure connection:

{% include_cached copy-clipboard.html %}
~~~
--user=root \
--host=<servername>
--certs-dir=path/to/certs
~~~

This uses the following components:

- User `root`
- Host name `servername`, port number 26257 (the default CockroachDB SQL port)
- SSL/TLS enabled, with settings:
  - Root CA certificate `path/to/certs/ca.crt`
  - Client certificate `path/to/certs/client.<username>.crt` (`path/to/certs/client.root.crt` with `--user root`)
  - Client key `path/to/certs/client.<username>.key` (`path/to/certs/client.root.key` with `--user root`)

{{site.data.alerts.callout_info}}
When using discrete connection parameters, the file names of the CA and client certificates and client key are derived automatically from the value of `--certs-dir`.
{{site.data.alerts.end}}

## Using both URL and client parameters

Most `cockroach` commands accept both a URL and client parameters. The information contained therein is combined in the order it appears in the command line.

This combination is useful so that discrete command-line flags can override settings not otherwise set in the URL.

### Example override of the current database

The `cockroach start` command prints out the following connection URL, which connects to the `defaultdb` database:

{% include_cached copy-clipboard.html %}
~~~
postgres://root@servername:26257/?sslmode=disable
~~~

To specify `mydb` as the current database using [`cockroach sql`](use-the-built-in-sql-client.html), run the following command:

{% include_cached copy-clipboard.html %}
~~~
cockroach sql \
--url "postgres://root@servername:26257/?sslmode=disable" \
--database mydb
~~~

This is equivalent to:

{% include_cached copy-clipboard.html %}
~~~
cockroach sql --url "postgres://root@servername:26257/mydb?sslmode=disable"
~~~

## See also

- [`cockroach` commands](cockroach-commands.html)
- [Create Security Certificates](create-security-certificates.html)
- [Secure a Cluster](secure-a-cluster.html)
- [Create and Manage Users](create-and-manage-users.html)
diff --git a/src/current/v19.1/constraints.md b/src/current/v19.1/constraints.md
deleted file mode 100644
index 6bc9ede9167..00000000000
--- a/src/current/v19.1/constraints.md
+++ /dev/null
@@ -1,125 +0,0 @@
---
title: Constraints
summary: Constraints offer additional data integrity by enforcing conditions on the data within a column.
toc: true
---

Constraints offer additional data integrity by enforcing conditions on the data within a column. Whenever values are manipulated (inserted, deleted, or updated), constraints are checked and modifications that violate constraints are rejected.

For example, the `UNIQUE` constraint requires that all values in a column be unique from one another (except *NULL* values). If you attempt to write a duplicate value, the constraint rejects the entire statement.

## Supported constraints

 Constraint | Description
------------|-------------
 [`CHECK`](check.html) | Values must return `TRUE` or `NULL` for a Boolean expression.
 [`DEFAULT` value](default-value.html) | If a value is not defined for the constrained column in an `INSERT` statement, the `DEFAULT` value is written to the column.
 [`FOREIGN KEY`](foreign-key.html) | Values must exactly match existing values from the column it references.
 [`NOT NULL`](not-null.html) | Values may not be *NULL*.
 [`PRIMARY KEY`](primary-key.html) | Values must uniquely identify each row *(one per table)*. This behaves as if the `NOT NULL` and `UNIQUE` constraints were applied, and it automatically creates an [index](indexes.html) for the table using the constrained columns.
 [`UNIQUE`](unique.html) | Each non-*NULL* value must be unique. This also automatically creates an [index](indexes.html) for the table using the constrained columns.

## Using constraints

### Add constraints

How you add constraints depends on the number of columns you want to constrain, as well as whether or not the table is new.

- **One column of a new table** has its constraints defined after the column's data type. For example, this statement applies the `PRIMARY KEY` constraint to `foo.a`:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE TABLE foo (a INT PRIMARY KEY);
    ~~~

- **Multiple columns of a new table** have their constraints defined after the table's columns. For example, this statement applies the `PRIMARY KEY` constraint to `bar`'s columns `a` and `b`:

    {% include copy-clipboard.html %}
    ~~~ sql
    > CREATE TABLE bar (a INT, b INT, PRIMARY KEY (a,b));
    ~~~

    {{site.data.alerts.callout_info}}
    The `DEFAULT` and `NOT NULL` constraints cannot be applied to multiple columns.
    {{site.data.alerts.end}}

- **Existing tables** can have the following constraints added:

    - `CHECK`, `FOREIGN KEY`, and `UNIQUE` constraints can be added through [`ALTER TABLE...ADD CONSTRAINT`](add-constraint.html). For example, this statement adds the `UNIQUE` constraint to `baz.id`:

        {% include copy-clipboard.html %}
        ~~~ sql
        > ALTER TABLE baz ADD CONSTRAINT id_unique UNIQUE (id);
        ~~~

    - `DEFAULT` values can be added through [`ALTER TABLE...ALTER COLUMN`](alter-column.html#set-or-change-a-default-value). For example, this statement adds the `DEFAULT` value constraint to `baz.bool`:

        {% include copy-clipboard.html %}
        ~~~ sql
        > ALTER TABLE baz ALTER COLUMN bool SET DEFAULT true;
        ~~~

    - `PRIMARY KEY` and `NOT NULL` constraints cannot be added or changed. However, you can go through [this process](#table-migrations-to-add-or-change-immutable-constraints) to migrate data from your current table to a new table with the constraints you want to apply.

#### Order of constraints

The order in which you list constraints is not important because constraints are applied to every modification of their respective tables or columns.

#### Name constraints on new tables

You can name constraints applied to new tables using the `CONSTRAINT` clause before defining the constraint:

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE foo (a INT CONSTRAINT another_name PRIMARY KEY);
~~~

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE bar (a INT, b INT, CONSTRAINT yet_another_name PRIMARY KEY (a,b));
~~~

### View constraints

To view a table's constraints, use [`SHOW CONSTRAINTS`](show-constraints.html) or [`SHOW CREATE`](show-create.html).
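
For instance, to inspect the constraints created by the examples above on the `bar` table:

{% include copy-clipboard.html %}
~~~ sql
> SHOW CONSTRAINTS FROM bar;
~~~

This lists each constraint's name and type, which is useful before dropping or changing one as described below.
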
- -### Remove constraints - -The procedure for removing a constraint depends on its type: - -Constraint Type | Procedure ------------------|----------- -[`CHECK`](check.html) | Use [`DROP CONSTRAINT`](drop-constraint.html) -[`DEFAULT` value](default-value.html) | Use [`ALTER COLUMN`](alter-column.html#remove-default-constraint) -[`FOREIGN KEY`](foreign-key.html) | Use [`DROP CONSTRAINT`](drop-constraint.html) -[`NOT NULL`](not-null.html) | Use [`ALTER COLUMN`](alter-column.html#remove-not-null-constraint) -[`PRIMARY KEY`](primary-key.html) | Primary Keys cannot be removed. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). -[`UNIQUE`](unique.html) | The `UNIQUE` constraint cannot be dropped directly. To remove the constraint, [drop the index](drop-index.html) that was created by the constraint, e.g., `DROP INDEX my_unique_constraint CASCADE` (note that `CASCADE` is required for dropping indexes used by unique constraints). - -### Change constraints - -The procedure for changing a constraint depends on its type: - -Constraint Type | Procedure ------------------|----------- -[`CHECK`](check.html) | [Issue a transaction](transactions.html#syntax) that adds a new `CHECK` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). -[`DEFAULT` value](default-value.html) | The `DEFAULT` value can be changed through [`ALTER COLUMN`](alter-column.html). -[`FOREIGN KEY`](foreign-key.html) | [Issue a transaction](transactions.html#syntax) that adds a new `FOREIGN KEY` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). -[`NOT NULL`](not-null.html) | The `NOT NULL` constraint cannot be changed, only removed. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). -[`PRIMARY KEY`](primary-key.html) | Primary Keys cannot be modified. However, you can move the table's data to a new table with [this process](#table-migrations-to-add-or-change-immutable-constraints). -[`UNIQUE`](unique.html) | [Issue a transaction](transactions.html#syntax) that adds a new `UNIQUE` constraint ([`ADD CONSTRAINT`](add-constraint.html)), and then remove the existing one ([`DROP CONSTRAINT`](drop-constraint.html)). - -#### Table migrations to add or change immutable constraints - -If you want to make a change to an immutable constraint, you can use the following process: - -1. [Create a new table](create-table.html) with the constraints you want to apply. -2. Move the data from the old table to the new one using [`INSERT` from a `SELECT` statement](insert.html#insert-from-a-select-statement). -3. [Drop the old table](drop-table.html), and then [rename the new table to the old name](rename-table.html). This cannot be done transactionally. - -## See also - -- [`CREATE TABLE`](create-table.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE`](show-create.html) diff --git a/src/current/v19.1/cost-based-optimizer.md b/src/current/v19.1/cost-based-optimizer.md deleted file mode 100644 index ae06c6a63c0..00000000000 --- a/src/current/v19.1/cost-based-optimizer.md +++ /dev/null @@ -1,658 +0,0 @@ ---- -title: Cost-Based Optimizer -summary: The cost-based optimizer seeks the lowest cost for a query, usually related to time. 
toc: true
---

The cost-based optimizer seeks the lowest cost for a query, usually related to time.

In versions prior to 2.1, a heuristic planner was used to generate query execution plans. The heuristic planner is only used in the following cases:

- If your query uses functionality that is not yet supported by the cost-based optimizer. For more information about the types of queries that are supported, see [Types of statements supported by the cost-based optimizer](#types-of-statements-supported-by-the-cost-based-optimizer).
- If you explicitly turn off the optimizer. For more information, see [How to turn the optimizer off](#how-to-turn-the-optimizer-off).

## How is cost calculated?

A given SQL query can have thousands of equivalent query plans with vastly different execution times. The cost-based optimizer enumerates these plans and chooses the lowest cost plan.

Cost is roughly calculated by:

- Estimating how much time each node in the query plan will use to process all results
- Modeling how data flows through the query plan

The most important factor in determining the quality of a plan is cardinality (i.e., the number of rows); the fewer rows each SQL operator needs to process, the faster the query will run.

## View query plan

To see whether a query will be run with the cost-based optimizer, run the query with [`EXPLAIN (OPT)`](explain.html#opt-option). The `OPT` option displays a query plan tree, along with some information that was used to plan the query.

If the query is unsupported, it will return an error message that starts with something like `pq: unsupported statement`. In such cases, the query will be run with the legacy heuristic planner. This should be rare, since the optimizer [supports most SQL statements](#types-of-statements-supported-by-the-cost-based-optimizer).

## Types of statements supported by the cost-based optimizer

The cost-based optimizer supports most SQL statements. Specifically, the following types of statements are supported:

- [`CREATE TABLE`](create-table.html)
- [`UPDATE`](update.html)
- [`INSERT`](insert.html), including:
  - `INSERT .. ON CONFLICT DO NOTHING`
  - `INSERT .. ON CONFLICT .. DO UPDATE`
- [`UPSERT`](upsert.html)
- [`DELETE`](delete.html)
- `FILTER` clauses on [aggregate functions](functions-and-operators.html#aggregate-functions)
- [Sequences](create-sequence.html)
- [Views](views.html)
- All [`SELECT`](select-clause.html) statements that do not include window functions
- All `UNION` statements that do not include window functions
- All `VALUES` statements that do not include window functions
- Most correlated subqueries — for exceptions, see [Correlated subqueries](subqueries.html#correlated-subqueries).

This is not meant to be an exhaustive list. To check whether a particular query will be run with the cost-based optimizer, follow the instructions in the [View query plan](#view-query-plan) section.

## Table statistics

The cost-based optimizer can often find more performant query plans if it has access to statistical data on the contents of your tables. This data needs to be generated from scratch for new tables, and regenerated periodically for existing tables.

New in v19.1: By default, CockroachDB generates table statistics automatically as tables are updated.
It does this [using a background job](create-statistics.html#view-statistics-jobs) that automatically determines which columns to get statistics on — specifically, it chooses: - -- Columns that are part of the primary key or an index (in other words, all indexed columns). -- Up to 100 non-indexed columns. - -{{site.data.alerts.callout_info}} -[Schema changes](online-schema-changes.html) trigger automatic statistics collection for the affected table(s). -{{site.data.alerts.end}} - -### Controlling automatic statistics - -For best query performance, most users should leave automatic statistics enabled with the default settings. The information provided in this section is useful for troubleshooting or performance tuning by advanced users. - -#### Controlling statistics refresh rate - -Statistics are refreshed in the following cases: - -1. When there are no statistics. -2. When it's been a long time since the last refresh, where "long time" is defined according to a moving average of the time across the last several refreshes. -3. After each mutation operation ([`INSERT`](insert.html), [`UPDATE`](update.html), or [`DELETE`](delete.html)), the probability of a refresh is calculated using a formula that takes the [cluster settings](cluster-settings.html) shown below as inputs. These settings define the target number of rows in a table that should be stale before statistics on that table are refreshed. Increasing either setting will reduce the frequency of refreshes. In particular, `min_stale_rows` impacts the frequency of refreshes for small tables, while `fraction_stale_rows` has more of an impact on larger tables. - -| Setting | Default Value | Details | -|------------------------------------------------------+---------------+--------------------------------------------------------------------------------------| -| `sql.stats.automatic_collection.fraction_stale_rows` | 0.2 | Target fraction of stale rows per table that will trigger a statistics refresh | -| `sql.stats.automatic_collection.min_stale_rows` | 500 | Target minimum number of stale rows per table that will trigger a statistics refresh | - -{{site.data.alerts.callout_info}} -Because the formula for statistics refreshes is probabilistic, you should not expect to see statistics update immediately after changing these settings, or immediately after exactly 500 rows have been updated. -{{site.data.alerts.end}} - -#### Turning off statistics - -If you need to turn off automatic statistics collection, follow the steps below: - -1. Run the following statement to disable the automatic statistics [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING sql.stats.automatic_collection.enabled = false; - ~~~ - -2. Use the [`SHOW STATISTICS`](show-statistics.html) statement to view automatically generated statistics. - -3. Delete the automatically generated statistics using the following statement: - - {% include copy-clipboard.html %} - ~~~ sql - > DELETE FROM system.table_statistics WHERE true; - ~~~ - -4. Restart the nodes in your cluster to clear the statistics caches. - -For instructions showing how to manually generate statistics, see the examples in the [`CREATE STATISTICS` documentation](create-statistics.html). - -## Query plan cache - -New in v19.1: CockroachDB uses a cache for the query plans generated by the optimizer. 
This can lead to faster query execution since the database can reuse a query plan that was previously calculated, rather than computing a new plan each time a query is executed. - -The query plan cache is enabled by default. To disable it, execute the following statement: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING sql.query_cache.enabled = false; -~~~ - -Finally, note that only the following statements use the plan cache: - -- [`SELECT`](select-clause.html) -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) -- [`DELETE`](delete.html) - -## Join reordering - -New in v19.1: The cost-based optimizer will explore additional join orderings in an attempt to find the lowest-cost execution plan for a query involving multiple joins, which can lead to significantly better performance in some cases. - -Because this process leads to an exponential increase in the number of possible execution plans for such queries, it's only used to reorder subtrees containing 4 or fewer joins by default. - -To change this setting, which is controlled by the `reorder_joins_limit` [session variable](set-vars.html), run the statement shown below. To disable this feature, set the variable to `0`. - -{% include copy-clipboard.html %} -~~~ sql -> SET reorder_joins_limit = 6; -~~~ - -{{site.data.alerts.callout_danger}} -We strongly recommend not setting this value higher than 8 to avoid performance degradation. If set too high, the cost of generating and costing execution plans can end up dominating the total execution time of the query. -{{site.data.alerts.end}} - -For more information about the difficulty of selecting an optimal join ordering, see our blog post [An Introduction to Join Ordering](https://www.cockroachlabs.com/blog/join-ordering-pt1/). - -## Join hints - -New in v19.1: The optimizer supports hint syntax to force the use of a specific join algorithm. The algorithm is specified between the join type (`INNER`, `LEFT`, etc.) and the `JOIN` keyword, for example: - -- `INNER HASH JOIN` -- `OUTER MERGE JOIN` -- `LEFT LOOKUP JOIN` -- `CROSS HASH JOIN` - -Note that the hint cannot be specified with a bare hint keyword (e.g., `MERGE`) - in that case, the `INNER` keyword must be added. For example, `a INNER MERGE JOIN b` will work, but `a MERGE JOIN b` will not work. - -{{site.data.alerts.callout_info}} -Join hints cannot be specified with a bare hint keyword (e.g., `MERGE`) due to SQL's implicit `AS` syntax. If you're not careful, you can make `MERGE` be an alias for a table; for example, `a MERGE JOIN b` will be interpreted as having an implicit `AS` and be executed as `a AS MERGE JOIN b`, which is just a long way of saying `a JOIN b`. Because the resulting query might execute without returning any hint-related error (because it is valid SQL), it will seem like the join hint "worked", but actually it didn't affect which join algorithm was used. In this case, the correct syntax is `a INNER MERGE JOIN b`. -{{site.data.alerts.end}} - -### Supported join algorithms - -- `HASH`: Forces a hash join; in other words, it disables merge and lookup joins. A hash join is always possible, even if there are no equality columns - CockroachDB considers the nested loop join with no index a degenerate case of the hash join (i.e., a hash table with one bucket). - -- `MERGE`: Forces a merge join, even if it requires re-sorting both sides of the join. - -- `LOOKUP`: Forces a lookup join into the right side; the right side must be a table with a suitable index. 
Note that `LOOKUP` can only be used with `INNER` and `LEFT` joins. - -If it is not possible to use the algorithm specified in the hint, an error is signaled. - -### Additional considerations - -- This syntax is consistent with the [SQL Server syntax for join hints](https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-join?view=sql-server-2017), except that: - - - SQL Server uses `LOOP` instead of `LOOKUP`. - - - CockroachDB does not support `LOOP` and instead supports `LOOKUP` for the specific case of nested loop joins with an index. - -- When a join hint is specified, the two tables will not be reordered by the optimizer. The reordering behavior has the following characteristics, which can be affected by hints: - - - Given `a JOIN b`, CockroachDB will not try to commute to `b JOIN a`. This means that you will need to pay attention to this ordering, which is especially important for lookup joins. Without a hint, `a JOIN b` might be executed as `b INNER LOOKUP JOIN a` using an index into `a`, whereas `a INNER LOOKUP JOIN b` requires an index into `b`. - - - `(a JOIN b) JOIN c` might be changed to `a JOIN (b JOIN c)`, but this does not happen if `a JOIN b` uses a hint; the hint forces that particular join to happen as written in the query. - -- Hint usage should be reconsidered with each new release of CockroachDB. Due to improvements in the optimizer, hints specified to work with an older version may cause decreased performance in a newer version. - -## Preferring the nearest index - -New in v19.1: Given multiple identical [indexes](indexes.html) that have different locality constraints using [replication zones](configure-replication-zones.html), the optimizer will prefer the index that is closest to the gateway node that is planning the query. In a properly configured geo-distributed cluster, this can lead to performance improvements due to improved data locality and reduced network traffic. - -This feature enables scenarios such as: - -- Reference data such as a table of postal codes that can be replicated to different regions, and queries will use the copy in the same region. See [Example - zone constraints](#zone-constraints) for more details. -- Optimizing for local reads (potentially at the expense of writes) by adding leaseholder preferences to your zone configuration. See [Example - leaseholder preferences](#leaseholder-preferences) for more details. - -{{site.data.alerts.callout_danger}} -This feature is only available to users with an [enterprise license](enterprise-licensing.html). -{{site.data.alerts.end}} - -To take advantage of this feature, you need to: - -1. Have an [enterprise license](enterprise-licensing.html). -2. Determine which data consists of reference tables that are rarely updated (such as postal codes) and can therefore be easily replicated to different regions. -3. Create multiple [secondary indexes](indexes.html) on the reference tables. **Note that these indexes must include (in key or using [`STORED`](create-index.html#store-columns)) *every* column that you wish to query**. For example, if you run `SELECT * from db.table` and not every column of `db.table` is in the set of secondary indexes you created, the optimizer will have no choice but to fall back to the primary index. -4. Create [replication zones](configure-replication-zones.html) for each index. - -With the above pieces in place, the optimizer will automatically choose the index nearest the gateway node that is planning the query. 
- -{{site.data.alerts.callout_info}} -The optimizer does not actually understand geographic locations, i.e., the relative closeness of the gateway node to other nodes that are located to its "east" or "west". It is matching against the [node locality constraints](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) you provided when you configured your replication zones. -{{site.data.alerts.end}} - -### Examples - -#### Zone constraints - -We can demonstrate the necessary configuration steps using a local cluster. The instructions below assume that you are already familiar with: - -- How to [start a local cluster](start-a-local-cluster.html). -- The syntax for [assigning node locality when configuring replication zones](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes). -- Using [the built-in SQL client](use-the-built-in-sql-client.html). - -First, start 3 local nodes as shown below. Use the [`--locality`](start-a-node.html#locality) flag to put them each in a different region as denoted by `region=usa`, `region=eu`, etc. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=usa --insecure --store=/tmp/node0 --listen-addr=localhost:26257 \ - --http-port=8888 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=eu --insecure --store=/tmp/node1 --listen-addr=localhost:26258 \ - --http-port=8889 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=apac --insecure --store=/tmp/node2 --listen-addr=localhost:26259 \ - --http-port=8890 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --insecure --host=localhost --port=26257 -~~~ - -Next, from the SQL client, add your organization name and enterprise license: - -{% include copy-clipboard.html %} -~~~ sh -$ cockroach sql --insecure --host=localhost --port=26257 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'FooCorp - Local Testing'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxx'; -~~~ - -Create a test database and table. The table will have 3 indexes into the same data. Later, we'll configure the cluster to associate each of these indexes with a different datacenter using replication zones. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE IF NOT EXISTS test; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> USE test; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -CREATE TABLE postal_codes ( - id INT PRIMARY KEY, - code STRING, - INDEX idx_eu (id) STORING (code), - INDEX idx_apac (id) STORING (code) -); -~~~ - -Next, we modify the replication zone configuration via SQL so that: - -- Nodes in the USA will use the primary key index. -- Nodes in the EU will use the `postal_codes@idx_eu` index (which is identical to the primary key index). -- Nodes in APAC will use the `postal_codes@idx_apac` index (which is also identical to the primary key index). 
- -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE postal_codes CONFIGURE ZONE USING constraints='["+region=usa"]'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -ALTER INDEX postal_codes@idx_eu CONFIGURE ZONE USING constraints='["+region=eu"]'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -ALTER INDEX postal_codes@idx_apac CONFIGURE ZONE USING constraints='["+region=apac"]'; -~~~ - -To verify this feature is working as expected, we'll query the database from each of our local nodes as shown below. Each node has been configured to be in a different region, and it should now be using the index pinned to that region. - -{{site.data.alerts.callout_info}} -In a geo-distributed scenario with a cluster that spans multiple datacenters, it may take time for the optimizer to fetch schemas from other nodes the first time a query is planned; thereafter, the schema should be cached locally. - -For example, if you have 11 nodes, you may see 11 queries with high latency due to schema cache misses. Once all nodes have cached the schema locally, the latencies will drop. - -This behavior may also cause the [Statements page of the Web UI](admin-ui-statements-page.html) to show misleadingly high latencies until schemas are cached locally. -{{site.data.alerts.end}} - -As expected, the node in the USA region uses the primary key index. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost --port=26257 --database=test -e 'EXPLAIN SELECT * FROM postal_codes WHERE id=1;' -~~~ - -~~~ - tree | field | description -+------+-------+----------------------+ - scan | | - | table | postal_codes@primary - | spans | /1-/1/# -(3 rows) -~~~ - -As expected, the node in the EU uses the `idx_eu` index. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost --port=26258 --database=test -e 'EXPLAIN SELECT * FROM postal_codes WHERE id=1;' -~~~ - -~~~ - tree | field | description -+------+-------+---------------------+ - scan | | - | table | postal_codes@idx_eu - | spans | /1-/2 -(3 rows) -~~~ - -As expected, the node in APAC uses the `idx_apac` index. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost --port=26259 --database=test -e 'EXPLAIN SELECT * FROM postal_codes WHERE id=1;' -~~~ - -~~~ - tree | field | description -+------+-------+-----------------------+ - scan | | - | table | postal_codes@idx_apac - | spans | /1-/2 -(3 rows) -~~~ - -You'll need to make changes to the above configuration to reflect your [production environment](recommended-production-settings.html), but the concepts will be the same. - -#### Leaseholder preferences - -If you provide [leaseholder preferences](configure-replication-zones.html#lease_preferences) in addition to replication zone constraints, the optimizer will attempt to take your leaseholder preferences into account as well when selecting an index for your query. There are several factors to keep in mind: - -- Zone constraints are always respected (hard constraint), whereas lease preferences are taken into account as "additional information" -- as long as they do not contradict the zone constraints. - -- The optimizer does not consider the real-time location of leaseholders when selecting an index; it is pattern matching on the text values passed in the configuration (e.g., the [`ALTER INDEX`](alter-index.html) statements shown below). For the same reason, the optimizer only matches against the first locality in your `lease_preferences` array. 
- -- The optimizer may use an index that satisfies your leaseholder preferences even though that index has moved to a different node/region due to [leaseholder rebalancing](architecture/replication-layer.html#leaseholder-rebalancing). This can cause slower performance than you expected. Therefore, you should only use this feature if you’re confident you know where the leaseholders will end up based on your cluster's usage patterns. We recommend thoroughly testing your configuration to ensure the optimizer is selecting the index(es) you expect. - -In this example, we'll set up an authentication service using the access token / refresh token pattern from [OAuth 2](https://www.digitalocean.com/community/tutorials/an-introduction-to-oauth-2). To support fast local reads in our geo-distributed use case, we will have 3 indexes into the same authentication data: one for each region of our cluster. We configure each index using zone configurations and lease preferences so that the optimizer will use the local index for better performance. - -The instructions below assume that you are already familiar with: - -- How to [start a local cluster](start-a-local-cluster.html). -- The syntax for [assigning node locality when configuring replication zones](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes). -- Using [the built-in SQL client](use-the-built-in-sql-client.html). - -First, start 3 local nodes as shown below. Use the [`--locality`](start-a-node.html#locality) flag to put them each in a different region. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=us-east --insecure --store=/tmp/node0 --listen-addr=localhost:26257 \ - --http-port=8888 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=us-central --insecure --store=/tmp/node1 --listen-addr=localhost:26258 \ - --http-port=8889 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --locality=region=us-west --insecure --store=/tmp/node2 --listen-addr=localhost:26259 \ - --http-port=8890 --join=localhost:26257,localhost:26258,localhost:26259 --background -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --insecure --host=localhost --port=26257 -~~~ - -From the SQL client, add your organization name and enterprise license: - -{% include copy-clipboard.html %} -~~~ sh -$ cockroach sql --insecure --host=localhost --port=26257 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING cluster.organization = 'FooCorp - Local Testing'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING enterprise.license = 'xxxxx'; -~~~ - -Create an authentication database and table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE if NOT EXISTS auth; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> USE auth; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE token ( - token_id VARCHAR(100) NULL, - access_token VARCHAR(4000) NULL, - refresh_token VARCHAR(4000) NULL - ); -~~~ - -Create the indexes for each region: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX token_id_west_idx ON token (token_id) STORING (access_token, refresh_token); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX token_id_central_idx ON token (token_id) STORING (access_token, refresh_token); -~~~ - -{% include 
copy-clipboard.html %} -~~~ sql -> CREATE INDEX token_id_east_idx ON token (token_id) STORING (access_token, refresh_token); -~~~ - -Enter zone configurations to distribute replicas across the cluster as follows: - -- For the "East" index, store 2 replicas in the East, 2 in Central, and 1 in the West. Further, prefer that the leaseholders for that index live in the East or, failing that, in the Central region. -- Follow the same replica and leaseholder patterns for each of the Central and West regions. - -The idea is that, for example, `token_id_east_idx` will have sufficient replicas (2/5) so that even if one replica goes down, the leaseholder will stay in the East region. That way, if a query comes in that accesses the columns covered by that index from the East gateway node, the optimizer will select `token_id_east_idx` for fast reads. - -{{site.data.alerts.callout_info}} -The `ALTER TABLE` statement below is not required since it's later made redundant by the `token_id_west_idx` index. In production, you might go with the `ALTER TABLE` to put your table's lease preferences in the West, and then create only 2 indexes (for East and Central); however, the use of 3 indexes makes the example easier to understand. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE token CONFIGURE ZONE USING - num_replicas = 5, constraints = '{+region=us-east: 1, +region=us-central: 2, +region=us-west: 2}', lease_preferences = '[[+region=us-west], [+region=us-central]]'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX token_id_east_idx CONFIGURE ZONE USING num_replicas = 5, - constraints = '{+region=us-east: 2, +region=us-central: 2, +region=us-west: 1}', lease_preferences = '[[+region=us-east], [+region=us-central]]'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX token_id_central_idx CONFIGURE ZONE USING num_replicas = 5, - constraints = '{+region=us-east: 2, +region=us-central: 2, +region=us-west: 1}', lease_preferences = '[[+region=us-central], [+region=us-east]]'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER INDEX token_id_west_idx CONFIGURE ZONE USING num_replicas = 5, - constraints = '{+region=us-west: 2, +region=us-central: 2, +region=us-east: 1}', lease_preferences = '[[+region=us-west], [+region=us-central]]'; -~~~ - -Next let's [check our zone configurations](show-zone-configurations.html) to make sure they match our expectation: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW ZONE CONFIGURATIONS; -~~~ - -The output should include the following: - -~~~ -auth.token | ALTER TABLE auth.public.token CONFIGURE ZONE USING - | num_replicas = 5, - | constraints = '{+region=us-central: 2, +region=us-east: 1, +region=us-west: 2}', - | lease_preferences = '[[+region=us-west], [+region=us-central]]' -auth.token@token_id_east_idx | ALTER INDEX auth.public.token@token_id_east_idx CONFIGURE ZONE USING - | num_replicas = 5, - | constraints = '{+region=us-central: 2, +region=us-east: 2, +region=us-west: 1}', - | lease_preferences = '[[+region=us-east], [+region=us-central]]' -auth.token@token_id_central_idx | ALTER INDEX auth.public.token@token_id_central_idx CONFIGURE ZONE USING - | num_replicas = 5, - | constraints = '{+region=us-central: 2, +region=us-east: 2, +region=us-west: 1}', - | lease_preferences = '[[+region=us-central], [+region=us-east]]' -auth.token@token_id_west_idx | ALTER INDEX auth.public.token@token_id_west_idx CONFIGURE ZONE USING - | num_replicas = 5, - | constraints = '{+region=us-central: 2, 
+region=us-east: 1, +region=us-west: 2}', - | lease_preferences = '[[+region=us-west], [+region=us-central]]' -~~~ - -Now that we've set up our indexes the way we want them, we need to insert some data. The first statement below inserts 10,000 rows of placeholder data; the second inserts a row with a specific UUID string that we'll later query against to check which index is used. - -{{site.data.alerts.callout_info}} -On a freshly created cluster like this one, you may need to wait a moment after adding the data to give [automatic statistics](#table-statistics) time to update. Then, the optimizer can generate a query plan that uses the expected index. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> INSERT - INTO - token (token_id, access_token, refresh_token) - SELECT - gen_random_uuid()::STRING, - gen_random_uuid()::STRING, - gen_random_uuid()::STRING - FROM - generate_series(1, 10000); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT - INTO - token (token_id, access_token, refresh_token) - VALUES - ( - '2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9', - '49E36152-6152-11E9-8CDC-3682F23211D9', - '4E0E91B6-6152-11E9-BAC1-3782F23211D9' - ); -~~~ - -Finally, we [`EXPLAIN`](explain.html) a [selection query](selection-queries.html) from each node to verify which index is being queried against. For example, when running the query shown below against the `us-west` node, we expect it to use the `token_id_west_idx` index. - -{% include copy-clipboard.html %} -~~~ sh -$ cockroach sql --insecure --host=localhost --port=26259 --database=auth # "West" node -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN - SELECT - access_token, refresh_token - FROM - token - WHERE - token_id = '2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9'; -~~~ - -~~~ - tree | field | description -+-----------+-------+-------------------------------------------------------------------------------------------+ - render | | - └── scan | | - | table | token@token_id_west_idx - | spans | /"2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9"-/"2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9"/PrefixEnd -(4 rows) - -Time: 787µs -~~~ - -Similarly, queries from the `us-east` node should use the `token_id_east_idx` index (and the same should be true for `us-central`). - -{% include copy-clipboard.html %} -~~~ sh -$ cockroach sql --insecure --host=localhost --port=26257 --database=auth # "East" node -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN - SELECT - access_token, refresh_token - FROM - token - WHERE - token_id = '2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9'; -~~~ - -~~~ - tree | field | description -+-----------+-------+-------------------------------------------------------------------------------------------+ - render | | - └── scan | | - | table | token@token_id_east_idx - | spans | /"2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9"-/"2E1B5BFE-6152-11E9-B9FD-A7E0F13211D9"/PrefixEnd -(4 rows) - -Time: 619µs -~~~ - -You'll need to make changes to the above configuration to reflect your [production environment](recommended-production-settings.html), but the concepts will be the same. - -## How to turn the optimizer off - -With the optimizer turned on, the performance of some workloads may change. If your workload performs worse than expected (e.g., lower throughput or higher latency), you can turn off the cost-based optimizer and use the heuristic planner. 
To turn the cost-based optimizer off for the current session:

{% include copy-clipboard.html %}
~~~ sql
> SET optimizer = 'off';
~~~

To turn the cost-based optimizer off for all sessions:

{% include copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING sql.defaults.optimizer = 'off';
~~~

{{site.data.alerts.callout_info}}
Changing the cluster setting does not immediately turn the optimizer off; instead, it changes the default session setting to `off`. To see the change, restart your session.
{{site.data.alerts.end}}

## Known limitations

- Some features are not supported by the cost-based optimizer; however, the optimizer will fall back to the heuristic planner for this functionality. If performance is worse than in previous versions of CockroachDB, you can [turn the optimizer off](#how-to-turn-the-optimizer-off) to manually force it to fall back to the heuristic planner.

## See also

- [`SET (session variable)`](set-vars.html)
- [`SET CLUSTER SETTING`](set-cluster-setting.html)
- [`SHOW (session variable)`](show-vars.html)
- [`CREATE STATISTICS`](create-statistics.html)
- [`SHOW STATISTICS`](show-statistics.html)
- [`EXPLAIN`](explain.html)

diff --git a/src/current/v19.1/create-a-file-server.md b/src/current/v19.1/create-a-file-server.md
deleted file mode 100644
index 753f53bd275..00000000000
--- a/src/current/v19.1/create-a-file-server.md
+++ /dev/null
@@ -1,125 +0,0 @@
---
title: Create a File Server for Imports and Backups
summary: Learn how to create a simple file server for use with CockroachDB IMPORT and BACKUP
toc: true
---

If you need a location to store files for the [`IMPORT`](import.html) process or [CockroachDB enterprise backups](backup.html), but do not have access to (or simply cannot use) cloud storage providers, you can run a local file server. You can then use this file server by leveraging support for our [HTTP Export Storage API](#http-export-storage-api).

This is especially useful for:

- Implementing a compatibility layer in front of custom or proprietary storage providers for which CockroachDB does not yet have built-in support
- Using on-premises storage

## HTTP export storage API

CockroachDB tasks that require reading or writing external files (such as [`IMPORT`](import.html) and [`BACKUP`](backup.html)) can use the HTTP Export Storage API by prefacing the address with `http`, e.g., `http://fileserver/mnt/cockroach-exports`.

This API uses the `GET`, `PUT`, and `DELETE` methods, and behaves the way you would expect typical HTTP requests to work: after a `PUT` request to some path, a subsequent `GET` request should return the content sent in the `PUT` request body, at least until a `DELETE` request is received for that path.

## Examples

You can use any file server software that supports the `GET`, `PUT`, and `DELETE` methods, but we've included code samples for common ones:

- [Using PHP with `IMPORT`](#using-php-with-import)
- [Using Python with `IMPORT`](#using-python-with-import)
- [Using Ruby with `IMPORT`](#using-ruby-with-import)
- [Using Caddy as a file server](#using-caddy-as-a-file-server)
- [Using nginx as a file server](#using-nginx-as-a-file-server)

{{site.data.alerts.callout_info}}We do not recommend using any machines running `cockroach` as file servers. Using machines that are running `cockroach` as file servers could negatively impact performance if I/O operations exceed capacity.{{site.data.alerts.end}}

### Using PHP with `IMPORT`

The PHP language has an HTTP server built in.
You can serve local files using the commands below. For more information about how to import these locally served files, see the documentation for the [`IMPORT`][import] statement. - -{% include copy-clipboard.html %} -~~~ shell -$ cd /path/to/data -$ php -S 127.0.0.1:3000 # files available at e.g., 'http://localhost:3000/data.sql' -~~~ - -### Using Python with `IMPORT` - -The Python language has an HTTP server included in the standard library. You can serve local files using the commands below. For more information about how to import these locally served files, see the documentation for the [`IMPORT`][import] statement. - -{% include copy-clipboard.html %} -~~~ shell -$ cd /path/to/data -$ python -m SimpleHTTPServer 3000 # files available at e.g., 'http://localhost:3000/data.sql' -~~~ - -If you use Python 3, try: - -{% include copy-clipboard.html %} -~~~ shell -$ cd /path/to/data -$ python -m http.server 3000 -~~~ - -### Using Ruby with `IMPORT` - -The Ruby language has an HTTP server included in the standard library. You can serve local files using the commands below. For more information about how to import these locally served files, see the documentation for the [`IMPORT`][import] statement. - -{% include copy-clipboard.html %} -~~~ shell -$ cd /path/to/data -$ ruby -run -ehttpd . -p3000 # files available at e.g., 'http://localhost:3000/data.sql' -~~~ - -### Using Caddy as a file server - -1. [Download the Caddy web server](https://caddyserver.com/download). Before downloading, in the **Customize your build** step, open the list of **Plugins** and make sure to check the `http.upload` option. - -2. Copy the `caddy` binary to the directory containing the files you want to serve, and run it [with an upload directive](https://caddyserver.com/docs/http.upload), either in the command line or via [Caddyfile](https://caddyserver.com/docs/caddyfile). - -- Command line example (with no TLS): - {% include copy-clipboard.html %} - ~~~ shell - $ caddy -root /mnt/cockroach-exports "upload / {" 'to "/mnt/cockroach-exports"' 'yes_without_tls' "}" - ~~~ -- `Caddyfile` example (using a key and cert): - {% include copy-clipboard.html %} - ~~~ shell - tls key cert - root "/mnt/cockroach-exports" - upload / { - to "/mnt/cockroach-exports" - } - ~~~ - -For more information about Caddy, see [its documentation](https://caddyserver.com/docs). - -### Using nginx as a file server - -1. Install `nginx` with the `webdav` module (often included in `-full` or similarly named packages in various distributions). - -2. In the `nginx.conf` file, add a `dav_methods PUT DELETE` directive. For example: - - {% include copy-clipboard.html %} - ~~~ nginx - events { - worker_connections 1024; - } - http { - server { - listen 20150; - location / { - dav_methods PUT DELETE; - root /mnt/cockroach-exports; - sendfile on; - sendfile_max_chunk 1m; - } - } - } - ~~~ - -## See also - -- [`IMPORT`][import] -- [`BACKUP`](backup.html) (*Enterprise only*) -- [`RESTORE`](restore.html) (*Enterprise only*) - - - -[import]: import.html diff --git a/src/current/v19.1/create-and-manage-users.md b/src/current/v19.1/create-and-manage-users.md deleted file mode 100644 index f34eaa14891..00000000000 --- a/src/current/v19.1/create-and-manage-users.md +++ /dev/null @@ -1,242 +0,0 @@ ---- -title: Manage Users -summary: To create and manage your cluster's users (which lets you control SQL-level privileges), use the cockroach user command with appropriate flags. 
toc: true
---

To create, manage, and remove your cluster's users (which lets you control SQL-level [privileges](authorization.html#assign-privileges)), use the `cockroach user` [command](cockroach-commands.html) with appropriate flags.

{{site.data.alerts.callout_success}}You can also use the [`CREATE USER`](create-user.html) and [`DROP USER`](drop-user.html) statements to create and remove users.{{site.data.alerts.end}}

## Considerations

- Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters.
- After creating users, you must [grant them privileges to databases and tables](grant.html).
- All users belong to the `public` role, to which you can [grant](grant.html) and [revoke](revoke.html) privileges.
- On secure clusters, you must [create client certificates for users](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) and users must [authenticate their access to the cluster](authentication.html#client-authentication).
- {% include {{ page.version.version }}/misc/remove-user-callout.html %}

## Subcommands

Subcommand | Usage
-----------|------
`get` | Retrieve a table containing a user and their hashed password.
`ls` | List all users.
`rm` | Remove a user.
`set` | Create or update a user.

## Synopsis

Create a user:

~~~ shell
$ cockroach user set <username> <flags>
~~~

List all users:

~~~ shell
$ cockroach user ls <flags>
~~~

Display a specific user:

~~~ shell
$ cockroach user get <username> <flags>
~~~

View help:

~~~ shell
$ cockroach user --help
~~~
~~~ shell
$ cockroach user <subcommand> --help
~~~

## Flags

The `user` command and subcommands support the following [general-use](#general) and [logging](#logging) flags.

### General

Flag | Description
-----|------------
`--password` | Enable password authentication for the user; you will be prompted to enter the password on the command line.<br><br>Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key.
`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.
`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.<br><br>**Default:** `table` for sessions that [output on a terminal](use-the-built-in-sql-client.html#session-and-output-types); `tsv` otherwise.

### Client connection

{% include {{ page.version.version }}/sql/connection-parameters.md %}

See [Client Connection Parameters](connection-parameters.html) for more details.

### Required privileges

Currently, only members of the `admin` role can create users. By default, the `root` user belongs to the `admin` role.

{{site.data.alerts.callout_info}}
Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key.
{{site.data.alerts.end}}

### Logging

By default, the `user` command logs errors to `stderr`.

If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html).

## Examples

### Create a user
Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters.
#### Secure clusters

{% include copy-clipboard.html %}
~~~ shell
$ cockroach user set jpointsman --certs-dir=certs
~~~

{{site.data.alerts.callout_success}}If you want to allow password authentication for the user, include the `--password` flag and then enter and confirm the password at the command prompt.{{site.data.alerts.end}}

After creating users, you must:

- [Create their client certificates](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client).
- [Grant them privileges to databases](grant.html).

#### Insecure clusters

{% include copy-clipboard.html %}
~~~ shell
$ cockroach user set jpointsman --insecure
~~~

After creating users, you must [grant them privileges to databases](grant.html).
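In either case, a typical follow-up grant might look like the following sketch (the database name `test` is a placeholder):

{% include copy-clipboard.html %}
~~~ sql
> GRANT SELECT, INSERT ON DATABASE test TO jpointsman;
~~~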
### Log in as a specific user
#### Secure clusters with client certificates

All users can authenticate their access to a secure cluster using [a client certificate](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) issued to their username.

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --user=jpointsman
~~~

#### Secure clusters with passwords

Users with passwords can authenticate their access by entering their password at the command prompt instead of using their client certificate and key.

If CockroachDB cannot find client certificate and key files matching the user, it falls back on password authentication.

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --certs-dir=certs --user=jpointsman
~~~

#### Insecure clusters

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --user=jpointsman
~~~
      - -### Update a user's password - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set jpointsman --certs-dir=certs --password -~~~ - -After issuing this command, enter and confirm the user's new password at the command prompt. - -Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. - -### List all users - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user ls --insecure -~~~ - -~~~ -+------------+ -| username | -+------------+ -| jpointsman | -+------------+ -~~~ - -### Find a specific user - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user get jpointsman --insecure -~~~ - -~~~ -+------------+--------------------------------------------------------------+ -| username | hashedPassword | -+------------+--------------------------------------------------------------+ -| jpointsman | $2a$108tm5lYjES9RSXSKtQFLhNO.e/ysTXCBIRe7XeTgBrR6ubXfp6dDczS | -+------------+--------------------------------------------------------------+ -~~~ - -### Remove a user - -{{site.data.alerts.callout_danger}}{% include {{ page.version.version }}/misc/remove-user-callout.html %}{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user rm jpointsman --insecure -~~~ - -{{site.data.alerts.callout_success}}You can also use the DROP USER SQL statement to remove users.{{site.data.alerts.end}} - -### Reveal the SQL statements sent implicitly by the command-line utility - -In this example, we use the `--echo-sql` flag to reveal the SQL statement sent implicitly by the command-line utility: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user rm jpointsman --insecure --echo-sql -~~~ - -~~~ -> DELETE FROM system.users WHERE username=$1 -DELETE 1 -~~~ - -## See also - -- [Authorization](authorization.html) -- [`CREATE USER`](create-user.html) -- [`DROP USER`](drop-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/create-changefeed.md b/src/current/v19.1/create-changefeed.md deleted file mode 100644 index 5014c0b024c..00000000000 --- a/src/current/v19.1/create-changefeed.md +++ /dev/null @@ -1,277 +0,0 @@ ---- -title: CREATE CHANGEFEED -summary: The CREATE CHANGEFEED statement creates a changefeed of row-level change subscriptions in a configurable format to a configurable sink. -toc: true ---- - -{{site.data.alerts.callout_info}} -`CREATE CHANGEFEED` is an [enterprise-only](enterprise-licensing.html) feature. For the core version, see [`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html). -{{site.data.alerts.end}} - -The `CREATE CHANGEFEED` [statement](sql-statements.html) creates a new enterprise changefeed, which targets an allowlist of tables, called "watched rows". Every change to a watched row is emitted as a record in a configurable format (`JSON` or Avro) to a configurable sink ([Kafka](https://kafka.apache.org/) or a [cloud storage sink](#cloud-storage-sink)). You can [create](#create-a-changefeed-connected-to-kafka), [pause](#pause-a-changefeed), [resume](#resume-a-paused-changefeed), or [cancel](#cancel-a-changefeed) an enterprise changefeed. - -For more information, see [Change Data Capture](change-data-capture.html). 
- -## Required privileges - -Changefeeds can only be created by superusers, i.e., [members of the `admin` role](authorization.html#create-and-manage-roles). The admin role exists by default with `root` as the member. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/create_changefeed.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table (or tables in a comma separated list) to create a changefeed for. -`sink` | The location of the configurable sink. The scheme of the URI indicates the type. For more information, see [Sink URI](#sink-uri) below. -`option` / `value` | For a list of available options and their values, see [Options](#options) below. - - - -### Sink URI - -The sink URI follows the basic format of: - -~~~ -'[scheme]://[host]:[port]?[query_parameters]' -~~~ - -The `scheme` can be [`kafka`](#kafka) or any [cloud storage sink](#cloud-storage-sink). - -#### Kafka - -Example of a Kafka sink URI: - -~~~ -'kafka://broker.address.com:9092?topic_prefix=bar_&tls_enabled=true&ca_cert=LS0tLS1CRUdJTiBDRVJUSUZ&sasl_enabled=true&sasl_user=petee&sasl_password=bones' -~~~ - -Query parameters include: - -Parameter | Value | Description -----------+-------+--------------- -`topic_prefix` | [`STRING`](string.html) | Adds a prefix to all topic names.
<br><br>For example, `CREATE CHANGEFEED FOR TABLE foo INTO 'kafka://...?topic_prefix=bar_'` would emit rows under the topic `bar_foo` instead of `foo`.
`tls_enabled=true` | [`BOOL`](bool.html) | If `true`, enable Transport Layer Security (TLS) on the connection to Kafka. This can be used with a `ca_cert` (see below).
`ca_cert` | [`STRING`](string.html) | The base64-encoded `ca_cert` file.<br><br>Note: To encode your `ca.cert`, run `base64 -w 0 ca.cert`.
`sasl_enabled` | [`BOOL`](bool.html) | If `true`, [use SASL/PLAIN to authenticate](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_plain.html). This requires a `sasl_user` and `sasl_password` (see below).
`sasl_user` | [`STRING`](string.html) | Your SASL username.
`sasl_password` | [`STRING`](string.html) | Your SASL password.

#### Cloud storage sink

New in v19.1: Use a cloud storage sink to deliver changefeed data to OLAP or big data systems without requiring transport via Kafka.

{{site.data.alerts.callout_info}}
Currently, cloud storage sinks only work with `JSON` and emit newline-delimited `JSON` files.
{{site.data.alerts.end}}

Example of a cloud storage sink (e.g., AWS S3) URI:

~~~
'experimental-s3://test-s3encryption/test?AWS_ACCESS_KEY_ID=ABCDEFGHIJKLMNOPQ&AWS_SECRET_ACCESS_KEY=LS0tLS1CRUdJTiBDRVJUSUZ'
~~~

{{site.data.alerts.callout_info}}
The `scheme` for a cloud storage sink should be prepended with `experimental-`.
{{site.data.alerts.end}}

Any of the cloud storage services below can be used as a sink:

{% include {{ page.version.version }}/misc/external-urls.md %}

### Options

Option | Value | Description
-------|-------|------------
`updated` | N/A | Include updated timestamps with each row.<br><br>If a `cursor` is provided, the "updated" timestamps will match the [MVCC](architecture/storage-layer.html#mvcc) timestamps of the emitted rows, and there is no initial scan. If a `cursor` is not provided, the changefeed will perform an initial scan (as of the time the changefeed was created), and the "updated" timestamp for each change record emitted in the initial scan will be the timestamp of the initial scan. Similarly, when a [backfill is performed for a schema change](change-data-capture.html#schema-changes-with-column-backfill), the "updated" timestamp is set to the first timestamp for when the new schema is valid.
`resolved` | [`INTERVAL`](interval.html) | Periodically emit resolved timestamps to the changefeed. Optionally, set a minimum duration between emitting resolved timestamps. If unspecified, all resolved timestamps are emitted.<br><br>Example: `resolved='10s'`
`envelope` | `key_only` / `wrapped` | Use `key_only` to emit only the key and no value, which is faster if you only want to know when the key changes.<br><br>Default: `envelope=wrapped`
`cursor` | [Timestamp](as-of-system-time.html#parameters) | Emits any changes after the given timestamp, but does not output the current state of the table first. If `cursor` is not specified, the changefeed starts by doing an initial scan of all the watched rows and emits the current value, then moves to emitting any changes that happen after the scan.<br><br>When starting a changefeed at a specific `cursor`, the `cursor` cannot be before the configured garbage collection window (see [`gc.ttlseconds`](configure-replication-zones.html#replication-zone-variables)) for the table you're trying to follow; otherwise, the changefeed will error. With default garbage collection settings, this means you cannot create a changefeed that starts more than 25 hours in the past.<br><br>`cursor` can be used to [start a new changefeed where a previous changefeed ended](#start-a-new-changefeed-where-another-ended).<br><br>Example: `CURSOR='1536242855577149065.0000000000'`
`format` | `json` / `experimental_avro` | Format of the emitted record. Currently, support for [Avro is limited and experimental](#avro-limitations). For mappings of CockroachDB types to Avro types, [see the table below](#avro-types).<br><br>
      Default: `format=json`. -`confluent_schema_registry` | Schema Registry address | The [Schema Registry](https://docs.confluent.io/current/schema-registry/docs/index.html#sr) address is required to use `experimental_avro`. -`key_in_value` | N/A | New in v19.1: Makes the [primary key](primary-key.html) of a deleted row recoverable in sinks where each message has a value but not a key (most have a key and value in each message). `key_in_value` is automatically used for these sinks (currently only [cloud storage sinks](#cloud-storage-sink)). - -#### Avro limitations - -Currently, support for Avro is limited and experimental. Below is a list of unsupported SQL types and values for Avro changefeeds: - -- [Decimals](decimal.html) must have precision specified. -- [Decimals](decimal.html) with `NaN` or infinite values cannot be written in Avro. - - {{site.data.alerts.callout_info}} - To avoid `NaN` or infinite values, add a [`CHECK` constraint](check.html) to prevent these values from being inserted into decimal columns. - {{site.data.alerts.end}} - -- [`TIME`, `DATE`, `INTERVAL`](https://github.com/cockroachdb/cockroach/issues/32472), [`UUID`, `INET`](https://github.com/cockroachdb/cockroach/issues/34417), [`ARRAY`](https://github.com/cockroachdb/cockroach/issues/34420), [`JSONB`](https://github.com/cockroachdb/cockroach/issues/34421), `BIT`, and collated `STRING` are not supported in Avro yet. - -#### Avro types - -Below is a mapping of CockroachDB types to Avro types: - -CockroachDB Type | Avro Type | Avro Logical Type ------------------+-----------+--------------------- -[`INT`](int.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`BOOL`](bool.html) | [`BOOLEAN`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`FLOAT`](float.html) | [`DOUBLE`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`STRING`](string.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`DATE`](date.html) | [`INT`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`DATE`](https://avro.apache.org/docs/1.8.1/spec.html#Date) -[`TIME`](time.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`TIMESTAMP`](timestamp.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`TIMESTAMPTZ`](timestamp.html) | [`LONG`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`TIME-MICROS`](https://avro.apache.org/docs/1.8.1/spec.html#Time+%28microsecond+precision%29) -[`DECIMAL`](decimal.html) | [`BYTES`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | [`DECIMAL`](https://avro.apache.org/docs/1.8.1/spec.html#Decimal) -[`UUID`](uuid.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`INET`](inet.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | -[`JSONB`](jsonb.html) | [`STRING`](https://avro.apache.org/docs/1.8.1/spec.html#schema_primitive) | - -## Responses - -The messages (i.e., keys and values) emitted to a Kafka topic are specific to the [`envelope`](#options). 
The default format is `wrapped`, and the output messages are composed of the following:

- **Key**: An array always composed of the row's `PRIMARY KEY` field(s) (e.g., `[1]` for `JSON` or `{"id":{"long":1}}` for Avro).
- **Value**:
    - One of three possible top-level fields:
        - `after`, which contains the state of the row after the update (or `null` for `DELETE`s).
        - `updated`, which contains the updated timestamp.
        - `resolved`, which is emitted for records representing resolved timestamps. These records do not include an "after" value since they only function as checkpoints.
    - For [`INSERT`](insert.html) and [`UPDATE`](update.html), the current state of the row inserted or updated.
    - For [`DELETE`](delete.html), `null`.

For example:

Statement | Response
-----------------------------------------------+-----------------------------------------------------------------------
`INSERT INTO office_dogs VALUES (1, 'Petee');` | JSON: `[1] {"after": {"id": 1, "name": "Petee"}}`<br>Avro: `{"id":{"long":1}} {"after":{"office_dogs":{"id":{"long":1},"name":{"string":"Petee"}}}}`
`DELETE FROM office_dogs WHERE name = 'Petee'` | JSON: `[1] {"after": null}`<br>
      Avro: `{"id":{"long":1}} {"after":null}` - -## Examples - -### Create a changefeed connected to Kafka - -{% include copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name - INTO 'kafka://host:port' - WITH updated, resolved; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For more information on how to create a changefeed connected to Kafka, see [Change Data Capture](change-data-capture.html#create-a-changefeed-connected-to-kafka). - -### Create a changefeed connected to Kafka using Avro - -{% include copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name - INTO 'kafka://host:port' - WITH format = experimental_avro, confluent_schema_registry = ; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For more information on how to create a changefeed that emits an [Avro](https://avro.apache.org/docs/1.8.2/spec.html) record, see [Change Data Capture](change-data-capture.html#create-a-changefeed-connected-to-kafka-using-avro). - -### Create a changefeed connected to a cloud storage sink - -{% include {{ page.version.version }}/cdc/correctness-warning.md %} - -{% include copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name - INTO 'experimental-scheme://host?parameters' - WITH updated, resolved; -~~~ -~~~ -+--------------------+ -| job_id | -+--------------------+ -| 360645287206223873 | -+--------------------+ -(1 row) -~~~ - -For more information on how to create a changefeed connected to a cloud storage sink, see [Change Data Capture](change-data-capture.html#create-a-changefeed-connected-to-a-cloud-storage-sink). - -### Manage a changefeed - -Use the following SQL statements to pause, resume, and cancel a changefeed. - -{{site.data.alerts.callout_info}} -Changefeed-specific SQL statements (e.g., `CANCEL CHANGEFEED`) will be added in the future. -{{site.data.alerts.end}} - -#### Pause a changefeed - -{% include copy-clipboard.html %} -~~~ sql -> PAUSE JOB job_id; -~~~ - -For more information, see [`PAUSE JOB`](pause-job.html). - -#### Resume a paused changefeed - -{% include copy-clipboard.html %} -~~~ sql -> RESUME JOB job_id; -~~~ - -For more information, see [`RESUME JOB`](resume-job.html). - -#### Cancel a changefeed - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL JOB job_id; -~~~ - -For more information, see [`CANCEL JOB`](cancel-job.html). - -### Start a new changefeed where another ended - -Find the [high-water timestamp](change-data-capture.html#monitor-a-changefeed) for the ended changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM crdb_internal.jobs WHERE job_id = ; -~~~ -~~~ - job_id | job_type | ... | high_water_timestamp | error | coordinator_id -+--------------------+------------+ ... +--------------------------------+-------+----------------+ - 383870400694353921 | CHANGEFEED | ... | 1537279405671006870.0000000000 | | 1 -(1 row) -~~~ - -Use the `high_water_timestamp` to start the new changefeed: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE CHANGEFEED FOR TABLE name - INTO 'kafka//host:port' - WITH cursor = ''; -~~~ - -Note that because the cursor is provided, the initial scan is not performed. 
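The options above can be combined in a single statement. For example, the following sketch (the sink address is a placeholder) resumes from a cursor while also emitting updated and resolved timestamps:

{% include copy-clipboard.html %}
~~~ sql
> CREATE CHANGEFEED FOR TABLE name
  INTO 'kafka://host:port'
  WITH updated, resolved='10s', cursor='1536242855577149065.0000000000';
~~~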
## See also

- [Change Data Capture](change-data-capture.html)
- [Other SQL Statements](sql-statements.html)
- [Changefeed Dashboard](admin-ui-cdc-dashboard.html)

diff --git a/src/current/v19.1/create-database.md b/src/current/v19.1/create-database.md
deleted file mode 100644
index 5a52f63231e..00000000000
--- a/src/current/v19.1/create-database.md
+++ /dev/null
@@ -1,97 +0,0 @@
---
title: CREATE DATABASE
summary: The CREATE DATABASE statement creates a new CockroachDB database.
toc: true
---

The `CREATE DATABASE` [statement](sql-statements.html) creates a new CockroachDB database.

{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}

## Required privileges

Only members of the `admin` role can create new databases. By default, the `root` user belongs to the `admin` role.

## Synopsis
      - {% include {{ page.version.version }}/sql/diagrams/create_database.html %} -
## Parameters

Parameter | Description
----------|------------
`IF NOT EXISTS` | Create a new database only if a database of the same name does not already exist; if one does exist, do not return an error.
`name` | The name of the database to create, which [must be unique](#create-fails-name-already-in-use) and follow these [identifier rules](keywords-and-identifiers.html#identifiers).
`encoding` | The `CREATE DATABASE` statement accepts an optional `ENCODING` clause for compatibility with PostgreSQL, but `UTF-8` is the only supported encoding. The aliases `UTF8` and `UNICODE` are also accepted. Values should be enclosed in single quotes and are case-insensitive.<br><br>
      Example: `CREATE DATABASE bank ENCODING = 'UTF-8'`. - -## Example - -### Create a database - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ -> SHOW DATABASES; -~~~ - -~~~ -+---------------+ -| database_name | -+---------------+ -| bank | -| defaultdb | -| postgres | -| system | -+---------------+ -(4 rows) -~~~ - -### Create fails (name already in use) - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -~~~ -pq: database "bank" already exists -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE IF NOT EXISTS bank; -~~~ - -SQL does not generate an error, but instead responds `CREATE DATABASE` even though a new database wasn't created. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ -+---------------+ -| database_name | -+---------------+ -| bank | -| defaultdb | -| postgres | -| system | -+---------------+ -(4 rows) -~~~ - -## See also - -- [`SHOW DATABASES`](show-databases.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/create-index.md b/src/current/v19.1/create-index.md deleted file mode 100644 index ee3aedb9558..00000000000 --- a/src/current/v19.1/create-index.md +++ /dev/null @@ -1,177 +0,0 @@ ---- -title: CREATE INDEX -summary: The CREATE INDEX statement creates an index for a table. Indexes improve your database's performance by helping SQL quickly locate data. -toc: true ---- - -The `CREATE INDEX` [statement](sql-statements.html) creates an index for a table. [Indexes](indexes.html) improve your database's performance by helping SQL locate data without having to look through every row of a table. - -The following types cannot be included in an index key, but can be stored (and used in a covered query) using the [`STORING` or `COVERING`](create-index.html#store-columns) clause: - -- [`JSONB`](jsonb.html) -- [`ARRAY`](array.html) -- The computed [`TUPLE`](scalar-expressions.html#tuple-constructor) type, even if it is constructed from indexed fields - -To create an index on the schemaless data in a [`JSONB`](jsonb.html) column, use an [inverted index](inverted-indexes.html). - -{{site.data.alerts.callout_info}} -Indexes are automatically created for a table's [`PRIMARY KEY`](primary-key.html) and [`UNIQUE`](unique.html) columns. - -When querying a table, CockroachDB uses the fastest index. For more information about that process, see [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). -{{site.data.alerts.end}} - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Synopsis - -**Standard index:** - -
      {% include {{ page.version.version }}/sql/diagrams/create_index.html %}
      - -**Inverted index:** - -
      {% include {{ page.version.version }}/sql/diagrams/create_inverted_index.html %}
## Parameters

Parameter | Description
----------|------------
`UNIQUE` | Apply the [`UNIQUE` constraint](unique.html) to the indexed columns.<br><br>This causes the system to check for existing duplicate values on index creation. It also applies the `UNIQUE` constraint at the table level, so the system checks for duplicate values when inserting or updating data.
`INVERTED` | Create an [inverted index](inverted-indexes.html) on the schemaless data in the specified [`JSONB`](jsonb.html) column.<br><br>You can also use the PostgreSQL-compatible syntax `USING gin`. For more details, see [Inverted Indexes](inverted-indexes.html#creation).
`IF NOT EXISTS` | Create a new index only if an index of the same name does not already exist; if one does exist, do not return an error.
`opt_index_name`<br>`index_name` | The name of the index to create, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers).<br><br>If you do not specify a name, CockroachDB uses the format `<table>_<columns>_key/idx`. `key` indicates the index applies the `UNIQUE` constraint; `idx` indicates it does not. Example: `accounts_balance_idx`
`table_name` | The name of the table you want to create the index on.
`USING name` | An optional clause for compatibility with third-party tools. Accepted values for `name` are `btree` and `gin`, with `btree` for a standard secondary index and `gin` as the PostgreSQL-compatible syntax for an [inverted index](#create-inverted-indexes) on schemaless data in a `JSONB` column.
`column_name` | The name of the column you want to index.
`ASC` or `DESC` | Sort the column in ascending (`ASC`) or descending (`DESC`) order in the index. How columns are sorted affects query results, particularly when using `LIMIT`.<br><br>__Default:__ `ASC`
`STORING ...` | Store (but do not sort) each column whose name you include.<br><br>For information on when to use `STORING`, see [Store Columns](#store-columns). Note that columns that are part of a table's [`PRIMARY KEY`](primary-key.html) cannot be specified as `STORING` columns in secondary indexes on the table.<br><br>
      `COVERING` aliases `STORING` and works identically. -`opt_interleave` | You can potentially optimize query performance by [interleaving indexes](interleave-in-parent.html), which changes how CockroachDB stores your data. -`opt_partition_by` | An [enterprise-only](enterprise-licensing.html) option that lets you [define index partitions at the row level](partitioning.html). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Create standard indexes - -To create the most efficient indexes, we recommend reviewing: - -- [Indexes: Best Practices](indexes.html#best-practices) -- [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) - -#### Single-column indexes - -Single-column indexes sort the values of a single column. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON products (price); -~~~ - -Because each query can only use one index, single-column indexes are not typically as useful as multiple-column indexes. - -#### Multiple-column indexes - -Multiple-column indexes sort columns in the order you list them. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON products (price, stock); -~~~ - -To create the most useful multiple-column indexes, we recommend reviewing our [best practices](indexes.html#indexing-columns). - -#### Unique indexes - -Unique indexes do not allow duplicate values among their columns. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE UNIQUE INDEX ON products (name, manufacturer_id); -~~~ - -This also applies the [`UNIQUE` constraint](unique.html) at the table level, similarly to [`ALTER TABLE`](alter-table.html). The above example is equivalent to: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE products ADD CONSTRAINT products_name_manufacturer_id_key UNIQUE (name, manufacturer_id); -~~~ - -### Create inverted indexes - -[Inverted indexes](inverted-indexes.html) can be created on schemaless data in a [`JSONB`](jsonb.html) column. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INVERTED INDEX ON users (profile); -~~~ - -The above example is equivalent to the following PostgreSQL-compatible syntax: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON users USING GIN (profile); -~~~ - -### Store columns - -Storing a column improves the performance of queries that retrieve (but do not filter) its values. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON products (price) STORING (name); -~~~ - -However, to use stored columns, queries must filter another column in the same index. For example, SQL can retrieve `name` values from the above index only when a query's `WHERE` clause filters `price`. - -### Change column sort order - -To sort columns in descending order, you must explicitly set the option when creating the index. (Ascending order is the default.) - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX ON products (price DESC, stock); -~~~ - -Note that how a column is ordered in the index will affect the ordering of the index keys, and may affect the efficiency of queries that include an `ORDER BY` clause. - -### Query specific indexes - -Normally, CockroachDB selects the index that it calculates will scan the fewest rows. However, you can override that selection and specify the name of the index you want to use. To find the name, use [`SHOW INDEX`](show-index.html). 
- -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM products; -~~~ - -~~~ -+------------+--------------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+--------------------+------------+--------------+-------------+-----------+---------+----------+ -| products | primary | false | 1 | id | ASC | false | false | -| products | products_price_idx | true | 1 | price | ASC | false | false | -| products | products_price_idx | true | 2 | id | ASC | false | true | -+------------+--------------------+------------+--------------+-------------+-----------+---------+----------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT name FROM products@products_price_idx WHERE price > 10; -~~~ - -## See also - -- [Indexes](indexes.html) -- [`SHOW INDEX`](show-index.html) -- [`DROP INDEX`](drop-index.html) -- [`RENAME INDEX`](rename-index.html) -- [`SHOW JOBS`](show-jobs.html) -- [Other SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/create-role.md b/src/current/v19.1/create-role.md deleted file mode 100644 index 3c11d18919b..00000000000 --- a/src/current/v19.1/create-role.md +++ /dev/null @@ -1,64 +0,0 @@ ---- -title: CREATE ROLE (Enterprise) -summary: The CREATE ROLE statement creates SQL roles, which are groups containing any number of roles and users as members. -toc: true ---- - -The `CREATE ROLE` [statement](sql-statements.html) creates SQL [roles](authorization.html#create-and-manage-roles), which are groups containing any number of roles and users as members. You can assign privileges to roles, and all members of the role (regardless of whether if they are direct or indirect members) will inherit the role's privileges. - -{{site.data.alerts.callout_info}}CREATE ROLE is an enterprise-only feature.{{site.data.alerts.end}} - - -## Considerations - -- Role names: - - Are case-insensitive - - Must start with either a letter or underscore - - Must contain only letters, numbers, or underscores - - Must be between 1 and 63 characters. -- After creating roles, you must [grant them privileges to databases and tables](grant.html). -- Roles and users can be members of roles. -- Roles and users share the same namespace and must be unique. -- All privileges of a role are inherited by all of its members. -- There is no limit to the number of members in a role. -- Roles cannot log in. They do not have a password and cannot use certificates. -- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`). - -## Required privileges - -Roles can only be created by superusers, i.e., members of the `admin` role. The `admin` role exists by default with `root` as the member. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_role.html %}
## Parameters

| Parameter | Description |
------------|--------------
`name` | The name of the role you want to create. Role names are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters.<br><br>
      Note that roles and [users](create-user.html) share the same namespace and must be unique. - -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE ROLE dev_ops; -~~~ -~~~ -CREATE ROLE 1 -~~~ - -After creating roles, you can [add users to the role](grant-roles.html) and [grant the role privileges](grant.html). - -## See also - -- [Authorization](authorization.html) -- [`DROP ROLE` (Enterprise)](drop-user.html) -- [`GRANT `](grant.html) -- [`REVOKE `](revoke.html) -- [`GRANT `](grant-roles.html) -- [`REVOKE `](revoke-roles.html) -- [`SHOW ROLES`](show-roles.html) -- [`SHOW USERS`](show-users.html) -- [`SHOW GRANTS`](show-grants.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/create-security-certificates-custom-ca.md b/src/current/v19.1/create-security-certificates-custom-ca.md deleted file mode 100644 index 856374dce21..00000000000 --- a/src/current/v19.1/create-security-certificates-custom-ca.md +++ /dev/null @@ -1,137 +0,0 @@ ---- -title: Create Security Certificates -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true ---- - -To secure your CockroachDB cluster's inter-node and client-node communication, you need to provide a Certificate Authority (CA) certificate that has been used to sign keys and certificates (SSLs) for: - -- Nodes -- Clients -- Admin UI (optional) - -To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, use [`openssl` commands](https://wiki.openssl.org/index.php/), or use a [custom CA](create-security-certificates-custom-ca.html) (for example, a public CA or your organizational CA). - - - -This document discusses the following advanced use cases for using security certificates with CockroachDB: - -Approach | Use case description --------------|------------ -[UI certificate and key](#accessing-the-admin-ui-for-a-secure-cluster) | When you want to access the Admin UI for a secure cluster and avoid clicking through a warning message to get to the UI. -[Split-node certificate](#split-node-certificates) | When your organizational CA requires you to have separate certificates for the node's incoming connections (from SQL and Admin UI clients, and from other CockroachDB nodes) and for outgoing connections to other CockroachDB nodes. -[Split-CA certificates](#split-ca-certificates) | When you have multiple CockroachDB clusters and need to restrict access to clients from accessing the other cluster. - -## Accessing the Admin UI for a secure cluster - -On [accessing the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) for a secure cluster, your web browser will consider the CockroachDB-issued certificate invalid, because the browser hasn't been configured to trust the CA that issued the certificate. - -For secure clusters, you can avoid getting the warning message by using a certificate issued by a public CA whose certificates are trusted by browsers, in addition to the CockroachDB-created certificates: - -1. Request a certificate from a public CA (for example, [Let's Encrypt](https://letsencrypt.org/)). The certificate must have the IP addresses and DNS names used to reach the Admin UI listed in the `Subject Alternative Name` field. -2. Rename the certificate and key as `ui.crt` and `ui.key`. -3. Add the `ui.crt` and `ui.key` to the [certificate directory](create-security-certificates.html#certificate-directory). 
`ui.key` must not have group or world permissions (maximum permissions are `0700`, or `rwx------`). You can disable this check by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK=true`. -4. For nodes that are already running, load the `ui.crt` certificate without restarting the node by issuing a `SIGHUP` signal to the `cockroach` process: - {% include copy-clipboard.html %} - ~~~ shell - pkill -SIGHUP -x cockroach - ~~~ - The `SIGHUP` signal must be sent by the same user running the process (e.g., run with `sudo` if the `cockroach` process is running under user `root`). - -### Node key and certificates - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`node.crt` | Server certificate created using the `cockroach cert` command.

`node.crt` must have `CN=node`, with the node's IP addresses and DNS names listed in the `Subject Alternative Name` field.

Must be signed by `ca.crt`. -`node.key` | Server key created using the `cockroach cert` command. -`ui.crt` | UI certificate signed by the public CA. `ui.crt` must have the IP addresses and DNS names used to reach the Admin UI listed in `Subject Alternative Name`. -`ui.key` | UI key corresponding to `ui.crt`. - -### Client key and certificates - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by `ca.crt`. -`client.<username>.key` | Client key created using the `cockroach cert` command. - -## Split node certificates - -The node certificate discussed in the `cockroach cert` command [documentation](create-security-certificates.html) is multi-functional, which means the same certificate is presented for the node's incoming connections (from SQL and Admin UI clients, and from other CockroachDB nodes) and for outgoing connections to other CockroachDB nodes. To make the certificate multi-functional, the `node.crt` created using the `cockroach cert` command has `CN=node` and lists the node's IP addresses and DNS names in the `Subject Alternative Name` field. This works if you are also using the CockroachDB CA created using the `cockroach cert` command. However, if you need to use an external public CA or your own organizational CA, the CA policy might not allow it to sign a server certificate containing a CN that is not an IP address or domain name. - -To get around this issue, you can split the node key and certificate into two: - -- `node.crt` and `node.key`: `node.crt` is used as the server certificate when a node receives incoming connections from clients and other nodes. All IP addresses and DNS names for the node must be listed in the `Subject Alternative Name` field. -- `client.node.crt` and `client.node.key`: `client.node.crt` is used as the client certificate when making connections to other nodes. `client.node.crt` must have `CN=node`. - -### Node key and certificates - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`node.crt` | Server certificate used when a node receives incoming connections from clients and other nodes.

      All IP addresses and DNS names for the node must be listed in `Subject Alternative Name`.

      Must be signed by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Client certificate when making connections to other nodes.

      Must have `CN=node`.

Must be signed by `ca.crt`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the Admin UI, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. - -### Client key and certificates - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate issued by the public CA or your organizational CA. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by `ca.crt`. -`client.<username>.key` | Client key corresponding to `client.<username>.crt`. - -## Split CA certificates - -{{site.data.alerts.callout_danger}} -We do not recommend using split CA certificates unless your organizational security practices mandate it. -{{site.data.alerts.end}} - -If you need to use separate CAs to sign node certificates and client certificates, then you need two CAs and their respective certificates and keys: `ca.crt` and `ca-client.crt`. - -### Node key and certificates - -A node must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate to verify node certificates. -`ca-client.crt` | CA certificate to verify client certificates. -`node.crt` | Server certificate used when a node receives incoming connections from clients and other nodes.

      All IP addresses and DNS names for the node must be listed in `Subject Alternative Name`.

Must be signed by `ca.crt`. -`node.key` | Server key corresponding to `node.crt`. -`client.node.crt` | Client certificate when making connections to other nodes. This certificate must be signed by `ca-client.crt`.

Must have `CN=node`. -`client.node.key` | Client key corresponding to `client.node.crt`. - -Optionally, if you have a certificate issued by a public CA to securely access the Admin UI, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. - -### Client key and certificates - -A client must have the following files with file names as specified in the table: - -File name | File usage --------------|------------ -`ca.crt` | CA certificate. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).

Each `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`).

Must be signed by `ca-client.crt`. -`client.<username>.key` | Client key corresponding to `client.<username>.crt`. - -## See also - -- [Manual Deployment](manual-deployment.html): Learn about starting a multi-node secure cluster and accessing it from a client. -- [Start a Node](start-a-node.html): Learn more about the flags you pass when adding a node to a secure cluster. -- [Client Connection Parameters](connection-parameters.html) diff --git a/src/current/v19.1/create-security-certificates-openssl.md deleted file mode 100644 index 4df47700941..00000000000 --- a/src/current/v19.1/create-security-certificates-openssl.md +++ /dev/null @@ -1,342 +0,0 @@ ---- -title: Create Security Certificates -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true ---- - -To secure your CockroachDB cluster's inter-node and client-node communication, you need to provide a Certificate Authority (CA) certificate that has been used to sign keys and certificates (SSLs) for: - -- Nodes -- Clients -- Admin UI (optional) - -To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, use [`openssl` commands](https://wiki.openssl.org/index.php/), or use a [custom CA](create-security-certificates-custom-ca.html) (for example, a public CA or your organizational CA). - - - -## Subcommands - -Subcommand | Usage -----------|------ -[`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) | Create an RSA private key. -[`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) | Create CA certificate and CSRs (certificate signing requests). -[`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) | Create node and client certificates using the CSRs. - -## Configuration files - -To use the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) and [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) subcommands, you need the following configuration files: - -File name pattern | File usage --------------|------------ -`ca.cnf` | CA configuration file. -`node.cnf` | Server configuration file. -`client.cnf` | Client configuration file. - -## Certificate directory - -To create node and client certificates using the OpenSSL commands, you need access to a local copy of the CA certificate and key. We recommend creating all certificates (node, client, and CA certificates) and node and client keys in one place and then distributing them appropriately. Store the CA key somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster. - -## Required keys and certificates - -Use the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) and [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) subcommands to create all certificates and node and client keys in a single directory, with the files named as follows: - -### Node key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`node.crt` | Server certificate. -`node.key` | Key for server certificate. - -### Client key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`client.<username>.crt` | Client certificate for `<username>` (for example: `client.root.crt` for user `root`). -`client.<username>.key` | Key for the client certificate. 
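- -Before distributing these files, you can optionally confirm that each certificate chains back to the CA. This is a minimal sketch using the standard [`openssl verify`](https://www.openssl.org/docs/manmaster/man1/verify.html) subcommand; it assumes the `certs` directory layout used in the examples below, with `client.root.crt` standing in for whichever client certificates you create: - -{% include copy-clipboard.html %} -~~~ shell -# Check that the node and client certificates are signed by ca.crt: -$ openssl verify -CAfile certs/ca.crt certs/node.crt certs/client.root.crt -~~~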
- -Note the following: - -- The CA key should not be uploaded to the nodes and clients, so it should be created in a separate directory. - -- Keys (files ending in `.key`) must not have group or world permissions (maximum permissions are 0700, or `rwx------`). This check can be disabled by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK=true`. - -## Examples - -### Create the CA key and certificate pair - -1. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: Create your CA certificate and all node and client certificates and keys in this directory and then upload the relevant files to the nodes and clients. - - `my-safe-directory`: Create your CA key in this directory and then reference the key when generating node and client certificates. After that, keep the key safe and secret; do not upload it to your nodes or clients. - -2. Create the `ca.cnf` file and copy the following configuration into it. - - You can set the CA certificate expiration period using the `default_days` parameter. We recommend using the CockroachDB default value of the CA certificate expiration period, which is 3660 days. - - {% include copy-clipboard.html %} - ~~~ - # OpenSSL CA configuration file - [ ca ] - default_ca = CA_default - - [ CA_default ] - default_days = 3660 - database = index.txt - serial = serial.txt - default_md = sha256 - copy_extensions = copy - unique_subject = no - - # Used to create the CA certificate. - [ req ] - prompt=no - distinguished_name = distinguished_name - x509_extensions = extensions - - [ distinguished_name ] - organizationName = Cockroach - commonName = Cockroach CA - - [ extensions ] - keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,keyCertSign - basicConstraints = critical,CA:true,pathlen:1 - - # Common policy for nodes and users. - [ signing_policy ] - organizationName = supplied - commonName = supplied - - # Used to sign node certificates. - [ signing_node_req ] - keyUsage = critical,digitalSignature,keyEncipherment - extendedKeyUsage = serverAuth,clientAuth - - # Used to sign client certificates. - [ signing_client_req ] - keyUsage = critical,digitalSignature,keyEncipherment - extendedKeyUsage = clientAuth - ~~~ - - {{site.data.alerts.callout_danger}}The keyUsage and extendedkeyUsage parameters are vital for CockroachDB functions. You can modify or omit other parameters as per your preferred OpenSSL configuration and you can add additional usages, but do not omit keyUsage and extendedkeyUsage parameters or remove the listed usages. {{site.data.alerts.end}} - -3. Create the CA key using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out my-safe-directory/ca.key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 my-safe-directory/ca.key - ~~~ - -4. Create the CA certificate using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl req \ - -new \ - -x509 \ - -config ca.cnf \ - -key my-safe-directory/ca.key \ - -out certs/ca.crt \ - -days 3660 \ - -batch - ~~~ - -5. Reset database and index files. 
- - {% include copy-clipboard.html %} - ~~~ shell - $ rm -f index.txt serial.txt - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ touch index.txt - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ echo '01' > serial.txt - ~~~ - -### Create the certificate and key pairs for nodes - -In the following steps, replace the placeholder text in the code with the actual username and node address. - -1. Create the `node.cnf` file for the first node and copy the following configuration into it: - - {% include copy-clipboard.html %} - ~~~ - # OpenSSL node configuration file - [ req ] - prompt=no - distinguished_name = distinguished_name - req_extensions = extensions - - [ distinguished_name ] - organizationName = Cockroach - # Required value for commonName, do not change. - commonName = node - - [ extensions ] - subjectAltName = DNS:<node-hostname>,DNS:<node-other-hostname>,IP:<node-IP-address> - ~~~ - - {{site.data.alerts.callout_danger}}The commonName and subjectAltName parameters are vital for CockroachDB functions. It is also required that commonName be set to node. You can modify or omit other parameters as per your preferred OpenSSL configuration, but do not omit the commonName and subjectAltName parameters. {{site.data.alerts.end}} - -2. Create the key for the first node using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out certs/node.key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 certs/node.key - ~~~ - -3. Create the CSR for the first node using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl req \ - -new \ - -config node.cnf \ - -key certs/node.key \ - -out node.csr \ - -batch - ~~~ - -4. Sign the node CSR to create the node certificate for the first node using the [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) command. - - You can set the node certificate expiration period using the `days` flag. We recommend using the CockroachDB default value of the node certificate expiration period, which is 1830 days. - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl ca \ - -config ca.cnf \ - -keyfile my-safe-directory/ca.key \ - -cert certs/ca.crt \ - -policy signing_policy \ - -extensions signing_node_req \ - -out certs/node.crt \ - -outdir certs/ \ - -in node.csr \ - -days 1830 \ - -batch - ~~~ - -5. Upload certificates to the first node: - - {% include copy-clipboard.html %} - ~~~ shell - $ ssh <username>@<node1 address> "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - <username>@<node1 address>:~/certs - ~~~ - -6. Delete the local copy of the first node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key.{{site.data.alerts.end}} - -7. Repeat steps 1 - 6 for each additional node. - -8. Remove the `.pem` files in the `certs` directory. These files are unnecessary duplicates of the `.crt` files that CockroachDB requires. - -### Create the certificate and key pair for a client - -In the following steps, replace the placeholder text in the code with the actual username. - -1. 
Create the `client.cnf` file for the first client and copy the following configuration into it: - - {% include copy-clipboard.html %} - ~~~ - # OpenSSL client configuration file - [ req ] - prompt=no - distinguished_name = distinguished_name - - [ distinguished_name ] - organizationName = Cockroach - commonName = <username> - ~~~ - - {{site.data.alerts.callout_info}}The commonName parameter is vital for CockroachDB functions. You can modify or omit other parameters as per your preferred OpenSSL configuration, but do not omit the commonName parameter. {{site.data.alerts.end}} - -2. Create the key for the first client using the [`openssl genrsa`](https://www.openssl.org/docs/manmaster/man1/genrsa.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl genrsa -out certs/client.<username>.key 2048 - ~~~ - {% include copy-clipboard.html %} - ~~~ shell - $ chmod 400 certs/client.<username>.key - ~~~ - -3. Create the CSR for the first client using the [`openssl req`](https://www.openssl.org/docs/manmaster/man1/req.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl req \ - -new \ - -config client.cnf \ - -key certs/client.<username>.key \ - -out client.<username>.csr \ - -batch - ~~~ - -4. Sign the client CSR to create the client certificate for the first client using the [`openssl ca`](https://www.openssl.org/docs/manmaster/man1/ca.html) command. You can set the client certificate expiration period using the `days` flag. We recommend using the CockroachDB default value of the client certificate expiration period, which is 1830 days. - - {% include copy-clipboard.html %} - ~~~ shell - $ openssl ca \ - -config ca.cnf \ - -keyfile my-safe-directory/ca.key \ - -cert certs/ca.crt \ - -policy signing_policy \ - -extensions signing_client_req \ - -out certs/client.<username>.crt \ - -outdir certs/ \ - -in client.<username>.csr \ - -days 1830 \ - -batch - ~~~ - -5. Upload certificates to the first client using your preferred method. - -6. Repeat steps 1 - 5 for each additional client. - -7. Remove the `.pem` files in the `certs` directory. These files are unnecessary duplicates of the `.crt` files that CockroachDB requires. - -## See also - -- [Manual Deployment](manual-deployment.html): Learn about starting a multi-node secure cluster and accessing it from a client. -- [Start a Node](start-a-node.html): Learn more about the flags you pass when adding a node to a secure cluster. -- [Client Connection Parameters](connection-parameters.html) diff --git a/src/current/v19.1/create-security-certificates.md deleted file mode 100644 index 109b3aae134..00000000000 --- a/src/current/v19.1/create-security-certificates.md +++ /dev/null @@ -1,327 +0,0 @@ ---- -title: Create Security Certificates -summary: A secure CockroachDB cluster uses TLS for encrypted inter-node and client-node communication. -toc: true ---- - -To secure your CockroachDB cluster's inter-node and client-node communication, you need to provide a Certificate Authority (CA) certificate that has been used to sign keys and certificates (SSLs) for: - -- Nodes -- Clients -- Admin UI (optional) - -To create these certificates and keys, use the `cockroach cert` [commands](cockroach-commands.html) with the appropriate subcommands and flags, use [`openssl` commands](https://wiki.openssl.org/index.php/), or use a [custom CA](create-security-certificates-custom-ca.html) (for example, a public CA or your organizational CA). - 
      - - - -
- -{{site.data.alerts.callout_success}}For details about when and how to change security certificates without restarting nodes, see Rotate Security Certificates.{{site.data.alerts.end}} - -## How security certificates work - -1. Using the `cockroach cert` command, you create a CA certificate and key and then node and client certificates that are signed by the CA certificate. Since you need access to a copy of the CA certificate and key to create node and client certs, it's best to create everything in one place. - -2. You then upload the appropriate node certificate and key and the CA certificate to each node, and you upload the appropriate client certificate and key and the CA certificate to each client. - -3. When nodes establish contact with each other, and when clients establish contact with nodes, they use the CA certificate to verify each other's identity. - -## Subcommands - -Subcommand | Usage -----------|------ -`create-ca` | Create the self-signed certificate authority (CA), which you'll use to create and authenticate certificates for your entire cluster. -`create-node` | Create a certificate and key for a specific node in the cluster. You specify all addresses at which the node can be reached and pass appropriate flags. -`create-client` | Create a certificate and key for a [specific user](create-and-manage-users.html) accessing the cluster from a client. You specify the username of the user who will use the certificate and pass appropriate flags. -`list` | List certificates and keys found in the certificate directory. - -## Certificate directory - -When using `cockroach cert` to create node and client certificates, you will need access to a local copy of the CA certificate and key. It is therefore recommended to create all certificates and keys in one place and then distribute node and client certificates and keys appropriately. For the CA key, be sure to store it somewhere safe and keep a backup; if you lose it, you will not be able to add new nodes or clients to your cluster. For a walkthrough of this process, see [Manual Deployment](manual-deployment.html). - -## Required keys and certificates - -The `create-*` subcommands generate the CA certificate and all node and client certificates and keys in a single directory specified by the `--certs-dir` flag, with the files named as follows: - -### Node key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`node.crt` | Server certificate.

`node.crt` must be signed by `ca.crt` and must have `CN=node`, with the node's IP addresses and DNS names listed in the `Subject Alternative Name` field. CockroachDB also supports [wildcard notation in DNS names](https://en.wikipedia.org/wiki/Wildcard_certificate). -`node.key` | Key for server certificate. - -### Client key and certificates - -File name pattern | File usage --------------|------------ -`ca.crt` | CA certificate. -`client.<username>.crt` | Client certificate for `<username>` (e.g., `client.root.crt` for user `root`).

Must be signed by `ca.crt`. Also, `client.<username>.crt` must have `CN=<username>` (for example, `CN=marc` for `client.marc.crt`). -`client.<username>.key` | Key for the client certificate. - -Optionally, if you have a certificate issued by a public CA to securely access the Admin UI, you need to place the certificate and key (`ui.crt` and `ui.key` respectively) in the directory specified by the `--certs-dir` flag. For more information, refer to [Use a UI certificate and key to access the Admin UI](create-security-certificates-custom-ca.html#accessing-the-admin-ui-for-a-secure-cluster). - -Note the following: - -- By default, the `node.crt` is multi-functional, meaning the same certificate is used for both incoming connections (from SQL and Admin UI clients, and from other CockroachDB nodes) and for outgoing connections to other CockroachDB nodes. To make this possible, the `node.crt` created using the `cockroach cert` command has `CN=node` and lists the node's IP addresses and DNS names in the `Subject Alternative Name` field. - -- The CA key is never loaded automatically by `cockroach` commands, so it should be created in a separate directory, identified by the `--ca-key` flag. - -- Keys (files ending in `.key`) must not have group or world permissions (maximum permissions are 0700, or `rwx------`). This check can be disabled by setting the environment variable `COCKROACH_SKIP_KEY_PERMISSION_CHECK=true`. - -## Synopsis - -Create the CA certificate and key: - -~~~ shell -$ cockroach cert create-ca \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -Create a node certificate and key: - -~~~ shell -$ cockroach cert create-node \ - [node-hostname] \ - [node-other-hostname] \ - [node-yet-another-hostname] \ - [hostname-in-wildcard-notation] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -Create a client certificate and key: - -~~~ shell -$ cockroach cert create-client \ - [username] \ - --certs-dir=[path-to-certs-directory] \ - --ca-key=[path-to-ca-key] -~~~ - -List certificates and keys: - -~~~ shell -$ cockroach cert list \ - --certs-dir=[path-to-certs-directory] -~~~ - -View help: - -~~~ shell -$ cockroach cert --help -~~~ -~~~ shell -$ cockroach cert <subcommand> --help -~~~ - -## Flags - -The `cert` command and subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](#certificate-directory) containing all certificates and keys needed by `cockroach` commands.

      This flag is used by all subcommands.

      **Default:** `${HOME}/.cockroach-certs/` -`--ca-key` | The path to the private key protecting the CA certificate.

      This flag is required for all `create-*` subcommands. When used with `create-ca` in particular, it defines where to create the CA key; the specified directory must exist.

      **Env Variable:** `COCKROACH_CA_KEY` -`--allow-ca-key-reuse` | When running the `create-ca` subcommand, pass this flag to re-use an existing CA key identified by `--ca-key`. Otherwise, a new CA key will be generated.

This flag is used only by the `create-ca` subcommand. Requiring it helps avoid accidentally re-using an existing CA key.

Requiring this flag helps avoid accidentally overwriting sensitive certificates and keys.

      Certificates are valid from the time they are created through the duration specified in `--lifetime`.

      **Default:** `87840h0m0s` (10 years) -`--key-size` | The size of the CA, node, or client key, in bits.

**Default:** `2048` - `--also-generate-pkcs8-key` | New in v19.1: Also create a key in [PKCS#8 format](https://tools.ietf.org/html/rfc5208), which is the standard key encoding format used by Java. For example usage, see [Build a Java App with CockroachDB](build-a-java-app-with-cockroachdb.html). - -### Logging - -By default, the `cert` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Create the CA certificate and key pair - -1. Create two directories: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir my-safe-directory - ~~~ - - `certs`: You'll generate your CA certificate and all node and client certificates and keys in this directory and then upload some of the files to your nodes. - - `my-safe-directory`: You'll generate your CA key in this directory and then reference the key when generating node and client certificates. After that, you'll keep the key safe and secret; you will not upload it to your nodes. - -2. Generate the CA certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 8 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - ~~~ - -### Create the certificate and key pairs for nodes - -1. Generate the certificate and key for the first node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node1.example.com \ - node1.another-example.com \ - *.dev.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:16 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:16 node.key - ~~~ - -2. Upload certificates to the first node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh <username>@<node1 address> "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - <username>@<node1 address>:~/certs - ~~~ - -3. Delete the local copy of the first node's certificate and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ rm certs/node.crt certs/node.key - ~~~ - - {{site.data.alerts.callout_info}}This is necessary because the certificates and keys for additional nodes will also be named node.crt and node.key. As an alternative to deleting these files, you can run the next cockroach cert create-node commands with the --overwrite flag.{{site.data.alerts.end}} - -4. Create the certificate and key for the second node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - node2.example.com \ - node2.another-example.com \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ ls -l certs - ~~~ - - ~~~ - total 24 - -rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt - -rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt - -rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key - ~~~ - -5. 
Upload certificates to the second node: - - {% include copy-clipboard.html %} - ~~~ shell - # Create the certs directory: - $ ssh <username>@<node2 address> "mkdir certs" - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - # Upload the CA certificate and node certificate and key: - $ scp certs/ca.crt \ - certs/node.crt \ - certs/node.key \ - <username>@<node2 address>:~/certs - ~~~ - -6. Repeat steps 3 - 5 for each additional node. - -### Create the certificate and key pair for a client - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert create-client \ -maxroach \ ---certs-dir=certs \ ---ca-key=my-safe-directory/ca.key -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ ls -l certs -~~~ - -~~~ -total 40 --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:12 ca.crt --rw-r--r-- 1 maxroach maxroach 1.1K Jul 10 14:13 client.maxroach.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:13 client.maxroach.key --rw-r--r-- 1 maxroach maxroach 1.2K Jul 10 14:17 node.crt --rw------- 1 maxroach maxroach 1.6K Jul 10 14:17 node.key -~~~ - -### List certificates and keys - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach cert list \ ---certs-dir=certs -~~~ - -~~~ -Certificate directory: certs -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Usage | Certificate File | Key File | Expires | Notes | Error | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -| Certificate Authority | ca.crt | | 2027/07/18 | num certs: 1 | | -| Node | node.crt | node.key | 2022/07/14 | addresses: node2.example.com,node2.another-example.com | | -| Client | client.maxroach.crt | client.maxroach.key | 2022/07/14 | user: maxroach | | -+-----------------------+---------------------+---------------------+------------+--------------------------------------------------------+-------+ -(3 rows) -~~~ - -## See also - -- [Security overview](security-overview.html) -- [Authentication](authentication.html) -- [Client Connection Parameters](connection-parameters.html) -- [Rotate Security Certificates](rotate-certificates.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](secure-a-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/create-sequence.md deleted file mode 100644 index d693ebe854a..00000000000 --- a/src/current/v19.1/create-sequence.md +++ /dev/null @@ -1,202 +0,0 @@ ---- -title: CREATE SEQUENCE -summary: -toc: true ---- - -The `CREATE SEQUENCE` [statement](sql-statements.html) creates a new sequence in a database. Use a sequence to auto-increment integers in a table. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Considerations - -- Using a sequence is slower than [auto-generating unique IDs with the `gen_random_uuid()`, `uuid_v4()`, or `unique_rowid()` built-in functions](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). Incrementing a sequence requires a write to persistent storage, whereas auto-generating a unique ID does not. Therefore, use auto-generated unique IDs unless an incremental sequence is preferred or required. -- A column that uses a sequence can have a gap in the sequence values if a transaction advances the sequence and is then rolled back. 
Sequence updates are committed immediately and aren't rolled back along with their containing transaction. This is done to avoid blocking concurrent transactions that use the same sequence. - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_sequence.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------ -`seq_name` | The name of the sequence to be created, which must be unique within its database and follow the [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.seq_name`. -`INCREMENT` | The value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence.

      **Default:** `1` -`MINVALUE` | The minimum value of the sequence. Default values apply if not specified or if you enter `NO MINVALUE`.

      **Default for ascending:** `1`

      **Default for descending:** `MININT` -`MAXVALUE` | The maximum value of the sequence. Default values apply if not specified or if you enter `NO MAXVALUE`.

      **Default for ascending:** `MAXINT`

      **Default for descending:** `-1` -`START` | The first value of the sequence.

      **Default for ascending:** `1`

      **Default for descending:** `-1` -`NO CYCLE` | Currently, all sequences are set to `NO CYCLE` and the sequence will not wrap. - - - -## Sequence functions - -We support the following [SQL sequence functions](functions-and-operators.html): - -- `nextval('seq_name')` - {{site.data.alerts.callout_info}}If nextval() is used in conjunction with RETURNING NOTHING statements, the sequence increments can be reordered. For more information, see Parallel Statement Execution.{{site.data.alerts.end}} -- `currval('seq_name')` -- `lastval()` -- `setval('seq_name', value, is_called)` - -## Examples - -### List all sequences - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.sequences; -~~~ -~~~ -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| sequence_catalog | sequence_schema | sequence_name | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value | maximum_value | increment | cycle_option | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -| def | db_2 | test_4 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -| def | test_db | customer_seq | INT | 64 | 2 | 0 | 101 | 1 | 9223372036854775807 | 2 | NO | -| def | test_db | desc_customer_list | INT | 64 | 2 | 0 | 1000 | -9223372036854775808 | -1 | -2 | NO | -| def | test_db | test_sequence3 | INT | 64 | 2 | 0 | 1 | 1 | 9223372036854775807 | 1 | NO | -+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+ -(4 rows) -~~~ - -### Create a sequence with default settings - -In this example, we create a sequence with default settings. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE customer_seq; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE customer_seq; -~~~ - -~~~ -+--------------+--------------------------------------------------------------------------+ -| table_name | create_statement | -+--------------+--------------------------------------------------------------------------+ -| customer_seq | CREATE SEQUENCE customer_seq MINVALUE 1 MAXVALUE 9223372036854775807 | -| | INCREMENT 1 START 1 | -+--------------+--------------------------------------------------------------------------+ -(1 row) -~~~ - -### Create a sequence with user-defined settings - -In this example, we create a sequence that starts at -1 and descends in increments of 2. 
- -{% include copy-clipboard.html %} -~~~ sql -> CREATE SEQUENCE desc_customer_list START -1 INCREMENT -2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE desc_customer_list; -~~~ - -~~~ -+--------------------+--------------------------------------------------------------------------+ -| table_name | create_statement | -+--------------------+--------------------------------------------------------------------------+ -| desc_customer_list | CREATE SEQUENCE desc_customer_list MINVALUE -9223372036854775808 | -| | MAXVALUE -1 INCREMENT -2 START -1 | -+--------------------+--------------------------------------------------------------------------+ -(1 row) -~~~ - -### Create a table with a sequence - -In this example, we create a table using the sequence we created in the first example as the table's primary key. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customer_list ( - id INT PRIMARY KEY DEFAULT nextval('customer_seq'), - customer string, - address string - ); -~~~ - -Insert a few records to see the sequence. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customer_list (customer, address) - VALUES - ('Lauren', '123 Main Street'), - ('Jesse', '456 Broad Ave'), - ('Amruta', '9876 Green Parkway'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_list; -~~~ - -~~~ -+----+----------+--------------------+ -| id | customer | address | -+----+----------+--------------------+ -| 1 | Lauren | 123 Main Street | -| 2 | Jesse | 456 Broad Ave | -| 3 | Amruta | 9876 Green Parkway | -+----+----------+--------------------+ -~~~ - -### View the current value of a sequence - -To view the current value without incrementing the sequence, use: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customer_seq; -~~~ - -~~~ -+------------+---------+-----------+ -| last_value | log_cnt | is_called | -+------------+---------+-----------+ -| 3 | 0 | true | -+------------+---------+-----------+ -~~~ - -{{site.data.alerts.callout_info}}The log_cnt and is_called columns are returned only for PostgreSQL compatibility; they are not stored in the database.{{site.data.alerts.end}} - -If a value has been obtained from the sequence in the current session, you can also use the `currval('seq_name')` function to get that most recently obtained value: - -~~~ sql -> SELECT currval('customer_seq'); -~~~ - -~~~ -+---------+ -| currval | -+---------+ -| 3 | -+---------+ -~~~ - -## See also -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`RENAME SEQUENCE`](rename-sequence.html) -- [`DROP SEQUENCE`](drop-sequence.html) -- [`SHOW CREATE`](show-create.html) -- [`SHOW SEQUENCES`](show-sequences.html) -- [Functions and Operators](functions-and-operators.html) -- [Other SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/create-statistics.md b/src/current/v19.1/create-statistics.md deleted file mode 100644 index 16b8bf05b97..00000000000 --- a/src/current/v19.1/create-statistics.md +++ /dev/null @@ -1,127 +0,0 @@ ---- -title: CREATE STATISTICS -summary: Use the CREATE STATISTICS statement to generate table statistics for the cost-based optimizer to use. -toc: true ---- -Use the `CREATE STATISTICS` [statement](sql-statements.html) to generate table statistics for the [cost-based optimizer](cost-based-optimizer.html) to use. - -Once you [create a table](create-table.html) and load data into it (e.g., [`INSERT`](insert.html), [`IMPORT`](import.html)), table statistics can be generated. 
Table statistics help the cost-based optimizer determine the cardinality of the rows used in each query, which helps it predict costs more accurately. - -`CREATE STATISTICS` automatically figures out which columns to get statistics on — specifically, it chooses: - -- Columns that are part of the primary key or an index (in other words, all indexed columns). -- Up to 100 non-indexed columns (unless you specify which columns to create statistics on, as shown in [this example](#create-statistics-on-a-specific-column)). - -{{site.data.alerts.callout_info}} -New in v19.1: [Automatic statistics is enabled by default](cost-based-optimizer.html#table-statistics); most users do not need to issue `CREATE STATISTICS` statements directly. -{{site.data.alerts.end}} - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/create_stats.html %} -
      - -## Required Privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database. - -## Parameters - -| Parameter | Description | -|-----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `statistics_name` | The name of the set of statistics you are creating. | -| `opt_stats_columns` | The name of the column(s) you want to create statistics for. | -| `create_stats_target` | The name of the table you want to create statistics for. | -| `opt_as_of_clause` | Used to create historical stats using the [`AS OF SYSTEM TIME`](as-of-system-time.html) clause. For instructions, see [Create statistics as of a given time](#create-statistics-as-of-a-given-time). | - -## Examples - -### Create statistics on a specific column - -{% include copy-clipboard.html %} -~~~ sql -> CREATE STATISTICS students ON id FROM students_by_list; -~~~ - -{{site.data.alerts.callout_info}} -Multi-column statistics are not supported yet. -{{site.data.alerts.end}} - -### Create statistics on a default set of columns - -The `CREATE STATISTICS` statement shown below automatically figures out which columns to get statistics on — specifically, it chooses: - -- Columns that are part of the primary key or an index (in other words, all indexed columns). -- Up to 100 non-indexed columns. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE STATISTICS students FROM students_by_list; -~~~ - -### Create statistics as of a given time - -To create statistics as of a given time (in this example, 1 minute ago to avoid interfering with the production workload), run a statement like the following: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE STATISTICS employee_stats FROM employees AS OF SYSTEM TIME '-1m'; -~~~ - -For more information about how the `AS OF SYSTEM TIME` clause works, including supported time formats, see [`AS OF SYSTEM TIME`](as-of-system-time.html). - -### Delete statistics - -{% include {{ page.version.version }}/misc/delete-statistics.md %} - -### View statistics jobs - -Every time the `CREATE STATISTICS` statement is executed, it kicks off a background job. This is true for queries issued by your application as well as queries issued by the [automatic stats feature](cost-based-optimizer.html#table-statistics). - -To view statistics jobs, there are two options: - -1. 
Use [`SHOW JOBS`](show-jobs.html) to see all statistics jobs that were created by user queries (i.e., someone entering `CREATE STATISTICS` at the SQL prompt or via application code): - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM [SHOW JOBS] WHERE job_type LIKE '%CREATE STATS%'; - ~~~ - - ~~~ - job_id | job_type | description | statement | user_name | status | running_status | created | started | finished | modified | fraction_completed | error | coordinator_id - --------------------+--------------+------------------------------------------------------------------+-----------+-----------+-----------+----------------+----------------------------+----------------------------+----------------------------+----------------------------+--------------------+-------+---------------- - 441281249412743169 | CREATE STATS | CREATE STATISTICS salary_stats FROM employees.public.salaries | | root | succeeded | | 2019-04-08 15:52:30.040531 | 2019-04-08 15:52:30.046646 | 2019-04-08 15:52:32.757519 | 2019-04-08 15:52:32.757519 | 1 | | 1 - 441281163978637313 | CREATE STATS | CREATE STATISTICS employee_stats FROM employees.public.employees | | root | succeeded | | 2019-04-08 15:52:03.968099 | 2019-04-08 15:52:03.972557 | 2019-04-08 15:52:05.168809 | 2019-04-08 15:52:05.168809 | 1 | | 1 - (2 rows) - ~~~ - -2. Use `SHOW AUTOMATIC JOBS` to see statistics jobs that were created by the [automatic statistics feature](cost-based-optimizer.html#table-statistics): - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT * FROM [SHOW AUTOMATIC JOBS] WHERE job_type LIKE '%CREATE STATS%'; - ~~~ - - ~~~ - job_id | job_type | description | statement | user_name | status | running_status | created | started | finished | modified | fraction_completed | error | coordinator_id - --------------------+-------------------+------------------------------------------------------------+-------------------------------------------------------------------------------------------+-----------+-----------+----------------+----------------------------+----------------------------+----------------------------+----------------------------+--------------------+-------+---------------- - 441280366254850049 | AUTO CREATE STATS | Table statistics refresh for employees.public.departments | CREATE STATISTICS __auto__ FROM [55] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:48:00.522119 | 2019-04-08 15:48:00.52663 | 2019-04-08 15:48:00.541608 | 2019-04-08 15:48:00.541608 | 1 | | 1 - 441280364809289729 | AUTO CREATE STATS | Table statistics refresh for employees.public.titles | CREATE STATISTICS __auto__ FROM [60] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:48:00.080971 | 2019-04-08 15:48:00.083117 | 2019-04-08 15:48:00.515766 | 2019-04-08 15:48:00.515767 | 1 | | 1 - 441280356286201857 | AUTO CREATE STATS | Table statistics refresh for employees.public.salaries | CREATE STATISTICS __auto__ FROM [59] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:47:57.479929 | 2019-04-08 15:47:57.482235 | 2019-04-08 15:48:00.075025 | 2019-04-08 15:48:00.075025 | 1 | | 1 - 441280352161693697 | AUTO CREATE STATS | Table statistics refresh for employees.public.employees | CREATE STATISTICS __auto__ FROM [58] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:47:56.221223 | 2019-04-08 15:47:56.223664 | 2019-04-08 15:47:57.474159 | 2019-04-08 15:47:57.474159 | 1 | | 1 - 
441280352070434817 | AUTO CREATE STATS | Table statistics refresh for employees.public.dept_manager | CREATE STATISTICS __auto__ FROM [57] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:47:56.193375 | 2019-04-08 15:47:56.195813 | 2019-04-08 15:47:56.215114 | 2019-04-08 15:47:56.215114 | 1 | | 1 - 441280350791401473 | AUTO CREATE STATS | Table statistics refresh for employees.public.dept_emp | CREATE STATISTICS __auto__ FROM [56] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:47:55.803052 | 2019-04-08 15:47:55.806071 | 2019-04-08 15:47:56.187153 | 2019-04-08 15:47:56.187154 | 1 | | 1 - 441279760786096129 | AUTO CREATE STATS | Table statistics refresh for test.public.kv | CREATE STATISTICS __auto__ FROM [53] WITH OPTIONS THROTTLING 0.9 AS OF SYSTEM TIME '-30s' | root | succeeded | | 2019-04-08 15:44:55.747725 | 2019-04-08 15:44:55.754582 | 2019-04-08 15:44:55.775664 | 2019-04-08 15:44:55.775665 | 1 | | 1 - (7 rows) - ~~~ - -## See Also - -- [Cost-Based Optimizer](cost-based-optimizer.html) -- [`SHOW STATISTICS`](show-statistics.html) -- [`CREATE TABLE`](create-table.html) -- [`INSERT`](insert.html) -- [`IMPORT`](import.html) -- [`SHOW JOBS`](show-jobs.html) -- [SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/create-table-as.md b/src/current/v19.1/create-table-as.md deleted file mode 100644 index 103789d39d1..00000000000 --- a/src/current/v19.1/create-table-as.md +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: CREATE TABLE AS -summary: The CREATE TABLE AS statement persists the result of a query into the database for later reuse. -toc: true ---- - -The `CREATE TABLE ... AS` statement creates a new table from a [selection query](selection-queries.html). - - -## Intended use - -Tables created with `CREATE TABLE ... AS` are intended to persist the -result of a query for later reuse. - -This can be more efficient than a [view](create-view.html) when the -following two conditions are met: - -- The result of the query is used as-is multiple times. -- The copy needs not be kept up-to-date with the original table over time. - -When the results of a query are reused multiple times within a larger -query, a view is advisable instead. The query optimizer can "peek" -into the view and optimize the surrounding query using the primary key -and indices of the tables mentioned in the view query. - -A view is also advisable when the results must be up-to-date; a view -always retrieves the current data from the tables that the view query -mentions. - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_table_as.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------- - `IF NOT EXISTS` | Create a new table only if a table of the same name does not already exist in the database; if one does exist, do not return an error.

Note that `IF NOT EXISTS` checks the table name only; it does not check if an existing table has the same columns, indexes, constraints, etc., as the new table. - `table_name` | The name of the table to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`.

The [`UPSERT`](upsert.html) and [`INSERT ON CONFLICT`](insert.html) statements use a temporary table called `excluded` to handle uniqueness conflicts during execution. It's therefore not recommended to use the name `excluded` for any of your tables. - `name` | The name of the column you want to use instead of the name of the column from `select_stmt`. - `select_stmt` | A [selection query](selection-queries.html) to provide the data. - -## Limitations - -The [primary key](primary-key.html) of tables created with `CREATE -TABLE ... AS` is not derived from the query results. As with other -tables, it is not possible to add or change the primary key after -creation. Moreover, these tables are not -[interleaved](interleave-in-parent.html) with other tables. The -default rules for [column families](column-families.html) apply. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logoff ( - user_id INT PRIMARY KEY, - user_email STRING UNIQUE, - logoff_date DATE NOT NULL -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logoff_copy AS TABLE logoff; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE logoff_copy; -~~~ -~~~ -+-------------+-----------------------------------------------------------------+ -| Table | CreateTable | -+-------------+-----------------------------------------------------------------+ -| logoff_copy | CREATE TABLE logoff_copy ( | -| | user_id INT NULL, | -| | user_email STRING NULL, | -| | logoff_date DATE NULL, | -| | FAMILY "primary" (user_id, user_email, logoff_date, rowid) | -| | ) | -+-------------+-----------------------------------------------------------------+ -(1 row) -~~~ - -The example illustrates that the primary key, unique, and "not null" -constraints are not propagated to the copy. - -It is, however, possible to -[create a secondary index](create-index.html) after `CREATE TABLE -... AS`. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX logoff_copy_id_idx ON logoff_copy(user_id); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE logoff_copy; -~~~ -~~~ -+-------------+-----------------------------------------------------------------+ -| Table | CreateTable | -+-------------+-----------------------------------------------------------------+ -| logoff_copy | CREATE TABLE logoff_copy ( | -| | user_id INT NULL, | -| | user_email STRING NULL, | -| | logoff_date DATE NULL, | -| | INDEX logoff_copy_id_idx (user_id ASC), | -| | FAMILY "primary" (user_id, user_email, logoff_date, rowid) | -| | ) | -+-------------+-----------------------------------------------------------------+ -(1 row) -~~~ - -For maximum data storage optimization, consider using -[`CREATE TABLE`](create-table.html) followed by -[`INSERT INTO ...`](insert.html) separately to populate the table -with the query results. 
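- -For instance, here is a minimal sketch of that approach, reusing the `logoff` table from above (`logoff_explicit` is a hypothetical table name); because the schema is defined explicitly, the primary key and other constraints are preserved, unlike with `CREATE TABLE ... AS`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logoff_explicit ( - user_id INT PRIMARY KEY, - user_email STRING UNIQUE, - logoff_date DATE NOT NULL -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO logoff_explicit SELECT user_id, user_email, logoff_date FROM logoff; -~~~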
- -## Examples - -### Create a table from a `SELECT` query - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers WHERE state = 'NY'; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_ny AS SELECT * FROM customers WHERE state = 'NY'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_ny; -~~~ -~~~ -+----+---------+-------+ -| id | name | state | -+----+---------+-------+ -| 6 | Dorotea | NY | -| 15 | Thales | NY | -+----+---------+-------+ -~~~ - -### Change column names - -This statement creates a copy of an existing table but with changed column names. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_ny (id, first_name) AS SELECT id, name FROM customers WHERE state = 'NY'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_ny; -~~~ -~~~ -+----+------------+ -| id | first_name | -+----+------------+ -| 6 | Dorotea | -| 15 | Thales | -+----+------------+ -~~~ - -### Create a table from a `VALUES` clause - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE tech_states AS VALUES ('CA'), ('NY'), ('WA'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM tech_states; -~~~ -~~~ -+---------+ -| column1 | -+---------+ -| CA | -| NY | -| WA | -+---------+ -(3 rows) -~~~ - - -### Create a copy of an existing table - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_ny_copy AS TABLE customers_ny; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_ny_copy; -~~~ -~~~ -+----+------------+ -| id | first_name | -+----+------------+ -| 6 | Dorotea | -| 15 | Thales | -+----+------------+ -~~~ - -When a table copy is created this way, the copy is not associated with any primary key, secondary index, or constraint that was present on the original table. - -## See also - -- [Selection Queries](selection-queries.html) -- [Simple `SELECT` Clause](select-clause.html) -- [`CREATE TABLE`](create-table.html) -- [`CREATE VIEW`](create-view.html) -- [`INSERT`](insert.html) -- [`DROP TABLE`](drop-table.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/create-table.md deleted file mode 100644 index 5e7b667767d..00000000000 --- a/src/current/v19.1/create-table.md +++ /dev/null @@ -1,469 +0,0 @@ ---- -title: CREATE TABLE -summary: The CREATE TABLE statement creates a new table in a database. -toc: true ---- - -The `CREATE TABLE` [statement](sql-statements.html) creates a new table in a database. - -{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database. - -## Synopsis - 
      - - -

      - -
      -{% include {{ page.version.version }}/sql/diagrams/create_table.html %} -
      - -
      - -
      - {% include {{ page.version.version }}/sql/diagrams/create_table.html %} -
      - -**column_def ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/column_def.html %} -
      - -**col_qualification ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/col_qualification.html %} -
      - -**index_def ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/index_def.html %} -
      - -**family_def ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/family_def.html %} -
      - -**table_constraint ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/table_constraint.html %} -
      - -**opt_interleave ::=** - -
      - {% include {{ page.version.version }}/sql/diagrams/opt_interleave.html %} -
      - -
      - -{{site.data.alerts.callout_success}}To create a table from the results of a SELECT statement, use CREATE TABLE AS. -{{site.data.alerts.end}} - -## Parameters - -Parameter | Description -----------|------------ -`IF NOT EXISTS` | Create a new table only if a table of the same name does not already exist in the database; if one does exist, do not return an error.

Note that `IF NOT EXISTS` checks the table name only; it does not check if an existing table has the same columns, indexes, constraints, etc., as the new table.
-`table_name` | The name of the table to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`.

      The [`UPSERT`](upsert.html) and [`INSERT ON CONFLICT`](insert.html) statements use a temporary table called `excluded` to handle uniqueness conflicts during execution. It's therefore not recommended to use the name `excluded` for any of your tables. -`column_def` | A comma-separated list of column definitions. Each column requires a [name/identifier](keywords-and-identifiers.html#identifiers) and [data type](data-types.html); optionally, a [column-level constraint](constraints.html) or other column qualification (e.g., [computed columns](computed-columns.html)) can be specified. Column names must be unique within the table but can have the same name as indexes or constraints.

      Any `PRIMARY KEY`, `UNIQUE`, and `CHECK` [constraints](constraints.html) defined at the column level are moved to the table-level as part of the table's creation. Use the [`SHOW CREATE`](show-create.html) statement to view them at the table level. -`index_def` | An optional, comma-separated list of [index definitions](indexes.html). For each index, the column(s) to index must be specified; optionally, a name can be specified. Index names must be unique within the table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). See the [Create a Table with Secondary Indexes and Inverted Indexes](#create-a-table-with-secondary-and-inverted-indexes) example below.

      The [`CREATE INDEX`](create-index.html) statement can be used to create an index separate from table creation. -`family_def` | An optional, comma-separated list of [column family definitions](column-families.html). Column family names must be unique within the table but can have the same name as columns, constraints, or indexes.

      A column family is a group of columns that are stored as a single key-value pair in the underlying key-value store. CockroachDB automatically groups columns into families to ensure efficient storage and performance. However, there are cases when you may want to manually assign columns to families. For more details, see [Column Families](column-families.html). -`table_constraint` | An optional, comma-separated list of [table-level constraints](constraints.html). Constraint names must be unique within the table but can have the same name as columns, column families, or indexes. -`opt_interleave` | You can potentially optimize query performance by [interleaving tables](interleave-in-parent.html), which changes how CockroachDB stores your data. -`opt_partition_by` | An [enterprise-only](enterprise-licensing.html) option that lets you define table partitions at the row level. You can define table partitions by list or by range. See [Define Table Partitions](partitioning.html) for more information. - -## Table-level replication - -By default, tables are created in the default replication zone but can be placed into a specific replication zone. See [Create a Replication Zone for a Table](configure-replication-zones.html#create-a-replication-zone-for-a-table) for more information. - -## Row-level replication - -CockroachDB allows [enterprise users](enterprise-licensing.html) to [define table partitions](partitioning.html), thus providing row-level control of how and where the data is stored. See [Create a Replication Zone for a Table Partition](configure-replication-zones.html#create-a-replication-zone-for-a-table-or-secondary-index-partition) for more information. - -{{site.data.alerts.callout_info}}The primary key required for partitioning is different from the conventional primary key. To define the primary key for partitioning, prefix the unique identifier(s) in the primary key with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. See Partition using Primary Key for more details.{{site.data.alerts.end}} - -## Examples - -### Create a table (no primary key defined) - -In CockroachDB, every table requires a [primary key](primary-key.html). If one is not explicitly defined, a column called `rowid` of the type `INT` is added automatically as the primary key, with the `unique_rowid()` function used to ensure that new rows always default to unique `rowid` values. The primary key is automatically indexed. - -{{site.data.alerts.callout_info}}Strictly speaking, a primary key's unique index is not created; it is derived from the key(s) under which the data is stored, so it takes no additional space. 
However, it appears as a normal unique index when using commands like SHOW INDEX.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logon ( - user_id INT, - logon_date DATE -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM logon; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+---------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+---------+ -| user_id | INT | true | NULL | | {} | -| logon_date | DATE | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+---------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM logon; -~~~ - -~~~ -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -| logon | primary | false | 1 | rowid | ASC | false | false | -+------------+------------+------------+--------------+-------------+-----------+---------+----------+ -(1 row) -~~~ - -### Create a table (primary key defined) - -In this example, we create a table with three columns. One column is the [`PRIMARY KEY`](primary-key.html), another is given the [`UNIQUE` constraint](unique.html), and the third has no constraints. The `PRIMARY KEY` and column with the `UNIQUE` constraint are automatically indexed. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logoff ( - user_id INT PRIMARY KEY, - user_email STRING UNIQUE, - logoff_date DATE -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM logoff; -~~~ - -~~~ -+-------------+-----------+-------------+----------------+-----------------------+-------------------------------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+-----------+-------------+----------------+-----------------------+-------------------------------------+ -| user_id | INT | false | NULL | | {"primary","logoff_user_email_key"} | -| user_email | STRING | true | NULL | | {"logoff_user_email_key"} | -| logoff_date | DATE | true | NULL | | {} | -+-------------+-----------+-------------+----------------+-----------------------+-------------------------------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM logoff; -~~~ - -~~~ -+------------+-----------------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+-----------------------+------------+--------------+-------------+-----------+---------+----------+ -| logoff | primary | false | 1 | user_id | ASC | false | false | -| logoff | logoff_user_email_key | false | 1 | user_email | ASC | false | false | -| logoff | logoff_user_email_key | false | 2 | user_id | ASC | false | true | -+------------+-----------------------+------------+--------------+-------------+-----------+---------+----------+ -(3 rows) -~~~ - -### Create a table with secondary and inverted indexes - -In this example, we create two secondary indexes during table creation. 
Secondary indexes allow efficient access to data with keys other than the primary key.
-
-[Inverted indexes](inverted-indexes.html) allow efficient access to the schemaless data in a [`JSONB`](jsonb.html) column.
-
-This example also demonstrates a number of column-level and table-level [constraints](constraints.html).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE product_information (
-    product_id           INT PRIMARY KEY NOT NULL,
-    product_name         STRING(50) UNIQUE NOT NULL,
-    product_description  STRING(2000),
-    category_id          STRING(1) NOT NULL CHECK (category_id IN ('A','B','C')),
-    weight_class         INT,
-    warranty_period      INT CONSTRAINT valid_warranty CHECK (warranty_period BETWEEN 0 AND 24),
-    supplier_id          INT,
-    product_status       STRING(20),
-    list_price           DECIMAL(8,2),
-    min_price            DECIMAL(8,2),
-    catalog_url          STRING(50) UNIQUE,
-    date_added           DATE DEFAULT CURRENT_DATE(),
-    misc                 JSONB,
-    CONSTRAINT price_check CHECK (list_price >= min_price),
-    INDEX date_added_idx (date_added),
-    INDEX supp_id_prod_status_idx (supplier_id, product_status),
-    INVERTED INDEX details (misc)
-);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW INDEX FROM product_information;
-~~~
-
-~~~
-+---------------------+--------------------------------------+------------+--------------+----------------+-----------+---------+----------+
-|     table_name      |              index_name              | non_unique | seq_in_index |  column_name   | direction | storing | implicit |
-+---------------------+--------------------------------------+------------+--------------+----------------+-----------+---------+----------+
-| product_information | primary                              | false      |            1 | product_id     | ASC       | false   | false    |
-| product_information | product_information_product_name_key | false      |            1 | product_name   | ASC       | false   | false    |
-| product_information | product_information_product_name_key | false      |            2 | product_id     | ASC       | false   | true     |
-| product_information | product_information_catalog_url_key  | false      |            1 | catalog_url    | ASC       | false   | false    |
-| product_information | product_information_catalog_url_key  | false      |            2 | product_id     | ASC       | false   | true     |
-| product_information | date_added_idx                       | true       |            1 | date_added     | ASC       | false   | false    |
-| product_information | date_added_idx                       | true       |            2 | product_id     | ASC       | false   | true     |
-| product_information | supp_id_prod_status_idx              | true       |            1 | supplier_id    | ASC       | false   | false    |
-| product_information | supp_id_prod_status_idx              | true       |            2 | product_status | ASC       | false   | false    |
-| product_information | supp_id_prod_status_idx              | true       |            3 | product_id     | ASC       | false   | true     |
-| product_information | details                              | true       |            1 | misc           | ASC       | false   | false    |
-| product_information | details                              | true       |            2 | product_id     | ASC       | false   | true     |
-+---------------------+--------------------------------------+------------+--------------+----------------+-----------+---------+----------+
-(12 rows)
-~~~
-
-We also have other resources on indexes:
-
-- Create indexes for existing tables using [`CREATE INDEX`](create-index.html).
-- [Learn more about indexes](indexes.html).
-
-### Create a table with auto-generated unique row IDs
-
-{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %}
-
-### Create a table with a foreign key constraint
-
-[`FOREIGN KEY` constraints](foreign-key.html) guarantee a column uses only values that already exist in the column it references, which must be from another table.
This constraint enforces referential integrity between the two tables.
-
-There are a [number of rules](foreign-key.html#rules-for-creating-foreign-keys) that govern foreign keys, but the two most important are:
-
-- Foreign key columns must be [indexed](indexes.html). If no index is defined in the `CREATE TABLE` statement using `INDEX`, `PRIMARY KEY`, or `UNIQUE`, a secondary index is automatically created on the foreign key columns.
-
-- Referenced columns must contain only unique values. This means the `REFERENCES` clause must use exactly the same columns as a [`PRIMARY KEY`](primary-key.html) or [`UNIQUE`](unique.html) constraint.
-
-You can include a [foreign key action](foreign-key.html#foreign-key-actions) to specify what happens when a column referenced by a foreign key constraint is updated or deleted. The default actions are `ON UPDATE NO ACTION` and `ON DELETE NO ACTION`.
-
-In this example, we use `ON DELETE CASCADE` (i.e., when a row referenced by a foreign key constraint is deleted, all dependent rows are also deleted).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE customers (
-    id INT PRIMARY KEY,
-    name STRING
-  );
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE orders (
-    id INT PRIMARY KEY,
-    customer_id INT REFERENCES customers(id) ON DELETE CASCADE
-  );
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE orders;
-~~~
-
-~~~
-+------------+--------------------------------------------------------------------------+
-| table_name |                             create_statement                             |
-+------------+--------------------------------------------------------------------------+
-| orders     | CREATE TABLE orders (                                                    |
-|            |                                                                          |
-|            |     id INT NOT NULL,                                                     |
-|            |                                                                          |
-|            |     customer_id INT NULL,                                                |
-|            |                                                                          |
-|            |     CONSTRAINT "primary" PRIMARY KEY (id ASC),                           |
-|            |                                                                          |
-|            |     CONSTRAINT fk_customer_id_ref_customers FOREIGN KEY (customer_id)    |
-|            | REFERENCES customers (id) ON DELETE CASCADE,                             |
-|            |                                                                          |
-|            |     INDEX orders_auto_index_fk_customer_id_ref_customers (customer_id    |
-|            | ASC),                                                                    |
-|            |                                                                          |
-|            |     FAMILY "primary" (id, customer_id)                                   |
-|            |                                                                          |
-|            | )                                                                        |
-+------------+--------------------------------------------------------------------------+
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customers VALUES (1, 'Lauren');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO orders VALUES (1,1);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM customers WHERE id = 1;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders;
-~~~
-~~~
-+----+-------------+
-| id | customer_id |
-+----+-------------+
-+----+-------------+
-~~~
-
-### Create a table that mirrors key-value storage
-
-{% include {{ page.version.version }}/faq/simulate-key-value-store.html %}
-
-### Create a table from a `SELECT` statement
-
-You can use the [`CREATE TABLE AS`](create-table-as.html) statement to create a new table from the results of a `SELECT` statement, for example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customers WHERE state = 'NY';
-~~~
-
-~~~
-+----+---------+-------+
-| id |  name   | state |
-+----+---------+-------+
-|  6 | Dorotea | NY    |
-| 15 | Thales  | NY    |
-+----+---------+-------+
-~~~
-
-### Create a table with a computed column
-
-{% include {{ page.version.version }}/computed-columns/simple.md %}
-
-### Create a table with partitions
-
-{{site.data.alerts.callout_info}}
-The primary key required for partitioning is different from the conventional primary key. To define the primary key for partitioning, prefix the unique identifier(s) in the primary key with all columns you want to partition and subpartition the table on, in the order in which you want to nest your subpartitions. See [Partition using Primary Key](partitioning.html#partition-using-primary-key) for more details.
-{{site.data.alerts.end}}
-
-#### Create a table with partitions by list
-
-In this example, we create a table and [define partitions by list](partitioning.html#partition-by-list).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE students_by_list (
-    id INT DEFAULT unique_rowid(),
-    name STRING,
-    email STRING,
-    country STRING,
-    expected_graduation_date DATE,
-    PRIMARY KEY (country, id))
-    PARTITION BY LIST (country)
-      (PARTITION north_america VALUES IN ('CA','US'),
-      PARTITION australia VALUES IN ('AU','NZ'),
-      PARTITION DEFAULT VALUES IN (default));
-~~~
-
-#### Create a table with partitions by range
-
-In this example, we create a table and [define partitions by range](partitioning.html#partition-by-range).
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE students_by_range (
-   id INT DEFAULT unique_rowid(),
-   name STRING,
-   email STRING,
-   country STRING,
-   expected_graduation_date DATE,
-   PRIMARY KEY (expected_graduation_date, id))
-   PARTITION BY RANGE (expected_graduation_date)
-      (PARTITION graduated VALUES FROM (MINVALUE) TO ('2017-08-15'),
-      PARTITION current VALUES FROM ('2017-08-15') TO (MAXVALUE));
-~~~
-
-### Show the definition of a table
-
-To show the definition of a table, use the [`SHOW CREATE`](show-create.html) statement. The `create_statement` column in the response contains a string with embedded line breaks that, when echoed, produces formatted output.
- -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE logoff; -~~~ - -~~~ -+------------+----------------------------------------------------------+ -| table_name | create_statement | -+------------+----------------------------------------------------------+ -| logoff | CREATE TABLE logoff ( | -| | | -| | user_id INT NOT NULL, | -| | | -| | user_email STRING NULL, | -| | | -| | logoff_date DATE NULL, | -| | | -| | CONSTRAINT "primary" PRIMARY KEY (user_id ASC), | -| | | -| | UNIQUE INDEX logoff_user_email_key (user_email ASC), | -| | | -| | FAMILY "primary" (user_id, user_email, logoff_date) | -| | | -| | ) | -+------------+----------------------------------------------------------+ -(1 row) -~~~ - -## See also - -- [`INSERT`](insert.html) -- [`ALTER TABLE`](alter-table.html) -- [`DELETE`](delete.html) -- [`DROP TABLE`](drop-table.html) -- [`RENAME TABLE`](rename-table.html) -- [`SHOW TABLES`](show-tables.html) -- [`SHOW COLUMNS`](show-columns.html) -- [Column Families](column-families.html) -- [Table-Level Replication Zones](configure-replication-zones.html#create-a-replication-zone-for-a-table) -- [Define Table Partitions](partitioning.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/create-user.md b/src/current/v19.1/create-user.md deleted file mode 100644 index 2e4fdbe5a0a..00000000000 --- a/src/current/v19.1/create-user.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: CREATE USER -summary: The CREATE USER statement creates SQL users, which let you control privileges on your databases and tables. -toc: true ---- - -The `CREATE USER` [statement](sql-statements.html) creates SQL users, which let you control [privileges](authorization.html#assign-privileges) on your databases and tables. - -{{site.data.alerts.callout_success}} -You can also use the [`cockroach user set`](create-and-manage-users.html) command to create and manage users. -{{site.data.alerts.end}} - -## Considerations - -- Usernames: - - Are case-insensitive - - Must start with either a letter or underscore - - Must contain only letters, numbers, or underscores - - Must be between 1 and 63 characters. -- After creating users, you must [grant them privileges to databases and tables](grant.html). -- All users belong to the `public` role, to which you can [grant](grant.html) and [revoke](revoke.html) privileges. -- On secure clusters, you must [create client certificates for users](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) and users must [authenticate their access to the cluster](#user-authentication). - -## Required privileges - -The user must have the `INSERT` and `UPDATE` [privileges](authorization.html#assign-privileges) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/create_user.html %}
      - -## Parameters - - - - Parameter | Description ------------|------------- -`user_name` | The name of the user you want to create.

      Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters. -`password` | Let the user [authenticate their access to a secure cluster](#user-authentication) using this password. Passwords must be entered as [string](string.html) values surrounded by single quotes (`'`).

      Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. - -## User authentication - -Secure clusters require users to authenticate their access to databases and tables. CockroachDB offers two methods for this: - -- [Client certificate and key authentication](#secure-clusters-with-client-certificates), which is available to all users. To ensure the highest level of security, we recommend only using client certificate and key authentication. - -- [Password authentication](#secure-clusters-with-passwords), which is available to non-`root` users who you've created passwords for. To create a user with a password, use the `WITH PASSWORD` clause of `CREATE USER`. To add a password to an existing user, use the [`cockroach user`](create-and-manage-users.html#update-a-users-password) command. - - Users can use passwords to authenticate without supplying client certificates and keys; however, we recommend using certificate-based authentication whenever possible. - - Password creation is supported only in secure clusters. - -## Examples - -### Create a user - -Usernames are case-insensitive; must start with either a letter or underscore; must contain only letters, numbers, or underscores; and must be between 1 and 63 characters. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER jpointsman; -~~~ - -After creating users, you must: - -- [Grant them privileges to databases](grant.html). -- For secure clusters, you must also [create their client certificates](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client). - -### Create a user with a password - -{% include copy-clipboard.html %} -~~~ sql -> CREATE USER jpointsman WITH PASSWORD 'Q7gc8rEdS'; -~~~ - -Password creation is supported only in secure clusters for non-`root` users. The `root` user must authenticate with a client certificate and key. - -### Manage users - -After creating users, you can manage them using the [`cockroach user`](create-and-manage-users.html) command. - -### Authenticate as a specific user - -
      - - -
      -

      - -
-
-#### Secure clusters with client certificates
-
-All users can authenticate their access to a secure cluster using [a client certificate](create-security-certificates.html#create-the-certificate-and-key-pair-for-a-client) issued to their username.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --user=jpointsman
-~~~
-
-#### Secure clusters with passwords
-
-[Users with passwords](#create-a-user) can authenticate their access by entering their password at the command prompt instead of using their client certificate and key.
-
-If client certificate and key files matching the user cannot be found, CockroachDB falls back to password authentication.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --user=jpointsman
-~~~
-
-
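-You can also supply the password in a [connection URL](connection-parameters.html#connect-using-a-url) instead of entering it at the prompt. The following is a minimal sketch that assumes the `jpointsman` user created above, a node listening on `localhost:26257`, and a CA certificate at `certs/ca.crt`; adjust these values for your cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --url "postgresql://jpointsman:Q7gc8rEdS@localhost:26257/defaultdb?sslmode=verify-full&sslrootcert=certs/ca.crt"
-~~~
-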
      - -
      - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --user=jpointsman -~~~ - -
-
-## See also
-
-- [Authorization](authorization.html)
-- [`cockroach user` command](create-and-manage-users.html)
-- [`DROP USER`](drop-user.html)
-- [`SHOW USERS`](show-users.html)
-- [`GRANT`](grant.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [Create Security Certificates](create-security-certificates.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v19.1/create-view.md b/src/current/v19.1/create-view.md
deleted file mode 100644
index 313bfcb0cef..00000000000
--- a/src/current/v19.1/create-view.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title: CREATE VIEW
-summary: The CREATE VIEW statement creates a new view, which is a stored query represented as a virtual table.
-toc: true
----
-
-The `CREATE VIEW` statement creates a new [view](views.html), which is a stored query represented as a virtual table.
-
-{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
-
-## Required privileges
-
-The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the parent database and the `SELECT` privilege on any table(s) referenced by the view.
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/create_view.html %}
      - -## Parameters - -Parameter | Description -----------|------------ -`view_name` | The name of the view to create, which must be unique within its database and follow these [identifier rules](keywords-and-identifiers.html#identifiers). When the parent database is not set as the default, the name must be formatted as `database.name`. -`name_list` | An optional, comma-separated list of column names for the view. If specified, these names will be used in the response instead of the columns specified in `AS select_stmt`. -`AS select_stmt` | The [selection query](selection-queries.html) to execute when the view is requested.

Note that it is not currently possible to use `*` to select all columns from a referenced table or view; instead, you must specify the columns explicitly.
-
-## Example
-
-{{site.data.alerts.callout_success}}This example highlights one key benefit of using views: simplifying complex queries. For additional benefits and examples, see Views.{{site.data.alerts.end}}
-
-Let's say you're using our [sample `startrek` database](generate-cockroachdb-resources.html#generate-example-data), which contains two tables, `episodes` and `quotes`. There's a foreign key constraint between the `episodes.id` column and the `quotes.episode` column. To count the number of famous quotes per season, you could run the following join:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT startrek.episodes.season, count(*)
-  FROM startrek.quotes
-  JOIN startrek.episodes
-  ON startrek.quotes.episode = startrek.episodes.id
-  GROUP BY startrek.episodes.season;
-~~~
-
-~~~
-+--------+----------+
-| season | count(*) |
-+--------+----------+
-|      2 |       76 |
-|      3 |       46 |
-|      1 |       78 |
-+--------+----------+
-(3 rows)
-~~~
-
-Alternatively, to make it much easier to run this complex query, you could create a view:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE VIEW startrek.quotes_per_season (season, quotes)
-  AS SELECT startrek.episodes.season, count(*)
-  FROM startrek.quotes
-  JOIN startrek.episodes
-  ON startrek.quotes.episode = startrek.episodes.id
-  GROUP BY startrek.episodes.season;
~~~
-
-~~~
-CREATE VIEW
-~~~
-
-The view is then represented as a virtual table alongside other tables in the database:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM startrek;
-~~~
-
-~~~
-+-------------------+
-|    table_name     |
-+-------------------+
-| episodes          |
-| quotes            |
-| quotes_per_season |
-+-------------------+
-(3 rows)
-~~~
-
-Executing the query is as easy as `SELECT`ing from the view, as you would from a standard table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM startrek.quotes_per_season;
-~~~
-
-~~~
-+--------+--------+
-| season | quotes |
-+--------+--------+
-|      2 |     76 |
-|      3 |     46 |
-|      1 |     78 |
-+--------+--------+
-(3 rows)
-~~~
-
-## See also
-
-- [Selection Queries](selection-queries.html)
-- [Views](views.html)
-- [`SHOW CREATE`](show-create.html)
-- [`ALTER VIEW`](alter-view.html)
-- [`DROP VIEW`](drop-view.html)
-- [Online Schema Changes](online-schema-changes.html)
diff --git a/src/current/v19.1/data-types.md b/src/current/v19.1/data-types.md
deleted file mode 100644
index 1631e75dc12..00000000000
--- a/src/current/v19.1/data-types.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-title: Data Types
-summary: Learn about the data types supported by CockroachDB.
-toc: true
----
-
-## Supported types
-
-CockroachDB supports the following data types. Click a type for more details.
-
-Type | Description | Example
------|-------------|--------
-[`ARRAY`](array.html) | A 1-dimensional, 1-indexed, homogeneous array of any non-array data type. | `{"sky","road","car"}`
-[`BIT`](bit.html) | A string of binary digits (bits). | `B'10010101'`
-[`BOOL`](bool.html) | A Boolean value. | `true`
-[`BYTES`](bytes.html) | A string of binary characters. | `b'\141\061\142\062\143\063'`
-[`COLLATE`](collate.html) | The `COLLATE` feature lets you sort [`STRING`](string.html) values according to language- and country-specific rules, known as collations. | `'a1b2c3' COLLATE en`
-[`DATE`](date.html) | A date. | `DATE '2016-01-25'`
-[`DECIMAL`](decimal.html) | An exact, fixed-point number.
| `1.2345` -[`FLOAT`](float.html) | A 64-bit, inexact, floating-point number. | `1.2345` -[`INET`](inet.html) | An IPv4 or IPv6 address. | `192.168.0.1` -[`INT`](int.html) | A signed integer, up to 64 bits. | `12345` -[`INTERVAL`](interval.html) | A span of time. | `INTERVAL '2h30m30s'` -[`JSONB`](jsonb.html) | JSON (JavaScript Object Notation) data. | `'{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'` -[`SERIAL`](serial.html) | A pseudo-type that combines an [integer type](int.html) with a [`DEFAULT` expression](default-value.html). | `148591304110702593` -[`STRING`](string.html) | A string of Unicode characters. | `'a1b2c3'` -[`TIME`](time.html) | A time of day in UTC. | `TIME '01:23:45.123456'` -[`TIMESTAMP`
      `TIMESTAMPTZ`](timestamp.html) | A date and time pairing in UTC. | `TIMESTAMP '2016-01-25 10:10:10'`
`TIMESTAMPTZ '2016-01-25 10:10:10-05:00'`
-[`UUID`](uuid.html) | A 128-bit hexadecimal value. | `7f9c24e8-3b12-4fef-91e0-56a2d5a246ec`
-
-## Data type conversions and casts
-
-CockroachDB supports explicit type conversions using the following methods:
-
-- `<type> 'string literal'`, to convert from the literal representation of a value to a value of that type. For example:
-  `DATE '2008-12-21'`, `INT '123'`, or `BOOL 'true'`.
-
-- `<expr>::<type>`, or its equivalent longer form `CAST(<expr> AS <type>)`, which converts an arbitrary expression of one built-in type to another (this is also known as type coercion or "casting"). For example:
-  `NOW()::DECIMAL`, `VARIANCE(a+2)::INT`.
-
-  {{site.data.alerts.callout_success}}
-  To create constant values, consider using a
-  type annotation
-  instead of a cast, as it provides more predictable results.
-  {{site.data.alerts.end}}
-
-- Other [built-in conversion functions](functions-and-operators.html) when the type is not a SQL type, for example `from_ip()`, `to_ip()` to convert IP addresses between `STRING` and `BYTES` values.
-
-
-You can find each data type's supported conversions and casts on its
-respective page, in the **Supported casting & conversion** section.
diff --git a/src/current/v19.1/date.md b/src/current/v19.1/date.md
deleted file mode 100644
index caf207b7a35..00000000000
--- a/src/current/v19.1/date.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: DATE
-summary: CockroachDB's DATE data type stores a year, month, and day.
-toc: true
----
-
-The `DATE` [data type](data-types.html) stores a year, month, and day.
-
-
-## Syntax
-
-A constant value of type `DATE` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals), or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `DATE` or
-[coerced to](scalar-expressions.html#explicit-type-coercions) type
-`DATE`.
-
-The string format for dates is `YYYY-MM-DD`. For example: `DATE '2016-12-23'`.
-
-CockroachDB also supports using uninterpreted
-[string literals](sql-constants.html#string-literals) in contexts
-where a `DATE` value is otherwise expected.
-
-## Size
-
-A `DATE` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata.
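-
-The three constant forms described in [Syntax](#syntax) above are interchangeable. As a minimal check (assuming a running SQL shell), the following interpreted literal, annotated string literal, and coerced string literal all produce the same `DATE` value:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DATE '2016-12-23', '2016-12-23':::DATE, '2016-12-23'::DATE;
-~~~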
-
-## Examples
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE dates (a DATE PRIMARY KEY, b INT);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM dates;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression |   indices   |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| a           | DATE      |    false    | NULL           |                       | {"primary"} |
-| b           | INT       |    true     | NULL           |                       | {}          |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(2 rows)
-~~~
-
-Explicitly typed `DATE` literal:
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO dates VALUES (DATE '2016-03-26', 12345);
-~~~
-
-String literal implicitly typed as `DATE`:
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO dates VALUES ('2016-03-27', 12345);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM dates;
-~~~
-
-~~~
-+---------------------------+-------+
-|             a             |   b   |
-+---------------------------+-------+
-| 2016-03-26 00:00:00+00:00 | 12345 |
-| 2016-03-27 00:00:00+00:00 | 12345 |
-+---------------------------+-------+
-~~~
-
-## Supported casting and conversion
-
-`DATE` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types:
-
-Type | Details
------|--------
-`DECIMAL` | Converts to number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`FLOAT` | Converts to number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`TIMESTAMP` | Sets the time to 00:00 (midnight) in the resulting timestamp.
-`INT` | Converts to number of days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`STRING` | ––
-
-## See also
-
-[Data Types](data-types.html)
diff --git a/src/current/v19.1/dbeaver.md b/src/current/v19.1/dbeaver.md
deleted file mode 100644
index 501f404c79f..00000000000
--- a/src/current/v19.1/dbeaver.md
+++ /dev/null
@@ -1,90 +0,0 @@
----
-title: DBeaver
-summary: The DBeaver database tool completely integrates with CockroachDB to provide a GUI for managing your database.
-toc: true
----
-
-The [DBeaver database tool][dbeaver] completely integrates with CockroachDB to provide a GUI for managing your database.
-
-According to the [DBeaver website][dbeaver]:
-
-> DBeaver is a cross-platform Database GUI tool for developers, SQL programmers, database administrators, and analysts.
-
-In this tutorial, you'll work through the process of using DBeaver with a secure CockroachDB cluster.
-
-{{site.data.alerts.callout_success}}
-For more information about using DBeaver, see the [DBeaver documentation](https://dbeaver.io/docs/).
-
-If you run into problems, please file an issue on the [DBeaver issue tracker](https://github.com/dbeaver/dbeaver/issues).
-{{site.data.alerts.end}}
-
-## Before You Begin
-
-To work through this tutorial, take the following steps:
-
-- [Install CockroachDB](install-cockroachdb.html) and [start a secure cluster](secure-a-cluster.html).
-- Download a copy of [DBeaver](https://dbeaver.io/download/) version 5.2.3 or greater.
-
-## Step 1. Start DBeaver and connect to CockroachDB
-
-Start DBeaver, and select **Database > New Connection** from the menu.
In the dialog that appears, select **CockroachDB** from the list.
-
-DBeaver - Select CockroachDB
-
-## Step 2. Update the connection settings
-
-On the **Create new connection** dialog that appears, click **Network settings**.
-
-DBeaver - CockroachDB connection settings
-
-From the network settings, click the **SSL** tab. It will look like the screenshot below.
-
-DBeaver - SSL tab
-
-Check the **Use SSL** checkbox as shown, and fill in the text areas as follows:
-
-- **Root certificate**: Use the `ca.crt` file you generated for your secure cluster.
-- **SSL certificate**: Use a client certificate generated from your cluster's root certificate. For the root user, this will be named `client.root.crt`. For additional security, you may want to create a new database user and client certificate just for use with DBeaver.
-- **SSL certificate key**: Because DBeaver is a Java application, you will need to transform your key file to the `*.pk8` format using an [OpenSSL command](https://wiki.openssl.org/index.php/Command_Line_Utilities#pkcs8_.2F_pkcs5) like the one shown below. Once you have created the file, enter its location here. In this example, the filename is `client.root.pk8`.
-  {% include copy-clipboard.html %}
-  ~~~ console
-  $ openssl pkcs8 -topk8 -inform PEM -outform DER -in client.root.key -out client.root.pk8 -nocrypt
-  ~~~
-
-Select **require** from the **SSL mode** dropdown. There is no need to set the **SSL Factory**; you can let DBeaver use the default.
-
-## Step 3. Test the connection settings
-
-Click **Test Connection ...**. If everything worked, you will see a **Success** dialog like the one shown below.
-
-DBeaver - connection success dialog
-
-## Step 4. Start using DBeaver
-
-Click **Finish** to get started using DBeaver with CockroachDB.
-
-DBeaver - CockroachDB with the movr database
-
-For more information about using DBeaver, see the [DBeaver documentation](https://dbeaver.io/docs/).
-
-## Report Issues with DBeaver & CockroachDB
-
-If you run into problems, please file an issue on the [DBeaver issue tracker](https://github.com/dbeaver/dbeaver/issues), including the following details about the environment where you encountered the issue:
-
-- CockroachDB version ([`cockroach version`](view-version-details.html))
-- DBeaver version
-- Operating system
-- Steps to reproduce the behavior
-- If possible, a trace of the SQL statements sent to CockroachDB while the error is being reproduced using [SQL query logging](query-behavior-troubleshooting.html#sql-logging).
-
-## See Also
-
-+ [DBeaver documentation](https://dbeaver.io/docs/)
-+ [DBeaver issue tracker](https://github.com/dbeaver/dbeaver/issues)
-+ [Client connection parameters](connection-parameters.html)
-+ [Third-Party Database Tools](third-party-database-tools.html)
-+ [Learn CockroachDB SQL](learn-cockroachdb-sql.html)
-
-
-
-[dbeaver]: https://dbeaver.io
diff --git a/src/current/v19.1/debug-and-error-logs.md b/src/current/v19.1/debug-and-error-logs.md
deleted file mode 100644
index 93ec0e6da10..00000000000
--- a/src/current/v19.1/debug-and-error-logs.md
+++ /dev/null
@@ -1,106 +0,0 @@
----
-title: Understand Debug & Error Logs
-summary: CockroachDB logs include details about certain node-level and range-level events, such as errors.
-toc: true
----
-
-If you need to [troubleshoot](troubleshooting-overview.html) issues with your cluster, you can check a node's logs, which include details about certain node-level and range-level events, such as errors.
For example, if CockroachDB crashes, it normally logs a stack trace pointing to what caused the problem.
-
-{{site.data.alerts.callout_success}}
-For detailed information about queries being executed against your system, see [SQL Audit Logging](sql-audit-logging.html).
-{{site.data.alerts.end}}
-
-
-## Details
-
-When a node processes a [`cockroach` command](cockroach-commands.html), it produces a stream of messages about the command's activities. Each message's body describes the activity, and its envelope contains metadata such as the message's severity level.
-
-As a command generates messages, CockroachDB uses the [command](#commands)'s [logging flags](#flags) and the message's [severity level](#severity-levels) to determine the appropriate [location](#output-locations) for it.
-
-Each node's logs detail only the internal activity of that node without visibility into the behavior of other nodes in the cluster. When troubleshooting, this means that you must identify the node where the problem occurred or [collect the logs from all active nodes in your cluster](debug-zip.html).
-
-### Commands
-
-All [`cockroach` commands](cockroach-commands.html) support logging. However, it's important to note:
-
-- `cockroach start` generates most messages related to the operation of your cluster.
-- Other commands do generate messages, but they're typically of interest only in troubleshooting scenarios.
-
-### Severity levels
-
-CockroachDB identifies each message with a severity level, letting operators know if they need to intercede:
-
-1. `INFO` *(lowest severity; no action necessary)*
-2. `WARNING`
-3. `ERROR`
-4. `FATAL` *(highest severity; requires operator attention)*
-
-**Default behavior by severity level**
-
-Command | `INFO` messages | `WARNING` and above messages
---------|--------|--------------------
-[`cockroach start`](start-a-node.html) | Write to file | Write to file
-[All other commands](cockroach-commands.html) | Discard | Print to `stderr`
-
-### Output locations
-
-Based on the command's flags and the message's [severity level](#severity-levels), CockroachDB does one of the following:
-
-- [Writes the message to a file](#write-to-file)
-- [Prints it to `stderr`](#print-to-stderr)
-- [Discards the message entirely](#discard-message)
-
-#### Write to file
-
-CockroachDB can write messages to log files. The files are named using the following format:
-
-~~~
-cockroach.[host].[user].[start timestamp in UTC].[process ID].log
-~~~
-
-For example:
-
-~~~
-cockroach.richards-mbp.rloveland.2018-03-15T15_24_10Z.024338.log
-~~~
-
-{{site.data.alerts.callout_info}}All log file timestamps are in UTC because CockroachDB is designed to be deployed in a distributed cluster.
Nodes may be located in different time zones, and using UTC makes it easy to correlate log messages from those nodes no matter where they are located.{{site.data.alerts.end}} - -Property | `cockroach start` | All other commands ----------|-------------------|------------------- -Enabled by | Default1 | Explicit `--log-dir` flag -Default File Destination | `[first `[`store`](start-a-node.html#store)` dir]/logs` | *N/A* -Change File Destination | `--log-dir=[destination]` | `--log-dir=[destination]` -Default Severity Level Threshold | `INFO` | *N/A* -Change Severity Threshold | `--log-file-verbosity=[severity level]` | `--log-file-verbosity=[severity level]` -Disabled by | `--log-dir=`1 | Default - -{{site.data.alerts.callout_info}}1 If the cockroach process does not have access to on-disk storage, cockroach start does not write messages to log files; instead it prints all messages to stderr.{{site.data.alerts.end}} - -#### Print to `stderr` - -CockroachDB can print messages to `stderr`, which normally prints them to the machine's terminal but does not store them. - -Property | `cockroach start` | All other commands ----------|-------------------|------------------- -Enabled by | Explicit `--logtostderr` flag2 | Default -Default Severity Level Threshold | *N/A* | `WARNING` -Change Severity Threshold | `--logtostderr=[severity level]` | `--logtostderr=[severity level]` -Disabled by | Default2 | `--logtostderr=NONE` - -{{site.data.alerts.callout_info}}2 cockroach start does not print any messages to stderr unless the cockroach process does not have access to on-disk storage, in which case it defaults to --logtostderr=INFO and prints all messages to stderr.{{site.data.alerts.end}} - -#### Discard message - -Messages with severity levels below the `--logtostderr` and `--log-file-verbosity` flag's values are neither written to files nor printed to `stderr`, so they are discarded. - -By default, commands besides `cockroach start` discard messages with the `INFO` [severity level](#severity-levels). - -## Flags - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -## See also - -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Support Resources](support-resources.html) diff --git a/src/current/v19.1/debug-ballast.md b/src/current/v19.1/debug-ballast.md deleted file mode 100644 index a61cd81efda..00000000000 --- a/src/current/v19.1/debug-ballast.md +++ /dev/null @@ -1,49 +0,0 @@ ---- -title: Create a Ballast File -summary: Create a large, unused file in a node's storage directory that you can delete if the node runs out of disk space. -toc: true ---- - -The `cockroach debug ballast` [command](cockroach-commands.html) creates a large, unused file that you can place in a node's storage directory. In the case that a node runs out of disk space and shuts down, you can delete the ballast file to free up enough space to be able to restart the node. - -- Do not run `cockroach debug ballast` with a unix `root` user. Doing so brings the risk of mistakenly affecting system directories or files. -- Do not name the target file similar to a block device in `/dev`. Doing so brings the risk of mistyping a `/dev` prefix into the command and thereby corrupting a filesystem. -- In addition to placing a ballast file in each node's storage directory, it is important to actively [monitor remaining disk space](monitoring-and-alerting.html#events-to-alert-on). -- Ballast files may be created in many ways, including the standard `dd` command. 
`cockroach debug ballast` uses the `fallocate` system call when available, so it will be faster than `dd`. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -Create a ballast file: - -~~~ shell -$ cockroach debug ballast [path to ballast file] [flags] -~~~ - -View help: - -~~~ shell -$ cockroach debug ballast --help -~~~ - -## Flags - -Flag | Description ------|----------- -`--size`
      `-z` | The amount of space to fill, or to leave available, in a node's storage directory via a ballast file. Positive values equal the size of the ballast file. Negative values equal the amount of space to leave after creating the ballast file. This can be a percentage (notated as a decimal or with %) or any bytes-based unit, for example:

      `--size=1000000000 ----> 1000000000 bytes`
      `--size=1GiB ----> 1073741824 bytes`
      `--size=5% ----> 5% of available space`
      `--size=0.05 ----> 5% of available space`
      `--size=.05 ----> 5% of available space` - -## Example - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach debug ballast cockroach-data/ballast.txt --size=1GiB -~~~ - -## See also - -- [Other Cockroach Commands](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) -- [Production Checklist](recommended-production-settings.html) diff --git a/src/current/v19.1/debug-encryption-active-key.md b/src/current/v19.1/debug-encryption-active-key.md deleted file mode 100644 index 737379ee931..00000000000 --- a/src/current/v19.1/debug-encryption-active-key.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: View the Encryption Algorithm and Store Key -summary: Learn the command for viewing the algorithm and store key for an encrypted store. -toc: true ---- - -The `debug encryption-active-key` [command](cockroach-commands.html) displays the encryption algorithm and store key for an encrypted store. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug encryption-active-key [path specified by the store flag] -~~~ - -## Example - -Start a node with encryption-at-rest enabled: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=aes-128.key,old-key=plain --insecure --certs-dir=certs -~~~ - -View the encryption algorithm and store key: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach debug encryption-active-key cockroach-data -~~~ - -~~~ -AES128_CTR:be235c29239aa84a48e5e1874d76aebf7fb3c1bdc438cec2eb98de82f06a57a0 -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v19.1/debug-merge-logs.md b/src/current/v19.1/debug-merge-logs.md deleted file mode 100644 index fd23237ba2d..00000000000 --- a/src/current/v19.1/debug-merge-logs.md +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: Merge Debug Logs for All Nodes -summary: Learn the command for merging the collected debug logs from all nodes in your cluster. -toc: true ---- - -The `debug merge-logs` [command](cockroach-commands.html) merges log files from multiple nodes into a single time-ordered stream of messages with an added per-message prefix to indicate the corresponding node. You can use it in conjunction with logs collected using the [`debug zip`](debug-zip.html) command to aid in debugging. - -{{site.data.alerts.callout_danger}} -The file produced by `cockroach debug merge-log` can contain highly sensitive, unanonymized information, such as usernames, passwords, and possibly your table's data. You should share this data only with Cockroach Labs developers and only after determining the most secure method of delivery. -{{site.data.alerts.end}} - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug merge-logs [log file directory] [flags] -~~~ - -## Flags - -Use the following flags to filter the `debug merge-logs` results for a specified regular expression or time range. - -Flag | Description ------|----------- -`--filter` | Limit the results to the specified regular expression -`--from` | Start time for the time range filter. -`--to` | End time for the time range filter. 
- -## Example - -Generate a debug zip file: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach debug zip ./cockroach-data/logs/debug.zip --insecure -~~~ - -Unzip the file: - -{% include copy-clipboard.html %} -~~~ shell -$ unzip ./cockroach-data/logs/debug.zip -~~~ - -Merge the logs in the debug folder: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach debug merge-logs debug/nodes/*/logs/* -~~~ - -Alternatively, filter the merged logs for a specified time range: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach debug merge-logs debug/nodes/*/logs/* --from= "18:36:28.208553" --to= "18:36:29.232864" -~~~ - -You can also filter the merged logs for a regular expression: - -{% include copy-clipboard.html %} -~~~ shell -cockroach debug merge-logs debug/nodes/*/logs/* --filter="RUNNING IN INSECURE MODE" -~~~ - -## See also - -- [File an Issue](file-an-issue.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v19.1/debug-zip.md b/src/current/v19.1/debug-zip.md deleted file mode 100644 index 8fa8de12bc0..00000000000 --- a/src/current/v19.1/debug-zip.md +++ /dev/null @@ -1,116 +0,0 @@ ---- -title: Collect Debug Information from Your Cluster -summary: Learn the commands for collecting debug information from all nodes in your cluster. -toc: true ---- - -The `debug zip` [command](cockroach-commands.html) connects to your cluster and gathers information from each active node into a single file (inactive nodes are not included): - -- [Log files](debug-and-error-logs.html) -- Cluster events -- Schema change events -- Node liveness -- Gossip data -- Stack traces -- Range lists -- A list of databases and tables -- [Cluster Settings](cluster-settings.html) -- [Metrics](admin-ui-custom-chart-debug-page.html#available-metrics) -- Alerts -- Heap profiles -- Problem ranges -- Sessions -- Queries - -Additionally, you can run the [`debug merge-logs`](debug-merge-logs.html) command to merge the collected logs in one file, making it easier to parse them to locate an issue with your cluster. - -{{site.data.alerts.callout_danger}} -The file produced by `cockroach debug zip` can contain highly sensitive, unanonymized information, such as usernames, hashed passwords, and possibly your table's data. You should share this data only with Cockroach Labs developers and only after determining the most secure method of delivery. -{{site.data.alerts.end}} - -## Details - -### Use cases - -There are two scenarios in which `debug zip` is useful: - -- To collect all of your nodes' logs, which you can then parse to locate issues. It's important to note, though, that `debug zip` can only access logs from active nodes. See more information [on this page](#collecting-log-files). - -- If you experience severe or difficult-to-reproduce issues with your cluster, Cockroach Labs might ask you to send us your cluster's debugging information using `cockroach debug zip`. - -### Collecting log files - -When you issue the `debug zip` command, the node that receives the request connects to each other node in the cluster. Once it's connected, the node requests the content of all log files stored on the node, the location of which is determined by the `--log-dir` value when you [started the node](start-a-node.html). - -Because `debug zip` relies on CockroachDB's distributed architecture, this means that nodes not currently connected to the cluster cannot respond to the request, so their log files *are not* included. 
- -After receiving the log files from all of the active nodes, the requesting node aggregates the files and writes them to an archive file you specify. - -You can locate logs in the unarchived file's `debug/nodes/[node dir]/logs` directories. - -## Subcommands - -{% include {{ page.version.version }}/misc/debug-subcommands.md %} - -## Synopsis - -~~~ shell -$ cockroach debug zip [ZIP file destination] [flags] -~~~ - -It's important to understand that the `[flags]` here are used to connect to CockroachDB nodes. This means the values you use in those flags must connect to an active node. If no nodes are live, you must [start at least one node](start-a-node.html). - -## Flags - -The `debug zip` subcommand supports the following [general-use](#general), [client connection](#client-connection), and [logging](#logging) flags. - -### General - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](create-security-certificates.html). The directory must contain valid certificates if running in secure mode.

      **Env Variable:** `COCKROACH_CERTS_DIR`
      **Default:** `${HOME}/.cockroach-certs/` -`--host` | The server host to connect to. This can be the address of any node in the cluster.

      **Env Variable:** `COCKROACH_HOST`
      **Default:** `localhost` -`--insecure` | Run in insecure mode. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

      **Env Variable:** `COCKROACH_INSECURE`
      **Default:** `false` -`--port`
      `-p` | The server port to connect to.

      **Env Variable:** `COCKROACH_PORT`
      **Default:** `26257` - -### Client connection - -Flag | Description ------|----------- -`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.

      **Env Variable:** `COCKROACH_URL`
      **Default:** no URL - -### Logging - -By default, the `debug zip` command logs errors it experiences to `stderr`. Note that these are errors executing `debug zip`; these are not errors that the logs collected by `debug zip` contain. - -If you need to troubleshoot this command's behavior, you can also change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Generate a debug zip file - -{% include copy-clipboard.html %} -~~~ shell -# Generate the debug zip file for an insecure cluster: -$ cockroach debug zip ./cockroach-data/logs/debug.zip --insecure -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Generate the debug zip file for a secure cluster: -$ cockroach debug zip ./cockroach-data/logs/debug.zip -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Generate the debug zip file from a remote machine: -$ cockroach debug zip ./crdb-debug.zip --host=200.100.50.25 -~~~ - -{{site.data.alerts.callout_info}}Secure examples assume you have the appropriate certificates in the default certificate directory, ${HOME}/.cockroach-certs/.{{site.data.alerts.end}} - -## See also - -- [File an Issue](file-an-issue.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [Troubleshooting Overview](troubleshooting-overview.html) diff --git a/src/current/v19.1/decimal.md b/src/current/v19.1/decimal.md deleted file mode 100644 index dc7d49eb7a0..00000000000 --- a/src/current/v19.1/decimal.md +++ /dev/null @@ -1,115 +0,0 @@ ---- -title: DECIMAL -summary: The DECIMAL data type stores exact, fixed-point numbers. -toc: true ---- - -The `DECIMAL` [data type](data-types.html) stores exact, fixed-point numbers. This type is used when it is important to preserve exact precision, for example, with monetary data. - - -## Aliases - -In CockroachDB, the following are aliases for `DECIMAL`: - -- `DEC` -- `NUMERIC` - -## Precision and scale - -To limit a decimal column, use `DECIMAL(precision, scale)`, where `precision` is the **maximum** count of digits both to the left and right of the decimal point and `scale` is the **exact** count of digits to the right of the decimal point. The `precision` must not be smaller than the `scale`. Also note that using `DECIMAL(precision)` is equivalent to `DECIMAL(precision, 0)`. - -When inserting a decimal value: - -- If digits to the right of the decimal point exceed the column's `scale`, CockroachDB rounds to the scale. -- If digits to the right of the decimal point are fewer than the column's `scale`, CockroachDB pads to the scale with `0`s. -- If digits to the left and right of the decimal point exceed the column's `precision`, CockroachDB gives an error. -- If the column's `precision` and `scale` are identical, the inserted value must round to less than 1. - -## Syntax - -A constant value of type `DECIMAL` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `1.414` or `-1234`. - -The special IEEE754 values for positive infinity, negative infinity -and [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) cannot be -entered using numeric literals directly and must be converted using an -[interpreted literal](sql-constants.html#interpreted-literals) or an -[explicit conversion](scalar-expressions.html#explicit-type-coercions) -from a string literal instead. 
- -The following values are recognized: - - Syntax | Value -----------------------------------------|------------------------------------------------ - `inf`, `infinity`, `+inf`, `+infinity` | +∞ - `-inf`, `-infinity` | -∞ - `nan` | [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) - -For example: - -- `DECIMAL '+Inf'` -- `'-Inf'::DECIMAL` -- `CAST('NaN' AS DECIMAL)` - -## Size - -The size of a `DECIMAL` value is variable, starting at 9 bytes. It's recommended to keep values under 64 kilobytes to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation. - -## Examples - -{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE decimals (a DECIMAL PRIMARY KEY, b DECIMAL(10,5), c NUMERIC);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM decimals;
-~~~
-
-~~~
-+-------------+---------------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+---------------+-------------+----------------+-----------------------+-------------+
-| a | DECIMAL | false | NULL | | {"primary"} |
-| b | DECIMAL(10,5) | true | NULL | | {} |
-| c | DECIMAL | true | NULL | | {} |
-+-------------+---------------+-------------+----------------+-----------------------+-------------+
-(3 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO decimals VALUES (1.01234567890123456789, 1.01234567890123456789, 1.01234567890123456789);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM decimals;
-~~~
-
-~~~
-+------------------------+---------+------------------------+
-| a | b | c |
-+------------------------+---------+------------------------+
-| 1.01234567890123456789 | 1.01235 | 1.01234567890123456789 |
-+------------------------+---------+------------------------+
-# The value in "a" matches what was inserted exactly.
-# The value in "b" has been rounded to the column's scale.
-# The value in "c" is handled like "a" because NUMERIC is an alias.
-~~~
- -## Supported casting and conversion - -`DECIMAL` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details -----|-------- -`INT` | Truncates decimal precision -`FLOAT` | Loses precision and may round up to +/- infinity if the value is too large in magnitude, or to +/-0 if the value is too small in magnitude -`BOOL` | **0** converts to `false`; all other values convert to `true` -`STRING` | –– - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/default-value.md b/src/current/v19.1/default-value.md deleted file mode 100644 index 2535ab4265e..00000000000 --- a/src/current/v19.1/default-value.md +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: Default Value Constraint -summary: The Default Value constraint specifies a value to populate a column with if none is provided. -toc: true ---- - -The `DEFAULT` value [constraint](constraints.html) specifies a value to write into the constrained column if one is not defined in an `INSERT` statement. The value may be either a hard-coded literal or an expression that is evaluated at the time the row is created. - - -## Details - -- The [data type](data-types.html) of the Default Value must be the same as the data type of the column. 
-- The `DEFAULT` value constraint only applies if the column does not have a value specified in the [`INSERT`](insert.html) statement. You can still insert a *NULL* into an optional (nullable) column by explicitly inserting *NULL*. For example, `INSERT INTO foo VALUES (1, NULL);`. - -## Syntax - -You can only apply the `DEFAULT` value constraint to individual columns. - -{{site.data.alerts.callout_info}} -You can also add the `DEFAULT` value constraint to an existing table through [`ALTER COLUMN`](alter-column.html#set-or-change-a-default-value). -{{site.data.alerts.end}} - -
      {% include {{ page.version.version }}/sql/diagrams/default_value_column_level.html %}
      - - Parameter | Description ------------|------------- - `table_name` | The name of the table you're creating. - `column_name` | The name of the constrained column. - `column_type` | The constrained column's [data type](data-types.html). - `default_value` | The value you want to insert by default, which must evaluate to the same [data type](data-types.html) as the `column_type`. - `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. - `column_def` | Definitions for any other columns in the table. - `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE inventories ( - product_id INT, - warehouse_id INT, - quantity_on_hand INT DEFAULT 100, - PRIMARY KEY (product_id, warehouse_id) - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO inventories (product_id, warehouse_id) VALUES (1,20); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO inventories (product_id, warehouse_id, quantity_on_hand) VALUES (2,30, NULL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM inventories; -~~~ -~~~ -+------------+--------------+------------------+ -| product_id | warehouse_id | quantity_on_hand | -+------------+--------------+------------------+ -| 1 | 20 | 100 | -| 2 | 30 | NULL | -+------------+--------------+------------------+ -~~~ - -If the `DEFAULT` value constraint is not specified and an explicit value is not given, a value of *NULL* is assigned to the column. - -## See also - -- [Constraints](constraints.html) -- [`ALTER COLUMN`](alter-column.html) -- [`CHECK` constraint](check.html) -- [`REFERENCES` constraint (Foreign Key)](foreign-key.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY KEY` constraint](primary-key.html) -- [`UNIQUE` constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v19.1/delete.md b/src/current/v19.1/delete.md deleted file mode 100644 index b934d3c99a1..00000000000 --- a/src/current/v19.1/delete.md +++ /dev/null @@ -1,280 +0,0 @@ ---- -title: DELETE -summary: The DELETE statement deletes one or more rows from a table. -toc: true ---- - -The `DELETE` [statement](sql-statements.html) deletes rows from a table. - -{{site.data.alerts.callout_danger}}If you delete a row that is referenced by a foreign key constraint and has an ON DELETE action, all of the dependent rows will also be deleted or updated.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}To delete columns, see DROP COLUMN.{{site.data.alerts.end}} - -## Required privileges - -The user must have the `DELETE` and `SELECT` [privileges](authorization.html#assign-privileges) on the table. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/delete.html %} -
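-
-For example, a `DELETE` that combines the optional clauses described below might look like the following. This is a hypothetical sketch; the `drivers` table and its columns are illustrative and are not part of the example schema used later on this page:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM drivers WHERE city = 'seattle' ORDER BY created_at LIMIT 10 RETURNING *;
-~~~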
- -## Parameters - - - - Parameter | Description -----------|------------- - `common_table_expr` | See [Common Table Expressions](common-table-expressions.html). - `table_name` | The name of the table that contains the rows you want to delete. - `AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`WHERE a_expr`| `a_expr` must be an expression that returns Boolean values using columns (e.g., `<column> = <value>`). Delete rows that return `TRUE`.

      __Without a `WHERE` clause in your statement, `DELETE` removes all rows from the table.__ - `sort_clause` | An `ORDER BY` clause.

New in v19.1: The `ORDER BY` clause can now be used in a `DELETE` statement only when a `LIMIT` clause is also present. - `limit_clause` | A `LIMIT` clause. See [Limiting Query Results](limit-offset.html) for more details. - `RETURNING target_list` | Return values based on rows deleted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

To return nothing in the response, not even the number of rows deleted, use `RETURNING NOTHING`. - -## Success responses - -Successful `DELETE` statements return one of the following: - - Response | Description -----------|------------- -`DELETE` _`int`_ | _int_ rows were deleted.

      `DELETE` statements that do not delete any rows respond with `DELETE 0`. When `RETURNING NOTHING` is used, this information is not included in the response. -Retrieved table | Including the `RETURNING` clause retrieves the deleted rows, using the columns identified by the clause's parameters.

[See an example.](#return-deleted-rows) - -## Disk space usage after deletes - -Deleting a row does not immediately free up the disk space. This is -because CockroachDB retains [the ability to query tables -historically](https://www.cockroachlabs.com/blog/time-travel-queries-select-witty_subtitle-the_future/). - -If disk usage is a concern, the solution is to -[reduce the time-to-live](configure-replication-zones.html) (TTL) for -the zone by setting `gc.ttlseconds` to a lower value, which will cause -garbage collection to clean up deleted objects (rows, tables) more -frequently. - -## Select performance on deleted rows - -Queries that scan across tables that have lots of deleted rows will -have to scan over deletions that have not yet been garbage -collected. If your workload frequently scans over and deletes lots of -rows, consider reducing the -[time-to-live](configure-replication-zones.html) values so that -deleted rows are cleaned up more frequently. - -## Sorting the output of deletes - -{% include {{page.version.version}}/misc/sorting-delete-output.md %} - -For more information about ordering query results in general, see -[Ordering Query Results](query-order.html). - -## Delete performance on large data sets - -If you are deleting a large amount of data using iterative `DELETE ... LIMIT` statements, you are likely to see a drop in performance for each subsequent `DELETE` statement. - -For an explanation of why this happens, and for instructions showing how to iteratively delete rows in constant time, see [Why are my deletes getting slower over time?](sql-faqs.html#why-are-my-deletes-getting-slower-over-time). - -## Force index selection for deletes - -By using the explicit index annotation (also known as "index hinting"), you can override [CockroachDB's index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/) and use a specific [index](indexes.html) for deleting rows of a named table. - -{{site.data.alerts.callout_info}} -Index selection can impact [performance](performance-best-practices-overview.html), but does not change the result of a query. -{{site.data.alerts.end}} - -The syntax to force a specific index for a delete is: - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM table@my_idx;
-~~~
- -This is equivalent to the longer expression: - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM table@{FORCE_INDEX=my_idx};
-~~~
- -To view how the index hint modifies the query plan that CockroachDB follows for deleting rows, use an [`EXPLAIN`](explain.html#opt-option) statement. To see all indexes available on a table, use [`SHOW INDEXES`](show-index.html). - -## Examples - -### Delete all rows - -You can delete all rows from a table by not including a `WHERE` clause in your `DELETE` statement. - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM account_details;
-~~~
-~~~
-DELETE 7
-~~~
- -{{site.data.alerts.callout_success}} -Unless your table is small (fewer than 1,000 rows), using [`TRUNCATE`][truncate] to delete the contents of a table will be more performant than using `DELETE`. -{{site.data.alerts.end}} - -### Delete specific rows - -When deleting specific rows from a table, the most important decision you make is which columns to use in your `WHERE` clause. When making that choice, consider the potential impact of using columns with the [Primary Key](primary-key.html)/[Unique](unique.html) constraints (both of which enforce uniqueness) versus those that are not unique. 
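-
-Before running a destructive `DELETE`, one way to preview exactly which rows a `WHERE` clause will match is to run a `SELECT` with the same clause first. A minimal sketch, using the `account_details` table from the examples below:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM account_details WHERE balance = 30000;
-~~~
-
-If the result includes rows you did not expect, tighten the `WHERE` clause before running the `DELETE` itself.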
- -#### Delete rows using Primary Key/unique columns - -Using columns with the [Primary Key](primary-key.html) or [Unique](unique.html) constraints to delete rows ensures your statement is unambiguous—no two rows contain the same column value, so it's less likely to delete data unintentionally. - -In this example, `account_id` is our primary key and we want to delete the row where it equals 1. Because we're positive no other rows have that value in the `account_id` column, there's no risk of accidentally removing another row. - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM account_details WHERE account_id = 1 RETURNING *;
-~~~
-~~~
- account_id | balance | account_type
-------------+---------+--------------
- 1 | 32000 | Savings
-(1 row)
-
-DELETE 1
-~~~
- -#### Delete rows using non-unique columns - -Deleting rows using non-unique columns removes _every_ row that returns `TRUE` for the `WHERE` clause's `a_expr`. This can easily result in deleting data you didn't intend to. - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM account_details WHERE balance = 30000 RETURNING *;
-~~~
-~~~
- account_id | balance | account_type
-------------+---------+--------------
- 2 | 30000 | Checking
- 3 | 30000 | Savings
-(2 rows)
-
-DELETE 2
-~~~
- -The example statement deleted two rows, which might be unexpected. - -### Return deleted rows - -To see which rows your statement deleted, include the `RETURNING` clause to retrieve them using the columns you specify. - -#### Use all columns - -By specifying `*`, you retrieve all columns of the deleted rows. - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM account_details WHERE balance < 23000 RETURNING *;
-~~~
-~~~
- account_id | balance | account_type
-------------+---------+--------------
- 4 | 22000 | Savings
-(1 row)
-
-DELETE 1
-~~~
- -#### Use specific columns - -To retrieve specific columns, name them in the `RETURNING` clause. - -{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM account_details WHERE account_id = 5 RETURNING account_id, account_type;
-~~~
-~~~
- account_id | account_type
-------------+--------------
- 5 | Checking
-(1 row)
-
-DELETE 1
-~~~
- -#### Change column labels - -When `RETURNING` specific columns, you can change their labels using `AS`. 
- -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM account_details WHERE balance < 24500 RETURNING account_id, balance AS final_balance; -~~~ -~~~ - account_id | final_balance -------------+--------------- - 6 | 23500 -(1 row) - -DELETE 1 -~~~ - -#### Sort and return deleted rows - -To sort and return deleted rows, use a statement like the following: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM [DELETE FROM account_details RETURNING *] ORDER BY account_id; -~~~ - -~~~ - account_id | balance | account_type -------------+----------+-------------- - 7 | 79493.51 | Checking - 8 | 40761.66 | Savings - 9 | 2111.67 | Checking - 10 | 59173.15 | Savings -(4 rows) -~~~ - -## See also - -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) -- [`TRUNCATE`][truncate] -- [`ALTER TABLE`](alter-table.html) -- [`DROP TABLE`](drop-table.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) -- [Limiting Query Results](limit-offset.html) - - - -[truncate]: truncate.html - - diff --git a/src/current/v19.1/demo-automatic-cloud-migration.md b/src/current/v19.1/demo-automatic-cloud-migration.md deleted file mode 100644 index 45e0628624e..00000000000 --- a/src/current/v19.1/demo-automatic-cloud-migration.md +++ /dev/null @@ -1,274 +0,0 @@ ---- -title: Cross-Cloud Migration -summary: Use a local cluster to simulate migrating from one cloud platform to another. -toc: true ---- - -CockroachDB's flexible [replication controls](configure-replication-zones.html) make it trivially easy to run a single CockroachDB cluster across cloud platforms and to migrate data from one cloud to another without any service interruption. This page walks you through a local simulation of the process. - -## Watch the demo - -{% include_cached youtube.html video_id="cCJkgZy6s2Q" %} - -## Step 1. Install prerequisites - -In this tutorial, you'll use CockroachDB, its built-in `ycsb` workload, and the HAProxy load balancer. Before you begin, make sure these applications are installed: - -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install [HAProxy](http://www.haproxy.org/). If you're on a Mac and using Homebrew, use `brew install haproxy`. - -Also, to keep track of the data files and logs for your cluster, you may want to create a new directory (e.g., `mkdir cloud-migration`) and start all your nodes in that directory. - -## Step 2. Start a 3-node cluster on "cloud 1" - -If you've already [started a local cluster](start-a-local-cluster.html), the commands for starting nodes should be familiar to you. The new flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which accepts key-value pairs that describe the topography of a node. In this case, you're using the flag to specify that the first 3 nodes are running on cloud 1. 
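-
-If you later want to confirm the locality a node was actually started with, one option is the `SHOW LOCALITY` statement, run from a SQL shell connected to that node. A minimal sketch; it assumes the cluster has already been initialized, which happens in Step 3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW LOCALITY;
-~~~
-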
In a new terminal, start node 1 on cloud 1: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node1 \
---listen-addr=localhost:26257 \
---http-addr=localhost:8080 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -In a new terminal, start node 2 on cloud 1: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node2 \
---listen-addr=localhost:26258 \
---http-addr=localhost:8081 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -In a new terminal, start node 3 on cloud 1: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=1 \
---store=cloud1node3 \
---listen-addr=localhost:26259 \
---http-addr=localhost:8082 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -## Step 3. Initialize the cluster - -In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---insecure \
---host=localhost:26257
-~~~
- -## Step 4. Set up HAProxy load balancing - -You're now running 3 nodes in a simulated cloud. Each of these nodes is an equally suitable SQL gateway to your cluster, but to ensure an even balancing of client requests across these nodes, you can use a TCP load balancer. Let's use the open-source [HAProxy](http://www.haproxy.org/) load balancer that you installed earlier. - -In a new terminal, run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command, specifying the port of any node: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---insecure \
---host=localhost:26257
-~~~
- -This command generates an `haproxy.cfg` file automatically configured to work with the 3 nodes of your running cluster. In the file, change `bind :26257` to `bind :26000`. This changes the port on which HAProxy accepts requests to a port that is not already in use by a node and that will not be used by the nodes you'll add later. - -~~~
-global
- maxconn 4096
-
-defaults
- mode tcp
- # Timeout values should be configured for your specific use.
- # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
- timeout connect 10s
- timeout client 1m
- timeout server 1m
- # TCP keep-alive on client side. Server already enables them.
- option clitcpka
-
-listen psql
- bind :26000
- mode tcp
- balance roundrobin
- option httpchk GET /health?ready=1
- server cockroach1 localhost:26257 check port 8080
- server cockroach2 localhost:26258 check port 8081
- server cockroach3 localhost:26259 check port 8082
-~~~
- -Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - -{% include copy-clipboard.html %}
-~~~ shell
-$ haproxy -f haproxy.cfg
-~~~
- -## Step 5. Run a sample workload - -Now that you have a load balancer running in front of your cluster, let's use the YCSB workload built into CockroachDB to simulate multiple client connections, each performing mixed read/write workloads. - -1. In a new terminal, load the initial `ycsb` schema and data, pointing it at HAProxy's port: - - {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init ycsb \
- 'postgresql://root@localhost:26000?sslmode=disable'
- ~~~
- -2. 
Run the `ycsb` workload, pointing it at HAProxy's port: - - {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload run ycsb \
- --duration=20m \
- --concurrency=10 \
- --max-rate=1000 \
- 'postgresql://root@localhost:26000?sslmode=disable'
- ~~~
- - This command initiates 10 concurrent client workloads for 20 minutes, but limits the total load to 1000 operations per second (since you're running everything on a single machine). - - You'll soon see per-operation statistics print to standard output every second: - - ~~~
- _elapsed___errors__ops/sec(inst)___ops/sec(cum)__p50(ms)__p95(ms)__p99(ms)_pMax(ms)
- 1s 0 9258.1 9666.6 0.7 1.3 2.0 8.9 read
- 1s 0 470.1 490.9 1.7 2.9 4.1 5.0 update
- 2s 0 10244.6 9955.6 0.7 1.2 2.0 6.6 read
- 2s 0 559.0 525.0 1.6 3.1 6.0 7.3 update
- 3s 0 9870.8 9927.4 0.7 1.4 2.4 10.0 read
- 3s 0 500.0 516.6 1.6 4.2 7.9 15.2 update
- 4s 0 9847.2 9907.3 0.7 1.4 2.4 23.1 read
- 4s 0 506.8 514.2 1.6 3.7 7.6 17.8 update
- 5s 0 10084.4 9942.6 0.7 1.3 2.1 7.1 read
- 5s 0 537.2 518.8 1.5 3.5 10.0 15.2 update
- ...
- ~~~
- -## Step 6. Watch data balance across all 3 nodes - -Now open the Admin UI at `http://localhost:8080` and click **Metrics** in the left-hand navigation bar. The **Overview** dashboard is displayed. Hover over the **SQL Queries** graph at the top. After a minute or so, you'll see that the load generator is executing approximately 95% reads and 5% writes across all nodes: - -CockroachDB Admin UI - -Scroll down a bit and hover over the **Replicas per Node** graph. Because CockroachDB replicates each piece of data 3 times by default, the replica count on each of your 3 nodes should be identical: - -CockroachDB Admin UI - -## Step 7. Add 3 nodes on "cloud 2" - -At this point, you're running three nodes on cloud 1. But what if you'd like to start experimenting with resources provided by another cloud vendor? Let's try that by adding three more nodes to a new cloud platform. Again, the flag to note is [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes), which you're using to specify that these next 3 nodes are running on cloud 2. - -In a new terminal, start node 4 on cloud 2: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node4 \
---listen-addr=localhost:26260 \
---http-addr=localhost:8083 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -In a new terminal, start node 5 on cloud 2: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node5 \
---listen-addr=localhost:26261 \
---http-addr=localhost:8084 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -In a new terminal, start node 6 on cloud 2: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---locality=cloud=2 \
---store=cloud2node6 \
---listen-addr=localhost:26262 \
---http-addr=localhost:8085 \
---cache=100MB \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -## Step 8. Watch data balance across all 6 nodes - -Back on the **Overview** dashboard in the Admin UI, hover over the **Replicas per Node** graph again. 
Because you used [`--locality`](configure-replication-zones.html#descriptive-attributes-assigned-to-nodes) to specify that nodes are running on 2 clouds, you'll see an approximately even number of replicas on each node, indicating that CockroachDB has automatically rebalanced replicas across both simulated clouds: - -CockroachDB Admin UI - -Note that it takes a few minutes for the Admin UI to show accurate per-node replica counts on hover. This is why the new nodes in the screenshot above show 0 replicas. However, the graph lines are accurate, and you can click **View node list** in the **Summary** area for accurate per-node replica counts as well. - -## Step 9. Migrate all data to "cloud 2" - -So your cluster is replicating across two simulated clouds. But let's say that after experimentation, you're happy with cloud vendor 2, and you decide that you'd like to move everything there. Can you do that without interruption to your live client traffic? Yes, and it's as simple as running a single command to add a [hard constraint](configure-replication-zones.html#replication-constraints) that all replicas must be on nodes with `--locality=cloud=2`. - -In a new terminal, [edit the default replication zone](configure-zone.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING constraints='[+cloud=2]';" --insecure --host=localhost:26257 -~~~ - -## Step 10. Verify the data migration - -Back on the **Overview** dashboard in the Admin UI, hover over the **Replicas per Node** graph again. Very soon, you'll see the replica count double on nodes 4, 5, and 6 and drop to 0 on nodes 1, 2, and 3: - -CockroachDB Admin UI - -This indicates that all data has been migrated from cloud 1 to cloud 2. In a real cloud migration scenario, at this point you would update the load balancer to point to the nodes on cloud 2 and then stop the nodes on cloud 1. But for the purpose of this local simulation, there's no need to do that. - -## Step 11. Stop the cluster - -Once you're done with your cluster, stop YCSB by switching into its terminal and pressing **CTRL-C**. Then do the same for HAProxy and each CockroachDB node. - -{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores and the HAProxy config file: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cloud1node1 cloud1node2 cloud1node3 cloud2node4 cloud2node5 cloud2node6 haproxy.cfg -~~~ - -## What's next? 
- -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} - -You may also want to learn other ways to control the location and number of replicas in a cluster: - -- [Even Replication Across Datacenters](configure-replication-zones.html#even-replication-across-datacenters) -- [Multiple Applications Writing to Different Databases](configure-replication-zones.html#multiple-applications-writing-to-different-databases) -- [Stricter Replication for a Table and Its Indexes](configure-replication-zones.html#stricter-replication-for-a-table-and-its-secondary-indexes) -- [Tweaking the Replication of System Ranges](configure-replication-zones.html#tweaking-the-replication-of-system-ranges) diff --git a/src/current/v19.1/demo-automatic-rebalancing.md b/src/current/v19.1/demo-automatic-rebalancing.md deleted file mode 100644 index 6b7c15347a1..00000000000 --- a/src/current/v19.1/demo-automatic-rebalancing.md +++ /dev/null @@ -1,175 +0,0 @@ ---- -title: Automatic Rebalancing -summary: Use a local cluster to explore how CockroachDB automatically rebalances data as you scale. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB automatically rebalances data as you scale. Starting with a 3-node local cluster, you'll run a sample workload and watch the replica count increase. You'll then add 2 more nodes and watch how CockroachDB automatically rebalances replicas to efficiently use all available capacity. - -## Before you begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Start a 3-node cluster - -Use the [`cockroach start`](start-a-node.html) command to start 3 nodes: - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 1: -$ cockroach start \ ---insecure \ ---store=scale-node1 \ ---listen-addr=localhost:26257 \ ---http-addr=localhost:8080 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 2: -$ cockroach start \ ---insecure \ ---store=scale-node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 3: -$ cockroach start \ ---insecure \ ---store=scale-node3 \ ---listen-addr=localhost:26259 \ ---http-addr=localhost:8082 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -## Step 2. Initialize the cluster - -In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---insecure \ ---host=localhost:26257 -~~~ - -## Step 3. Verify that the cluster is live - -In a new terminal, connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node to verify that the cluster is live: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26257 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name -+---------------+ - defaultdb - postgres - system -(3 rows) -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 4. 
Run a sample workload - -CockroachDB comes with [built-in load generators](cockroach-workload.html) for simulating different types of client workloads, printing out per-operation statistics every second and totals after a specific duration or max number of operations. In this tutorial, you'll use the `tpcc` workload to simulate transaction processing using a rich schema of multiple tables. - -1. Load the initial schema and data: - - {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload init tpcc \
- 'postgresql://root@localhost:26257?sslmode=disable'
- ~~~
- -2. The initial data is enough for the purpose of this tutorial, but you can run the workload for as long as you like to increase the data size, adjusting the `--duration` flag as appropriate: - - {% include copy-clipboard.html %}
- ~~~ shell
- $ cockroach workload run tpcc \
- --duration=30s \
- 'postgresql://root@localhost:26257?sslmode=disable'
- ~~~
- - You'll see per-operation statistics print to standard output every second. - -## Step 5. Watch the replica count increase - -Open the Admin UI at http://localhost:8080 and you’ll see the replica count increase as the `tpcc` workload writes data. - -CockroachDB Admin UI - -## Step 6. Add 2 more nodes - -Adding capacity is as simple as starting more nodes and joining them to the running cluster: - -{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 4:
-$ cockroach start \
---insecure \
---store=scale-node4 \
---listen-addr=localhost:26260 \
---http-addr=localhost:8083 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -{% include copy-clipboard.html %}
-~~~ shell
-# In a new terminal, start node 5:
-$ cockroach start \
---insecure \
---store=scale-node5 \
---listen-addr=localhost:26261 \
---http-addr=localhost:8084 \
---join=localhost:26257,localhost:26258,localhost:26259
-~~~
- -## Step 7. Watch data rebalance across all 5 nodes - -Back in the Admin UI, you'll now see 5 nodes listed. At first, the replica count will be lower for nodes 4 and 5. Very soon, however, you'll see those numbers even out across all nodes, indicating that data is being automatically rebalanced to utilize the additional capacity of the new nodes. - -CockroachDB Admin UI - -{{site.data.alerts.callout_info}} -After scaling to 5 nodes, the Admin UI will call out a number of under-replicated ranges. This is due to the cluster preferring 5 replicas for important [internal system data](configure-replication-zones.html#for-system-data) by default. While the cluster has only 3 nodes, this preference cannot be satisfied and is ignored in reporting, but as soon as there are more than 3 nodes, the cluster recognizes the preference and reports the under-replicated state in the UI. As those ranges are up-replicated, the under-replicated range count will decrease to 0. -{{site.data.alerts.end}} - -## Step 8. Stop the cluster - -Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}} -For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press **CTRL-C** a second time. 
-{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf scale-node1 scale-node2 scale-node3 scale-node4 scale-node5
-~~~
- -## What's next? - -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} diff --git a/src/current/v19.1/demo-data-replication.md b/src/current/v19.1/demo-data-replication.md deleted file mode 100644 index 1e82b3edee3..00000000000 --- a/src/current/v19.1/demo-data-replication.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: Data Replication -summary: Use a local cluster to explore how CockroachDB replicates and distributes data. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB replicates and distributes data. Starting with a 1-node local cluster, you'll write some data, add 2 nodes, and watch how the data is replicated automatically. You'll then update the cluster to replicate 5 ways, add 2 more nodes, and again watch how all existing replicas are re-replicated to the new nodes. - -## Before you begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Start a 1-node cluster - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---store=repdemo-node1 \
---listen-addr=localhost:26257 \
---http-addr=localhost:8080
-~~~
- -## Step 2. Write data - -In a new terminal, use the [`cockroach workload`](cockroach-workload.html) command to generate an example `intro` database: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach workload init intro \
-'postgresql://root@localhost:26257?sslmode=disable'
-~~~
- -In the same terminal, open the [built-in SQL shell](use-the-built-in-sql-client.html) and verify that the new `intro` database was added with one table, `mytable`: - -{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --host=localhost:26257
-~~~
- -{% include copy-clipboard.html %}
-~~~ sql
-> SHOW DATABASES;
-~~~
- -~~~
- database_name
-+---------------+
- defaultdb
- intro
- postgres
- system
-(4 rows)
-~~~
- -{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM intro;
-~~~
- -~~~
- table_name
-+------------+
- mytable
-(1 row)
-~~~
- -{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM intro.mytable WHERE (l % 2) = 0;
-~~~
- -~~~
- l | v
-+----+------------------------------------------------------+
- 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,,
- 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^
- 4 | ! "?##mW##?"-
- 6 | ! C O N G R A T S _am#Z??A#ma, Y
- 8 | ! _ummY" "9#ma, A
- 10 | ! vm#Z( )Xmms Y
- 12 | ! .j####mmm#####mm#m##6.
- 14 | ! W O W ! jmm###mm######m#mmm##6
- 16 | ! ]#me*Xm#m#mm##m#m##SX##c
- 18 | ! dm#||+*$##m#mm#m#Svvn##m
- 20 | ! :mmE=|+||S##m##m#1nvnnX##; A
- 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M
- 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A
- 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z
- 28 | ! U D 4##c|+|+|]m#kvnvnno##P E
- 30 | ! I 4#ma+|++]mmhvnnvq##P` !
- 32 | ! D I ?$#q%+|dmmmvnnm##!
- 34 | ! T -4##wu#mm#pw##7'
- 36 | ! -?$##m####Y'
- 38 | ! !! "Y##Y"-
- 40 | !
-(21 rows)
-~~~
- -Exit the SQL shell: - -{% include copy-clipboard.html %}
-~~~ sql
-> \q
-~~~
- -## Step 3. 
Add two nodes - -In a new terminal, add node 2: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=repdemo-node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257 -~~~ - -In a new terminal, add node 3: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=repdemo-node3 \ ---listen-addr=localhost:26259 \ ---http-addr=localhost:8082 \ ---join=localhost:26257 -~~~ - -## Step 4. Watch data replicate to the new nodes - -Open the Admin UI at http://localhost:8080 to see that all three nodes are listed. At first, the replica count will be lower for nodes 2 and 3. Very soon, the replica count will be identical across all three nodes, indicating that all data in the cluster has been replicated 3 times; there's a copy of every piece of data on each node. - -CockroachDB Admin UI - -## Step 5. Increase the replication factor - -As you just saw, CockroachDB replicates data 3 times by default. Now, in the terminal you used for the built-in SQL shell or in a new terminal, use the [`ALTER RANGE ... CONFIGURE ZONE`](configure-zone.html) statement to change the cluster's `.default` replication factor to 5: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --execute="ALTER RANGE default CONFIGURE ZONE USING num_replicas=5;" --insecure --host=localhost:26257 -~~~ - -## Step 6. Add two more nodes - -In a new terminal, add node 4: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=repdemo-node4 \ ---listen-addr=localhost:26260 \ ---http-addr=localhost:8083 \ ---join=localhost:26257 -~~~ - -In a new terminal, add node 5: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=repdemo-node5 \ ---listen-addr=localhost:26261 \ ---http-addr=localhost:8084 \ ---join=localhost:26257 -~~~ - -## Step 7. Watch data replicate to the new nodes - -Back in the Admin UI, you'll see that there are now 5 nodes listed. Again, at first, the replica count will be lower for nodes 4 and 5. But because you changed the default replication factor to 5, very soon, the replica count will be identical across all 5 nodes, indicating that all data in the cluster has been replicated 5 times. - -CockroachDB Admin UI - -## Step 8. Stop the cluster - -Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}} -For the last 2 nodes, the shutdown process will take longer (about a minute) and will eventually force stop the nodes. This is because, with only 2 nodes still online, a majority of replicas are no longer available (3 of 5), and so the cluster is not operational. To speed up the process, press **CTRL-C** a second time in the nodes' terminals. -{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf repdemo-node1 repdemo-node2 repdemo-node3 repdemo-node4 repdemo-node5 -~~~ - -## What's next? 
- -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} diff --git a/src/current/v19.1/demo-fault-tolerance-and-recovery.md b/src/current/v19.1/demo-fault-tolerance-and-recovery.md deleted file mode 100644 index a70d3c493a6..00000000000 --- a/src/current/v19.1/demo-fault-tolerance-and-recovery.md +++ /dev/null @@ -1,352 +0,0 @@ ---- -title: Fault Tolerance & Recovery -summary: Use a local cluster to explore how CockroachDB remains available during, and recovers after, failure. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB remains available during, and recovers after, failure. Starting with a 3-node local cluster, you'll remove a node and see how the cluster continues uninterrupted. You'll then write some data while the node is offline, rejoin the node, and see how it catches up with the rest of the cluster. Finally, you'll add a fourth node, remove a node again, and see how missing replicas eventually re-replicate to the new node. - -## Before you begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Start a 3-node cluster - -Use the [`cockroach start`](start-a-node.html) command to start 3 nodes: - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 1: -$ cockroach start \ ---insecure \ ---store=fault-node1 \ ---listen-addr=localhost:26257 \ ---http-addr=localhost:8080 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 2: -$ cockroach start \ ---insecure \ ---store=fault-node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -# In a new terminal, start node 3: -$ cockroach start \ ---insecure \ ---store=fault-node3 \ ---listen-addr=localhost:26259 \ ---http-addr=localhost:8082 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -## Step 2. Initialize the cluster - -In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---insecure \ ---host=localhost:26257 -~~~ - -## Step 3. Verify that the cluster is live - -In a new terminal, use the [`cockroach sql`](use-the-built-in-sql-client.html) command to connect the built-in SQL shell to any node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26257 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name -+---------------+ - defaultdb - postgres - system -(3 rows) -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 4. Remove a node temporarily - -In the terminal running node 2, press **CTRL-C** to stop the node. - -Alternatively, you can open a new terminal and run the [`cockroach quit`](stop-a-node.html) command against port `26258`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach quit --insecure --host=localhost:26258 -~~~ - -~~~ -initiating graceful shutdown of server -ok -~~~ - -## Step 5. 
Verify that the cluster remains available - -Switch to the terminal for the built-in SQL shell and reconnect the shell to node 1 (port `26257`) or node 3 (port `26259`): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name -+---------------+ - defaultdb - postgres - system -(3 rows) -~~~ - -As you see, despite one node being offline, the cluster continues uninterrupted because a majority of replicas (2/3) remains available. If you were to remove another node, however, leaving only one node live, the cluster would be unresponsive until another node was brought back online. - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 6. Write data while the node is offline - -In the same terminal, use the [`cockroach workload`](cockroach-workload.html) command to generate an example `startrek` database: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach workload init startrek \ -'postgresql://root@localhost:26257?sslmode=disable' -~~~ - -Then reconnect the SQL shell to node 1 (port `26257`) or node 3 (port `26259`) and verify that the new `startrek` database was added with two tables, `episodes` and `quotes`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26259 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW DATABASES; -~~~ - -~~~ - database_name -+---------------+ - defaultdb - postgres - startrek - system -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM startrek; -~~~ - -~~~ - table_name -+------------+ - episodes - quotes -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM startrek.episodes WHERE stardate > 5500; -~~~ - -~~~ - id | season | num | title | stardate -+----+--------+-----+-----------------------------------+----------+ - 60 | 3 | 5 | Is There in Truth No Beauty? | 5630.7 - 62 | 3 | 7 | Day of the Dove | 5630.3 - 64 | 3 | 9 | The Tholian Web | 5693.2 - 65 | 3 | 10 | Plato's Stepchildren | 5784.2 - 66 | 3 | 11 | Wink of an Eye | 5710.5 - 69 | 3 | 14 | Whom Gods Destroy | 5718.3 - 70 | 3 | 15 | Let That Be Your Last Battlefield | 5730.2 - 73 | 3 | 18 | The Lights of Zetar | 5725.3 - 74 | 3 | 19 | Requiem for Methuselah | 5843.7 - 75 | 3 | 20 | The Way to Eden | 5832.3 - 76 | 3 | 21 | The Cloud Minders | 5818.4 - 77 | 3 | 22 | The Savage Curtain | 5906.4 - 78 | 3 | 23 | All Our Yesterdays | 5943.7 - 79 | 3 | 24 | Turnabout Intruder | 5928.5 -(14 rows) -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 7. Rejoin the node to the cluster - -Switch to the terminal for node 2, and rejoin the node to the cluster, using the same command that you used in step 1: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure \ ---store=fault-node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257 -~~~ - -~~~ -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -admin: http://localhost:8081 -sql: postgresql://root@localhost:26258?sslmode=disable -logs: node2/logs -store[0]: path=fault-node2 -status: restarted pre-existing node -clusterID: {5638ba53-fb77-4424-ada9-8a23fbce0ae9} -nodeID: 2 -~~~ - -## Step 8. 
Verify that the rejoined node has caught up - -Switch to the terminal for the built-in SQL shell, connect the shell to the rejoined node 2 (port `26258`), and check for the `startrek` data that was added while the node was offline: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26258 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM startrek.episodes WHERE stardate > 5500; -~~~ - -~~~ - id | season | num | title | stardate -+----+--------+-----+-----------------------------------+----------+ - 60 | 3 | 5 | Is There in Truth No Beauty? | 5630.7 - 62 | 3 | 7 | Day of the Dove | 5630.3 - 64 | 3 | 9 | The Tholian Web | 5693.2 - 65 | 3 | 10 | Plato's Stepchildren | 5784.2 - 66 | 3 | 11 | Wink of an Eye | 5710.5 - 69 | 3 | 14 | Whom Gods Destroy | 5718.3 - 70 | 3 | 15 | Let That Be Your Last Battlefield | 5730.2 - 73 | 3 | 18 | The Lights of Zetar | 5725.3 - 74 | 3 | 19 | Requiem for Methuselah | 5843.7 - 75 | 3 | 20 | The Way to Eden | 5832.3 - 76 | 3 | 21 | The Cloud Minders | 5818.4 - 77 | 3 | 22 | The Savage Curtain | 5906.4 - 78 | 3 | 23 | All Our Yesterdays | 5943.7 - 79 | 3 | 24 | Turnabout Intruder | 5928.5 -(14 rows) -~~~ - -At first, while node 2 is catching up, it acts as a proxy to one of the other nodes with the data. This shows that even when a copy of the data is not local to the node, it has seamless access. - -Soon enough, node 2 catches up entirely. To verify, open the Admin UI at `http://localhost:8080` to see that all three nodes are listed, and the replica count is identical for each. This means that all data in the cluster has been replicated 3 times; there's a copy of every piece of data on each node. - -{{site.data.alerts.callout_success}}CockroachDB replicates data 3 times by default. You can customize the number and location of replicas for the entire cluster or for specific sets of data using replication zones.{{site.data.alerts.end}} - -CockroachDB Admin UI - -## Step 9. Add another node - -Now, to prepare the cluster for a permanent node failure, open a new terminal and add a fourth node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=fault-node4 \ ---listen-addr=localhost:26260 \ ---http-addr=localhost:8083 \ ---join=localhost:26257,localhost:26258,localhost:26259 -~~~ - -~~~ -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -admin: http://localhost:8083 -sql: postgresql://root@localhost:26260?sslmode=disable -logs: node4/logs -store[0]: path=fault-node4 -status: initialized new node, joined pre-existing cluster -clusterID: {5638ba53-fb77-4424-ada9-8a23fbce0ae9} -nodeID: 4 -~~~ - -## Step 10. Remove a node permanently - -Again, switch to the terminal running node 2 and press **CTRL-C** to stop it. - -Alternatively, you can open a new terminal and run the [`cockroach quit`](stop-a-node.html) command against port `26258`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach quit --insecure --host=localhost:26258 -~~~ - -~~~ -initiating graceful shutdown of server -ok -server drained and shutdown completed -~~~ - -## Step 11. Verify that the cluster re-replicates missing replicas - -Back in the Admin UI, you'll see 4 nodes listed. After about 1 minute, the dot next to node 2 will turn yellow, indicating that the node is not responding. 
- -CockroachDB Admin UI - -After about 10 minutes, node 2 will move into a **Dead Nodes** section, indicating that the node is not expected to come back. At this point, in the **Live Nodes** section, you should also see that the **Replicas** count for node 4 matches the count for nodes 1 and 3, the other live nodes. This indicates that all missing replicas (those that were on node 2) have been re-replicated to node 4. - -CockroachDB Admin UI - -## Step 12. Stop the cluster - -Once you're done with your test cluster, stop each node by switching to its terminal and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}}For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press CTRL-C a second time.{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %}
-~~~ shell
-$ rm -rf fault-node1 fault-node2 fault-node3 fault-node4
-~~~
- -## What's next? - -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} diff --git a/src/current/v19.1/demo-follow-the-workload.md b/src/current/v19.1/demo-follow-the-workload.md deleted file mode 100644 index 404725de211..00000000000 --- a/src/current/v19.1/demo-follow-the-workload.md +++ /dev/null @@ -1,262 +0,0 @@ ---- -title: Follow-the-Workload -summary: CockroachDB can dynamically optimize read latency for the location from which most of the workload is originating. -toc: true ---- - -"Follow-the-workload" refers to CockroachDB's ability to dynamically optimize read latency for the location from which most of the workload is originating. This page explains how "follow-the-workload" works and walks you through a simple demonstration using a local cluster. - -## Overview - -### Basic terms - -To understand how "follow-the-workload" works, it's important to start with some basic terms: - -Term | Description -----|------------ -**Range** | CockroachDB stores all user data and almost all system data in a giant sorted map of key-value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -**Range Replica** | CockroachDB replicates each range (3 times by default) and stores each replica on a different node. -**Range Lease** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range. - -### How it works - -"Follow-the-workload" is based on the way **range leases** handle read requests. Read requests bypass the Raft consensus protocol, accessing the range replica that holds the range lease (the leaseholder) and sending the results to the client without needing to coordinate with any of the other range replicas. Bypassing Raft, and the network round trips involved, is possible because the leaseholder is guaranteed to be up-to-date, since all write requests also go to the leaseholder. - -This increases the speed of reads, but it doesn't guarantee that the range lease will be anywhere close to the origin of requests. 
If requests are coming from the US West, for example, and the relevant range lease is on a node in the US East, the requests would likely enter a gateway node in the US West and then get routed to the node with the range lease in the US East. - -However, you can cause the cluster to actively move range leases for even better read performance by starting each node with the [`--locality`](start-a-node.html#locality) flag. With this flag specified, the cluster knows about the location of each node, so when there's high latency between nodes, the cluster will move active range leases to a node closer to the origin of the majority of the workload. This is especially helpful for applications with workloads that move around throughout the day (e.g., most of the traffic is in the US East in the morning and in the US West in the evening). - -{{site.data.alerts.callout_success}}To enable "follow-the-workload", you just need to start each node of the cluster with the --locality flag, as shown in the tutorial below. No additional user action is required.{{site.data.alerts.end}} - -### Example - -In this example, let's imagine that lots of read requests are going to node 1, and that the requests are for data in range 3. Because range 3's lease is on node 3, the requests are routed to node 3, which returns the results to node 1. Node 1 then responds to the clients. - -Follow-the-workload example - -However, if the nodes were started with the [`--locality`](start-a-node.html#locality) flag, after a short while, the cluster would move range 3's lease to node 1, which is closer to the origin of the workload, thus reducing the network round trips and increasing the speed of reads. - -Follow-the-workload example - -## Tutorial - -### Step 1. Install prerequisites - -In this tutorial, you'll use CockroachDB, the `comcast` network tool to simulate network latency on your local workstation, and the `tpcc` workload built into CockroachDB to simulate client workloads. Before you begin, make sure these applications are installed: - -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install [Go](https://golang.org/doc/install) version 1.9 or higher. If you're on a Mac and using Homebrew, use `brew install go`. You can check your local version by running `go version`. -- Install the [`comcast`](https://github.com/tylertreat/comcast) network simulation tool: `go get github.com/tylertreat/comcast` - -Also, to keep track of the data files and logs for your cluster, you may want to create a new directory (e.g., `mkdir follow-workload`) and start all your nodes in that directory. - -### Step 2. Start simulating network latency - -"Follow-the-workload" only kicks in when there's high latency between the nodes of the CockroachDB cluster. In this tutorial, you'll run 3 nodes on your local workstation, with each node pretending to be in a different region of the US. To simulate latency between the nodes, use the `comcast` tool that you installed earlier. - -In a new terminal, start `comcast` as follows: - -{% include copy-clipboard.html %} -~~~ shell -$ comcast --device lo0 --latency 100 -~~~ - -For the `--device` flag, use `lo0` if you're on Mac or `lo` if you're on Linux. If neither works, run the `ifconfig` command and find the interface responsible for `127.0.0.1` in the output. - -This command causes a 100 millisecond delay for all requests on the loopback interface of your local workstation. It will only affect connections from the machine to itself, not to/from the Internet. - -### Step 3. 
Start the cluster - -Use the [`cockroach start`](start-a-node.html) command to start 3 nodes on your local workstation, using the [`--locality`](start-a-node.html#locality) flag to pretend that each node is in a different region of the US. - -1. In a new terminal, start a node in the "US West": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-west \ - --store=follow1 \ - --listen-addr=localhost:26257 \ - --http-addr=localhost:8080 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -2. In a new terminal, start a node in the "US Midwest": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-midwest \ - --store=follow2 \ - --listen-addr=localhost:26258 \ - --http-addr=localhost:8081 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -3. In a new terminal, start a node in the "US East": - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --locality=region=us-east \ - --store=follow3 \ - --listen-addr=localhost:26259 \ - --http-addr=localhost:8082 \ - --join=localhost:26257,localhost:26258,localhost:26259 - ~~~ - -### Step 4. Initialize the cluster - -In a new terminal, use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init \ ---insecure \ ---host=localhost:26257 -~~~ - -### Step 5. Simulate traffic in the US East - -Now that the cluster is live, use the `tpcc` workload to simulate multiple client connections to the node in the "US East". - -1. In the same terminal, run the [`cockroach workload init tpcc`](cockroach-workload.html) command to load the initial schema and data, pointing it at port `26259`, which is the port of the node with the `us-east` locality: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach workload init tpcc \ - 'postgresql://root@localhost:26259?sslmode=disable' - ~~~ - -2. Let the workload run to completion. - -### Step 6. Check the location of the range lease - -The load generator created a `tpcc` database with several tables that map to underlying key-value ranges. Verify that the range lease for the `customer` table moved to the node in the "US East" as follows. - -1. In the same terminal, run the [`cockroach node status`](view-node-details.html) command against any node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach node status --insecure --host=localhost:26259 - ~~~ - - ~~~ - id | address | build | started_at | updated_at | is_available | is_live - +----+-----------------+----------------------------------------+----------------------------------+----------------------------------+--------------+---------+ - 1 | localhost:26257 | v2.2.0-alpha.00000000-2397-geb8345b19c | 2018-11-21 03:12:17.572557+00:00 | 2018-11-21 03:16:11.917193+00:00 | true | true - 2 | localhost:26259 | v2.2.0-alpha.00000000-2397-geb8345b19c | 2018-11-21 03:12:18.935464+00:00 | 2018-11-21 03:16:13.510253+00:00 | true | true - 3 | localhost:26258 | v2.2.0-alpha.00000000-2397-geb8345b19c | 2018-11-21 03:12:19.11294+00:00 | 2018-11-21 03:16:13.571382+00:00 | true | true - (3 rows) - ~~~ - -2. In the response, note the ID of the node running on port `26259` (in this case, node 2). - -3. 
In the same terminal, connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure --host=localhost:26259
    ~~~

4. Check where the range lease is for the `tpcc.customer` table:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW EXPERIMENTAL_RANGES FROM TABLE tpcc.customer;
    ~~~

    ~~~
      start_key | end_key | range_id | replicas | lease_holder
    +-----------+---------+----------+----------+--------------+
      NULL      | NULL    |       33 | {1,2,3}  |            2
    (1 row)
    ~~~

    `replicas` and `lease_holder` indicate the node IDs. As you can see, the lease for the range holding the `customer` table's data is on node 2, which is the same ID as the node on port `26259`.

5. Exit the SQL shell:

    {% include copy-clipboard.html %}
    ~~~ sql
    > \q
    ~~~

### Step 7. Simulate traffic in the US West

1. In the same terminal, run the [`cockroach workload run tpcc`](cockroach-workload.html) command to generate more load, this time pointing it at port `26257`, which is the port of the node with the `us-west` locality:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach workload run tpcc \
    --duration=5m \
    'postgresql://root@localhost:26257?sslmode=disable'
    ~~~

    You'll see per-operation statistics print to standard output every second.

2. Let the workload run to completion. This is necessary because the system will still "remember" the earlier requests to the other locality.

    {{site.data.alerts.callout_info}}
    The latency numbers printed by the workload will be over 200 milliseconds because the 100 millisecond delay in each direction (200ms round-trip) caused by the `comcast` tool also applies to the traffic going from the `tpcc` process to the `cockroach` process. If you were to set up more advanced rules that excluded the `tpcc` process's traffic or to run this on a real network with real network delay, these numbers would be down in the single-digit milliseconds.
    {{site.data.alerts.end}}

### Step 8. Check the location of the range lease

Verify that the range lease for the `customer` table moved to the node in the "US West" as follows.

1. Connect the [built-in SQL shell](use-the-built-in-sql-client.html) to any node:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach sql --insecure --host=localhost:26257
    ~~~

2. Check where the range lease is for the `tpcc.customer` table:

    {% include copy-clipboard.html %}
    ~~~ sql
    > SHOW EXPERIMENTAL_RANGES FROM TABLE tpcc.customer;
    ~~~

    ~~~
      start_key | end_key | range_id | replicas | lease_holder
    +-----------+---------+----------+----------+--------------+
      NULL      | NULL    |       33 | {1,2,3}  |            1
    (1 row)
    ~~~

    As you can see, the lease for the range holding the `customer` table's data is now on node 1, which is the same ID as the node on port `26257`.

### Step 9. Stop the cluster

Once you're done with your cluster, press **CTRL-C** in each node's terminal.

{{site.data.alerts.callout_success}}
For the last node, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 node still online, a majority of replicas are no longer available (2 of 3), and so the cluster is not operational. To speed up the process, press **CTRL-C** a second time.
-{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf follow1 follow2 follow3 -~~~ - -### Step 10. Stop simulating network latency - -Once you're done with this tutorial, you will not want a 100 millisecond delay for all requests on your local workstation, so stop the `comcast` tool: - -{% include copy-clipboard.html %} -~~~ shell -$ comcast --device lo0 --stop -~~~ - -## What's next? - -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} diff --git a/src/current/v19.1/demo-geo-partitioning.md b/src/current/v19.1/demo-geo-partitioning.md deleted file mode 100644 index fb0ad9db4aa..00000000000 --- a/src/current/v19.1/demo-geo-partitioning.md +++ /dev/null @@ -1,823 +0,0 @@ ---- -title: Geo-Partitioning for Fast Reads and Writes in a Multi-Region Cluster -summary: Use geo-partitioning to get low-latency reads and writes in a multi-region CockroachDB cluster. -toc: true -toc_not_nested: true ---- - -CockroachDB's [geo-partitioning](partitioning.html) feature gives you low-latency reads and writes in a broadly distributed cluster. This tutorial walks you through the process in a 9-node deployment across 3 US regions on GCE. If you follow along, you'll see dramatic improvements in latency, with the majority of reads and writes executing in 2 milliseconds or less. - -## See it in action - -### Watch a demo - -{% include_cached youtube.html video_id="TgnQwOOk9Js" %} - -### Read a case study - -Read about how an [electronic lock manufacturer](https://www.cockroachlabs.com/case-studies/european-electronic-lock-manufacturer-modernizes-iam-system-with-managed-cockroachdb/) and [multi-national bank](https://www.cockroachlabs.com/case-studies/top-five-multinational-bank-modernizes-its-european-core-banking-services-migrating-from-oracle-to-cockroachdb/) are using the Geo-Partitioned Replicas topology in production for improved performance and regulatory compliance. - -## Before you begin - -### Review important concepts - -To understand performance in a geographically distributed CockroachDB cluster, it's important to first review [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html). - -### Review the cluster topology - -You'll deploy a 9-node CockroachDB cluster across 3 GCE regions, with each node on a VM in a distinct availability zone for optimal fault tolerance: - -Geo-partitioning topology - -A few notes: - -- For each CockroachDB node, you'll use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) with the Ubuntu 16.04 OS image and a [local SSD](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd) disk. -- You'll start each node with the [`--locality` flag](start-a-node.html#locality) describing the node's region and availability zone. Before partitioning, this will lead CockroachDB to evenly distribute data across the 3 regions. After partitioning, this information will be used to pin data closer to users. -- There will be an extra VM in each region for an instance of your application and the open-source HAProxy load balancer. The application in each region will be pointed at the local load balancer, which will direct connections only to the CockroachDB nodes in the same region. 
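To make the regional routing concrete, here is a minimal sketch of how an application instance in this topology would connect only through its local load balancer. This is not part of MovR itself: the driver choice (`lib/pq`) and the load balancer address are placeholders, and everything except the connection string is boilerplate.

~~~ go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // assumption: any PostgreSQL-wire driver behaves the same way
)

func main() {
	// Each regional app instance dials its *local* HAProxy, so its
	// connections only ever land on CockroachDB nodes in the same region.
	const localLB = "<internal IP of this region's HAProxy>" // placeholder, filled in per region
	db, err := sql.Open("postgres",
		"postgresql://root@"+localLB+":26257/movr?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
~~~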
- -### Review the application - -For your application, you'll use our open-source, fictional, peer-to-peer ride-sharing app, [MovR](https://github.com/cockroachdb/movr). You'll run 3 instances of MovR, one in each US region, with each instance representing users in a specific city: New York, Chicago, or Seattle. - -#### The schema - -Geo-partitioning schema - -A few notes about the schema: - -- There are just three self-explanatory tables: `users` represents the people registered for the service, `vehicles` represents the pool of vehicles for the service, and `rides` represents when and where users have rented a vehicle. -- Each table has a composite primary key of `city` and `id`, with `city` being first in the key. These compound primary keys will enable you to [geo-partition data at the row level](partitioning.html#partition-using-primary-key) by `city`. - -#### The workflow - -The workflow for MovR is as follows: - -1. A user loads the app and sees the 25 closest vehicles. Behind the scenes, this is a `SELECT` from the `vehicles` table: - - ~~~ sql - > SELECT id, city, status, ... FROM vehicles WHERE city = - ~~~ - -2. The user signs up for the service, which is an `INSERT` of a row into the `users` table: - - ~~~ sql - > INSERT INTO users (id, name, address, ...) VALUES ... - ~~~ - -3. In some cases, the user adds their own vehicle to share, which is an `INSERT` of a row into the `vehicles` table: - - ~~~ sql - > INSERT INTO vehicles (id, city, type, ...) VALUES ... - ~~~ - -4. More often, the user reserves a vehicle and starts a ride, which is an `UPDATE` of a row in the `vehicles` table and an `INSERT` of a row into the `rides` table: - - ~~~ sql - > UPDATE vehicles SET status = 'in_use' WHERE ... - ~~~ - - ~~~ sql - > INSERT INTO rides (id, city, start_addr, ...) VALUES ... - ~~~ - -5. The user ends the ride and releases the vehicle, which is an `UPDATE` of a row in the `vehicles` table and an `UPDATE` of a row in the `rides` table: - - ~~~ sql - > UPDATE vehicles SET status = 'available' WHERE ... - ~~~ - - ~~~ sql - > UPDATE rides SET end_address = ... - ~~~ - -## Step 1. Request a trial license - -[Geo-partitioning](partitioning.html) is an enterprise feature. For the purpose of this tutorial, [request a 30-day trial license](https://www.cockroachlabs.com/get-cockroachdb/enterprise/) for use with your cluster. - -You should receive your trial license via email within a few minutes. You'll enable your license once your cluster is up-and-running. - -## Step 2. Configure your network - -{% include {{ page.version.version }}/performance/configure-network.md %} - -## Step 3. Provision VMs - -You need 9 VMs across 3 GCE regions, 3 per region with each VM in a distinct availability zone. You also need 3 extra VMs, 1 per region, for a region-specific version of MovR and the HAProxy load balancer. - -1. [Create 9 VMs](https://cloud.google.com/compute/docs/instances/create-start-instance) for CockroachDB nodes. - - When creating each VM: - - Use the [`n1-standard-4`](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machine type (4 vCPUs, 15 GB memory) and the Ubuntu 16.04 OS image. - - Select one of the following [region and availability zone](https://cloud.google.com/compute/docs/regions-zones/) configurations. Be sure to use each region/availability combination only once. 
- - VM | Region | Availability Zone - ---|--------|------------------ - 1 | `us-east1` | `us-east1-b` - 2 | `us-east1` | `us-east1-c` - 3 | `us-east1` | `us-east1-d` - 4 | `us-central1` | `us-central1-a` - 5 | `us-central1` | `us-central1-b` - 6 | `us-central1` | `us-central1-c` - 7 | `us-west1` | `us-west1-a` - 8 | `us-west1` | `us-west1-b` - 9 | `us-west1` | `us-west1-c` - - [Create and mount a local SSD](https://cloud.google.com/compute/docs/disks/local-ssd#create_local_ssd). - - To apply the Admin UI firewall rule you created earlier, click **Management, disk, networking, SSH keys**, select the **Networking** tab, and then enter `cockroachdb` in the **Network tags** field. - -2. [Create 3 VMs](https://cloud.google.com/compute/docs/instances/create-start-instance) for the region-specific versions of MovR and HAProxy, one in each of the regions mentioned above, using same machine types and OS image as mentioned above. - -3. Note the internal IP address of each VM. You'll need these addresses when starting the CockroachDB nodes, configuring HAProxy, and running the MovR application. - -## Step 4. Start CockroachDB - -Now that you have VMs in place, start your CockroachDB cluster across the three US regions. - -### Nodes in US East - -1. SSH to the first VM in the US East region where you want to run a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-east1,zone= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two CockroachDB nodes in the region. Each time, be sure to: - - Adjust the `--advertise-addr` flag. - - Use the appropriate availability zone of the VM in the `zone` portion of the `--locality` flag. - -5. On any of the VMs in the US East region, run the one-time [`cockroach init`](initialize-a-cluster.html) command to join the first nodes into a cluster: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach init --insecure --host=
      - ~~~ - -### Nodes in US Central - -1. SSH to the first VM in the US Central region where you want to run a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-central1,zone= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two CockroachDB nodes in the region. Each time, be sure to: - - Adjust the `--advertise-addr` flag. - - Use the appropriate availability zone of the VM in the `zone` portion of the `--locality` flag. - -### Nodes in US West - -1. SSH to the first VM in the US West region where you want to run a CockroachDB node. - -2. Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - -3. Run the [`cockroach start`](start-a-node.html) command: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --advertise-addr= \ - --join=:26257,:26257,:26257 \ - --locality=cloud=gce,region=us-west1,zone= \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -4. Repeat steps 1 - 3 for the other two CockroachDB nodes in the region. Each time, be sure to: - - Adjust the `--advertise-addr` flag. - - Use the appropriate availability zone of the VM in the `zone` portion of the `--locality` flag. - -## Step 5. Set up the client VMs - -Next, install Docker and HAProxy on each client VM. Docker is required so you can later run MovR from a Docker image, and HAProxy will serve as the region-specific load balancer for MovR in each region. - -1. SSH to the VM in the US East region where you want to run MovR and HAProxy. - -2. [Install Docker](https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-repository). - -3. Install HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get update - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo apt-get install haproxy - ~~~ - -4. 
Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, extract the binary, and copy it into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - The `cockroach` binary needs to be on these VMs so you can run some client commands built into the binary, such as the command in the next step and the command for starting the built-in SQL shell. - -5. Run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command to generate an HAProxy config file, specifying the address of any CockroachDB node and the `--locality` of nodes in the US East region: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach gen haproxy \ - --insecure \ - --host=
      \ - --locality=region=us-east1 - ~~~ - - The generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated with just the nodes in US East based on the `--locality` flag used: - - ~~~ - global - maxconn 4096 - - defaults - mode tcp - # Timeout values should be configured for your specific use. - # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect - timeout connect 10s - timeout client 1m - timeout server 1m - # TCP keep-alive on client side. Server already enables them. - option clitcpka - - listen psql - bind :26257 - mode tcp - balance roundrobin - option httpchk GET /health?ready=1 - server cockroach1 :26257 check port 8080 - server cockroach2 :26257 check port 8080 - server cockroach3 :26257 check port 8080 - ~~~ - -6. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - - {% include copy-clipboard.html %} - ~~~ shell - $ haproxy -f haproxy.cfg & - ~~~ - -7. Repeat the steps above for the client VMs in the other two regions. For each region, be sure to adjust the `--locality` flag when running the `cockroach gen haproxy` command. - -## Step 6. Configure the cluster - -Before you can run MovR against the cluster and demonstrate the geo-partitioning feature, you must create a `movr` database and enable an enterprise license. - -1. SSH to the client VM in the US East region. - -2. Use the [`cockroach sql`](use-the-built-in-sql-client.html) command to start the built-in SQL shell, specifying the address of the HAProxy load balancer in the region: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=
      - ~~~ - -3. In the SQL shell, create the `movr` database: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE DATABASE movr; - ~~~ - -4. Enable the trial license you requested earlier: - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.organization = ''; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING enterprise.license = ''; - ~~~ - -5. Set the longitude and latitude of the regions where you are running CockroachDB nodes: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT into system.locations VALUES - ('region', 'us-east1', 33.836082, -81.163727), - ('region', 'us-central1', 42.032974, -93.581543), - ('region', 'us-west1', 43.804133, -120.554201); - ~~~ - - Inserting these coordinates enables you to visualize your cluster on the [**Node Map**](enable-node-map.html) feature of the Admin UI. - -6. Exit the SQL shell: - - {% include copy-clipboard.html %} - ~~~ sql - \q - ~~~ - -## Step 7. Access the Admin UI - -Now that you've deployed and configured your cluster, take a look at it in the Admin UI: - -1. Open a browser and go to `http://:8080`. - -2. On the **Cluster Overview** page, select **View: Node Map** to access the [Node Map](enable-node-map.html), which visualizes your CockroachDB cluster on a map of the US: - - Geo-partitioning node map - -3. Drill down one level to see your nodes across 3 regions: - - Geo-partitioning node map - -4. Drill into a region to see that each node is in a distinct availability zone: - - Geo-partitioning node map - -## Step 8. Start MovR - -{{site.data.alerts.callout_info}} -Be sure to use the exact version of MovR specified in the commands: `movr:19.03.2`. This tutorial relies on this specific version. Later versions use an expanded schema and will be featured in future tutorials. -{{site.data.alerts.end}} - -### MovR in US East - -1. Still on the client VM in the US East region, load the MovR schema and initial data for the cities of New York, Chicago, and Seattle, pointing at the address of the US East load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm cockroachdb/movr:19.03.2 \ - --app-name "movr-load" \ - --url "postgres://root@
      :26257/movr?sslmode=disable" \ - load \ - --num-users 100 \ - --num-rides 100 \ - --num-vehicles 10 \ - --city-pair us_east:"new york" \ - --city-pair central:chicago \ - --city-pair us_west:seattle - ~~~ - - After the Docker image downloads, you'll see data being generated for the specified cities: - - ~~~ - ... - [INFO] (MainThread) initializing tables - [INFO] (MainThread) loading cities ['new york', 'chicago', 'seattle'] - [INFO] (MainThread) loading movr data with ~100 users, ~10 vehicles, and ~100 rides - [INFO] (MainThread) Only using 3 of 5 requested threads, since we only create at most one thread per city - [INFO] (Thread-1 ) Generating user data for new york... - [INFO] (Thread-2 ) Generating user data for chicago... - [INFO] (Thread-3 ) Generating user data for seattle... - [INFO] (Thread-2 ) Generating vehicle data for chicago... - [INFO] (Thread-3 ) Generating vehicle data for seattle... - [INFO] (Thread-1 ) Generating vehicle data for new york... - [INFO] (Thread-2 ) Generating ride data for chicago... - [INFO] (Thread-3 ) Generating ride data for seattle... - [INFO] (Thread-1 ) Generating ride data for new york... - [INFO] (Thread-2 ) populated chicago in 9.173931 seconds - [INFO] (Thread-3 ) populated seattle in 9.257723 seconds - [INFO] (Thread-1 ) populated new york in 9.386243 seconds - [INFO] (MainThread) populated 3 cities in 20.587325 seconds - [INFO] (MainThread) - 4.954505 users/second - [INFO] (MainThread) - 4.954505 rides/second - [INFO] (MainThread) - 0.582883 vehicles/second - ~~~ - -2. Start MovR in the US East region, representing users in New York. Be sure to point at the address of the US East load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm cockroachdb/movr:19.03.2 \ - --app-name "movr-east" \ - --url "postgres://root@
      :26257/movr?sslmode=disable" \ - --num-threads=15 \ - run \ - --city="new york" - ~~~ - -### MovR in US Central - -1. SSH to the client VM in the US Central region. - -2. Start MovR in the US Central region, representing users in Chicago. Be sure to point at the address of the US Central load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm cockroachdb/movr:19.03.2 \ - --app-name "movr-central" \ - --url "postgres://root@
      :26257/movr?sslmode=disable" \ - --num-threads=15 \ - run \ - --city="chicago" - ~~~ - -### MovR in US West - -1. SSH to the client VM in the US West region. - -2. Start MovR in the US West region, representing users in Seattle. Be sure to point at the address of the US West load balancer: - - {% include copy-clipboard.html %} - ~~~ shell - $ sudo docker run -it --rm cockroachdb/movr:19.03.2 \ - --app-name "movr-west" \ - --url "postgres://root@
      :26257/movr?sslmode=disable" \ - --num-threads=15 \ - run \ - --city="seattle" - ~~~ - -## Step 9. Check SQL latency - -Now that MovR is running in all 3 regions, use the Admin UI to check the SQL latency before partitioning. - -1. Go back to the Admin UI. - -2. Click **Metrics** on the left and hover over the **Service Latency: SQL, 99th percentile** timeseries graph: - - Geo-partitioning SQL latency - - For each node, you'll see that the max latency of 99% of queries is in the 100s of milliseconds. - -## Step 10. Check replica distribution - -To understand why SQL latency is so high, use the built-in SQL shell to check the distribution of replicas. - -1. SSH to the client VM in any region. - -2. Use the [`cockroach sql`](use-the-built-in-sql-client.html) command to start the built-in SQL shell, specifying the address of the HAProxy load balancer in the region: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --database=movr --host=
      - ~~~ - -3. In the SQL shell, use the [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) statement to view the location of replicas for each of the 3 tables and their secondary indexes: - - {% include copy-clipboard.html %} - ~~~ sql - > SHOW EXPERIMENTAL_RANGES FROM TABLE users; - SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles; - SHOW EXPERIMENTAL_RANGES FROM INDEX vehicles_auto_index_fk_city_ref_users; - SHOW EXPERIMENTAL_RANGES FROM TABLE rides; - SHOW EXPERIMENTAL_RANGES FROM INDEX rides_auto_index_fk_city_ref_users; - SHOW EXPERIMENTAL_RANGES FROM INDEX rides_auto_index_fk_vehicle_city_ref_vehicles; - ~~~ - - ~~~ - start_key | end_key | range_id | replicas | lease_holder - +-----------+---------+----------+----------+--------------+ - NULL | NULL | 21 | {3,6,9} | 6 - (1 row) - - Time: 4.205387878s - - start_key | end_key | range_id | replicas | lease_holder - +-------------+-------------+----------+----------+--------------+ - NULL | /"new york" | 23 | {1,4,7} | 1 - /"new york" | /"seattle" | 51 | {2,5,9} | 2 - /"seattle" | NULL | 32 | {1,5,9} | 5 - (3 rows) - - Time: 64.121µs - - start_key | end_key | range_id | replicas | lease_holder - +-----------+---------+----------+----------+--------------+ - NULL | NULL | 32 | {1,5,9} | 5 - (1 row) - - Time: 37.14µs - - start_key | end_key | range_id | replicas | lease_holder - +-----------+---------+----------+----------+--------------+ - NULL | NULL | 31 | {3,5,7} | 7 - (1 row) - - Time: 28.988µs - - start_key | end_key | range_id | replicas | lease_holder - +-----------+---------+----------+----------+--------------+ - NULL | NULL | 31 | {3,5,7} | 7 - (1 row) - - Time: 25.707µs - - start_key | end_key | range_id | replicas | lease_holder - +-----------+---------+----------+----------+--------------+ - NULL | NULL | 31 | {3,5,7} | 7 - (1 row) - ~~~ - - Here's a node/region mapping: - - Nodes | Region - ------|------- - 1 - 3 | `us-east1` - 4 - 6 | `us-central1` - 7 - 9 | `us-west1` - - You'll see that most tables and indexes map to a single range. The one exception is the `vehicles` table, which maps to 3 ranges. In all cases, each range has 3 `replicas`, with each replica on a node in a distinct region. For each range, one of the replicas is the `lease_holder`. - - Thinking back to [how reads and writes work in CockroachDB](architecture/reads-and-writes-overview.html), this tells you that many reads are leaving their region to reach the relevant leaseholder replica, and all writes are spanning regions to achieve Raft consensus. This explains the currently high latencies. - - For example, based on output above, the replicas for the `users` table are on nodes 3, 6, and 9, with the leaseholder on node 6. This means that when a user in New York registers for the MovR service: - - 1. A request to write a row to the `users` table goes through the load balancer in the US east to a gateway node in the US east. - 2. The request is routed to the leaseholder on node 6 in the US central. - 3. The leaseholder waits for consensus from a replica in the US east or US west. - 4. The leaseholder returns acknowledgement to the gateway node in the US east. - 5. The gateway node responds to the client. - -{{site.data.alerts.callout_success}} -The **Network Latency** debug page, available at `http://:8080/#/reports/network`, gives you added insight into latency between regions. You'll see that nodes within a region experience sub-millisecond latency, while latency between nodes in different regions is much higher. 
-{{site.data.alerts.end}} - -## Step 11. Partition data by city - -The most effective way to prevent the high SQL latency resulting from cross-region operations is to partition each table and secondary index by `city`. Essentially, this will create a distinct set of ranges for each city partition, which you can then pin to nodes in the relevant region using replication zones (next step). - -1. Back in the SQL shell on one of your client VMs, partition the `users` table by city: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE users PARTITION BY LIST (city) ( - PARTITION new_york VALUES IN ('new york'), - PARTITION chicago VALUES IN ('chicago'), - PARTITION seattle VALUES IN ('seattle') - ); - ~~~ - -2. Partition the `vehicles` table and its secondary index by city: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE vehicles PARTITION BY LIST (city) ( - PARTITION new_york VALUES IN ('new york'), - PARTITION chicago VALUES IN ('chicago'), - PARTITION seattle VALUES IN ('seattle') - ); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX vehicles_auto_index_fk_city_ref_users PARTITION BY LIST (city) ( - PARTITION new_york_idx VALUES IN ('new york'), - PARTITION chicago_idx VALUES IN ('chicago'), - PARTITION seattle_idx VALUES IN ('seattle') - ); - ~~~ - -3. Partition the `rides` table and its secondary indexes by city: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE rides PARTITION BY LIST (city) ( - PARTITION new_york VALUES IN ('new york'), - PARTITION chicago VALUES IN ('chicago'), - PARTITION seattle VALUES IN ('seattle') - ); - - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX rides_auto_index_fk_city_ref_users PARTITION BY LIST (city) ( - PARTITION new_york_idx1 VALUES IN ('new york'), - PARTITION chicago_idx1 VALUES IN ('chicago'), - PARTITION seattle_idx1 VALUES IN ('seattle') - ); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX rides_auto_index_fk_vehicle_city_ref_vehicles PARTITION BY LIST (vehicle_city) ( - PARTITION new_york_idx2 VALUES IN ('new york'), - PARTITION chicago_idx2 VALUES IN ('chicago'), - PARTITION seattle_idx2 VALUES IN ('seattle') - ); - ~~~ - -## Step 12. Pin partitions to regions - -Now that all tables and secondary indexes have been partitioned by city, for each partition, you can create a replication zone that pins the partition's replicas to nodes in a specific region, using the localities specified when nodes were started. - -1. Still in the SQL shell on one of your client VMs, create replication zones for the partitions of the `users` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york OF TABLE movr.users - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago OF TABLE movr.users - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle OF TABLE movr.users - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -2. Create replication zones for the partitions of the `vehicles` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -3. 
Create replication zones for the partitions of the secondary index on the `vehicles` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york_idx OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago_idx OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle_idx OF TABLE movr.vehicles - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -4. Create replication zones for the partitions of the `rides` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -5. Create replication zones for the partitions of the first secondary index on the `rides` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york_idx1 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago_idx1 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle_idx1 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -6. Create replication zones for the partitions of the other secondary index on the `rides` table: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION new_york_idx2 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-east1]'; - ALTER PARTITION chicago_idx2 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-central1]'; - ALTER PARTITION seattle_idx2 OF TABLE movr.rides - CONFIGURE ZONE USING constraints='[+region=us-west1]'; - ~~~ - -## Step 13. 
Re-check replica distribution

Still in the SQL shell on one of your client VMs, use the [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) statement to check replica placement after partitioning:

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE users]
WHERE "start_key" NOT LIKE '%Prefix%';
  SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE vehicles]
WHERE "start_key" NOT LIKE '%Prefix%';
  SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM INDEX vehicles_auto_index_fk_city_ref_users]
WHERE "start_key" NOT LIKE '%Prefix%';
  SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE rides]
WHERE "start_key" NOT LIKE '%Prefix%';
  SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM INDEX rides_auto_index_fk_city_ref_users]
WHERE "start_key" NOT LIKE '%Prefix%';
  SELECT * FROM [SHOW EXPERIMENTAL_RANGES FROM INDEX rides_auto_index_fk_vehicle_city_ref_vehicles]
WHERE "start_key" NOT LIKE '%Prefix%';
~~~

~~~
  start_key  |        end_key        | range_id | replicas | lease_holder
+-------------+-----------------------+----------+----------+--------------+
  /"chicago"  | /"chicago"/PrefixEnd  |       62 | {4,5,6}  |            6
  /"new york" | /"new york"/PrefixEnd |       66 | {1,2,3}  |            2
  /"seattle"  | /"seattle"/PrefixEnd  |       64 | {7,8,9}  |            8
(3 rows)

Time: 2.858236902s

  start_key  |        end_key        | range_id | replicas | lease_holder
+-------------+-----------------------+----------+----------+--------------+
  /"chicago"  | /"chicago"/PrefixEnd  |       52 | {4,5,6}  |            5
  /"new york" | /"new york"/PrefixEnd |       51 | {1,2,3}  |            3
  /"seattle"  | /"seattle"/PrefixEnd  |       32 | {7,8,9}  |            7
(3 rows)

Time: 74.36µs

  start_key  |        end_key        | range_id | replicas | lease_holder
+-------------+-----------------------+----------+----------+--------------+
  /"chicago"  | /"chicago"/PrefixEnd  |       37 | {4,5,6}  |            4
  /"new york" | /"new york"/PrefixEnd |       35 | {1,2,3}  |            3
  /"seattle"  | /"seattle"/PrefixEnd  |       39 | {7,8,9}  |            8
(3 rows)

Time: 72.98µs

  start_key  |        end_key        | range_id | replicas | lease_holder
+-------------+-----------------------+----------+----------+--------------+
  /"chicago"  | /"chicago"/PrefixEnd  |      113 | {4,5,6}  |            4
  /"new york" | /"new york"/PrefixEnd |      111 | {1,2,3}  |            3
  /"seattle"  | /"seattle"/PrefixEnd  |      115 | {7,8,9}  |            9
(3 rows)

Time: 71.446µs
~~~

You'll see that the replicas for each partition are now located on nodes in the relevant region:

- New York partitions are on nodes 1 - 3
- Chicago partitions are on nodes 4 - 6
- Seattle partitions are on nodes 7 - 9

This means that requests from users in a city no longer leave the region, thus removing all cross-region latencies.

## Step 14. Re-check SQL latency

Now that you've verified that replicas are located properly, go back to the Admin UI, click **Metrics** on the left, and hover over the **Service Latency: SQL, 99th percentile** timeseries graph:

Geo-partitioning SQL latency

For each node, you'll see that **99% of all queries are now 4 milliseconds or less**.

99th percentile latency can be influenced by occasional slow queries.
For a more accurate sense of typical SQL latency, go to the following URL to view a custom graph for 90th percentile latency: - -~~~ -http://:8080/#/debug/chart?charts=%5B%7B%22metrics%22%3A%5B%7B%22downsampler%22%3A3%2C%22aggregator%22%3A3%2C%22derivative%22%3A0%2C%22perNode%22%3Atrue%2C%22source%22%3A%22%22%2C%22metric%22%3A%22cr.node.sql.exec.latency-p90%22%7D%5D%2C%22axisUnits%22%3A2%7D%5D -~~~ - -Geo-partitioning SQL latency - -As you can see, **90% of all SQL queries execute in less than 2 milliseconds**. In some cases, latency is even sub-millisecond. - -## See also - -- [Table Partitioning](partitioning.html) -- [Replication Zones](configure-replication-zones.html) diff --git a/src/current/v19.1/demo-json-support.md b/src/current/v19.1/demo-json-support.md deleted file mode 100644 index 6418d755c8a..00000000000 --- a/src/current/v19.1/demo-json-support.md +++ /dev/null @@ -1,260 +0,0 @@ ---- -title: JSON Support -summary: Use a local cluster to explore how CockroachDB can store and query unstructured JSONB data. -toc: true ---- - -This page walks you through a simple demonstration of how CockroachDB can store and query unstructured [`JSONB`](jsonb.html) data from a third-party API, as well as how an [inverted index](inverted-indexes.html) can optimize your queries. - - -## Step 1. Install prerequisites - -
**Go**
      -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install the latest version of [Go](https://golang.org/dl/): `brew install go` -- Install the [PostgreSQL driver](https://github.com/lib/pq): `go get github.com/lib/pq` -
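If you want to confirm that the driver was fetched correctly before moving on, a throwaway program that does nothing but import it will fail to build if `go get` didn't work. This snippet is hypothetical and not part of the tutorial's sample code:

~~~ go
package main

import (
	"fmt"

	_ "github.com/lib/pq" // the build fails here if the driver isn't installed
)

func main() {
	fmt.Println("lib/pq is installed")
}
~~~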
**Python**
      -- Install the latest version of [CockroachDB](install-cockroachdb.html). -- Install the [Python psycopg2 driver](http://initd.org/psycopg/docs/install.html): `pip install psycopg2` -- Install the [Python Requests library](https://requests.readthedocs.io/en/latest/): `pip install requests` -
      - -## Step 2. Start a single-node cluster - -For the purpose of this tutorial, you need only one CockroachDB node running in insecure mode: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=json-test \ ---listen-addr=localhost:26257 \ ---http-addr=localhost:8080 -~~~ - -## Step 3. Create a user - -In a new terminal, as the `root` user, use the [`cockroach user`](create-and-manage-users.html) command to create a new user, `maxroach`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach user set maxroach --insecure --host=localhost:26257 -~~~ - -## Step 4. Create a database and grant privileges - -As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26257 -~~~ - -Next, create a database called `jsonb_test`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE jsonb_test; -~~~ - -Set the database as the default: - -{% include copy-clipboard.html %} -~~~ sql -> SET DATABASE = jsonb_test; -~~~ - -Then [grant privileges](grant.html) to the `maxroach` user: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT ALL ON DATABASE jsonb_test TO maxroach; -~~~ - -## Step 5. Create a table - -Still in the SQL shell, create a table called `programming`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE programming ( - id UUID DEFAULT uuid_v4()::UUID PRIMARY KEY, - posts JSONB - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE programming; -~~~ -~~~ -+--------------+-------------------------------------------------+ -| Table | CreateTable | -+--------------+-------------------------------------------------+ -| programming | CREATE TABLE programming ( | -| | id UUID NOT NULL DEFAULT uuid_v4()::UUID, | -| | posts JSON NULL, | -| | CONSTRAINT "primary" PRIMARY KEY (id ASC), | -| | FAMILY "primary" (id, posts) | -| | ) | -+--------------+-------------------------------------------------+ -~~~ - -## Step 6. Run the code - -Now that you have a database, user, and a table, let's run code to insert rows into the table. - -
**Go**
      -The code queries the [Reddit API](https://www.reddit.com/dev/api/) for posts in [/r/programming](https://www.reddit.com/r/programming/). The Reddit API only returns 25 results per page; however, each page returns an `"after"` string that tells you how to get the next page. Therefore, the program does the following in a loop: - -1. Makes a request to the API. -2. Inserts the results into the table and grabs the `"after"` string. -3. Uses the new `"after"` string as the basis for the next request. - -Download the json-sample.go file, or create the file yourself and copy the code into it: - -{% include copy-clipboard.html %} -~~~ go -{% include {{ page.version.version }}/json/json-sample.go %} -~~~ - -In a new terminal window, navigate to your sample code file and run it: - -{% include copy-clipboard.html %} -~~~ shell -$ go run json-sample.go -~~~ -
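The program itself is pulled in via the include above rather than shown inline. As a rough sketch of the loop it implements, the following condensed Go program fetches a page, inserts each post as one row, and follows the `"after"` cursor. The struct fields, page count, and error handling here are simplified for illustration; the real `json-sample.go` is the authoritative version:

~~~ go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net/http"

	_ "github.com/lib/pq"
)

// redditPage mirrors just the parts of the Reddit listing the loop needs.
type redditPage struct {
	Data struct {
		After    string            `json:"after"`
		Children []json.RawMessage `json:"children"`
	} `json:"data"`
}

func main() {
	db, err := sql.Open("postgres",
		"postgresql://maxroach@localhost:26257/jsonb_test?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	after := ""
	for page := 0; page < 40; page++ { // the API returns 25 posts per page
		// NOTE: Reddit may rate-limit clients that don't set a custom User-Agent.
		resp, err := http.Get("https://www.reddit.com/r/programming.json?after=" + after)
		if err != nil {
			log.Fatal(err)
		}
		var listing redditPage
		err = json.NewDecoder(resp.Body).Decode(&listing)
		resp.Body.Close()
		if err != nil {
			log.Fatal(err)
		}

		// Each post becomes one row; the JSONB column stores the raw document.
		for _, post := range listing.Data.Children {
			if _, err := db.Exec(
				`INSERT INTO programming (posts) VALUES ($1)`, string(post),
			); err != nil {
				log.Fatal(err)
			}
		}

		// The "after" cursor tells us how to request the next page.
		after = listing.Data.After
		if after == "" {
			break
		}
	}
}
~~~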
**Python**
      -The code queries the [Reddit API](https://www.reddit.com/dev/api/) for posts in [/r/programming](https://www.reddit.com/r/programming/). The Reddit API only returns 25 results per page; however, each page returns an `"after"` string that tells you how to get the next page. Therefore, the program does the following in a loop: - -1. Makes a request to the API. -2. Grabs the `"after"` string. -3. Inserts the results into the table. -4. Uses the new `"after"` string as the basis for the next request. - -Download the json-sample.py file, or create the file yourself and copy the code into it: - -{% include copy-clipboard.html %} -~~~ python -{% include {{ page.version.version }}/json/json-sample.py %} -~~~ - -In a new terminal window, navigate to your sample code file and run it: - -{% include copy-clipboard.html %} -~~~ shell -$ python json-sample.py -~~~ -
      - -The program will take awhile to finish, but you can start querying the data right away. - -## Step 7. Query the data - -Back in the terminal where the SQL shell is running, verify that rows of data are being inserted into your table: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT count(*) FROM programming; -~~~ -~~~ -+-------+ -| count | -+-------+ -| 1120 | -+-------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT count(*) FROM programming; -~~~ -~~~ -+-------+ -| count | -+-------+ -| 2400 | -+-------+ -~~~ - -Now, retrieve all the current entries where the link is pointing to somewhere on GitHub: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id FROM programming \ -WHERE posts @> '{"data": {"domain": "github.com"}}'; -~~~ -~~~ -+--------------------------------------+ -| id | -+--------------------------------------+ -| 0036d489-3fe3-46ec-8219-2eaee151af4b | -| 00538c2f-592f-436a-866f-d69b58e842b6 | -| 00aff68c-3867-4dfe-82b3-2a27262d5059 | -| 00cc3d4d-a8dd-4c9a-a732-00ed40e542b0 | -| 00ecd1dd-4d22-4af6-ac1c-1f07f3eba42b | -| 012de443-c7bf-461a-b563-925d34d1f996 | -| 014c0ac8-4b4e-4283-9722-1dd6c780f7a6 | -| 017bfb8b-008e-4df2-90e4-61573e3a3f62 | -| 0271741e-3f2a-4311-b57f-a75e5cc49b61 | -| 02f31c61-66a7-41ba-854e-1ece0736f06b | -| 035f31a1-b695-46be-8b22-469e8e755a50 | -| 03bd9793-7b1b-4f55-8cdd-99d18d6cb3ea | -| 03e0b1b4-42c3-4121-bda9-65bcb22dcf72 | -| 0453bc77-4349-4136-9b02-3a6353ea155e | -... -+--------------------------------------+ -(334 rows) - -Time: 105.877736ms -~~~ - -{{site.data.alerts.callout_info}}Since you are querying live data, your results for this and the following steps may vary from the results documented in this tutorial.{{site.data.alerts.end}} - -## Step 8. Create an inverted index to optimize performance - -The query in the previous step took 105.877736ms. To optimize the performance of queries that filter on the `JSONB` column, let's create an [inverted index](inverted-indexes.html) on the column: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INVERTED INDEX ON programming(posts); -~~~ - -## Step 9. Run the query again - -Now that there is an inverted index, the same query will run much faster: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT id FROM programming \ -WHERE posts @> '{"data": {"domain": "github.com"}}'; -~~~ -~~~ -(334 rows) - -Time: 28.646769ms -~~~ - -Instead of 105.877736ms, the query now takes 28.646769ms. - -## What's next? - -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} - -You may also want to learn more about the [`JSONB`](jsonb.html) data type and [inverted indexes](inverted-indexes.html). diff --git a/src/current/v19.1/demo-serializable.md b/src/current/v19.1/demo-serializable.md deleted file mode 100644 index 4ef0c650436..00000000000 --- a/src/current/v19.1/demo-serializable.md +++ /dev/null @@ -1,555 +0,0 @@ ---- -title: Serializable Transactions -summary: -toc: true ---- - -In contrast to most databases, CockroachDB always uses `SERIALIZABLE` isolation, which is the strongest of the four [transaction isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)) defined by the SQL standard and is stronger than the `SNAPSHOT` isolation level developed later. `SERIALIZABLE` isolation guarantees that even though transactions may execute in parallel, the result is the same as if they had executed one at a time, without any concurrency. 
This ensures data correctness by preventing all "anomalies" allowed by weaker isolation levels. - -In this tutorial, you'll work through a hypothetical scenario that demonstrates the importance of `SERIALIZABLE` isolation for data correctness. - -1. You'll start by reviewing the scenario and its schema. -2. You'll then execute the scenario at one of the weaker isolation levels, `READ COMMITTED`, observing the write skew anomaly and its implications. Because CockroachDB always uses `SERIALIZABLE` isolation, you'll run this portion of the tutorial on Postgres, which defaults to `READ COMMITTED`. -3. You'll finish by executing the scenario at `SERIALIZABLE` isolation, observing how it guarantees correctness. You'll use CockroachDB for this portion. - -{{site.data.alerts.callout_info}} -For a deeper discussion of transaction isolation and the write skew anomaly, see the [Real Transactions are Serializable](https://www.cockroachlabs.com/blog/acid-rain/) and [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/) blog posts. -{{site.data.alerts.end}} - -## Overview - -### Scenario - -- A hospital has an application for doctors to manage their on-call shifts. -- The hospital has a rule that at least one doctor must be on call at any one time. -- Two doctors are on-call for a particular shift, and both of them try to request leave for the shift at approximately the same time. -- In Postgres, with the default `READ COMMITTED` isolation level, the [write skew](#write-skew) anomaly results in both doctors successfully booking leave and the hospital having no doctors on call for that particular shift. -- In CockroachDB, with the `SERIALIZABLE` isolation level, write skew is prevented, one doctor is allowed to book leave and the other is left on-call, and lives are saved. - -#### Write skew - -When write skew happens, a transaction reads something, makes a decision based on the value it saw, and writes the decision to the database. However, by the time the write is made, the premise of the decision is no longer true. Only `SERIALIZABLE` and some implementations of `REPEATABLE READ` isolation prevent this anomaly. - -### Schema - -Schema for serializable transaction tutorial - -## Scenario on Postgres - -### Step 1. Start Postgres - -1. If you haven't already, install Postgres locally. On Mac, you can use [Homebrew](https://brew.sh/): - - {% include copy-clipboard.html %} - ~~~ shell - $ brew install postgres - ~~~ - -2. [Start Postgres](https://www.postgresql.org/docs/10/static/server-start.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ postgres -D /usr/local/var/postgres & - ~~~ - -### Step 2. Create the schema - -1. Open a SQL connection to Postgres: - - {% include copy-clipboard.html %} - ~~~ shell - $ psql - ~~~ - -2. Create the `doctors` table: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE doctors ( - id INT PRIMARY KEY, - name TEXT - ); - ~~~ - -3. Create the `schedules` table: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE schedules ( - day DATE, - doctor_id INT REFERENCES doctors (id), - on_call BOOL, - PRIMARY KEY (day, doctor_id) - ); - ~~~ - -### Step 3. Insert data - -1. Add two doctors to the `doctors` table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO doctors VALUES - (1, 'Abe'), - (2, 'Betty'); - ~~~ - -2. 
Insert one week's worth of data into the `schedules` table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO schedules VALUES - ('2018-10-01', 1, true), - ('2018-10-01', 2, true), - ('2018-10-02', 1, true), - ('2018-10-02', 2, true), - ('2018-10-03', 1, true), - ('2018-10-03', 2, true), - ('2018-10-04', 1, true), - ('2018-10-04', 2, true), - ('2018-10-05', 1, true), - ('2018-10-05', 2, true), - ('2018-10-06', 1, true), - ('2018-10-06', 2, true), - ('2018-10-07', 1, true), - ('2018-10-07', 2, true); - ~~~ - -3. Confirm that at least one doctor is on call each day of the week: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT day, count(*) AS doctors_on_call FROM schedules - WHERE on_call = true - GROUP BY day - ORDER BY day; - ~~~ - - ~~~ - day | doctors_on_call - ------------+----------------- - 2018-10-01 | 2 - 2018-10-02 | 2 - 2018-10-03 | 2 - 2018-10-04 | 2 - 2018-10-05 | 2 - 2018-10-06 | 2 - 2018-10-07 | 2 - (7 rows) - ~~~ - -### Step 4. Doctor 1 requests leave - -Doctor 1, Abe, starts to request leave for 10/5/18 using the hospital's schedule management application. - -1. The application starts a transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN; - ~~~ - -2. The application checks to make sure at least one other doctor is on call for the requested date: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT count(*) FROM schedules - WHERE on_call = true - AND day = '2018-10-05' - AND doctor_id != 1; - ~~~ - - ~~~ - count - ------- - 1 - (1 row) - ~~~ - -### Step 5. Doctor 2 requests leave - -Around the same time, doctor 2, Betty, starts to request leave for the same day using the hospital's schedule management application. - -1. In a new terminal, start a second SQL session: - - {% include copy-clipboard.html %} - ~~~ shell - $ psql - ~~~ - -2. The application starts a transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN; - ~~~ - -3. The application checks to make sure at least one other doctor is on call for the requested date: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT count(*) FROM schedules - WHERE on_call = true - AND day = '2018-10-05' - AND doctor_id != 2; - ~~~ - - ~~~ - count - ------- - 1 - (1 row) - ~~~ - -### Step 6. Leave is incorrectly booked for both doctors - -1. In the terminal for doctor 1, since the previous check confirmed that another doctor is on call for 10/5/18, the application tries to update doctor 1's schedule: - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE schedules SET on_call = false - WHERE day = '2018-10-05' - AND doctor_id = 1; - ~~~ - -2. In the terminal for doctor 2, since the previous check confirmed the same thing, the application tries to update doctor 2's schedule: - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE schedules SET on_call = false - WHERE day = '2018-10-05' - AND doctor_id = 2; - ~~~ - -3. In the terminal for doctor 1, the application commits the transaction, despite the fact that the previous check (the `SELECT` query) is no longer true: - - {% include copy-clipboard.html %} - ~~~ sql - > COMMIT; - ~~~ - -4. In the terminal for doctor 2, the application commits the transaction, despite the fact that the previous check (the `SELECT` query) is no longer true: - - {% include copy-clipboard.html %} - ~~~ sql - > COMMIT; - ~~~ - -### Step 7. Check data correctness - -So what just happened? Each transaction started by reading a value that, before the end of the transaction, became incorrect. 
Despite that fact, each transaction was allowed to commit. This is known as write skew, and the result is that 0 doctors are scheduled to be on call on 10/5/18. - -To check this, in either terminal, run: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM schedules WHERE day = '2018-10-05'; -~~~ - -~~~ - day | doctor_id | on_call -------------+-----------+--------- - 2018-10-05 | 1 | f - 2018-10-05 | 2 | f -(2 rows) -~~~ - -Again, this anomaly is the result of Postgres' default isolation level of `READ COMMITTED`, but note that this would happen with any isolation level except `SERIALIZABLE` and some implementations of `REPEATABLE READ`: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TRANSACTION_ISOLATION; -~~~ - -~~~ - transaction_isolation ------------------------ - read committed -(1 row) -~~~ - -### Step 8. Stop Postgres - -Exit each SQL shell with `\q` and then terminate the Postgres server: - -{% include copy-clipboard.html %} -~~~ shell -$ pkill -9 postgres -~~~ - -## Scenario on CockroachDB - -When you repeat the scenario on CockroachDB, you'll see that the anomaly is prevented by CockroachDB's `SERIALIZABLE` transaction isolation. - -### Step 1. Start CockroachDB - -1. If you haven't already, [install CockroachDB](install-cockroachdb.html) locally. - -2. Start a one-node CockroachDB cluster in insecure mode: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --insecure \ - --store=serializable-demo \ - --listen-addr=localhost \ - --background - ~~~ - -### Step 2. Create the schema - -1. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html): - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=localhost - ~~~ - -2. Create the `doctors` table: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE doctors ( - id INT PRIMARY KEY, - name TEXT - ); - ~~~ - -3. Create the `schedules` table: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE schedules ( - day DATE, - doctor_id INT REFERENCES doctors (id), - on_call BOOL, - PRIMARY KEY (day, doctor_id) - ); - ~~~ - -### Step 3. Insert data - -1. Add two doctors to the `doctors` table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO doctors VALUES - (1, 'Abe'), - (2, 'Betty'); - ~~~ - -2. Insert one week's worth of data into the `schedules` table: - - {% include copy-clipboard.html %} - ~~~ sql - > INSERT INTO schedules VALUES - ('2018-10-01', 1, true), - ('2018-10-01', 2, true), - ('2018-10-02', 1, true), - ('2018-10-02', 2, true), - ('2018-10-03', 1, true), - ('2018-10-03', 2, true), - ('2018-10-04', 1, true), - ('2018-10-04', 2, true), - ('2018-10-05', 1, true), - ('2018-10-05', 2, true), - ('2018-10-06', 1, true), - ('2018-10-06', 2, true), - ('2018-10-07', 1, true), - ('2018-10-07', 2, true); - ~~~ - -3. Confirm that at least one doctor is on call each day of the week: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT day, count(*) AS on_call FROM schedules - WHERE on_call = true - GROUP BY day - ORDER BY day; - ~~~ - - ~~~ - day | on_call - +---------------------------+---------+ - 2018-10-01 00:00:00+00:00 | 2 - 2018-10-02 00:00:00+00:00 | 2 - 2018-10-03 00:00:00+00:00 | 2 - 2018-10-04 00:00:00+00:00 | 2 - 2018-10-05 00:00:00+00:00 | 2 - 2018-10-06 00:00:00+00:00 | 2 - 2018-10-07 00:00:00+00:00 | 2 - (7 rows) - ~~~ - -### Step 4. Doctor 1 requests leave - -Doctor 1, Abe, starts to request leave for 10/5/18 using the hospital's schedule management application. - -1. 
The application starts a transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN; - ~~~ - -2. The application checks to make sure at least one other doctor is on call for the requested date: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT count(*) FROM schedules - WHERE on_call = true - AND day = '2018-10-05' - AND doctor_id != 1; - ~~~ - - Press enter a second time to have the server return the result: - - ~~~ - count - +-------+ - 1 - (1 row) - ~~~ - -### Step 5. Doctor 2 requests leave - -Around the same time, doctor 2, Betty, starts to request leave for the same day using the hospital's schedule management application. - -1. In a new terminal, start a second SQL session: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --insecure --host=localhost - ~~~ - -2. The application starts a transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN; - ~~~ - -3. The application checks to make sure at least one other doctor is on call for the requested date: - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT count(*) FROM schedules - WHERE on_call = true - AND day = '2018-10-05' - AND doctor_id != 2; - ~~~ - - Press enter a second time to have the server return the result: - - ~~~ - count - +-------+ - 1 - (1 row) - ~~~ - -### Step 6. Leave is booked for only 1 doctor - -1. In the terminal for doctor 1, since the previous check confirmed that another doctor is on call for 10/5/18, the application tries to update doctor 1's schedule: - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE schedules SET on_call = false - WHERE day = '2018-10-05' - AND doctor_id = 1; - ~~~ - -2. In the terminal for doctor 2, since the previous check confirmed the same thing, the application tries to update doctor 2's schedule: - - {% include copy-clipboard.html %} - ~~~ sql - > UPDATE schedules SET on_call = false - WHERE day = '2018-10-05' - AND doctor_id = 2; - ~~~ - -3. In the terminal for doctor 1, the application tries to commit the transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > COMMIT; - ~~~ - - Since CockroachDB uses `SERIALIZABLE` isolation, the database detects that the previous check (the `SELECT` query) is no longer true due to a concurrent transaction. It therefore prevents the transaction from committing, returning a retry error that indicates that the transaction must be attempted again: - - ~~~ - pq: restart transaction: HandledRetryableTxnError: TransactionRetryError: retry txn (RETRY_SERIALIZABLE): "sql txn" id=57dd0454 key=/Table/53/1/17809/1/0 rw=true pri=0.00710012 iso=SERIALIZABLE stat=PENDING epo=0 ts=1539116499.676097000,2 orig=1539115078.961557000,0 max=1539115078.961557000,0 wto=false rop=false seq=4 - ~~~ - - {{site.data.alerts.callout_success}} - For this kind of error, CockroachDB recommends a [client-side transaction retry loop](transactions.html#client-side-intervention) that would transparently observe that the one doctor cannot take time off because the other doctor already succeeded in asking for it. You can find generic transaction retry functions for various languages in our [Build an App](build-an-app-with-cockroachdb.html) tutorials. - {{site.data.alerts.end}} - -4. In the terminal for doctor 2, the application tries to commit the transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > COMMIT; - ~~~ - - Since the transaction for doctor 1 failed, the transaction for doctor 2 can commit without causing any data correctness problems. - -### Step 7. 
Check data correctness - -In either terminal, confirm that one doctor is still on call for 10/5/18: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM schedules WHERE day = '2018-10-05'; -~~~ - -~~~ - day | doctor_id | on_call -+---------------------------+-----------+---------+ - 2018-10-05 00:00:00+00:00 | 1 | true - 2018-10-05 00:00:00+00:00 | 2 | false -(2 rows) -~~~ - -Again, the write skew anomaly was prevented by CockroachDB using the `SERIALIZABLE` isolation level: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TRANSACTION_ISOLATION; -~~~ - -~~~ - transaction_isolation -+-----------------------+ - serializable -(1 row) -~~~ - -### Step 8. Stop CockroachDB - -Once you're done with your test cluster, exit each SQL shell with `\q` and then stop the node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach quit --insecure --host=localhost -~~~ - -If you do not plan to restart the cluster, you may want to remove the node's data store: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf serializable-demo -~~~ - -## What's next? - -Explore other core CockroachDB benefits and features: - -{% include {{ page.version.version }}/misc/explore-benefits-see-also.md %} - -You might also want to learn more about how transactions work in CockroachDB and in general: - -- [Transactions Overview](transactions.html) -- [Real Transactions are Serializable](https://www.cockroachlabs.com/blog/acid-rain/) -- [What Write Skew Looks Like](https://www.cockroachlabs.com/blog/what-write-skew-looks-like/) diff --git a/src/current/v19.1/deploy-cockroachdb-on-aws-insecure.md b/src/current/v19.1/deploy-cockroachdb-on-aws-insecure.md deleted file mode 100644 index 593b742dc46..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-aws-insecure.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: Deploy CockroachDB on AWS EC2 (Insecure) -summary: Learn how to deploy CockroachDB on Amazon's AWS EC2 platform. -toc: true -toc_not_nested: true -ssh-link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). - -## Step 1. Create instances - -Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-8-run-a-sample-workload) against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to ensure survivability. - -- Your instances will rely on Amazon Time Sync Service for clock synchronization. 
When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. - -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. - - - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. - -- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. - -- Make sure all your instances are in the same security group. - - - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 2. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check - -If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. - -#### Inter-node and load balancer-node communication - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **26257** - Source | The ID of your security group (e.g., *sg-07ab277a*) - -#### Application data - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rules - Protocol | TCP - Port Range | **26257** - Source | Your application's IP ranges - -If you plan to [run our sample workload](#step-8-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges - -You can set your network IP by selecting "My IP" in the Source field. - -#### Load balancer-health check communication - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) - -To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. 
Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -AWS offers fully-managed load balancing to distribute traffic between instances. - -1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: - - Select a **Network Load Balancer** and use the ports we specify below. - - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - - Set the load balancer port to **26257**. - - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. -2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Monitor the cluster - -In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. 
[Connect your application](install-client-drivers.html). Be sure to connect your application to the AWS load balancer, not to a CockroachDB node. - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-aws.md b/src/current/v19.1/deploy-cockroachdb-on-aws.md deleted file mode 100644 index 1c234776b97..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-aws.md +++ /dev/null @@ -1,167 +0,0 @@ ---- -title: Deploy CockroachDB on AWS EC2 -summary: Learn how to deploy CockroachDB on Amazon's AWS EC2 platform. -toc: true -toc_not_nested: true -ssh-link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html - ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Amazon's AWS EC2 platform, using AWS's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -- All Amazon EC2 instances running CockroachDB should be members of the same [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html). - -## Step 1. Create instances - -Open the [Amazon EC2 console](https://console.aws.amazon.com/ec2/) and [launch an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html#launch-instance-console) for each node you plan to have in your cluster. If you plan to [run our sample workload](#step-9-run-a-sample-workload) against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to ensure survivability. - -- Your instances will rely on Amazon Time Sync Service for clock synchronization. When choosing an AMI, note that some machines are preconfigured to use Amazon Time Sync Service (e.g., Amazon Linux AMIs) and others are not. - -- Use `m` (general purpose), `c` (compute-optimized), or `i` (storage-optimized) instance types, with SSD-backed [EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html) or [Instance Store volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ssd-instance-store.html). For example, Cockroach Labs has used `c5d.4xlarge` (16 vCPUs and 32 GiB of RAM per instance, EBS) for internal testing. - - - **Do not** use ["burstable" `t2` instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html), which limit the load on a single core. - -- Note the ID of the VPC you select. You will need to look up its IP range when setting inbound rules for your security group. - -- Make sure all your instances are in the same security group. - - - If you are creating a new security group, add the [inbound rules](#step-2-configure-your-network) from the next step. Otherwise note the ID of the security group. - -- When creating the instance, you will download a private key file used to securely connect to your instances. Decide where to place this file, and note the file path for later commands. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 2. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- `8080` for exposing your Admin UI, and for routing from the load balancer to the health check - -If you haven't already done so, [create inbound rules](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#adding-security-group-rule) for your security group. 
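-
-The required inbound rules are described in the tables below. As a rough sketch, the first two rules could also be created with the [AWS CLI](https://docs.aws.amazon.com/cli/); the security group ID `sg-07ab277a` and the CIDR range here are placeholder assumptions, so substitute your own values:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Inter-node and load balancer-node communication on 26257, scoped to the security group itself:
-$ aws ec2 authorize-security-group-ingress \
-    --group-id sg-07ab277a \
-    --protocol tcp \
-    --port 26257 \
-    --source-group sg-07ab277a
-
-# Application data on 26257 from your application's IP range (placeholder shown):
-$ aws ec2 authorize-security-group-ingress \
-    --group-id sg-07ab277a \
-    --protocol tcp \
-    --port 26257 \
-    --cidr 203.0.113.0/24
-~~~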
- -#### Inter-node and load balancer-node communication - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **26257** - Source | The ID of your security group (e.g., *sg-07ab277a*) - -#### Application data - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rules - Protocol | TCP - Port Range | **26257** - Source | Your application's IP ranges - -If you plan to [run our sample workload](#step-9-run-a-sample-workload) on an instance, the traffic source is the internal (private) IP address of that instance. To find this, open the Instances section of the Amazon EC2 console and click on the instance. - -#### Admin UI - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | Your network's IP ranges - -You can set your network IP by selecting "My IP" in the Source field. - -#### Load balancer-health check communication - - Field | Recommended Value --------|------------------- - Type | Custom TCP Rule - Protocol | TCP - Port Range | **8080** - Source | The IP range of your VPC in CIDR notation (e.g., 10.12.0.0/16) - -To get the IP range of a VPC, open the [Amazon VPC console](https://console.aws.amazon.com/vpc/) and find the VPC listed in the section called Your VPCs. You can also click on the VPC where it is listed in the EC2 console. - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -AWS offers fully-managed load balancing to distribute traffic between instances. - -1. [Add AWS load balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancer-getting-started.html). Be sure to: - - Select a **Network Load Balancer** and use the ports we specify below. - - Select the VPC and *all* availability zones of your instances. This is important, as you cannot change the availability zones once the load balancer is created. The availability zone of an instance is determined by its subnet, found by inspecting the instance in the Amazon EC2 Console. - - Set the load balancer port to **26257**. - - Create a new target group that uses TCP port **26257**. Traffic from your load balancer is routed to this target group, which contains your instances. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - - Register your instances with the target group you created, specifying port **26257**. You can add and remove instances later. -2. To test load balancing and connect your application to the cluster, you will need the provisioned internal (private) **IP address** for the load balancer. 
To find this, open the Network Interfaces section of the Amazon EC2 console and look up the load balancer by its name. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of AWS's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test your cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Monitor the cluster - -In the Target Groups section of the Amazon EC2 console, [check the health](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html) of your instances by inspecting your target group and opening the Targets tab. - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -Before adding a new node, [create a new instance](#step-1-create-instances) as you did earlier. Then [generate and upload a certificate and key](#step-5-generate-certificates) for the new node. - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-digital-ocean-insecure.md b/src/current/v19.1/deploy-cockroachdb-on-digital-ocean-insecure.md deleted file mode 100644 index 1872514839b..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-digital-ocean-insecure.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Deploy CockroachDB on Digital Ocean (Insecure) -summary: Learn how to deploy a CockroachDB cluster on Digital Ocean. -toc: true -toc_not_nested: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-connect-to-your-droplet-with-ssh ---- - - - -This page shows you how to deploy an insecure multi-node CockroachDB cluster on Digital Ocean, using Digital Ocean's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -- If all of your CockroachDB nodes and clients will run on Droplets in a single region, consider using [private networking](https://docs.digitalocean.com/products/networking/vpc/how-to/create/). - -## Step 1. Create Droplets - -[Create Droplets](https://www.digitalocean.com/community/tutorials/how-to-create-your-first-digitalocean-droplet) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate droplet for that workload. 
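-
-For reference, a Droplet that roughly matches the sizing guidance in the list below can also be created from the command line with DigitalOcean's `doctl` CLI. The following is only a sketch; the Droplet name, size slug, image, and region are placeholder assumptions:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Create one node Droplet; repeat with a new name for each additional node.
-$ doctl compute droplet create crdb-node-1 \
-    --size s-4vcpu-8gb \
-    --image ubuntu-18-04-x64 \
-    --region nyc3 \
-    --enable-private-networking
-~~~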
- -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use any [droplets](https://www.digitalocean.com/pricing/) except standard droplets with only 1 GB of RAM, which is below our minimum requirement. All Digital Ocean droplets use SSD storage. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 2. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 3. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Digital Ocean offers fully-managed load balancers to distribute traffic between Droplets. - -1. [Create a Digital Ocean Load Balancer](https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-load-balancers). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the node Droplets. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Digital Ocean's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 4. Configure your network - -Set up a firewall for each of your Droplets, allowing TCP communication on the following two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -For guidance, you can use Digital Ocean's guide to configuring firewalls based on the Droplet's OS: - -- Ubuntu and Debian can use [`ufw`](https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server). -- FreeBSD can use [`ipfw`](https://www.digitalocean.com/community/tutorials/recommended-steps-for-new-freebsd-10-1-servers). -- Fedora can use [`iptables`](https://www.digitalocean.com/community/tutorials/initial-setup-of-a-fedora-22-server). -- CoreOS can use [`iptables`](https://www.digitalocean.com/community/tutorials/how-to-secure-your-coreos-cluster-with-tls-ssl-and-firewall-rules). -- CentOS can use [`firewalld`](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-firewalld-on-centos-7). - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. 
Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the Digital Ocean Load Balancer, not to a CockroachDB node. - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-digital-ocean.md b/src/current/v19.1/deploy-cockroachdb-on-digital-ocean.md deleted file mode 100644 index e93737681fa..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-digital-ocean.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Deploy CockroachDB on Digital Ocean -summary: Learn how to deploy a CockroachDB cluster on Digital Ocean. -toc: true -toc_not_nested: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-connect-to-your-droplet-with-ssh ---- - -
      - - -
      - -This page shows you how to deploy a secure multi-node CockroachDB cluster on Digital Ocean, using Digital Ocean's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -- If all of your CockroachDB nodes and clients will run on Droplets in a single region, consider using [private networking](https://docs.digitalocean.com/products/networking/vpc/how-to/create/). - -## Step 1. Create Droplets - -[Create Droplets](https://www.digitalocean.com/community/tutorials/how-to-create-your-first-digitalocean-droplet) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate Droplet for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use any [droplets](https://www.digitalocean.com/pricing/) except standard droplets with only 1 GB of RAM, which is below our minimum requirement. All Digital Ocean droplets use SSD storage. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 2. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 3. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Digital Ocean offers fully-managed load balancers to distribute traffic between Droplets. - -1. [Create a Digital Ocean Load Balancer](https://www.digitalocean.com/community/tutorials/an-introduction-to-digitalocean-load-balancers). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the node Droplets. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Digital Ocean's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 4. 
Configure your network - -Set up a firewall for each of your Droplets, allowing TCP communication on the following two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -For guidance, you can use Digital Ocean's guide to configuring firewalls based on the Droplet's OS: - -- Ubuntu and Debian can use [`ufw`](https://www.digitalocean.com/community/tutorials/how-to-setup-a-firewall-with-ufw-on-an-ubuntu-and-debian-cloud-server). -- FreeBSD can use [`ipfw`](https://www.digitalocean.com/community/tutorials/recommended-steps-for-new-freebsd-10-1-servers). -- Fedora can use [`iptables`](https://www.digitalocean.com/community/tutorials/initial-setup-of-a-fedora-22-server). -- CoreOS can use [`iptables`](https://www.digitalocean.com/community/tutorials/how-to-secure-your-coreos-cluster-with-tls-ssl-and-firewall-rules). -- CentOS can use [`firewalld`](https://www.digitalocean.com/community/tutorials/how-to-set-up-a-firewall-using-firewalld-on-centos-7). - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform-insecure.md b/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform-insecure.md deleted file mode 100644 index 4169dab5ecf..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform-insecure.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: Deploy CockroachDB on Google Cloud Platform GCE (Insecure) -summary: Learn how to deploy CockroachDB on Google Cloud Platform's Compute Engine. -toc: true -toc_not_nested: true -ssh-link: https://cloud.google.com/compute/docs/instances/connecting-to-instance ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -- This article covers the use of Linux instances with GCE. 
You may wish to review the instructions for [connecting to Windows instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance#windows). - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster) -- `8080` for exposing your Admin UI - -To expose your Admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, [create firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls) for your project. When creating firewall rules, we recommend using Google Cloud Platform's **tag** feature to apply the rule only to instances with the same tag. - -#### Admin UI - - Field | Recommended Value --------|------------------- - Name | **cockroachadmin** - Source filter | IP ranges - Source IP ranges | Your local network's IP ranges - Allowed protocols... | **tcp:8080** - Target tags | **cockroachdb** - -#### Application data - -Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in [Step 4](#step-4-set-up-load-balancing). - -## Step 2. Create instances - -[Create an instance](https://cloud.google.com/compute/docs/instances/create-start-instance) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use `n1-standard` or `n1-highcpu` [predefined VMs](https://cloud.google.com/compute/pricing#predefined_machine_types), or [custom VMs](https://cloud.google.com/compute/pricing#custommachinetypepricing), with [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) or [SSD persistent disks](https://cloud.google.com/compute/docs/disks/#pdspecs). For example, Cockroach Labs has used `n1-standard-16` (16 vCPUs and 60 GB of RAM per VM, local SSD) for internal testing. - -- **Do not** use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on a single core. - -- If you used a tag for your firewall rules, when you create the instance, click **Management, security, disks, networking, sole tenancy**. Under the **Networking** tab, in the **Network tags** field, enter **cockroachdb**. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). 
-
-- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes.
-
-GCE offers fully-managed [TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/). This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user.
-
-{{site.data.alerts.callout_danger}}
-When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using [Network TCP Load Balancing](https://cloud.google.com/compute/docs/load-balancing/network/) instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the [On-Premises](deploy-cockroachdb-on-premises-insecure.html) tutorial for guidance).
-{{site.data.alerts.end}}
-
-To use GCE's TCP Proxy Load Balancing service:
-
-1. For each zone in which you're running an instance, [create a distinct instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances).
-    - To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with `tcp26257` as the **Port name** and `26257` as the **Port number**.
-2. [Add the relevant instances to each instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances).
-3. [Configure TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#configure_load_balancer).
-    - During backend configuration, create a health check, setting the **Protocol** to `HTTP`, the **Port** to `8080`, and the **Request path** to path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests.
-    - If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly.
-    - During frontend configuration, reserve a static IP address and choose a port. Note this address/port combination, as you'll use it for all of your client connections.
-4. [Create a firewall rule](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall) to allow traffic from the load balancer and health checker to your instances. This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud.
-    - Be sure to set **Source IP ranges** to `130.211.0.0/22` and `35.191.0.0/16` and set **Target tags** to `cockroachdb` (not to the value specified in the linked instructions).
-
-## Step 5. Start nodes
-
-{{site.data.alerts.callout_info}}
-By default, inter-node communication uses the internal IP addresses of your GCE instances.
-{{site.data.alerts.end}}
-
-{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %}
-
-## Step 6. Initialize the cluster
-
-{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %}
-
-## Step 7. Test the cluster
-
-{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %}
-
-## Step 8. Run a sample workload
-
-{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %}
-
-## Step 9. 
Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-user.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the GCE load balancer, not to a CockroachDB node. - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform.md b/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform.md deleted file mode 100644 index 408a28038b1..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-google-cloud-platform.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: Deploy CockroachDB on Google Cloud Platform GCE -summary: Learn how to deploy CockroachDB on Google Cloud Platform's Compute Engine. -toc: true -toc_not_nested: true -ssh-link: https://cloud.google.com/compute/docs/instances/connecting-to-instance ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Google Cloud Platform's Compute Engine (GCE), using Google's TCP Proxy Load Balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -- This article covers the use of Linux instances with GCE. You may wish to review the instructions for [connecting to Windows instances](https://cloud.google.com/compute/docs/instances/connecting-to-instance#windows). - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- `26257` for inter-node communication (i.e., working as a cluster) -- `8080` for exposing your Admin UI - -To expose your Admin UI and allow traffic from the TCP proxy load balancer and health checker to your instances, [create firewall rules](https://cloud.google.com/compute/docs/vpc/firewalls) for your project. When creating firewall rules, we recommend using Google Cloud Platform's **tag** feature to apply the rule only to instances with the same tag. - -#### Admin UI - - Field | Recommended Value --------|------------------- - Name | **cockroachadmin** - Source filter | IP ranges - Source IP ranges | Your local network's IP ranges - Allowed protocols... | **tcp:8080** - Target tags | **cockroachdb** - -#### Application data - -Applications will not connect directly to your CockroachDB nodes. Instead, they'll connect to GCE's TCP Proxy Load Balancing service, which automatically routes traffic to the instances that are closest to the user. Because this service is implemented at the edge of the Google Cloud, you'll need to create a firewall rule to allow traffic from the load balancer and health checker to your instances. This is covered in [Step 4](#step-4-set-up-load-balancing). - -## Step 2. Create instances - -[Create an instance](https://cloud.google.com/compute/docs/instances/create-start-instance) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate instance for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use `n1-standard` or `n1-highcpu` [predefined VMs](https://cloud.google.com/compute/pricing#predefined_machine_types), or [custom VMs](https://cloud.google.com/compute/pricing#custommachinetypepricing), with [Local SSDs](https://cloud.google.com/compute/docs/disks/#localssds) or [SSD persistent disks](https://cloud.google.com/compute/docs/disks/#pdspecs). For example, Cockroach Labs has used `n1-standard-16` (16 vCPUs and 60 GB of RAM per VM, local SSD) for internal testing. - -- **Do not** use `f1` or `g1` [shared-core machines](https://cloud.google.com/compute/docs/machine-types#sharedcore), which limit the load on a single core. - -- If you used a tag for your firewall rules, when you create the instance, click **Management, security, disks, networking, sole tenancy**. Under the **Networking** tab, in the **Network tags** field, enter **cockroachdb**. 
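-
-For reference, the Admin UI firewall rule from Step 1 and a tagged instance can also be created from the command line with the `gcloud` CLI. The following is only a sketch; the source IP range, instance name, and zone are placeholder assumptions:
-
-{% include copy-clipboard.html %}
-~~~ shell
-# Allow Admin UI traffic on 8080 from your network to instances tagged "cockroachdb":
-$ gcloud compute firewall-rules create cockroachadmin \
-    --allow tcp:8080 \
-    --source-ranges 203.0.113.0/24 \
-    --target-tags cockroachdb
-
-# Create one tagged instance; repeat with a new name for each additional node.
-$ gcloud compute instances create cockroach-node-1 \
-    --machine-type n1-standard-16 \
-    --tags cockroachdb \
-    --zone us-east1-b
-~~~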
- -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -GCE offers fully-managed [TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/). This service lets you use a single IP address for all users around the world, automatically routing traffic to the instances that are closest to the user. - -{{site.data.alerts.callout_danger}} -When using TCP Proxy Load Balancing, you cannot use firewall rules to control access to the load balancer. If you need such control, consider using [Network TCP Load Balancing](https://cloud.google.com/compute/docs/load-balancing/network/) instead, but note that it cannot be used across regions. You might also consider using the HAProxy load balancer (see the [On-Premises](deploy-cockroachdb-on-premises.html) tutorial for guidance). -{{site.data.alerts.end}} - -To use GCE's TCP Proxy Load Balancing service: - -1. For each zone in which you're running an instance, [create a distinct instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances). - - To ensure that the load balancer knows where to direct traffic, specify a port name mapping, with `tcp26257` as the **Port name** and `26257` as the **Port number**. -2. [Add the relevant instances to each instance group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances#addinstances). -3. [Configure TCP Proxy Load Balancing](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#configure_load_balancer). - - During backend configuration, create a health check, setting the **Protocol** to `HTTPS`, the **Port** to `8080`, and the **Request path** to path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - - If you want to maintain long-lived SQL connections that may be idle for more than tens of seconds, increase the backend timeout setting accordingly. - - During frontend configuration, reserve a static IP address and note the IP address and the port you select. You'll use this address and port for all client connections. -4. [Create a firewall rule](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall) to allow traffic from the load balancer and health checker to your instances. This is necessary because TCP Proxy Load Balancing is implemented at the edge of the Google Cloud. - - Be sure to set **Source IP ranges** to `130.211.0.0/22` and `35.191.0.0/16` and set **Target tags** to `cockroachdb` (not to the value specified in the linked instructions). - -## Step 5. 
Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{{site.data.alerts.callout_info}} -By default, inter-node communication uses the internal IP addresses of your GCE instances. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure-insecure.md b/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure-insecure.md deleted file mode 100644 index b766cfe81d0..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure-insecure.md +++ /dev/null @@ -1,144 +0,0 @@ ---- -title: Deploy CockroachDB on Microsoft Azure (Insecure) -summary: Learn how to deploy CockroachDB on Microsoft Azure. -toc: true -toc_not_nested: true -ssh-link: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys ---- - - - -This page shows you how to manually deploy an insecure multi-node CockroachDB cluster on Microsoft Azure, using Azure's managed load balancing service to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -To enable this in Azure, you must create a Resource Group, Virtual Network, and Network Security Group. - -1. [Create a Resource Group](https://azure.microsoft.com/en-us/updates/create-empty-resource-groups/). - -2. [Create a Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-vnet-arm-pportal) that uses your **Resource Group**. - -3. 
[Create a Network Security Group](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-nsg-arm-pportal) that uses your **Resource Group**, and then add the following **inbound** rules to it: - - **Admin UI support**: - - Field | Recommended Value - -------|------------------- - Name | **cockroachadmin** - Source | **IP Addresses** - Source IP addresses/CIDR ranges | Your local network’s IP ranges - Source port ranges | * - Destination | **Any** - Destination port range | **8080** - Protocol | **TCP** - Action | **Allow** - Priority | Any value > 1000 - - **Application support**: - - {{site.data.alerts.callout_success}}If your application is also hosted on the same Azure Virtual Network, you will not need to create a firewall rule for your application to communicate with your load balancer.{{site.data.alerts.end}} - - Field | Recommended Value - -------|------------------- - Name | **cockroachapp** - Source | **IP Addresses** - Source IP addresses/CIDR ranges | Your local network’s IP ranges - Source port ranges | * - Destination | **Any** - Destination port range | **26257** - Protocol | **TCP** - Action | **Allow** - Priority | Any value > 1000 - - -## Step 2. Create VMs - -[Create Linux VMs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate VM for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use compute-optimized [F-series](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series) VMs with [Premium Storage](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage) or local SSD storage with a Linux filesystem such as `ext4` (not the Windows `ntfs` filesystem). For example, Cockroach Labs has used `Standard_F16s_v2` VMs (16 vCPUs and 32 GiB of RAM per VM) for internal testing. - - - If you choose local SSD storage, on reboot, the VM can come back with the `ntfs` filesystem. Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially. - -- **Do not** use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. - -- When creating the VMs, make sure to select the **Resource Group**, **Virtual Network**, and **Network Security Group** you created. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. 
In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Microsoft Azure offers fully-managed load balancing to distribute traffic between instances. - -1. [Add Azure load balancing](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Azure's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 6. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 7. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 8. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 9. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 10. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 11. Use the cluster - -Now that your deployment is working, you can: - -1. [Implement your data model](sql-statements.html). -2. [Create users](create-and-manage-users.html) and [grant them privileges](grant.html). -3. [Connect your application](install-client-drivers.html). Be sure to connect your application to the Azure load balancer, not to a CockroachDB node. - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure.md b/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure.md deleted file mode 100644 index 2678bf7cad8..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-microsoft-azure.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: Deploy CockroachDB on Microsoft Azure -summary: Learn how to deploy CockroachDB on Microsoft Azure. -toc: true -toc_not_nested: true -ssh-link: https://docs.microsoft.com/en-us/azure/virtual-machines/linux/mac-create-ssh-keys ---- - -
      - - -
      - -This page shows you how to manually deploy a secure multi-node CockroachDB cluster on Microsoft Azure, using Azure's managed load balancing service to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Configure your network - -CockroachDB requires TCP communication on two ports: - -- **26257** (`tcp:26257`) for inter-node communication (i.e., working as a cluster), for applications to connect to the load balancer, and for routing from the load balancer to nodes -- **8080** (`tcp:8080`) for exposing your Admin UI - -To enable this in Azure, you must create a Resource Group, Virtual Network, and Network Security Group. - -1. [Create a Resource Group](https://azure.microsoft.com/en-us/updates/create-empty-resource-groups/). -2. [Create a Virtual Network](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-vnet-arm-pportal) that uses your **Resource Group**. -3. [Create a Network Security Group](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-create-nsg-arm-pportal) that uses your **Resource Group**, and then add the following **inbound** rules to it: - - **Admin UI support**: - - Field | Recommended Value - -------|------------------- - Name | **cockroachadmin** - Source | **IP Addresses** - Source IP addresses/CIDR ranges | Your local network’s IP ranges - Source port ranges | * - Destination | **Any** - Destination port range | **8080** - Protocol | **TCP** - Action | **Allow** - Priority | Any value > 1000 - - **Application support**: - - {{site.data.alerts.callout_success}}If your application is also hosted on the same Azure Virtual Network, you will not need to create a firewall rule for your application to communicate with your load balancer.{{site.data.alerts.end}} - - Field | Recommended Value - -------|------------------- - Name | **cockroachapp** - Source | **IP Addresses** - Source IP addresses/CIDR ranges | Your local network’s IP ranges - Source port ranges | * - Destination | **Any** - Destination port range | **26257** - Protocol | **TCP** - Action | **Allow** - Priority | Any value > 1000 - -## Step 2. Create VMs - -[Create Linux VMs](https://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal) for each node you plan to have in your cluster. If you plan to run a sample workload against the cluster, create a separate VM for that workload. - -- Run at least 3 nodes to [ensure survivability](recommended-production-settings.html#topology). - -- Use compute-optimized [F-series](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series) VMs with [Premium Storage](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage) or local SSD storage with a Linux filesystem such as `ext4` (not the Windows `ntfs` filesystem). For example, Cockroach Labs has used `Standard_F16s_v2` VMs (16 vCPUs and 32 GiB of RAM per VM) for internal testing. - - - If you choose local SSD storage, on reboot, the VM can come back with the `ntfs` filesystem. Be sure your automation monitors for this and reformats the disk to the Linux filesystem you chose initially. 
- -- **Do not** use ["burstable" B-series](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/b-series-burstable) VMs, which limit the load on a single core. Also, Cockroach Labs has experienced data corruption issues on A-series VMs and irregular disk performance on D-series VMs, so we recommend avoiding those as well. - -- When creating the VMs, make sure to select the **Resource Group**, **Virtual Network**, and **Network Security Group** you created. - -For more details, see [Hardware Recommendations](recommended-production-settings.html#hardware) and [Cluster Topology](recommended-production-settings.html#topology). - -## Step 3. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 4. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - -Microsoft Azure offers fully-managed load balancing to distribute traffic between instances. - -1. [Add Azure load balancing](https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview). Be sure to: - - Set forwarding rules to route TCP traffic from the load balancer's port **26257** to port **26257** on the nodes. - - Configure health checks to use HTTP port **8080** and path `/health?ready=1`. This [health endpoint](monitoring-and-alerting.html#health-ready-1) ensures that load balancers do not direct traffic to nodes that are live but not ready to receive requests. - -2. Note the provisioned **IP Address** for the load balancer. You'll use this later to test load balancing and to connect your application to the cluster. - -{{site.data.alerts.callout_info}}If you would prefer to use HAProxy instead of Azure's managed load balancing, see the On-Premises tutorial for guidance.{{site.data.alerts.end}} - -## Step 5. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 6. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 7. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 8. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 9. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 10. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 11. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 12. 
Use the database - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-premises-insecure.md b/src/current/v19.1/deploy-cockroachdb-on-premises-insecure.md deleted file mode 100644 index 6bcbeb98388..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-premises-insecure.md +++ /dev/null @@ -1,118 +0,0 @@ ---- -title: Deploy CockroachDB On-Premises (Insecure) -summary: Learn how to manually deploy an insecure, multi-node CockroachDB cluster on multiple machines. -toc: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 ---- - - - -This tutorial shows you how to manually deploy an insecure multi-node CockroachDB cluster on multiple machines, using [HAProxy](http://www.haproxy.org/) load balancers to distribute client traffic. - -{{site.data.alerts.callout_danger}}If you plan to use CockroachDB in production, we strongly recommend using a secure cluster instead. Select Secure above for instructions.{{site.data.alerts.end}} - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/insecure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/insecure-recommendations.md %} - -## Step 1. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 2. Start nodes - -{% include {{ page.version.version }}/prod-deployment/insecure-start-nodes.md %} - -## Step 3. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-initialize-cluster.md %} - -## Step 4. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-test-cluster.md %} - -## Step 5. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - {{site.data.alerts.callout_success}}With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a point of failure. It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.{{site.data.alerts.end}} - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster, so we feature that tool here. - -1. SSH to the machine where you want to run HAProxy. - -2. Install HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ apt-get install haproxy - ~~~ - -3. 
Download the [CockroachDB archive](https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz) for Linux, and extract the binary: - - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.linux-amd64.tgz \ - | tar -xz - ~~~ - -4. Copy the binary into the `PATH`: - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/ - ~~~ - - If you get a permissions error, prefix the command with `sudo`. - -5. Run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command, specifying the address of any CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach gen haproxy --insecure \ - --host=
      \ - --port=26257 - ~~~ - - {% include {{ page.version.version }}/misc/haproxy.md %} - -6. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - - {% include copy-clipboard.html %} - ~~~ shell - $ haproxy -f haproxy.cfg - ~~~ - -7. Repeat these steps for each additional instance of HAProxy you want to run. - -## Step 6. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/insecure-test-load-balancing.md %} - -## Step 7. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 8. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/insecure-scale-cluster.md %} - -## Step 9. Use the cluster - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/deploy-cockroachdb-on-premises.md b/src/current/v19.1/deploy-cockroachdb-on-premises.md deleted file mode 100644 index c0ef9dce358..00000000000 --- a/src/current/v19.1/deploy-cockroachdb-on-premises.md +++ /dev/null @@ -1,113 +0,0 @@ ---- -title: Deploy CockroachDB On-Premises -summary: Learn how to manually deploy a secure, multi-node CockroachDB cluster on multiple machines. -toc: true -ssh-link: https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys--2 - ---- - - - -This tutorial shows you how to manually deploy a secure multi-node CockroachDB cluster on multiple machines, using [HAProxy](http://www.haproxy.org/) load balancers to distribute client traffic. - -If you are only testing CockroachDB, or you are not concerned with protecting network communication with TLS encryption, you can use an insecure cluster instead. Select **Insecure** above for instructions. - -## Before you begin - -### Requirements - -{% include {{ page.version.version }}/prod-deployment/secure-requirements.md %} - -### Recommendations - -{% include {{ page.version.version }}/prod-deployment/secure-recommendations.md %} - -## Step 1. Synchronize clocks - -{% include {{ page.version.version }}/prod-deployment/synchronize-clocks.md %} - -## Step 2. Generate certificates - -{% include {{ page.version.version }}/prod-deployment/secure-generate-certificates.md %} - -## Step 3. Start nodes - -{% include {{ page.version.version }}/prod-deployment/secure-start-nodes.md %} - -## Step 4. Initialize the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-initialize-cluster.md %} - -## Step 5. Test the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-test-cluster.md %} - -## Step 6. Set up load balancing - -Each CockroachDB node is an equally suitable SQL gateway to your cluster, but to ensure client performance and reliability, it's important to use load balancing: - -- **Performance:** Load balancers spread client traffic across nodes. This prevents any one node from being overwhelmed by requests and improves overall cluster performance (queries per second). - -- **Reliability:** Load balancers decouple client health from the health of a single CockroachDB node. In cases where a node fails, the load balancer redirects client traffic to available nodes. - {{site.data.alerts.callout_success}}With a single load balancer, client connections are resilient to node failure, but the load balancer itself is a point of failure. 
It's therefore best to make load balancing resilient as well by using multiple load balancing instances, with a mechanism like floating IPs or DNS to select load balancers for clients.{{site.data.alerts.end}} - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster, so we feature that tool here. - -1. On your local machine, run the [`cockroach gen haproxy`](generate-cockroachdb-resources.html) command with the `--host` flag set to the address of any node and security flags pointing to the CA cert and the client cert and key: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach gen haproxy \ - --certs-dir=certs \ - --host=
      - ~~~ - - {% include {{ page.version.version }}/misc/haproxy.md %} - -2. Upload the `haproxy.cfg` file to the machine where you want to run HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ scp haproxy.cfg @:~/ - ~~~ - -3. SSH to the machine where you want to run HAProxy. - -4. Install HAProxy: - - {% include copy-clipboard.html %} - ~~~ shell - $ apt-get install haproxy - ~~~ - -5. Start HAProxy, with the `-f` flag pointing to the `haproxy.cfg` file: - - {% include copy-clipboard.html %} - ~~~ shell - $ haproxy -f haproxy.cfg - ~~~ - -6. Repeat these steps for each additional instance of HAProxy you want to run. - -## Step 7. Run a sample workload - -{% include {{ page.version.version }}/prod-deployment/secure-test-load-balancing.md %} - -## Step 8. Monitor the cluster - -{% include {{ page.version.version }}/prod-deployment/monitor-cluster.md %} - -## Step 9. Scale the cluster - -{% include {{ page.version.version }}/prod-deployment/secure-scale-cluster.md %} - -## Step 10. Use the cluster - -{% include {{ page.version.version }}/prod-deployment/use-cluster.md %} - -## See also - -{% include {{ page.version.version }}/prod-deployment/prod-see-also.md %} diff --git a/src/current/v19.1/diagnostics-reporting.md b/src/current/v19.1/diagnostics-reporting.md deleted file mode 100644 index b366f2b4307..00000000000 --- a/src/current/v19.1/diagnostics-reporting.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: Diagnostics Reporting -summary: Learn about the diagnostic details that get shared with CockroachDB and how to opt out of sharing. -toc: true ---- - -By default, the Admin UI and each node of a CockroachDB cluster share anonymous usage details with Cockroach Labs. These details, which are completely scrubbed of identifiable information, greatly help us understand and improve how the system behaves in real-world scenarios. - -This page summarizes the details that get shared, how to view the details yourself, and how to opt out of sharing. - -{{site.data.alerts.callout_success}} -For insights into your cluster's performance and health, use the built-in [Admin UI](admin-ui-overview.html) or a third-party monitoring tool like [Prometheus](monitor-cockroachdb-with-prometheus.html). -{{site.data.alerts.end}} - -## What gets shared - -When diagnostics reporting is on, each node of a CockroachDB cluster shares anonymized details on an hourly basis, including (but not limited to): - -- Deployment and configuration characteristics, such as size of hardware, [cluster settings](cluster-settings.html) that have been altered from defaults, number of [replication zones](configure-replication-zones.html) configured, etc. -- Usage and cluster health details, such as crashes, unexpected errors, attempts to use unsupported features, types of queries run and their execution characteristics as well as types of schemas used, etc. - -To view the full diagnostics details that a node reports to Cockroach Labs, use the `http://:/_status/diagnostics/local` JSON endpoint. - -{{site.data.alerts.callout_info}} -In all cases, names and other string values are scrubbed and replaced with underscores. Also, the details that get shared may change over time, but as that happens, we will announce the changes in release notes. 
-{{site.data.alerts.end}} - -## Opt out of diagnostics reporting - -### At cluster initialization - -To make sure that absolutely no diagnostic details are shared, you can set the environment variable `COCKROACH_SKIP_ENABLING_DIAGNOSTIC_REPORTING=true` before starting the first node of the cluster. Note that this works only when set before starting the first node of the cluster. Once the cluster is running, you need to use the `SET CLUSTER SETTING` method described below. - -### After cluster initialization - -To stop sending diagnostic details to Cockroach Labs once a cluster is running, [use the built-in SQL client](use-the-built-in-sql-client.html) to execute the following [`SET CLUSTER SETTING`](set-cluster-setting.html) statement, which switches the `diagnostics.reporting.enabled` [cluster setting](cluster-settings.html) to `false`: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING diagnostics.reporting.enabled = false; -~~~ - -This change will not be instantaneous, as it must be propagated to other nodes in the cluster. - -## Check the state of diagnostics reporting - -To check the state of diagnostics reporting, [use the built-in SQL client](use-the-built-in-sql-client.html) to execute the following [`SHOW CLUSTER SETTING`](show-cluster-setting.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CLUSTER SETTING diagnostics.reporting.enabled; -~~~ - -~~~ - diagnostics.reporting.enabled -+-------------------------------+ - false -(1 row) -~~~ - -If the setting is `false`, diagnostics reporting is off; if the setting is `true`, diagnostics reporting is on. - -## See also - -- [Cluster Settings](cluster-settings.html) -- [Start a Node](start-a-node.html) diff --git a/src/current/v19.1/drop-column.md b/src/current/v19.1/drop-column.md deleted file mode 100644 index 4129215c632..00000000000 --- a/src/current/v19.1/drop-column.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: DROP COLUMN -summary: Use the ALTER COLUMN statement to remove columns from tables. -toc: true ---- - -The `DROP COLUMN` [statement](sql-statements.html) is part of `ALTER TABLE` and removes columns from a table. - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_column.html %}
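For example, assuming a hypothetical `users` table with a `middle_name` column you no longer need (both names are illustrative only), the statement takes this form:

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE users DROP COLUMN middle_name; -- hypothetical table and column names
~~~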
      - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table with the column you want to drop. - `name` | The name of the column you want to drop.
<br><br>When a column with a `CHECK` constraint is dropped, the `CHECK` constraint is also dropped.
`CASCADE` | Drop the column even if objects (such as [views](views.html)) depend on it; drop the dependent objects, as well.<br><br>`CASCADE` does not list objects it drops, so should be used cautiously. However, `CASCADE` will not drop dependent indexes; you must use [`DROP INDEX`](drop-index.html).<br><br>
      `CASCADE` will drop a column with a foreign key constraint if it is the only column in the reference. - `RESTRICT` | *(Default)* Do not drop the column if any objects (such as [views](views.html)) depend on it. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Drop columns - -If you no longer want a column in a table, you can drop it. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders DROP COLUMN billing_zip; -~~~ - -### Prevent dropping columns with dependent objects (`RESTRICT`) - -If the column has dependent objects, such as [views](views.html), CockroachDB will not drop the column by default; however, if you want to be sure of the behavior you can include the `RESTRICT` clause. - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders DROP COLUMN customer RESTRICT; -~~~ -~~~ -pq: cannot drop column "customer" because view "customer_view" depends on it -~~~ - -### Drop column and dependent objects (`CASCADE`) - -If you want to drop the column and all of its dependent options, include the `CASCADE` clause. - -{{site.data.alerts.callout_danger}}CASCADE does not list objects it drops, so should be used cautiously.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE customer_view; -~~~ - -~~~ -+---------------+----------------------------------------------------------------+ -| table_name | create_statement | -+---------------+----------------------------------------------------------------+ -| customer_view | CREATE VIEW customer_view AS SELECT customer FROM store.orders | -+---------------+----------------------------------------------------------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders DROP COLUMN customer CASCADE; -~~~ - -{% include copy-clipboard.html %} -~~~ -> SHOW CREATE customer_view; -~~~ - -~~~ -pq: view "customer_view" does not exist -~~~ - -## See also - -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/drop-constraint.md b/src/current/v19.1/drop-constraint.md deleted file mode 100644 index 7ffedc6b28d..00000000000 --- a/src/current/v19.1/drop-constraint.md +++ /dev/null @@ -1,80 +0,0 @@ ---- -title: DROP CONSTRAINT -summary: Use the ALTER CONSTRAINT statement to remove constraints from columns. -toc: true ---- - -The `DROP CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and removes Check and Foreign Key constraints from columns. - -{{site.data.alerts.callout_info}} -For information about removing other constraints, see [Constraints: Remove Constraints](constraints.html#remove-constraints). -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_constraint.html %}
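For example, assuming a hypothetical `orders` table with a `CHECK` constraint named `check_qty` (both names are illustrative only), the statement takes this form:

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE orders DROP CONSTRAINT check_qty; -- hypothetical table and constraint names
~~~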
      - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - - Parameter | Description ------------|------------- - `table_name` | The name of the table with the constraint you want to drop. - `name` | The name of the constraint you want to drop. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------------------------+-------------+-----------+----------------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------------------------+-------------+-----------+----------------+ -| orders | fk_customer_ref_customers | FOREIGN KEY | customer | customers.[id] | -| orders | primary | PRIMARY KEY | id | NULL | -+--------+---------------------------+-------------+-----------+----------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders DROP CONSTRAINT fk_customer_ref_customers; -~~~ -~~~ -ALTER TABLE -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ -~~~ -+--------+---------+-------------+-----------+---------+ -| Table | Name | Type | Column(s) | Details | -+--------+---------+-------------+-----------+---------+ -| orders | primary | PRIMARY KEY | id | NULL | -+--------+---------+-------------+-----------+---------+ -~~~ - -{{site.data.alerts.callout_info}}You cannot drop the primary constraint, which indicates your table's Primary Key.{{site.data.alerts.end}} - -## See also - -- [`ADD CONSTRAINT`](add-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`RENAME CONSTRAINT`](rename-constraint.html) -- [`VALIDATE CONSTRAINT`](validate-constraint.html) -- [`DROP COLUMN`](drop-column.html) -- [`DROP INDEX`](drop-index.html) -- [`ALTER TABLE`](alter-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/drop-database.md b/src/current/v19.1/drop-database.md deleted file mode 100644 index 95a1cbe9ce4..00000000000 --- a/src/current/v19.1/drop-database.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: DROP DATABASE -summary: The DROP DATABASE statement removes a database and all its objects from a CockroachDB cluster. -toc: true ---- - -The `DROP DATABASE` [statement](sql-statements.html) removes a database and all its objects from a CockroachDB cluster. - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the database and on all tables in the database. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_database.html %}
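For example, assuming a hypothetical `staging` database (the name is illustrative only), the statement takes this form; `IF EXISTS` avoids an error if the database has already been dropped, and `CASCADE` is stated explicitly so the statement also works in interactive sessions:

{% include copy-clipboard.html %}
~~~ sql
> DROP DATABASE IF EXISTS staging CASCADE; -- hypothetical database name
~~~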
      - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the database if it exists; if it does not exist, do not return an error. -`name` | The name of the database you want to drop. You cannot drop a database if it is set as the [current database](sql-name-resolution.html#current-database) or if [`sql_safe_updates = true`](set-vars.html). -`CASCADE` | _(Default)_ Drop all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables.
<br><br>
      `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | Do not drop the database if it contains any [tables](create-table.html) or [views](create-view.html). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Drop a database and its objects (`CASCADE`) - -For non-interactive sessions (e.g., client applications), `DROP DATABASE` applies the `CASCADE` option by default, which drops all tables and views in the database as well as all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on those tables. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -+------------+ -| table_name | -+------------+ -| t1 | -| v1 | -+------------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP DATABASE db2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -pq: database "db2" does not exist -~~~ - -For interactive sessions from the [built-in SQL client](use-the-built-in-sql-client.html), either the `CASCADE` option must be set explicitly or the `--unsafe-updates` flag must be set when starting the shell. - -### Prevent dropping a non-empty database (`RESTRICT`) - -When a database is not empty, the `RESTRICT` option prevents the database from being dropped: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM db2; -~~~ - -~~~ -+------------+ -| table_name | -+------------+ -| t1 | -| v1 | -+------------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP DATABASE db2 RESTRICT; -~~~ - -~~~ -pq: database "db2" is not empty and CASCADE was not specified -~~~ - -## See also - -- [`CREATE DATABASE`](create-database.html) -- [`SHOW DATABASES`](show-databases.html) -- [`RENAME DATABASE`](rename-database.html) -- [`SET DATABASE`](set-vars.html) -- [`SHOW JOBS`](show-jobs.html) -- [Other SQL Statements](sql-statements.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/drop-index.md b/src/current/v19.1/drop-index.md deleted file mode 100644 index 61017f54794..00000000000 --- a/src/current/v19.1/drop-index.md +++ /dev/null @@ -1,139 +0,0 @@ ---- -title: DROP INDEX -summary: The DROP INDEX statement removes indexes from tables. -toc: true ---- - -The `DROP INDEX` [statement](sql-statements.html) removes indexes from tables. - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_index.html %}
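For example, assuming a hypothetical `users` table with a secondary index named `users_name_idx` (both names are illustrative only), the statement takes this form:

{% include copy-clipboard.html %}
~~~ sql
> DROP INDEX IF EXISTS users@users_name_idx; -- hypothetical table and index names
~~~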
      - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on each specified table. - -## Parameters - - Parameter | Description ------------|------------- - `IF EXISTS` | Drop the named indexes if they exist; if they do not exist, do not return an error. - `table_name` | The name of the table with the index you want to drop. Find table names with [`SHOW TABLES`](show-tables.html). - `index_name` | The name of the index you want to drop. Find index names with [`SHOW INDEX`](show-index.html).
<br><br>You cannot drop a table's `primary` index.
`CASCADE` | Drop all objects (such as [constraints](constraints.html)) that depend on the indexes. To drop a `UNIQUE INDEX`, you must use `CASCADE`.<br><br>`CASCADE` does not list objects it drops, so should be used cautiously.
`RESTRICT` | _(Default)_ Do not drop the indexes if any objects (such as [constraints](constraints.html)) depend on them.

## Viewing schema changes

{% include {{ page.version.version }}/misc/schema-change-view-job.md %}

## Examples

### Remove an index (no dependencies)

{% include copy-clipboard.html %}
~~~ sql
> SHOW INDEX FROM t1;
~~~

~~~
+------------+-------------+------------+--------------+-------------+-----------+---------+----------+
| table_name | index_name  | non_unique | seq_in_index | column_name | direction | storing | implicit |
+------------+-------------+------------+--------------+-------------+-----------+---------+----------+
| t1         | primary     | false      |            1 | id          | ASC       | false   | false    |
| t1         | t1_name_idx | true       |            1 | name        | ASC       | false   | false    |
| t1         | t1_name_idx | true       |            2 | id          | ASC       | false   | true     |
+------------+-------------+------------+--------------+-------------+-----------+---------+----------+
(3 rows)
~~~

{% include copy-clipboard.html %}
~~~ sql
> DROP INDEX t1@t1_name_idx;
~~~

{% include copy-clipboard.html %}
~~~ sql
> SHOW INDEX FROM t1;
~~~

~~~
+------------+------------+------------+--------------+-------------+-----------+---------+----------+
| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit |
+------------+------------+------------+--------------+-------------+-----------+---------+----------+
| t1         | primary    | false      |            1 | id          | ASC       | false   | false    |
+------------+------------+------------+--------------+-------------+-----------+---------+----------+
(1 row)
~~~

### Remove an index and dependent objects with `CASCADE`

{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. 
To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SHOW INDEX FROM orders; -~~~ - -~~~ -+------------+---------------------------------------------+------------+--------------+-------------+-----------+---------+----------+ -| table_name | index_name | non_unique | seq_in_index | column_name | direction | storing | implicit | -+------------+---------------------------------------------+------------+--------------+-------------+-----------+---------+----------+ -| orders | primary | false | 1 | id | ASC | false | false | -| orders | orders_auto_index_fk_customer_ref_customers | true | 1 | customer | ASC | false | false | -| orders | orders_auto_index_fk_customer_ref_customers | true | 2 | id | ASC | false | true | -+------------+---------------------------------------------+------------+--------------+-------------+-----------+---------+----------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP INDEX orders_auto_index_fk_customer_ref_customers; -~~~ - -~~~ -pq: index "orders_auto_index_fk_customer_ref_customers" is in use as a foreign key constraint -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ - -~~~ -+------------+---------------------------+-----------------+--------------------------------------------------+-----------+ -| table_name | constraint_name | constraint_type | details | validated | -+------------+---------------------------+-----------------+--------------------------------------------------+-----------+ -| orders | fk_customer_ref_customers | FOREIGN KEY | FOREIGN KEY (customer) REFERENCES customers (id) | true | -| orders | primary | PRIMARY KEY | PRIMARY KEY (id ASC) | true | -+------------+---------------------------+-----------------+--------------------------------------------------+-----------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP INDEX orders_auto_index_fk_customer_ref_customers CASCADE; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CONSTRAINTS FROM orders; -~~~ - -~~~ -+------------+-----------------+-----------------+----------------------+-----------+ -| table_name | constraint_name | constraint_type | details | validated | -+------------+-----------------+-----------------+----------------------+-----------+ -| orders | primary | PRIMARY KEY | PRIMARY KEY (id ASC) | true | -+------------+-----------------+-----------------+----------------------+-----------+ -(1 row) -~~~ - -## See Also - -- [Indexes](indexes.html) -- [Online Schema Changes](online-schema-changes.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/drop-role.md b/src/current/v19.1/drop-role.md deleted file mode 100644 index 202e79dd27c..00000000000 --- a/src/current/v19.1/drop-role.md +++ /dev/null @@ -1,70 +0,0 @@ ---- -title: DROP ROLE (Enterprise) -summary: The DROP ROLE statement removes one or more SQL roles. -toc: true ---- - -The `DROP ROLE` [statement](sql-statements.html) removes one or more SQL roles. - -{{site.data.alerts.callout_info}}DROP ROLE is an enterprise-only feature.{{site.data.alerts.end}} - - -## Considerations - -- The `admin` role cannot be dropped, and `root` must always be a member of `admin`. -- A role cannot be dropped if it has privileges. Use [`REVOKE`](revoke.html) to remove privileges. - -## Required privileges - -Roles can only be dropped by super users, i.e., members of the `admin` role. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_role.html %}
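For example, assuming hypothetical roles `analytics` and `reporting` whose privileges have already been revoked (the names are illustrative only), you can drop both in one statement:

{% include copy-clipboard.html %}
~~~ sql
> DROP ROLE analytics, reporting; -- hypothetical role names
~~~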
## Parameters

 Parameter | Description
-----------|--------------
`name` | The name of the role to remove. To remove multiple roles, use a comma-separated list of roles.<br><br>
      You can use [`SHOW ROLES`](show-roles.html) to find the names of roles. - -## Example - -In this example, first check a role's privileges. Then, revoke the role's privileges and remove the role. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON documents FOR dev_ops; -~~~ -~~~ -+------------+--------+-----------+---------+------------+ -| Database | Schema | Table | User | Privileges | -+------------+--------+-----------+---------+------------+ -| jsonb_test | public | documents | dev_ops | INSERT | -+------------+--------+-----------+---------+------------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE INSERT ON documents FROM dev_ops; -~~~ - -{{site.data.alerts.callout_info}}All of a role's privileges must be revoked before the role can be dropped.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> DROP ROLE dev_ops; -~~~ -~~~ -DROP ROLE 1 -~~~ - -## See also - -- [Authorization](authorization.html) -- [`CREATE ROLE` (Enterprise)](create-role.html) -- [`SHOW ROLES`](show-roles.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/drop-sequence.md b/src/current/v19.1/drop-sequence.md deleted file mode 100644 index f055f4844be..00000000000 --- a/src/current/v19.1/drop-sequence.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -title: DROP SEQUENCE -summary: -toc: true ---- - -The `DROP SEQUENCE` [statement](sql-statements.html) removes a sequence from a database. - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the specified sequence(s). - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_sequence.html %}
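For example, assuming a hypothetical `order_seq` sequence that nothing depends on (the name is illustrative only), the statement takes this form:

{% include copy-clipboard.html %}
~~~ sql
> DROP SEQUENCE IF EXISTS order_seq; -- hypothetical sequence name
~~~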
## Parameters

 Parameter | Description
-----------|------------
`IF EXISTS` | Drop the sequence only if it exists; if it does not exist, do not return an error.
`sequence_name` | The name of the sequence you want to drop. Find the sequence name with `SHOW CREATE` on the table that uses the sequence.
`RESTRICT` | _(Default)_ Do not drop the sequence if any objects (such as [constraints](constraints.html) and tables) use it.
`CASCADE` | Not yet implemented. Currently, you can only drop a sequence if nothing depends on it.

## Examples

### Remove a sequence (no dependencies)

In this example, other objects do not depend on the sequence being dropped.

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM information_schema.sequences;
~~~
~~~
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
| sequence_catalog | sequence_schema | sequence_name      | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value        | maximum_value       | increment | cycle_option |
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
| def              | db_2            | test_4             | INT       |                64 |                       2 |             0 |           1 |                    1 | 9223372036854775807 |         1 | NO           |
| def              | test_db         | customer_seq       | INT       |                64 |                       2 |             0 |         101 |                    1 | 9223372036854775807 |         2 | NO           |
| def              | test_db         | desc_customer_list | INT       |                64 |                       2 |             0 |        1000 | -9223372036854775808 |                  -1 |        -2 | NO           |
| def              | test_db         | test_sequence3     | INT       |                64 |                       2 |             0 |           1 |                    1 | 9223372036854775807 |         1 | NO           |
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
(4 rows)
~~~

{% include copy-clipboard.html %}
~~~ sql
> DROP SEQUENCE customer_seq;
~~~
~~~
DROP SEQUENCE
~~~

{% include copy-clipboard.html %}
~~~ sql
> SELECT * FROM information_schema.sequences;
~~~
~~~
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
| sequence_catalog | sequence_schema | sequence_name      | data_type | numeric_precision | numeric_precision_radix | numeric_scale | start_value | minimum_value        | maximum_value       | increment | cycle_option |
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
| def              | db_2            | test_4             | INT       |                64 |                       2 |             0 |           1 |                    1 | 9223372036854775807 |         1 | NO           |
| def              | test_db         | desc_customer_list | INT       |                64 |                       2 |             0 |        1000 | -9223372036854775808 |                  -1 |        -2 | NO           |
| def              | test_db         | test_sequence3     | INT       |                64 |                       2 |             0 |           1 |                    1 | 9223372036854775807 |         1 | NO           |
+------------------+-----------------+--------------------+-----------+-------------------+-------------------------+---------------+-------------+----------------------+---------------------+-----------+--------------+
(3 rows)
~~~

## See also

- [`CREATE SEQUENCE`](create-sequence.html)
- [`ALTER SEQUENCE`](alter-sequence.html)
- [`RENAME SEQUENCE`](rename-sequence.html)
- [`SHOW SEQUENCES`](show-sequences.html)
- [Functions and Operators](functions-and-operators.html)
- [Other SQL Statements](sql-statements.html)
- [Online Schema Changes](online-schema-changes.html)
diff --git a/src/current/v19.1/drop-table.md b/src/current/v19.1/drop-table.md
deleted file mode 100644
index 1414c155989..00000000000
--- a/src/current/v19.1/drop-table.md
+++ /dev/null
@@ -1,143 +0,0 @@
---
title: DROP TABLE
summary: The DROP TABLE statement removes a table and all its indexes from a database.
toc: true
---

The `DROP TABLE` [statement](sql-statements.html) removes a table and all its indexes from a database.

{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}

## Required privileges

The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the specified table(s). If `CASCADE` is used, the user must have the privileges required to drop each dependent object as well.

## Synopsis
      {% include {{ page.version.version }}/sql/diagrams/drop_table.html %}
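For example, assuming hypothetical `archive.logs_2017` and `archive.logs_2018` tables (the names are illustrative only), you can drop both in one statement:

{% include copy-clipboard.html %}
~~~ sql
> DROP TABLE IF EXISTS archive.logs_2017, archive.logs_2018; -- hypothetical table names
~~~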
      - -## Parameters - -Parameter | Description -----------|------------ -`IF EXISTS` | Drop the table if it exists; if it does not exist, do not return an error. -`table_name` | A comma-separated list of table names. To find table names, use [`SHOW TABLES`](show-tables.html). -`CASCADE` | Drop all objects (such as [constraints](constraints.html) and [views](views.html)) that depend on the table.
<br><br>
      `CASCADE` does not list objects it drops, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not drop the table if any objects (such as [constraints](constraints.html) and [views](views.html)) depend on it. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Remove a table (no dependencies) - -In this example, other objects do not depend on the table being dropped. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| table_name | -+--------------------+ -| accounts | -| branches | -| user_accounts_view | -+--------------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE bank.branches; -~~~ - -~~~ -DROP TABLE -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| table_name | -+--------------------+ -| accounts | -| user_accounts_view | -+--------------------+ -(2 rows) -~~~ - -### Remove a table and dependent objects with `CASCADE` - -In this example, a view depends on the table being dropped. Therefore, it's only possible to drop the table while simultaneously dropping the dependent view using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+--------------------+ -| table_name | -+--------------------+ -| accounts | -| user_accounts_view | -+--------------------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE bank.accounts; -~~~ - -~~~ -pq: cannot drop table "accounts" because view "user_accounts_view" depends on it -~~~ - -{% include copy-clipboard.html %} -~~~sql -> DROP TABLE bank.accounts CASCADE; -~~~ - -~~~ -DROP TABLE -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+------------+ -| table_name | -+------------+ -+------------+ -(0 rows) -~~~ - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`CREATE TABLE`](create-table.html) -- [`INSERT`](insert.html) -- [`RENAME TABLE`](rename-table.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW TABLES`](show-tables.html) -- [`UPDATE`](update.html) -- [`DELETE`](delete.html) -- [`DROP INDEX`](drop-index.html) -- [`DROP VIEW`](drop-view.html) -- [`SHOW JOBS`](show-jobs.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/drop-user.md b/src/current/v19.1/drop-user.md deleted file mode 100644 index 1165e5f4da6..00000000000 --- a/src/current/v19.1/drop-user.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: DROP USER -summary: The DROP USER statement removes one or more SQL users. -toc: true ---- - -The `DROP USER` [statement](sql-statements.html) removes one or more SQL users. - -{{site.data.alerts.callout_success}}You can also use the cockroach user rm command to remove users.{{site.data.alerts.end}} - - -## Required privileges - -The user must have the `DELETE` [privilege](authorization.html#assign-privileges) on the `system.users` table. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_user.html %}
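For example, assuming hypothetical users `jroach` and `kroach` whose privileges have already been revoked (the usernames are illustrative only), you can remove both in one statement:

{% include copy-clipboard.html %}
~~~ sql
> DROP USER jroach, kroach; -- hypothetical usernames
~~~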
## Parameters

 Parameter | Description
-----------|-------------
`user_name` | The username of the user to remove. To remove multiple users, use a comma-separated list of usernames.<br><br>
      You can use [`SHOW USERS`](show-users.html) to find usernames. - -## Example - -All of a user's privileges must be revoked before the user can be dropped. - -In this example, first check a user's privileges. Then, revoke the user's privileges before removing the user. - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON test.customers FOR mroach; -~~~ - -~~~ -+-----------+--------+------------+ -| Table | User | Privileges | -+-----------+--------+------------+ -| customers | mroach | CREATE | -| customers | mroach | INSERT | -| customers | mroach | UPDATE | -+-----------+--------+------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> REVOKE CREATE,INSERT,UPDATE ON test.customers FROM mroach; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP USER mroach; -~~~ - -## See also - -- [`cockroach user` command](create-and-manage-users.html) -- [`CREATE USER`](create-user.html) -- [`SHOW USERS`](show-users.html) -- [`GRANT`](grant.html) -- [`SHOW GRANTS`](show-grants.html) -- [Create Security Certificates](create-security-certificates.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/drop-view.md b/src/current/v19.1/drop-view.md deleted file mode 100644 index bd7fb919008..00000000000 --- a/src/current/v19.1/drop-view.md +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: DROP VIEW -summary: The DROP VIEW statement removes a view from a database. -toc: true ---- - -The `DROP VIEW` [statement](sql-statements.html) removes a [view](views.html) from a database. - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Required privileges - -The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the specified view(s). If `CASCADE` is used to drop dependent views, the user must have the `DROP` privilege on each dependent view as well. - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/drop_view.html %}
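For example, assuming a hypothetical `bank.unused_view` view that no other view depends on (the name is illustrative only), the statement takes this form:

{% include copy-clipboard.html %}
~~~ sql
> DROP VIEW IF EXISTS bank.unused_view; -- hypothetical view name
~~~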
      - -## Parameters - - Parameter | Description -----------|------------- - `IF EXISTS` | Drop the view if it exists; if it does not exist, do not return an error. - `table_name` | A comma-separated list of view names. To find view names, use:
<br><br>
      `SELECT * FROM information_schema.tables WHERE table_type = 'VIEW';` - `CASCADE` | Drop other views that depend on the view being dropped.
<br><br>
      `CASCADE` does not list views it drops, so should be used cautiously. - `RESTRICT` | _(Default)_ Do not drop the view if other views depend on it. - -## Examples - -### Remove a view (no dependencies) - -In this example, other views do not depend on the view being dropped. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP VIEW bank.user_emails; -~~~ - -~~~ -DROP VIEW -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -### Remove a view (with dependencies) - -In this example, another view depends on the view being dropped. Therefore, it's only possible to drop the view while simultaneously dropping the dependent view using `CASCADE`. - -{{site.data.alerts.callout_danger}}CASCADE drops all dependent views without listing them, which can lead to inadvertent and difficult-to-recover losses. 
To avoid potential harm, we recommend dropping objects individually in most cases.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | user_accounts | VIEW | 1 | -| def | bank | user_emails | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> DROP VIEW bank.user_accounts; -~~~ - -~~~ -pq: cannot drop view "user_accounts" because view "user_emails" depends on it -~~~ - -{% include copy-clipboard.html %} -~~~sql -> DROP VIEW bank.user_accounts CASCADE; -~~~ - -~~~ -DROP VIEW -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM information_schema.tables WHERE table_type = 'VIEW'; -~~~ - -~~~ -+---------------+-------------------+--------------------+------------+---------+ -| TABLE_CATALOG | TABLE_SCHEMA | TABLE_NAME | TABLE_TYPE | VERSION | -+---------------+-------------------+--------------------+------------+---------+ -| def | bank | create_test | VIEW | 1 | -+---------------+-------------------+--------------------+------------+---------+ -(1 row) -~~~ - -## See also - -- [Views](views.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE`](show-create.html) -- [`ALTER VIEW`](alter-view.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/enable-node-map.md b/src/current/v19.1/enable-node-map.md deleted file mode 100644 index 56a9a5f8707..00000000000 --- a/src/current/v19.1/enable-node-map.md +++ /dev/null @@ -1,191 +0,0 @@ ---- -title: Enable the Node Map -summary: Learn how to enable the node map in the Admin UI. -toc: true ---- - -{{site.data.alerts.callout_info}} -On a secure cluster, this area of the Admin UI can only be accessed by an `admin` user. See [Admin UI access](admin-ui-overview.html#admin-ui-access). -{{site.data.alerts.end}} - -The **Node Map** visualizes the geographical configuration of a multi-regional cluster by plotting the node localities on a world map. The **Node Map** also provides real-time cluster metrics, with the ability to drill down to individual nodes to monitor and troubleshoot the cluster health and performance. - -This page walks you through the process of setting up and enabling the **Node Map**. - -{{site.data.alerts.callout_info}}The Node Map is an enterprise-only feature. However, you can request a trial license to try it out. {{site.data.alerts.end}} - -CockroachDB Admin UI - -## Set up and enable the Node Map - -To enable the **Node Map**, you need to start the cluster with the correct `--locality` flags and assign the latitudes and longitudes for each locality. - -{{site.data.alerts.callout_info}}The Node Map will not be displayed until all nodes are started with the correct --locality flags and all localities are assigned the corresponding latitudes and longitudes. {{site.data.alerts.end}} - -Consider a scenario of a four-node geo-distributed cluster with the following configuration: - -| Node | Region | Datacenter | -| ------ | ------ | ------ | -| Node1 | us-east-1 | us-east-1a | -| Node2 | us-east-1 | us-east-1b | -| Node3 | us-west-1 | us-west-1a | -| Node4 | eu-west-1 | eu-west-1a | - -### Step 1. 
Start the nodes with the correct `--locality` flags

To start a new cluster with the correct `--locality` flags:

Start Node 1:

{% include copy-clipboard.html %}
~~~
$ cockroach start \
--insecure \
--locality=region=us-east-1,datacenter=us-east-1a \
--advertise-addr=<node1 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address>,<node4 address>
~~~

Start Node 2:

{% include copy-clipboard.html %}
~~~
$ cockroach start \
--insecure \
--locality=region=us-east-1,datacenter=us-east-1b \
--advertise-addr=<node2 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address>,<node4 address>
~~~

Start Node 3:

{% include copy-clipboard.html %}
~~~
$ cockroach start \
--insecure \
--locality=region=us-west-1,datacenter=us-west-1a \
--advertise-addr=<node3 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address>,<node4 address>
~~~

Start Node 4:

{% include copy-clipboard.html %}
~~~
$ cockroach start \
--insecure \
--locality=region=eu-west-1,datacenter=eu-west-1a \
--advertise-addr=<node4 address> \
--cache=.25 \
--max-sql-memory=.25 \
--join=<node1 address>,<node2 address>,<node3 address>,<node4 address>
~~~

Use the [`cockroach init`](initialize-a-cluster.html) command to perform a one-time initialization of the cluster:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach init --insecure --host=<address of any node>
~~~

[Access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui). The following page is displayed:

CockroachDB Admin UI

### Step 2. [Set the enterprise license](enterprise-licensing.html) and refresh the Admin UI

The following page should be displayed:

CockroachDB Admin UI

### Step 3. Set the latitudes and longitudes for the localities

Launch the built-in SQL client:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --host=<address of any node>
~~~

Insert the approximate latitudes and longitudes of each region into the `system.locations` table:

{% include copy-clipboard.html %}
~~~ sql
> INSERT INTO system.locations VALUES
  ('region', 'us-east-1', 37.478397, -76.453077),
  ('region', 'us-west-1', 38.837522, -120.895824),
  ('region', 'eu-west-1', 53.142367, -7.692054);
~~~

{{site.data.alerts.callout_info}}The Node Map will not be displayed until all regions are assigned the corresponding latitudes and longitudes.{{site.data.alerts.end}}

For the latitudes and longitudes of AWS, Azure, and Google Cloud regions, see [Location Coordinates for Reference](#location-coordinates-for-reference).

### Step 4. View the Node Map

[Open the **Overview page**](admin-ui-access-and-navigate.html) and select **Node Map** from the **View** drop-down menu. The **Node Map** will be displayed:

CockroachDB Admin UI

### Step 5. Navigate the Node Map

Let's say you want to navigate to Node 1, which is in datacenter `us-east-1a` in the `us-east-1` region:

1. Click on the map component marked as **region=us-east-1** on the **Node Map**. The datacenter view is displayed.
2. Click on the datacenter component marked as **datacenter=us-east-1a**. The individual node components are displayed.
3. To navigate back to the cluster view, either click on **Cluster** in the breadcrumb trail at the top of the **Node Map**, or click **Up to region=us-east-1** and then click **Up to Cluster** in the lower left-hand side of the **Node Map**.

CockroachDB Admin UI

## Troubleshoot the Node Map

### Node Map not displayed

The **Node Map** will not be displayed until all nodes have localities and are assigned the corresponding latitudes and longitudes. To verify that localities and latitude/longitude coordinates are assigned to all nodes, navigate to the Localities debug page
(`https://<address of any node>:8080/#/reports/localities`) in the Admin UI.

The Localities debug page displays the following:

- Localities configuration that you set up while starting the nodes with the `--locality` flags.
- Nodes corresponding to each locality.
- Latitude and longitude coordinates for each locality/node.

On the page, ensure that every node has a locality as well as latitude/longitude coordinates assigned to it.

### Node Map not displayed for all locality levels

The **Node Map** is displayed only for the locality levels that have latitude/longitude coordinates assigned to them:

- If you assign the latitude/longitude coordinates at the region level, the **Node Map** shows the regions on the world map. However, when you drill down to the datacenter and further to the individual nodes, the world map disappears and the datacenters/nodes are plotted in a circular layout.
- If you assign the latitude/longitude coordinates at the datacenter level, the **Node Map** shows a region with a single datacenter at that datacenter's assigned location, while a region with multiple datacenters is shown at the center of those datacenters' coordinates. When you drill down to the datacenter level, the **Node Map** shows each datacenter at its assigned coordinates. Further drilling down to individual nodes shows the nodes in a circular layout.

[Assign latitude/longitude coordinates](#step-3-set-the-latitudes-and-longitudes-for-the-localities) at the locality level that you want to view on the **Node Map**.

## Known limitations

### Unable to assign latitude/longitude coordinates to localities

{% include {{ page.version.version }}/known-limitations/node-map.md %}

### **Capacity Used** value displayed is more than configured Capacity

{% include {{ page.version.version }}/misc/available-capacity-metric.md %}

## Location coordinates for reference

### AWS locations

{% include {{ page.version.version }}/misc/aws-locations.md %}

### Azure locations

{% include {{ page.version.version }}/misc/azure-locations.md %}

### Google Cloud locations

{% include {{ page.version.version }}/misc/gce-locations.md %}

diff --git a/src/current/v19.1/encryption.md b/src/current/v19.1/encryption.md
deleted file mode 100644
index ce803845824..00000000000
--- a/src/current/v19.1/encryption.md
+++ /dev/null
@@ -1,230 +0,0 @@
---
title: Encryption
summary: Learn about the encryption features for secure CockroachDB clusters.
toc: true
---

Data encryption and decryption is the process of transforming plaintext data to ciphertext and vice versa using a key or password.

## Encryption in flight

CockroachDB uses TLS 1.2 for inter-node and client-node [authentication](authentication.html) as well as setting up a secure communication channel. Once the secure channel is set up, all inter-node and client-node network communication is encrypted using a [shared encryption key](https://en.wikipedia.org/wiki/Transport_Layer_Security) as per the TLS 1.2 protocol. This feature is enabled by default for all secure clusters and needs no additional configuration.

## Encryption at Rest (Enterprise)

Encryption at Rest provides transparent encryption of a node's data on the local disk. It allows encryption of all files on disk using [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in [counter mode](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Counter_(CTR)), with all key sizes allowed.
- -Encryption is performed in the [storage layer](architecture/storage-layer.html) and configured per store. -All files used by the store, regardless of contents, are encrypted with the desired algorithm. - -To allow arbitrary rotation schedules and ensure security of the keys, we use two layers of keys: - -- **Store keys** are provided by the user in a file. They are used to encrypt the list of data keys (see below). This is known as a **key encryption key**: its only purpose is to encrypt other keys. Store keys are never persisted by CockroachDB. Since very little data is encrypted using this key, it can have a very long lifetime without risk of reuse. - -- **Data keys** are automatically generated by CockroachDB. They are used to encrypt all files on disk. This is known as a **data encryption key**. Data keys are persisted in a key registry file, encrypted using the store key. The key has a short lifetime to avoid reuse. - -Store keys are specified at node startup by passing a path to a locally readable file. The file must contain 32 bytes (the key ID) followed by the key (16, 24, or 32 bytes). The size of the key dictates the version of AES to use (AES-128, AES-192, or AES-256). For an example showing how to create a store key, see [Generating key files](#generating-store-key-files) below. - -Also during node startup, CockroachDB uses a data key with the same length as the store key. If encryption has just been enabled, -the key size has changed, or the data key is too old (default lifetime is one week), CockroachDB generates a new data key. - -Any new file created by the store uses the currently-active data key. All data keys (both active and previous) are stored in a key registry file and encrypted with the active store key. - -After startup, if the active data key is too old, CockroachDB generates a new data key and marks it as active, using it for all further encryption. - -CockroachDB does not currently force re-encryption of older files but instead relies on normal RocksDB churn to slowly rewrite all data with the desired encryption. - -### Rotating keys - -Key rotation is necessary for Encryption at Rest for multiple reasons: - -- To prevent key reuse with the same encryption parameters (after encrypting many files). -- To reduce the risk of key exposure. - -Store keys are specified by the user and must be rotated by specifying different keys. -This is done by setting the `key` parameter of the `--enterprise-encryption` flag to the path to the new key, -and `old-key` to the previously-used key. - -Data keys will automatically be rotated at startup if any of the following conditions are met: - -- The active store key has changed. -- The encryption type has changed (different key size, or plaintext to/from encryption). -- The current data key is `rotation-period` old or more. - -Data keys will automatically be rotated at runtime if the current data key is `rotation-period` old or more. - -Once rotated, an old store key cannot be made the active key again. - -Upon store key rotation the data keys registry is decrypted using the old key and encrypted with the new -key. The newly-generated data key is used to encrypt all new data from this point on. - -### Changing encryption type - -The user can change the encryption type from plaintext to encryption, between different encryption algorithms (using various key sizes), or from encryption to plaintext. 
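For example, here is a minimal sketch of what these transitions look like at the command line, using the `--enterprise-encryption` flag described in the examples below (the store and key paths are illustrative):

~~~ shell
# Sketch: move a store that was running in plaintext to AES-128 encryption.
# The key file is generated as shown in "Generating store key files" below;
# other start flags are omitted for brevity.
$ cockroach start --store=cockroach-data \
--enterprise-encryption=path=cockroach-data,key=/path/to/my/aes-128.key,old-key=plain

# Sketch: move the same store back to plaintext. Previously encrypted
# data must then be considered readable.
$ cockroach start --store=cockroach-data \
--enterprise-encryption=path=cockroach-data,key=plain,old-key=/path/to/my/aes-128.key
~~~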
When changing the encryption type to plaintext, the data key registry is no longer encrypted and all previous data keys are readable by anyone. All data on the store is effectively readable.

When changing from plaintext to encryption, it takes some time for all data to be rewritten and encrypted.

### Recommendations

There are a number of considerations to keep in mind when running with encryption.

#### Deployment configuration

To prevent key leakage, production deployments should:

* Use encrypted swap, or disable swap entirely.
* Disable core files.

CockroachDB attempts to disable core files at startup when encryption is requested, but it may fail.

#### Key handling

Key management is the most dangerous aspect of encryption. The following rules should be kept in mind:

* Make sure that only the UNIX user running the `cockroach` process has access to the keys.
* Do not store the keys on the same partition/drive as the CockroachDB data. It is best to load keys at run time from a separate system (e.g., [Keywhiz](https://square.github.io/keywhiz/), Vault).
* Rotate store keys frequently (every few weeks to months).
* Keep the data key rotation period low (default is one week).

#### Other recommendations

A few other recommendations apply for best security practices:

* Do not switch from encrypted to plaintext; this leaks data keys. When plaintext is selected, all previously encrypted data must be considered reachable.
* Do not copy the encrypted files, as the data keys are not easily available.
* If encryption is desired, start a node with it enabled from the first run, without ever running in plaintext.

{{site.data.alerts.callout_danger}}
Note that backups taken with the [`BACKUP`](backup.html) statement **are not encrypted** even if Encryption at Rest is enabled. Encryption at Rest only applies to the CockroachDB node's data on the local disk. If you want encrypted backups, you will need to encrypt your backup files using your preferred encryption method.
{{site.data.alerts.end}}

### Examples

#### Generating store key files

CockroachDB determines which encryption algorithm to use based on the size of the key file. The key file must contain random data making up the key ID (32 bytes) and the actual key (16, 24, or 32 bytes depending on the encryption algorithm).

| Algorithm | Key size | Key file size |
|-|-|-|
| AES-128 | 128 bits (16 bytes) | 48 bytes |
| AES-192 | 192 bits (24 bytes) | 56 bytes |
| AES-256 | 256 bits (32 bytes) | 64 bytes |

Generating a key file can be done using the `cockroach` CLI:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach gen encryption-key -s 128 /path/to/my/aes-128.key
~~~

Or the equivalent [openssl](https://www.openssl.org/docs/man1.1.1/man1/openssl.html) CLI command:

{% include copy-clipboard.html %}
~~~ shell
$ openssl rand -out /path/to/my/aes-128.key 48
~~~

#### Starting a node with encryption

Encryption at Rest is configured at node start time using the `--enterprise-encryption` command line flag. The flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store.

The flag takes the form: `--enterprise-encryption=path=<store path>,key=<key file>,old-key=<old key file>,rotation-period=<period>`.

The allowed components in the flag are:

| Component | Requirement | Description |
|-|-|-|
| `path` | Required | Path of the store to apply encryption to.
| -| `key` | Required | Path to the key file to encrypt data with, or `plain` for plaintext. | -| `old-key` | Required | Path to the key file the data is encrypted with, or `plain` for plaintext. | -| `rotation-period` | Optional | How often data keys should be automatically rotated. Default: one week. | - -The `key` and `old-key` components must **always** be specified. They allow for transitions between -encryption algorithms, and between plaintext and encrypted. - -Starting a node for the first time using AES-128 encryption can be done using: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=/path/to/my/aes-128.key,old-key=plain -~~~ - -{{site.data.alerts.callout_danger}} -Once specified for a given store, the `--enterprise-encryption` flag must always be present. -{{site.data.alerts.end}} - -#### Checking encryption status - -Encryption status can be seen on the node's stores report, reachable through: `http(s)://nodeaddress:8080/#/reports/stores/local` (or replace `local` with the node ID). For example, if you are running a [local cluster](secure-a-cluster.html), you can see the node's stores report at `https://localhost:8888/#/reports/stores/local`. - -The report shows encryption status for all stores on the selected node, including: - -* Encryption algorithm. -* Active store key information. -* Active data key information. -* The fraction of files/bytes encrypted using the active data key. - -CockroachDB relies on RocksDB compactions to write new files using the latest encryption key. It may take several days for all files to be replaced. Some files are only rewritten at startup, and some keep older copies around, requiring multiple restarts. You can force RocksDB compaction with the `cockroach debug compact` command (the node must first be [stopped](stop-a-node.html)). - -Information about keys is written to [the logs](debug-and-error-logs.html), including: - -* Active/old key information at startup. -* New key information after data key rotation. - -Alternatively, you can use the [`cockroach debug encryption-active-key`](debug-encryption-active-key.html) command to view information about a store's encryption algorithm and store key. - -#### Changing encryption algorithm or keys - -Encryption type and keys can be changed at any time by restarting the node. -To change keys or encryption type, the `key` component of the `--enterprise-encryption` flag is set to the new key, -while the key previously used must be specified in the `old-key` component. - -For example, we can switch from AES-128 to AES-256 using: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --store=cockroach-data --enterprise-encryption=path=cockroach-data,key=/path/to/my/aes-256.key,old-key=/path/to/my/aes-128.key -~~~ - -Upon starting, the node will read the existing data keys using the old encryption key (`aes-128.key`), then rewrite -the data keys using the new key (`aes-256.key`). A new data key will be generated to match the desired AES-256 algorithm. - -To check that the new key is active, use the stores report page in the Admin UI to [check the encryption status](#checking-encryption-status). - -To disable encryption, specify `key=plain`. The data keys will be stored in plaintext and new data will not be encrypted. - -To rotate keys, specify `key=/path/to/my/new-aes-128.key` and `old-key=/path/to/my/old-aes-128.key`. The data keys -will be decrypted using the old key and then encrypted using the new key. 
A new data key will also be generated. - -## Encryption caveats - -### Unencrypted backups - -Backups taken with the `BACKUP` statement are not encrypted even if Encryption at Rest is enabled. Encryption at Rest only applies to the CockroachDB node's data on the local disk. If you want encrypted backups, you will need to encrypt your backup files using your preferred encryption method. - -A workaround for the issue is to use a cloud storage provider that is configured to transparently encrypt your data (e.g., AWS S3 default encryption). - -### Higher CPU utilization - -Enabling Encryption at Rest might result in a higher CPU utilization. We estimate a 5-10% increase in CPU utilization. - -### Encryption for touchpoints with other services - -- S3 backup encryption -- Encrypted comms with Kafka - - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](secure-a-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/enterprise-licensing.md b/src/current/v19.1/enterprise-licensing.md deleted file mode 100644 index 73742b43234..00000000000 --- a/src/current/v19.1/enterprise-licensing.md +++ /dev/null @@ -1,93 +0,0 @@ ---- -title: Enterprise Features -summary: Request and set trial and enterprise license keys for CockroachDB -toc: true ---- - -CockroachDB distributes a single binary that contains both core and [enterprise features](https://www.cockroachlabs.com/pricing/). You can use core features without any license key. However, to use the enterprise features, you need either a trial or an enterprise license key. - -This page lists enterprise features, and shows you how to obtain and set trial and enterprise license keys for CockroachDB. - -## Enterprise features - -{% include {{ page.version.version }}/misc/enterprise-features.md %} - -## Types of licenses - -Type | Description --------------|------------ -**Trial License** | A trial license enables you to try out CockroachDB enterprise features for 30 days for free. -**Enterprise License** | A paid enterprise license enables you to use CockroachDB enterprise features for longer periods (one year or more). - -## Obtain a license - -To obtain a trial license key, fill out [the registration form](https://www.cockroachlabs.com/get-cockroachdb/enterprise/) and receive your trial license key via email within a few minutes. - -To upgrade to an enterprise license, contact Sales. - -## Set a license - -As the CockroachDB `root` user, open the [built-in SQL shell](use-the-built-in-sql-client.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode. 
Then use the `SET CLUSTER SETTING` command to set the name of your organization and the license key:

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure
~~~

{% include copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING cluster.organization = 'Acme Company';
~~~

{% include copy-clipboard.html %}
~~~ sql
> SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
~~~

## Verify a license

To verify a license, open the [built-in SQL shell](use-the-built-in-sql-client.html) and use the `SHOW CLUSTER SETTING` command to check the organization name and license key:

{% include copy-clipboard.html %}
~~~ sql
> SHOW CLUSTER SETTING cluster.organization;
~~~
~~~
  cluster.organization
+----------------------+
  Acme Company
(1 row)
~~~

{% include copy-clipboard.html %}
~~~ sql
> SHOW CLUSTER SETTING enterprise.license;
~~~
~~~
             enterprise.license
+-------------------------------------------+
  crl-0-ChB1x...
(1 row)
~~~

The license setting is also logged in `cockroach.log` on the node where the command is run:

{% include copy-clipboard.html %}
~~~ shell
$ cat cockroach.log | grep license
~~~
~~~
I171116 18:11:48.279604 1514 sql/event_log.go:102 [client=[::1]:56357,user=root,n1] Event: "set_cluster_setting", target: 0, info: {SettingName:enterprise.license Value:xxxxxxxxxxxx User:root}
~~~

## Renew an expired license

After your license expires, the enterprise features stop working, but your production setup is unaffected. For example, the backup and restore features would not work until the license is renewed, but you would be able to continue using all other features of CockroachDB without interruption.

To renew an expired license, contact Sales and then [set](enterprise-licensing.html#set-a-license) the new license.

## See also

- [`SET CLUSTER SETTING`](set-cluster-setting.html)
- [`SHOW CLUSTER SETTING`](show-cluster-setting.html)
- [Enterprise Trial –– Get Started](get-started-with-enterprise-trial.html)

diff --git a/src/current/v19.1/experimental-audit.md b/src/current/v19.1/experimental-audit.md
deleted file mode 100644
index 2b13b2cc478..00000000000
--- a/src/current/v19.1/experimental-audit.md
+++ /dev/null
@@ -1,131 +0,0 @@
---
title: EXPERIMENTAL_AUDIT
summary: Use the EXPERIMENTAL_AUDIT subcommand to turn SQL audit logging on or off for a table.
toc: true
---

`EXPERIMENTAL_AUDIT` is a subcommand of [`ALTER TABLE`](alter-table.html) that is used to turn [SQL audit logging](sql-audit-logging.html) on or off for a table.

The audit logs contain detailed information about queries being executed against your system, including:

- Full text of the query (which may include personally identifiable information (PII))
- Date/Time
- Client address
- Application name

For a detailed description of exactly what is logged, see the [Audit Log File Format](#audit-log-file-format) section below.

{% include {{ page.version.version }}/misc/experimental-warning.md %}

{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %}

## Synopsis

      -{% include {{ page.version.version }}/sql/diagrams/experimental_audit.html %} -
      - -## Required privileges - -Only members of the `admin` role can enable audit logs on a table. By default, the `root` user belongs to the `admin` role. - -## Parameters - - Parameter | Description ---------------+---------------------------------------------------------- - `table_name` | The name of the table you want to create audit logs for. - `READ` | Log all table reads to the audit log file. - `WRITE` | Log all table writes to the audit log file. - `OFF` | Turn off audit logging. - -{{site.data.alerts.callout_info}} -As of version 2.0, this command logs all reads and writes, and both the READ and WRITE parameters are required (as shown in the examples below). In a future release, this should change to allow logging only reads, only writes, or both. -{{site.data.alerts.end}} - -## Audit log file format - -The audit log file format is as shown below. The numbers above each column are not part of the format; they correspond to the descriptions that follow. - -~~~ -[1] [2] [3] [4] [5a] [5b] [5c] [6] [7a] [7b] [7c] [7d] [7e] [7f] [7g] [7h] [7i] -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503, user=root, n1] 13 exec "cockroach" {"ab"[53]:READ} "SELECT nonexistent FROM ab" {} 0.123 12 ERROR 0 -~~~ - -1. Date -2. Time (in UTC) -3. Goroutine ID - this column is used for troubleshooting CockroachDB and may change its meaning at any time -4. Where the log line was generated -5. Logging tags - - a. Client address - - b. Username - - c. Node ID -6. Log entry counter -7. Log message: - - a. Label indicating where the data was generated (useful for troubleshooting) - - b. Current value of the [`application_name`](set-vars.html) session setting - - c. Logging trigger: - - The list of triggering tables and access modes for audit logs, since only certain (read/write) activities are added to the audit log - - d. Full text of the query (Note: May contain PII) - - e. Placeholder values, if any - - f. Query execution time (in milliseconds) - - g. Number of rows produced (e.g., for `SELECT`) or processed (e.g., for `INSERT` or `UPDATE`). - - h. Status of the query - - `OK` for success - - `ERROR` otherwise - - i. Number of times the statement was [retried automatically](transactions.html#automatic-retries) by the server so far. - -## Audit log file storage location - -By default, audit logs are stored in the same directory as the other logs generated by CockroachDB. - -To store the audit log files in a specific directory, pass the `--sql-audit-dir` flag to [`cockroach start`](start-a-node.html). - -{{site.data.alerts.callout_success}} -If your deployment requires particular lifecycle and access policies for audit log files, point `--sql-audit-dir` at a directory that has permissions set so that only CockroachDB can create/delete files. -{{site.data.alerts.end}} - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Turn on audit logging - -Let's say you have a `customers` table that contains personally identifiable information (PII). 
To turn on audit logs for that table, run the following command: - -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET READ WRITE; -~~~ - -Now, every access of customer data is added to the audit log with a line that looks like the following: - -~~~ -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503,user=root,n1] 13 exec "cockroach" {"customers"[53]:READ} "SELECT * FROM customers" {} 123.45 12 OK -I180211 07:30:48.832004 317 sql/exec_log.go:90 [client=127.0.0.1:62503,user=root,n1] 13 exec "cockroach" {"customers"[53]:READ} "SELECT nonexistent FROM customers" {} 0.123 12 ERROR -~~~ - -To turn on auditing for more than one table, issue a separate `ALTER` statement for each table. - -For a description of the log file format, see the [Audit Log File Format](#audit-log-file-format) section. - -{{site.data.alerts.callout_success}} -For a more detailed example, see [SQL Audit Logging](sql-audit-logging.html). -{{site.data.alerts.end}} - -### Turn off audit logging - -To turn off logging, issue the following command: - -{% include copy-clipboard.html %} -~~~ sql -ALTER TABLE customers EXPERIMENTAL_AUDIT SET OFF; -~~~ - -## See also - -- [SQL Audit Logging](sql-audit-logging.html) -- [`ALTER TABLE`](alter-table.html) -- [`cockroach start` logging flags](start-a-node.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/experimental-features.md b/src/current/v19.1/experimental-features.md deleted file mode 100644 index 4111679a533..00000000000 --- a/src/current/v19.1/experimental-features.md +++ /dev/null @@ -1,149 +0,0 @@ ---- -title: Experimental Features -summary: Learn about the experimental features available in CockroachDB -toc: true ---- - -This page lists the experimental features that are available in CockroachDB v19.1. - -{{site.data.alerts.callout_danger}} -**This page describes experimental features.** Their interfaces and outputs are subject to change, and there may be bugs. -
      -
If you encounter a bug, please [file an issue](file-an-issue.html).
{{site.data.alerts.end}}

## Session variables

The table below lists the experimental session settings that are available. For a complete list of session variables, see [`SHOW` (session settings)](show-vars.html).

| Variable | Default Value | Description |
|-----|-----|-----|
| `experimental_force_split_at` | `'off'` | Indicates whether checks to prevent incorrect usage of [`ALTER TABLE ... SPLIT AT`](split-at.html) should be skipped. |
| `experimental_enable_zigzag_join` | `'off'` | Indicates whether the [cost-based optimizer](cost-based-optimizer.html) will plan certain queries using a zig-zag merge join algorithm, which searches for the desired intersection by jumping back and forth between the indexes based on the fact that they share a sorted order in their key suffix. |
| `experimental_serial_normalization` | `'rowid'` | If set to `'virtual_sequence'`, make the [`SERIAL`](serial.html) pseudo-type optionally auto-create a sequence for [better compatibility with Hibernate sequences](https://forum.cockroachlabs.com/t/hibernate-sequence-generator-returns-negative-number-and-ignore-unique-rowid/). |

## SQL statements

### Keep SQL audit logs

Log queries against a table to a file. For more information, see [`ALTER TABLE ... EXPERIMENTAL_AUDIT`](experimental-audit.html).

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE t EXPERIMENTAL_AUDIT SET READ WRITE;
~~~

### Relocate leases and replicas

You have the following options for controlling lease and replica location:

1. Relocate leases and replicas using `EXPERIMENTAL_RELOCATE`
2. Relocate just leases using `EXPERIMENTAL_RELOCATE LEASE`

For example, to distribute leases and ranges for N primary keys across N stores in the cluster, run a statement with the following structure:

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE t EXPERIMENTAL_RELOCATE SELECT ARRAY[<store id 1>, <store id 2>, ..., <store id N>], <primary key col 1>, <primary key col 2>, ..., <primary key col N>;
~~~

To relocate just the lease without moving the replicas, run a statement like the one shown below, which moves the lease for the range containing primary key 'foo' to store 1.

{% include copy-clipboard.html %}
~~~ sql
> ALTER TABLE t EXPERIMENTAL_RELOCATE LEASE SELECT 1, 'foo';
~~~

### Show table fingerprints

Table fingerprints are used to compute an identification string of an entire table, for the purpose of gauging whether two tables have the same data. This is useful, for example, when restoring a table from backup.

Example:

{% include copy-clipboard.html %}
~~~ sql
> SHOW EXPERIMENTAL_FINGERPRINTS FROM TABLE t;
~~~

~~~
  index_name |     fingerprint
-------------+---------------------
  primary    | 1999042440040364641
(1 row)
~~~

### Show a table's ranges

Show the ranges that make up a table or index. For more information, see [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html).

{% include copy-clipboard.html %}
~~~ sql
SHOW EXPERIMENTAL_RANGES FROM TABLE t;
~~~

### Turn on KV event tracing

Use session tracing (via [`SHOW TRACE FOR SESSION`](show-trace.html)) to report the replicas of all KV events that occur during its execution.
- -Example: - -{% include copy-clipboard.html %} -~~~ sql -> SET tracing = on; -> SELECT * from t; -> SET tracing = off; -> SHOW EXPERIMENTAL_REPLICA TRACE FOR SESSION; -~~~ - -~~~ - timestamp | node_id | store_id | replica_id -----------------------------------+---------+----------+------------ - 2018-10-18 15:50:13.345879+00:00 | 3 | 3 | 7 - 2018-10-18 15:50:20.628383+00:00 | 2 | 2 | 26 -~~~ - -### Check for constraint violations with `SCRUB` - -Checks the consistency of [`UNIQUE`](unique.html) indexes, [`CHECK`](check.html) constraints, and more. Partially implemented; see [cockroachdb/cockroach#10425](https://github.com/cockroachdb/cockroach/issues/10425) for details. - -{{site.data.alerts.callout_info}} -This example uses the `users` table from our open-source, fictional peer-to-peer ride-sharing application,[MovR](https://github.com/cockroachdb/movr). -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> EXPERIMENTAL SCRUB table movr.users; -~~~ - -~~~ - job_uuid | error_type | database | table | primary_key | timestamp | repaired | details -----------+--------------------------+----------+-------+----------------------------------------------------------+---------------------------+----------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - | index_key_decoding_error | movr | users | ('boston','0009eeb5-d779-4bf8-b1bd-8566533b105c') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'06484 Christine Villages\\nGrantport, TN 01572'", "city": "'boston'", "credit_card": "'4634253150884'", "id": "'0009eeb5-d779-4bf8-b1bd-8566533b105c'", "name": "'Jessica Webb'"}} - | index_key_decoding_error | movr | users | ('los angeles','0001252c-fc16-4006-b6dc-c6b1a0fd1f5b') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'91309 Warner Springs\\nLake Danielmouth, PR 33400'", "city": "'los angeles'", "credit_card": "'3584736360686445'", "id": "'0001252c-fc16-4006-b6dc-c6b1a0fd1f5b'", "name": "'Rebecca Gibson'"}} - | index_key_decoding_error | movr | users | ('new york','000169a5-e337-4441-b664-dae63e682980') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'0787 Christopher Highway Apt. 363\\nHamptonmouth, TX 91864-2620'", "city": "'new york'", "credit_card": "'4578562547256688'", "id": "'000169a5-e337-4441-b664-dae63e682980'", "name": "'Christopher Johnson'"}} - | index_key_decoding_error | movr | users | ('paris','00089fc4-e5b1-48f6-9f0b-409905f228c4') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. 
IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'46735 Martin Summit\\nMichaelview, OH 10906-5889'", "city": "'paris'", "credit_card": "'5102207609888778'", "id": "'00089fc4-e5b1-48f6-9f0b-409905f228c4'", "name": "'Nicole Fuller'"}} - | index_key_decoding_error | movr | users | ('rome','000209fc-69a1-4dd5-8053-3b5e5769876d') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'473 Barrera Vista Apt. 890\\nYeseniaburgh, CO 78087'", "city": "'rome'", "credit_card": "'3534605564661093'", "id": "'000209fc-69a1-4dd5-8053-3b5e5769876d'", "name": "'Sheryl Shea'"}} - | index_key_decoding_error | movr | users | ('san francisco','00058767-1e83-4e18-999f-13b5a74d7225') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'5664 Acevedo Drive Suite 829\\nHernandezview, MI 13516'", "city": "'san francisco'", "credit_card": "'376185496850202'", "id": "'00058767-1e83-4e18-999f-13b5a74d7225'", "name": "'Kevin Turner'"}} - | index_key_decoding_error | movr | users | ('seattle','0002e904-1256-4528-8b5f-abad16e695ff') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'81499 Samuel Crescent Suite 631\\nLake Christopherborough, PR 50401'", "city": "'seattle'", "credit_card": "'38743493725890'", "id": "'0002e904-1256-4528-8b5f-abad16e695ff'", "name": "'Mark Williams'"}} - | index_key_decoding_error | movr | users | ('washington dc','00007caf-2014-4696-85b0-840e7d8b6db9') | 2018-10-18 16:00:38.65916 | f | {"error_message": "key ordering did not match datum ordering. IndexDescriptor=ASC", "index_name": "primary", "row_data": {"address": "e'4578 Holder Trafficway\\nReynoldsside, IL 23520-7418'", "city": "'washington dc'", "credit_card": "'30454993082943'", "id": "'00007caf-2014-4696-85b0-840e7d8b6db9'", "name": "'Marie Miller'"}} -(8 rows) -~~~ - -## Functions and Operators - -The table below lists the experimental SQL functions and operators available in CockroachDB. For more information, see each function's documentation at [Functions and Operators](functions-and-operators.html). - -| Function | Description | -|----------------------------------------------------------------------------------+-------------------------------------------------| -| [`experimental_strftime`](functions-and-operators.html#date-and-time-functions) | Format time using standard `strftime` notation. | -| [`experimental_strptime`](functions-and-operators.html#date-and-time-functions) | Format time using standard `strptime` notation. | -| [`experimental_uuid_v4()`](functions-and-operators.html#id-generation-functions) | Return a UUID. | - -## See Also - -- [`SHOW` (session)](show-vars.html) -- [Functions and Operators](functions-and-operators.html) -- [`ALTER TABLE ... 
EXPERIMENTAL_AUDIT`](experimental-audit.html) -- [`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) -- [`SHOW TRACE FOR SESSION`](show-trace.html) diff --git a/src/current/v19.1/explain-analyze.md b/src/current/v19.1/explain-analyze.md deleted file mode 100644 index f87b9579b9a..00000000000 --- a/src/current/v19.1/explain-analyze.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: EXPLAIN ANALYZE -summary: The EXPLAIN ANALYZE statement executes a query and generates a physical query plan with execution statistics. -toc: true ---- - -The `EXPLAIN ANALYZE` [statement](sql-statements.html) **executes a SQL query** and generates a URL for a physical query plan with execution statistics. Query plans provide information around SQL execution, which can be used to troubleshoot slow queries by figuring out where time is being spent, how long a processor (i.e., a component that takes streams of input rows and processes them according to a specification) is not doing work, etc. For more information about distributed SQL queries, see the [DistSQL section of our SQL Layer Architecture docs](architecture/sql-layer.html#distsql). - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/sql/physical-plan-url.md %} -{{site.data.alerts.end}} - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/explain_analyze.html %}
      - -## Required privileges - -The user requires the appropriate [privileges](authorization.html#assign-privileges) for the statement being explained. - -## Parameters - -Parameter | Description --------------------|----------- -`DISTSQL` | _(Default)_ Generate a link to a distributed SQL physical query plan tree. -`preparable_stmt` | The [statement](sql-grammar.html#preparable_stmt) you want details about. All preparable statements are explainable. - -## Success responses - -Successful `EXPLAIN ANALYZE` statements return a table with the following columns: - - Column | Description ---------|------------ -**automatic** | If `true`, the query is distributed. For more information about distributed SQL queries, see the [DistSQL section of our SQL Layer Architecture docs](architecture/sql-layer.html#distsql). -**url** | The URL generated for a physical query plan that provides high level information about how a query will be executed. For details about reading the physical query plan, see [DistSQL Plan Viewer](#distsql-plan-viewer).

      {% include {{ page.version.version }}/sql/physical-plan-url.md %} - -#### DistSQL Plan Viewer - -The DistSQL Plan Viewer displays the physical query plan, as well as execution statistics: - -Field | Description -------+------------ -<ProcessorName>/<n> | The processor and processor ID used to read data into the SQL execution engine.

      A processor is a component that takes streams of input rows, processes them according to a specification, and outputs one stream of rows. For example, an "aggregator" aggregates input rows. -<index>@<table> | The index used. -Out | The output columns. -@<n> | The index of the column relative to the input. -Render | The stage that renders the output. -unordered / ordered | _(Blue box)_ A synchronizer that takes one or more output streams and merges them to be consumable by a processor. An ordered synchronizer is used to merge ordered streams and keeps the rows in sorted order. -left(@<n>)=right(@<n>) | The equality columns used in the join. -rows read | The number of rows read by the processor. -stall time | How long the processor spent not doing work. This is aggregated into the stall time numbers as the query progresses down the tree (i.e., stall time is added up and overlaps with previous time). -stored side | The smaller table that was stored as an in-memory hash table. -max memory used | How much memory (if any) is used to buffer rows. -by hash | _(Orange box)_ The router, which is a component that takes one stream of input rows and sends them to a node according to a routing algorithm.

      For example, a hash router hashes columns of a row and sends the results to the node that is aggregating the result rows. -max disk used | How much disk (if any) is used to buffer rows. Routers and processors will spill to disk buffering if there is not enough memory to buffer the rows. -rows routed | How many rows were sent by routers, which can be used to understand network usage. -bytes sent | The number of actual bytes sent (i.e., encoding of the rows). This is only relevant when doing network communication. -Response | The response back to the client. - -{{site.data.alerts.callout_info}} -Any or all of the above fields may display for a given query plan. -{{site.data.alerts.end}} - -## Example - -`EXPLAIN ANALYZE` executes a query and generates a link to a physical query plan, with execution statistics. - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN ANALYZE SELECT l_shipmode, AVG(l_extendedprice) FROM lineitem GROUP BY l_shipmode; -~~~ - -~~~ - automatic | url -+-----------+----------------------------------------------+ - true | https://cockroachdb.github.io/distsqlplan... -~~~ - -To view the [DistSQL Plan Viewer](#distsql-plan-viewer), point your browser to the URL provided: - -EXPLAIN ANALYZE (DISTSQL) - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html) -- [`CREATE DATABASE`](create-database.html) -- [`DROP DATABASE`](drop-database.html) -- [`EXPLAIN`](explain.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- [`IMPORT`](import.html) -- [Indexes](indexes.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) -- [Selection Queries](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v19.1/explain.md b/src/current/v19.1/explain.md deleted file mode 100644 index 8af36e15ec4..00000000000 --- a/src/current/v19.1/explain.md +++ /dev/null @@ -1,444 +0,0 @@ ---- -title: EXPLAIN -summary: The EXPLAIN statement provides information you can use to optimize SQL queries. -toc: true ---- - -The `EXPLAIN` [statement](sql-statements.html) returns CockroachDB's query plan for an [explainable statement](sql-grammar.html#preparable_stmt). You can then use this information to optimize the query. - -{{site.data.alerts.callout_success}} -To actually execute a statement and return a physical query plan with execution statistics, use [`EXPLAIN ANALYZE`](explain-analyze.html). -{{site.data.alerts.end}} - -## Query optimization - -Using `EXPLAIN`'s output, you can optimize your queries by taking the following points into consideration: - -- Queries with fewer levels execute more quickly. Restructuring queries to require fewer levels of processing will generally improve performance. - -- Avoid scanning an entire table, which is the slowest way to access data. You can avoid this by [creating indexes](indexes.html) that contain at least one of the columns that the query is filtering in its `WHERE` clause. 
- -You can find out if your queries are performing entire table scans by using `EXPLAIN` to see which: - -- Indexes the query uses; shown as the **Description** value of rows with the **Field** value of `table` - -- Key values in the index are being scanned; shown as the **Description** value of rows with the **Field** value of `spans` - -For more information, see [Find the Indexes and Key Ranges a Query Uses](#find-the-indexes-and-key-ranges-a-query-uses). - -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/explain.html %}
      - -## Required privileges - -The user requires the appropriate [privileges](authorization.html#assign-privileges) for the statement being explained. - -## Parameters - - Parameter | Description ---------------------+------------ - `VERBOSE` | Show as much information as possible about the query plan. - `TYPES` | Include the intermediate [data types](data-types.html) CockroachDB chooses to evaluate intermediate SQL expressions. - `OPT` | Display a query plan tree if the query will be run with the [cost-based optimizer](cost-based-optimizer.html). If it returns an "unsupported statement" error, the query will not be run with the cost-based optimizer and will be run with the heuristic planner.

      New in v19.1: To include cost details used by the optimizer in planning the query, use `OPT, VERBOSE`. To include cost and type details, use `OPT, TYPES`. To include all details used by the optimizer, including statistics, use `OPT, ENV`. - `DISTSQL` | Generate a URL to a [distributed SQL physical query plan tree](explain-analyze.html#distsql-plan-viewer).

      {% include {{ page.version.version }}/sql/physical-plan-url.md %} - `preparable_stmt` | The [statement](sql-grammar.html#preparable_stmt) you want details about. All preparable statements are explainable. - -{{site.data.alerts.callout_danger}} -`EXPLAIN` also includes other modes besides query plans that are useful only to CockroachDB developers, which are not documented here. -{{site.data.alerts.end}} - -## Success responses - -Successful `EXPLAIN` statements return tables with the following columns: - - Column | Description ------------|------------- -**Tree** | A tree representation showing the hierarchy of the query plan. -**Field** | The name of a parameter relevant to the query plan node immediately above. -**Description** | Additional information for the parameter in **Field**. -**Columns** | The columns provided to the processes at lower levels of the hierarchy. Included in `TYPES` and `VERBOSE` output. -**Ordering** | The order in which results are presented to the processes at each level of the hierarchy, as well as other properties of the result set at each level. Included in `TYPES` and `VERBOSE` output. - -## Examples - -### Default query plans - - -By default, `EXPLAIN` includes the least detail about the query plan but can be useful to find out which indexes and index key ranges are used by a query. For example: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - tree | field | description -+-----------+--------+------------------+ - sort | | - │ | order | +season - └── scan | | - | table | episodes@primary - | spans | ALL - | filter | season > 3 -(6 rows) -~~~ - -The `tree` column of the output shows the tree structure of the query plan, in this case a `sort` and then a `scan`. - -The `field` and `description` columns describe a set of properties specific to an operation listed in the `tree` column (in this case, `sort` or `scan`): - -- `order`:`+season` -
      The sort will be ordered ascending on the `season` column. -- `table`:`episodes@primary` -
      The table is scanned on the `primary` index. -- `spans`:`ALL` -
      The table is scanned on all key ranges of the `primary` index (i.e., a full table scan). For more information on indexes and key ranges, see the [example](#find-the-indexes-and-key-ranges-a-query-uses) below. -- `filter`: `season > 3` -
      The scan filters on the `season` column. - -### `VERBOSE` option - -The `VERBOSE` option: - -- Includes SQL expressions that are involved in each processing stage, providing more granular detail about which portion of your query is represented at each level. -- Includes detail about which columns are being used by each level, as well as properties of the result set on that level. - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (VERBOSE) SELECT * FROM quotes AS q \ -JOIN episodes AS e ON q.episode = e.id \ -WHERE e.season = '1' \ -ORDER BY e.stardate ASC; -~~~ - -~~~ - tree | field | description | columns | ordering -+----------------+--------------------+------------------+--------------------------------------------------------------------------+-----------+ - sort | | | (quote, characters, stardate, episode, id, season, num, title, stardate) | +stardate - │ | order | +stardate | | - └── hash-join | | | (quote, characters, stardate, episode, id, season, num, title, stardate) | - │ | type | inner | | - │ | equality | (episode) = (id) | | - │ | right cols are key | | | - ├── scan | | | (quote, characters, stardate, episode) | - │ | table | quotes@primary | | - │ | spans | ALL | | - └── scan | | | (id, season, num, title, stardate) | - | table | episodes@primary | | - | spans | ALL | | - | filter | season = 1 | | -(13 rows) -~~~ - -### `TYPES` option - -The `TYPES` mode includes the types of the values used in the query plan. It also includes the SQL expressions that were involved in each processing stage, and includes the columns used by each level. - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (TYPES) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - tree | field | description | columns | ordering -+-----------+--------+----------------------------------+---------------------------------------------------------------+----------+ - sort | | | (id int, season int, num int, title string, stardate decimal) | +season - │ | order | +season | | - └── scan | | | (id int, season int, num int, title string, stardate decimal) | - | table | episodes@primary | | - | spans | ALL | | - | filter | ((season)[int] > (3)[int])[bool] | | -(6 rows) -~~~ - -### `OPT` option - -The `OPT` option displays a query plan tree, if the query will be run with the [cost-based optimizer](cost-based-optimizer.html). If it returns an "unsupported statement" error, the query will not be run with the cost-based optimizer and will be run with the legacy heuristic planner. 
- -For example, the following query returns the query plan tree, which means that it will be run with the cost-based optimizer: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - text -+---------------------------+ - sort - └── select - ├── scan episodes - └── filters - └── season > 3 -(5 rows) -~~~ - - - -New in v19.1: To include cost details used by the optimizer in planning the query, use `OPT, VERBOSE`: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, VERBOSE) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - text -+----------------------------------------------------------------------------------------------------------+ - sort - ├── columns: id:1 season:2 num:3 title:4 stardate:5 - ├── stats: [rows=26.3333333, distinct(1)=26.3333333, null(1)=0, distinct(2)=2.99993081, null(2)=0] - ├── cost: 90.7319109 - ├── key: (1) - ├── fd: (1)-->(2-5) - ├── ordering: +2 - ├── prune: (1,3-5) - └── select - ├── columns: id:1 season:2 num:3 title:4 stardate:5 - ├── stats: [rows=26.3333333, distinct(1)=26.3333333, null(1)=0, distinct(2)=2.99993081, null(2)=0] - ├── cost: 87.71 - ├── key: (1) - ├── fd: (1)-->(2-5) - ├── prune: (1,3-5) - ├── scan episodes - │ ├── columns: id:1 season:2 num:3 title:4 stardate:5 - │ ├── stats: [rows=79, distinct(1)=79, null(1)=0, distinct(2)=3, null(2)=0] - │ ├── cost: 86.91 - │ ├── key: (1) - │ ├── fd: (1)-->(2-5) - │ └── prune: (1-5) - └── filters - └── season > 3 [outer=(2), constraints=(/2: [/4 - ]; tight)] -(24 rows) -~~~ - - - -New in v19.1: To include cost and type details, use `OPT, TYPES`: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, TYPES) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - text -+----------------------------------------------------------------------------------------------------------+ - sort - ├── columns: id:1(int!null) season:2(int!null) num:3(int) title:4(string) stardate:5(decimal) - ├── stats: [rows=26.3333333, distinct(1)=26.3333333, null(1)=0, distinct(2)=2.99993081, null(2)=0] - ├── cost: 90.7319109 - ├── key: (1) - ├── fd: (1)-->(2-5) - ├── ordering: +2 - ├── prune: (1,3-5) - └── select - ├── columns: id:1(int!null) season:2(int!null) num:3(int) title:4(string) stardate:5(decimal) - ├── stats: [rows=26.3333333, distinct(1)=26.3333333, null(1)=0, distinct(2)=2.99993081, null(2)=0] - ├── cost: 87.71 - ├── key: (1) - ├── fd: (1)-->(2-5) - ├── prune: (1,3-5) - ├── scan episodes - │ ├── columns: id:1(int!null) season:2(int) num:3(int) title:4(string) stardate:5(decimal) - │ ├── stats: [rows=79, distinct(1)=79, null(1)=0, distinct(2)=3, null(2)=0] - │ ├── cost: 86.91 - │ ├── key: (1) - │ ├── fd: (1)-->(2-5) - │ └── prune: (1-5) - └── filters - └── gt [type=bool, outer=(2), constraints=(/2: [/4 - ]; tight)] - ├── variable: season [type=int] - └── const: 3 [type=int] -(26 rows) -~~~ - - - -New in v19.1: To include all details used by the optimizer, including statistics, use `OPT, ENV`: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (OPT, ENV) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; -~~~ - -~~~ - text -+-------------------------------------------------------------------------------------------------------------------------------+ - Version: CockroachDB CCL v19.1.0-beta.20190318-377-gc45b9a400f (x86_64-apple-darwin17.7.0, built 2019/03/26 19:46:42, go1.11) - - CREATE TABLE episodes ( - id INT8 NOT NULL, - season INT8 NULL, - num INT8 NULL, - title 
STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - FAMILY "primary" (id, season, num, title, stardate) - ); - - ALTER TABLE startrek.public.episodes INJECT STATISTICS '[ - { - "columns": [ - "id" - ], - "created_at": "2019-03-26 19:49:53.18699+00:00", - "distinct_count": 79, - "histo_col_type": "", - "name": "__auto__", - "null_count": 0, - "row_count": 79 - }, - { - "columns": [ - "season" - ], - "created_at": "2019-03-26 19:49:53.18699+00:00", - "distinct_count": 3, - "histo_col_type": "", - "name": "__auto__", - "null_count": 0, - "row_count": 79 - }, - { - "columns": [ - "num" - ], - "created_at": "2019-03-26 19:49:53.18699+00:00", - "distinct_count": 29, - "histo_col_type": "", - "name": "__auto__", - "null_count": 0, - "row_count": 79 - }, - { - "columns": [ - "title" - ], - "created_at": "2019-03-26 19:49:53.18699+00:00", - "distinct_count": 79, - "histo_col_type": "", - "name": "__auto__", - "null_count": 0, - "row_count": 79 - }, - { - "columns": [ - "stardate" - ], - "created_at": "2019-03-26 19:49:53.18699+00:00", - "distinct_count": 75, - "histo_col_type": "", - "name": "__auto__", - "null_count": 4, - "row_count": 79 - } - ]'; - - EXPLAIN (OPT, ENV) SELECT * FROM episodes WHERE season > 3 ORDER BY season ASC; - ---- - sort - └── select - ├── scan episodes - └── filters - └── season > 3 -(77 rows) -~~~ - -### `DISTSQL` option - -The `DISTSQL` option generates a URL for a physical query plan that provides high level information about how a query will be executed. For details about reading the physical query plan, see [DistSQL Plan Viewer](explain-analyze.html#distsql-plan-viewer). For more information about distributed SQL queries, see the [DistSQL section of our SQL Layer Architecture docs](architecture/sql-layer.html#distsql). - -{{site.data.alerts.callout_info}} -{% include {{ page.version.version }}/sql/physical-plan-url.md %} -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN (DISTSQL) SELECT l_shipmode, AVG(l_extendedprice) FROM lineitem GROUP BY l_shipmode; -~~~ - -~~~ - automatic | url ------------+---------------------------------------------- - true | https://cockroachdb.github.io/distsqlplan... -~~~ - -To view the [DistSQL Plan Viewer](explain-analyze.html#distsql-plan-viewer), point your browser to the URL provided: - -EXPLAIN (DISTSQL) - -### Find the indexes and key ranges a query uses - -You can use `EXPLAIN` to understand which indexes and key ranges queries use, which can help you ensure a query isn't performing a full table scan. 
- -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE kv (k INT PRIMARY KEY, v INT); -~~~ - -Because column `v` is not indexed, queries filtering on it alone scan the entire table: - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ - tree | field | description -+------+--------+-----------------------+ - scan | | - | table | kv@primary - | spans | ALL - | filter | (v >= 4) AND (v <= 5) -(4 rows) -~~~ - -If there were an index on `v`, CockroachDB would be able to avoid scanning the entire table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX v ON kv (v); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> EXPLAIN SELECT * FROM kv WHERE v BETWEEN 4 AND 5; -~~~ - -~~~ - tree | field | description -------+-------+------------- - scan | | - | table | kv@v - | spans | /4-/6 -(3 rows) -~~~ - -Now, only part of the index `v` is getting scanned, specifically the key range starting at (and including) 4 and stopping before 6. - -## See also - -- [`ALTER TABLE`](alter-table.html) -- [`ALTER SEQUENCE`](alter-sequence.html) -- [`BACKUP`](backup.html) -- [`CANCEL JOB`](cancel-job.html) -- [`CREATE DATABASE`](create-database.html) -- [`DROP DATABASE`](drop-database.html) -- [`EXECUTE`](sql-grammar.html#execute_stmt) -- [`EXPLAIN ANALYZE`](explain-analyze.html) -- [`IMPORT`](import.html) -- [Indexes](indexes.html) -- [`INSERT`](insert.html) -- [`PAUSE JOB`](pause-job.html) -- [`RESET`](reset-vars.html) -- [`RESTORE`](restore.html) -- [`RESUME JOB`](resume-job.html) -- [`SELECT`](select-clause.html) -- [Selection Queries](selection-queries.html) -- [`SET`](set-vars.html) -- [`SET CLUSTER SETTING`](set-cluster-setting.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`UPDATE`](update.html) -- [`UPSERT`](upsert.html) diff --git a/src/current/v19.1/export.md b/src/current/v19.1/export.md deleted file mode 100644 index ff7895a356f..00000000000 --- a/src/current/v19.1/export.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -title: EXPORT -summary: Export tabular data from a CockroachDB cluster in CSV format. -toc: true ---- - -The `EXPORT` [statement](sql-statements.html) exports tabular data or the results of arbitrary `SELECT` statements to CSV files. - -Using the [CockroachDB distributed execution engine](architecture/sql-layer.html#distsql), `EXPORT` parallelizes CSV creation across all nodes in the cluster, making it possible to quickly get large sets of data out of CockroachDB in a format that can be ingested by downstream systems. If you do not need distributed exports, you can use the [non-enterprise feature to export tabular data in CSV format](#non-distributed-export-using-the-sql-shell). - -{{site.data.alerts.callout_danger}} -This is an [enterprise feature](enterprise-licensing.html). Also, it is in **beta** and is currently undergoing continued testing. Please [file a Github issue](file-an-issue.html) with us if you identify a bug. -{{site.data.alerts.end}} - -## Export file location - -You can use remote cloud storage (Amazon S3, Google Cloud Platform, etc.) to store the exported CSV data. Alternatively, you can use an [HTTP server](create-a-file-server.html) accessible from all nodes. - -For simplicity's sake, it's **strongly recommended** to use cloud/remote storage for the data you want to export. Local files are supported; however, they must be accessible identically from all nodes in the cluster. - -## Cancelling export - -After the export has been initiated, you can cancel it with [`CANCEL QUERY`](cancel-query.html). 
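-
-For example, you can combine the two steps by filtering [`SHOW CLUSTER QUERIES`](show-queries.html) for the export and passing the matching IDs to `CANCEL QUERIES`. This is a sketch; it assumes the export is the only running statement whose text starts with `EXPORT`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CANCEL QUERIES (SELECT query_id FROM [SHOW CLUSTER QUERIES] WHERE query LIKE 'EXPORT%');
-~~~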
- -## Synopsis - -
      {% include {{ page.version.version }}/sql/diagrams/export.html %}
      - -{{site.data.alerts.callout_info}}The EXPORT statement cannot be used within a transaction.{{site.data.alerts.end}} - -## Required privileges - -Only members of the `admin` role can run `EXPORT`. By default, the `root` user belongs to the `admin` role. - -## Parameters - - Parameter | Description ------------|------------- - `file_location` | Specify the URL of the file location where you want to store the exported CSV data. - `WITH kv_option` | Control your export's behavior with [these options](#export-options). - `select_stmt` | Specify the query whose result you want to export to CSV format. - `table_name` | Specify the name of the table you want to export to CSV format. - -### Export file URL - -URLs for the file directory location you want to export to must use the following format: - -{% include {{ page.version.version }}/misc/external-urls.md %} - -You can specify the base directory where you want to store the exported .csv files. CockroachDB will create several files in the specified directory with programmatically generated names (e.g., n1.1.csv, n1.2.csv, n2.1.csv, ...). - -### Export options - -You can control the [`EXPORT`](export.html) process's behavior using any of the following key-value pairs as a `kv_option`. - -#### `delimiter` - -If not using comma as your column delimiter, you can specify another ASCII character as the delimiter. - -
-Field | Value
-------|------
-Required? | No
-Key | `delimiter`
-Value | The ASCII character that delimits columns in your rows
-Example | To use tab-delimited values: `WITH delimiter = e'\t'`
-
-#### `nullas`
-
-Convert SQL *NULL* values so they match the specified string.
-
-Field | Value
-------|------
-Required? | No
-Key | `nullas`
-Value | The string used to represent *NULL* values. To avoid collisions, pick a `nullas` value that does not appear in the exported data.
-Example | To use empty columns as *NULL*: `WITH nullas = ''`
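-
-Options can be combined in a single `WITH` clause. For example, here is a sketch of an export that uses both `delimiter` and `nullas` (the `nodelocal` destination path is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPORT INTO CSV
-    'nodelocal:///exports/customers'
-    WITH delimiter = '|', nullas = ''
-    FROM TABLE bank.customers;
-~~~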
      - -## Examples - -### Export a table - -{% include copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 'azure://acme-co/customer-export-data?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' - WITH delimiter = '|' FROM TABLE bank.customers; -~~~ - -### Export using a `SELECT` statement - -{% include copy-clipboard.html %} -~~~ sql -> EXPORT INTO CSV - 'azure://acme-co/customer-export-data?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' - FROM SELECT * FROM bank.customers WHERE id >= 100; -~~~ - -### Non-distributed export using the SQL shell - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql -e "SELECT * from bank.customers WHERE id>=100;" --format=csv > my.csv -~~~ - -### View a running export - -View running exports by using [`SHOW QUERIES`](show-queries.html): - -{% include copy-clipboard.html %} -~~~ sql -> SHOW QUERIES; -~~~ - -### Cancel a running export - -Use [`SHOW QUERIES`](show-queries.html) to get a running export's `query_id`, which can be used to [cancel the export](cancel-query.html): - -{% include copy-clipboard.html %} -~~~ sql -> CANCEL QUERY '14dacc1f9a781e3d0000000000000001'; -~~~ - -## Known limitation - -`EXPORT` may fail with an error if the SQL statements are incompatible with DistSQL. In that case, use the [non-enterprise feature to export tabular data in CSV format](#non-distributed-export-using-the-sql-shell). - -## See also - -- [Create a File Server](create-a-file-server.html) diff --git a/src/current/v19.1/file-an-issue.md b/src/current/v19.1/file-an-issue.md deleted file mode 100644 index 065725484dd..00000000000 --- a/src/current/v19.1/file-an-issue.md +++ /dev/null @@ -1,66 +0,0 @@ ---- -title: File an Issue -summary: Learn how to file a GitHub issue with CockroachDB. -toc: false ---- - -If you've tried to [troubleshoot](troubleshooting-overview.html) an issue yourself, have [reached out for help](support-resources.html), and are still stumped, you can file an issue in GitHub. - -To file an issue in GitHub, we need the following information: - -1. A summary of the issue. - -2. The steps to reproduce the issue. - -3. The result you expected. - -4. The result that actually occurred. - -5. The first few lines of the log file from each node in the cluster in a timeframe as close as possible to reproducing the issue. On most Unix-based systems running with defaults, you can get this information using the following command: - - {% include copy-clipboard.html %} - ~~~ shell - $ grep -F '[config]' cockroach-data/logs/cockroach.log - ~~~ - {{site.data.alerts.callout_info}}You might need to replace cockroach-data/logs with the location of your logs.{{site.data.alerts.end}} - If the logs are not available, please include the output of `cockroach version` for each node in the cluster. - -### Template - -You can use this as a template for [filing an issue in GitHub](https://github.com/cockroachdb/cockroach/issues/new): - -~~~ - -## Summary - - - -## Steps to reproduce - -1. -2. -3. - -## Expected Result - - - -## Actual Result - - - -## Log files/version - -### Node 1 - - - -### Node 2 - - - -### Node 3 - - - -~~~ diff --git a/src/current/v19.1/float.md b/src/current/v19.1/float.md deleted file mode 100644 index 8a1df115e63..00000000000 --- a/src/current/v19.1/float.md +++ /dev/null @@ -1,106 +0,0 @@ ---- -title: FLOAT -summary: The FLOAT data type stores inexact, floating-point numbers with up to 17 digits in total and at least one digit to the right of the decimal point. 
-toc: true ---- - -CockroachDB supports various inexact, floating-point number [data types](data-types.html) with up to 17 digits of decimal precision. - -They are handled internally using the [standard double-precision (64-bit binary-encoded) IEEE754 format](https://en.wikipedia.org/wiki/IEEE_floating_point). - - -## Names and Aliases - -Name | Aliases ------|-------- -`FLOAT` | None -`REAL` | `FLOAT4` -`DOUBLE PRECISION` | `FLOAT8` - -## Syntax - -A constant value of type `FLOAT` can be entered as a [numeric literal](sql-constants.html#numeric-literals). -For example: `1.414` or `-1234`. - -The special IEEE754 values for positive infinity, negative infinity -and [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) cannot be -entered using numeric literals directly and must be converted using an -[interpreted literal](sql-constants.html#interpreted-literals) or an -[explicit conversion](scalar-expressions.html#explicit-type-coercions) -from a string literal instead. - -The following values are recognized: - - Syntax | Value -----------------------------------------|------------------------------------------------ - `inf`, `infinity`, `+inf`, `+infinity` | +∞ - `-inf`, `-infinity` | -∞ - `nan` | [NaN (Not-a-Number)](https://en.wikipedia.org/wiki/NaN) - -For example: - -- `FLOAT '+Inf'` -- `'-Inf'::FLOAT` -- `CAST('NaN' AS FLOAT)` - -## Size - -A `FLOAT` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE floats (a FLOAT PRIMARY KEY, b REAL, c DOUBLE PRECISION); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM floats; -~~~ - -~~~ -+-------------+------------------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+------------------+-------------+----------------+-----------------------+-------------+ -| a | FLOAT | false | NULL | | {"primary"} | -| b | REAL | true | NULL | | {} | -| c | DOUBLE PRECISION | true | NULL | | {} | -+-------------+------------------+-------------+----------------+-----------------------+-------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO floats VALUES (1.012345678901, 2.01234567890123456789, CAST('+Inf' AS FLOAT)); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM floats; -~~~ - -~~~ -+----------------+--------------------+------+ -| a | b | c | -+----------------+--------------------+------+ -| 1.012345678901 | 2.0123456789012346 | +Inf | -+----------------+--------------------+------+ -(1 row) -# Note that the value in "b" has been limited to 17 digits. -~~~ - -## Supported casting and conversion - -`FLOAT` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Truncates decimal precision and requires values to be between -2^63 and 2^63-1 -`DECIMAL` | Causes an error to be reported if the value is NaN or +/- Inf. 
-`BOOL` | **0** converts to `false`; all other values convert to `true` -`STRING` | -- - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/follower-reads.md b/src/current/v19.1/follower-reads.md deleted file mode 100644 index bb90e260103..00000000000 --- a/src/current/v19.1/follower-reads.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: Follower Reads -summary: To reduce latency for read queries, you can choose to have the closest node serve the request using the follower reads feature. -toc: true ---- - -New in v19.1: To reduce latency for read queries, you can use the follower reads feature, which lets the closest replica serve the read request at the expense of only not guaranteeing that data is up to date. - -{{site.data.alerts.callout_danger}} -The follower reads feature is only available to [enterprise](https://www.cockroachlabs.com/product/cockroachdb/) users. -{{site.data.alerts.end}} - -## What are Follower reads? - -Follower reads are a mechanism to let any replica of a range serve a read request, but are only available for read queries that are sufficiently in the past, i.e., using `AS OF SYSTEM TIME`. Currently, follower reads are available for any read operation at least 48 seconds in the past, though there is active work to reduce that window. - -In widely distributed deployments, using follower reads can reduce the latency of read operations (which can also increase throughput) by letting the replica closest to the gateway serve the request, instead of forcing the gateway to communicate with the leaseholder, which could be geographically distant. - -To make it easier to know when it's safe for your application to make follower reads, we've also included a convenience function (`experimental_follower_read_timestamp()`) that runs your queries at a time as close as possible to the present time. - -## Settings - -### Enable/disable follower reads - -Use [`SET CLUSTER SETTING`](set-cluster-setting.html) to set `kv.closed_timestamp.follower_reads_enabled` to: - -- `true` to enable follower reads _(default)_ -- `false` to disable follower reads - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.closed_timestamp.follower_reads_enabled = false; -~~~ - -#### When to use follower reads - -Follower reads return consistent historical reads; currently a minimum of 48 seconds in the past, though we are actively working on reducing that number. - -As long as your `SELECT` operations can tolerate slightly outdated data, Follower reads can reduce read latencies and increase throughput. - -#### When not to use follower reads - -You should not use follower reads when you need up-to-date data. - -### Make follower read-compatible queries - -Any `SELECT` statement with an `AS OF SYSTEM TIME` value at least 48 seconds in the past can be served by any replica (i.e., can be a Follower Read). - -To simplify this calculation, we've added a convenience function that will always set the `AS OF SYSTEM TIME` value to the minimum required for follower reads, `experimental_follower_read_timestamp()`: - -``` sql -SELECT ... FROM ... AS OF SYSTEM TIME experimental_follower_read_timestamp(); -``` - -### Make follower read-compatible transactions - -You can set the `AS OF SYSTEM TIME` value for all operations in a read-only transaction: - -```sql -BEGIN AS OF SYSTEM TIME experimental_follower_read_timestamp(); - -SAVEPOINT cockroach_restart; - -SELECT ... -SELECT ... 
-
-COMMIT;
-```
-
-## How follower reads work
-
-In CockroachDB's general architecture, all reads are served by a range's [leaseholder](architecture/replication-layer.html#leases), which is a replica elected to coordinate all write operations. Because this node contains information about all of a range's writes, it can also serve reads for the range while still guaranteeing `SERIALIZABLE` isolation. With this architecture, the client might need to communicate with a machine that is far away, creating greater network latencies.
-
-However, if you were to lower the isolation requirements of an operation, it's possible to serve the read from _any_ replica, not only the leaseholder, given that the data can be sufficiently old.
-
-To accomplish this in CockroachDB, we've created a mechanism to let you express that any node can serve the request (`kv.closed_timestamp.follower_reads_enabled`) and that a historical read is sufficient (`AS OF SYSTEM TIME`), given that the argument to `AS OF SYSTEM TIME` is sufficiently in the past (`experimental_follower_read_timestamp()`).
-
-For a more detailed explanation, you can also read the [follower reads RFC](https://github.com/cockroachdb/cockroach/blob/master/docs/RFCS/20180603_follower_reads.md).
-
-### Reading from followers
-
-Each CockroachDB node tracks a property called its "closed timestamp", which means that no new writes can ever be introduced below that timestamp. The closed timestamp advances forward by some target interval behind the current time. If a replica receives a write at a timestamp less than its closed timestamp, it rejects the write.
-
-With follower reads enabled, any replica on a node can serve a read for a key as long as the time at which the operation is performed (i.e., the `AS OF SYSTEM TIME` value) is less than or equal to the node's closed timestamp.
-
-### Determining which node to read from
-
-Every node keeps a record of its latency with all other nodes in the system. When a gateway node in a cluster with follower reads enabled receives a request to read a key with a sufficiently old `AS OF SYSTEM TIME` value, it forwards the request to the closest node that contains a replica of the data––whether it be a follower or the leaseholder.
-
-### Interactions with long-running writes
-
-Long-running write transactions will create write intents with a timestamp near when the transaction began. When a follower read encounters a write intent, it will often end up in a `Wait Queue`, waiting for the operation to complete; however, this runs counter to the benefit follower reads provide.
-
-To counteract this, you can issue all follower reads in explicit transactions set with `HIGH` priority:
-
-```sql
-BEGIN PRIORITY HIGH AS OF SYSTEM TIME experimental_follower_read_timestamp();
-
-SAVEPOINT cockroach_restart;
-
-SELECT ...
-SELECT ...
-
-COMMIT;
-```
-
-## See Also
-
-- [Cluster Settings Overview](cluster-settings.html)
-- [Load-Based Splitting](load-based-splitting.html)
diff --git a/src/current/v19.1/foreign-key.md b/src/current/v19.1/foreign-key.md
deleted file mode 100644
index a422f4f8887..00000000000
--- a/src/current/v19.1/foreign-key.md
+++ /dev/null
@@ -1,769 +0,0 @@
----
-title: Foreign Key Constraint
-summary: The `FOREIGN KEY` constraint specifies a column can contain only values exactly matching existing values from the column it references.
-toc: true
----
-
-A foreign key is a column (or combination of columns) in a table whose values must match values of a column in some other table.
`FOREIGN KEY` constraints enforce [referential integrity](https://en.wikipedia.org/wiki/Referential_integrity), which essentially says that if column value A refers to column value B, then column value B must exist. - -For example, given an `orders` table and a `customers` table, if you create a column `orders.customer_id` that references the `customers.id` primary key: - -- Each value inserted or updated in `orders.customer_id` must exactly match a value in `customers.id`, or be `NULL`. -- Values in `customers.id` that are referenced by `orders.customer_id` cannot be deleted or updated, unless you have [cascading actions](#use-a-foreign-key-constraint-with-cascade). However, values of `customers.id` that are _not_ present in `orders.customer_id` can be deleted or updated. - -## Details - -### Rules for creating foreign keys - -**Foreign Key Columns** - -- Foreign key columns must use their referenced column's [type](data-types.html). -- Each column cannot belong to more than 1 `FOREIGN KEY` constraint. -- A foreign key column cannot be a [computed column](computed-columns.html). -- Foreign key columns must be [indexed](indexes.html). This is required because updates and deletes on the referenced table will need to search the referencing table for any matching records to ensure those operations would not violate existing references. In practice, such indexes are likely also needed by applications using these tables, since finding all records which belong to some entity, for example all orders for a given customer, is very common. - - If you are adding the `FOREIGN KEY` constraint to an existing table, and the columns you want to constraint are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the [`ADD CONSTRAINT`](add-constraint.html) statement to add the `FOREIGN KEY` constraint to the columns. - - If you are creating a new table, there are a number of ways that you can meet the indexing requirement: - - - You can create indexes explicitly using the [`INDEX`](create-table.html#create-a-table-with-secondary-and-inverted-indexes) clause of `CREATE TABLE`. - - You can rely on indexes created by the [`PRIMARY KEY`](primary-key.html) or [`UNIQUE`](unique.html) constraints. - - New in v19.1: If you add a foreign key constraint to an empty table, and an index on the referencing columns does not already exist, CockroachDB automatically creates one. For an example, see [Add the foreign key constraint with `CASCADE`](add-constraint.html#add-the-foreign-key-constraint-with-cascade). It's important to note that if you later remove the `FOREIGN KEY` constraint, this automatically created index _is not_ removed. - - {{site.data.alerts.callout_success}} - Using the foreign key columns as the prefix of an index's columns also satisfies the requirement for an index. For example, if you create foreign key columns `(A, B)`, an index of columns `(A, B, C)` satisfies the requirement for an index. - {{site.data.alerts.end}} - -**Referenced Columns** - -- Referenced columns must contain only unique sets of values. This means the `REFERENCES` clause must use exactly the same columns as a [`UNIQUE`](unique.html) or [`PRIMARY KEY`](primary-key.html) constraint on the referenced table. For example, the clause `REFERENCES tbl (C, D)` requires `tbl` to have either the constraint `UNIQUE (C, D)` or `PRIMARY KEY (C, D)`. -- In the `REFERENCES` clause, if you specify a table but no columns, CockroachDB references the table's primary key. 
In these cases, the `FOREIGN KEY` constraint and the referenced table's primary key must contain the same number of columns. -- Referenced columns must be [indexed](indexes.html). There are a number of ways to meet this requirement: - - - You can create indexes explicitly using the [`INDEX`](create-table.html#create-a-table-with-secondary-and-inverted-indexes) clause of `CREATE TABLE`. - - You can rely on indexes created by the [`PRIMARY KEY`](primary-key.html) or [`UNIQUE`](unique.html) constraints. - - New in v19.1: If an index on the referenced column does not already exist, CockroachDB automatically creates one. It's important to note that if you later remove the `FOREIGN KEY` constraint, this automatically created index _is not_ removed. - - {{site.data.alerts.callout_success}} - Using the referenced columns as the prefix of an index's columns also satisfies the requirement for an index. For example, if you create foreign key that references the columns `(A, B)`, an index of columns `(A, B, C)` satisfies the requirement for an index. - {{site.data.alerts.end}} - -### Null values - -Single-column foreign keys accept null values. - -Multiple-column (composite) foreign keys only accept null values in the following scenarios: - -- The write contains null values for all foreign key columns (if `MATCH FULL` is specified). -- The write contains null values for at least one foreign key column (if `MATCH SIMPLE` is specified). - -For more information about composite foreign keys, see the [composite foreign key matching](#composite-foreign-key-matching) section. - -Note that allowing null values in either your foreign key or referenced columns can degrade their referential integrity, since any key with a null value is never checked against the referenced table. To avoid this, you can use a [`NOT NULL` constraint](not-null.html) on foreign keys when [creating your tables](create-table.html). - -{{site.data.alerts.callout_info}} -A `NOT NULL` constraint cannot be added to existing tables. -{{site.data.alerts.end}} - -### Composite foreign key matching - -New in v19.1: By default, composite foreign keys are matched using the `MATCH SIMPLE` algorithm (which is the same default as Postgres). `MATCH FULL` is available if specified. - -In versions 2.1 and earlier, the only option for composite foreign key matching was an incorrect implementation of `MATCH FULL`. This allowed null values in the referencing key columns to correspond to null values in the referenced key columns. This was incorrect in two ways: - -1. `MATCH FULL` should not allow mixed null and non-null values. See below for more details on the differences between comparison methods. -2. Null values cannot ever be compared to each other. - -To correct these issues, all composite key matches defined prior to version 19.1 will now use the `MATCH SIMPLE` comparison method. We have also added the ability to specify both `MATCH FULL` and `MATCH SIMPLE`. If you had a composite foreign key constraint and have just upgraded to version 19.1, then please check that `MATCH SIMPLE` works for your schema and consider replacing that foreign key constraint with a `MATCH FULL` one. - -#### How it works - -For matching purposes, composite foreign keys can be in one of three states: - -- **Valid**: Keys that can be used for matching foreign key relationships. - -- **Invalid**: Keys that will not be used for matching (including for any cascading operations). - -- **Unacceptable**: Keys that cannot be inserted at all (an error is signalled). 
- -`MATCH SIMPLE` stipulates that: - -- **Valid** keys may not contain any null values. - -- **Invalid** keys contain one or more null values. - -- **Unacceptable** keys do not exist from the point of view of `MATCH SIMPLE`; all composite keys are acceptable. - -`MATCH FULL` stipulates that: - -- **Valid** keys may not contain any null values. - -- **Invalid** keys must have all null values. - -- **Unacceptable** keys have any combination of both null and non-null values. In other words, `MATCH FULL` requires that if any column of a composite key is `NULL`, then all columns of the key must be `NULL`. - -For examples showing how these key matching algorithms work, see [Match composite foreign keys with `MATCH SIMPLE` and `MATCH FULL`](#match-composite-foreign-keys-with-match-simple-and-match-full). - -{{site.data.alerts.callout_info}} -CockroachDB does not support `MATCH PARTIAL`. For more information, see issue [#20305](https://github.com/cockroachdb/cockroach/issues/20305). -{{site.data.alerts.end}} - -### Foreign key actions - -When you set a foreign key constraint, you can control what happens to the constrained column when the column it's referencing (the foreign key) is deleted or updated. - -Parameter | Description -----------|------------ -`ON DELETE NO ACTION` | _Default action._ If there are any existing references to the key being deleted, the transaction will fail at the end of the statement. The key can be updated, depending on the `ON UPDATE` action.

      Alias: `ON DELETE RESTRICT` -`ON UPDATE NO ACTION` | _Default action._ If there are any existing references to the key being updated, the transaction will fail at the end of the statement. The key can be deleted, depending on the `ON DELETE` action.

      Alias: `ON UPDATE RESTRICT` -`ON DELETE RESTRICT` / `ON UPDATE RESTRICT` | `RESTRICT` and `NO ACTION` are currently equivalent until options for deferring constraint checking are added. To set an existing foreign key action to `RESTRICT`, the foreign key constraint must be dropped and recreated. -`ON DELETE CASCADE` / `ON UPDATE CASCADE` | When a referenced foreign key is deleted or updated, all rows referencing that key are deleted or updated, respectively. If there are other alterations to the row, such as a `SET NULL` or `SET DEFAULT`, the delete will take precedence.

      Note that `CASCADE` does not list objects it drops or updates, so it should be used cautiously. -`ON DELETE SET NULL` / `ON UPDATE SET NULL` | When a referenced foreign key is deleted or updated, respectively, the columns of all rows referencing that key will be set to `NULL`. The column must allow `NULL` or this update will fail. -`ON DELETE SET DEFAULT` / `ON UPDATE SET DEFAULT` | When a referenced foreign key is deleted or updated, respectively, the columns of all rows referencing that key are set to the default value for that column. If the default value for the column is null, this will have the same effect as `ON DELETE SET NULL` or `ON UPDATE SET NULL`. The default value must still conform with all other constraints, such as `UNIQUE`. - -### Performance - -Because the foreign key constraint requires per-row checks on two tables, statements involving foreign key or referenced columns can take longer to execute. You're most likely to notice this with operations like bulk inserts into the table with the foreign keys. For bulk inserts into new tables, use the [`IMPORT`](import.html) statement instead of [`INSERT`](insert.html). - -You can improve the performance of some statements that use foreign keys by also using [`INTERLEAVE IN PARENT`](interleave-in-parent.html), but there are tradeoffs. For more information about the performance implications of interleaved tables (as well as the limitations), see the **Interleave tables** section of [Performance best practices](performance-best-practices-overview.html#interleave-tables). - -## Syntax - -Foreign key constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level). - -{{site.data.alerts.callout_info}} -You can also add the `FOREIGN KEY` constraint to existing tables through [`ADD CONSTRAINT`](add-constraint.html#add-the-foreign-key-constraint-with-cascade). -{{site.data.alerts.end}} - -### Column level - -
      {% include {{ page.version.version }}/sql/diagrams/foreign_key_column_level.html %}
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_name` | The name of the foreign key column. | -| `column_type` | The foreign key column's [data type](data-types.html). | -| `parent_table` | The name of the table the foreign key references. | -| `ref_column_name` | The name of the column the foreign key references.

      If you do not include the `ref_column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. -| `column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. | -| `column_def` | Definitions for any other columns in the table. | -| `table_constraints` | Any table-level [constraints](constraints.html) you want to apply. | - -**Example** - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS orders ( - id INT PRIMARY KEY, - customer INT NOT NULL REFERENCES customers (id) ON DELETE CASCADE, - orderTotal DECIMAL(9,2), - INDEX (customer) - ); -~~~ -{{site.data.alerts.callout_danger}} -`CASCADE` does not list objects it drops or updates, so it should be used cautiously. -{{site.data.alerts.end}} - -### Table level - -
      {% include {{ page.version.version }}/sql/diagrams/foreign_key_table_level.html %}
      - -| Parameter | Description | -|-----------|-------------| -| `table_name` | The name of the table you're creating. | -| `column_def` | Definitions for the table's columns. | -| `name` | The name of the constraint. | -| `fk_column_name` | The name of the foreign key column. | -| `parent_table` | The name of the table the foreign key references. | -| `ref_column_name` | The name of the column the foreign key references.

      If you do not include the `column_name` you want to reference from the `parent_table`, CockroachDB uses the first column of `parent_table`'s primary key. -| `table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. | - -**Example** - -{% include copy-clipboard.html %} -~~~ sql -CREATE TABLE packages ( - customer INT, - "order" INT, - id INT, - address STRING(50), - delivered BOOL, - delivery_date DATE, - PRIMARY KEY (customer, "order", id), - CONSTRAINT fk_order FOREIGN KEY (customer, "order") REFERENCES orders - ) INTERLEAVE IN PARENT orders (customer, "order") - ; -~~~ - -## Usage examples - -### Use a foreign key constraint with default actions - -In this example, we'll create a table with a foreign key constraint with the default [actions](#foreign-key-actions) (`ON UPDATE NO ACTION ON DELETE NO ACTION`). - -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers (id INT PRIMARY KEY, email STRING UNIQUE); -~~~ - -Next, create the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS orders ( - id INT PRIMARY KEY, - customer INT NOT NULL REFERENCES customers (id), - orderTotal DECIMAL(9,2), - INDEX (customer) - ); -~~~ - -Let's insert a record into each table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers VALUES (1001, 'a@co.tld'), (1234, 'info@cockroachlabs.com'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1002, 29.99); -~~~ -~~~ -pq: foreign key violation: value [1002] not found in customers@primary [id] -~~~ - -The second record insertion returns an error because the customer `1002` doesn't exist in the referenced table. - -Let's insert a record into the referencing table and try to update the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders VALUES (1, 1001, 29.99); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1002 WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -The update to the referenced table returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions) is enabled (`ON UPDATE NO ACTION`). However, `id = 1234` is not referenced and can be updated: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers SET id = 1111 WHERE id = 1234; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ -+------+------------------------+ -| id | email | -+------+------------------------+ -| 1001 | a@co.tld | -| 1111 | info@cockroachlabs.com | -+------+------------------------+ -~~~ - -Now let's try to delete a referenced row: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1001; -~~~ -~~~ -pq: foreign key violation: value(s) [1001] in columns [id] referenced in table "orders" -~~~ - -Similarly, the deletion returns an error because `id = 1001` is referenced and the default [foreign key action](#foreign-key-actions) is enabled (`ON DELETE NO ACTION`). 
However, `id = 1111` is not referenced and can be deleted: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers WHERE id = 1111; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ -~~~ -+------+----------+ -| id | email | -+------+----------+ -| 1001 | a@co.tld | -+------+----------+ -~~~ - -### Use a Foreign Key Constraint with `CASCADE` - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE CASCADE` and `ON DELETE CASCADE`. - -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_2 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_2 ( - id INT PRIMARY KEY, - customer_id INT REFERENCES customers_2(id) ON UPDATE CASCADE ON DELETE CASCADE - ); -~~~ - -Insert a few records into the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_2 VALUES (1), (2), (3); -~~~ - -Insert some records into the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_2 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ - -Now, let's update an `id` in the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers_2 SET id = 23 WHERE id = 1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ -+----+ -| id | -+----+ -| 2 | -| 3 | -| 23 | -+----+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ -+-----+--------------+ -| id | customers_id | -+-----+--------------+ -| 100 | 23 | -| 101 | 2 | -| 102 | 3 | -| 103 | 23 | -+-----+--------------+ -~~~ - -When `id = 1` was updated to `id = 23` in `customers_2`, the update propagated to the referencing table `orders_2`. - -Similarly, a deletion will cascade. Let's delete `id = 23` from `customers_2`: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_2 WHERE id = 23; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_2; -~~~ -~~~ -+----+ -| id | -+----+ -| 2 | -| 3 | -+----+ -~~~ - -Let's check to make sure the rows in `orders_2` where `customers_id = 23` were also deleted: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_2; -~~~ -~~~ -+-----+--------------+ -| id | customers_id | -+-----+--------------+ -| 101 | 2 | -| 102 | 3 | -+-----+--------------+ -~~~ - -### Use a Foreign Key Constraint with `SET NULL` - -In this example, we'll create a table with a foreign key constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE SET NULL` and `ON DELETE SET NULL`. 
-
-First, create the referenced table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE customers_3 (
-    id INT PRIMARY KEY
-  );
-~~~
-
-Then, create the referencing table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE orders_3 (
-    id INT PRIMARY KEY,
-    customer_id INT REFERENCES customers_3(id) ON UPDATE SET NULL ON DELETE SET NULL
-  );
-~~~
-
-Insert a few records into the referenced table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customers_3 VALUES (1), (2), (3);
-~~~
-
-Insert some records into the referencing table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO orders_3 VALUES (100,1), (101,2), (102,3), (103,1);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders_3;
-~~~
-~~~
-+-----+-------------+
-| id | customer_id |
-+-----+-------------+
-| 100 | 1 |
-| 101 | 2 |
-| 102 | 3 |
-| 103 | 1 |
-+-----+-------------+
-~~~
-
-Now, let's update an `id` in the referenced table:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> UPDATE customers_3 SET id = 23 WHERE id = 1;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customers_3;
-~~~
-~~~
-+----+
-| id |
-+----+
-| 2 |
-| 3 |
-| 23 |
-+----+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders_3;
-~~~
-~~~
-+-----+-------------+
-| id | customer_id |
-+-----+-------------+
-| 100 | NULL |
-| 101 | 2 |
-| 102 | 3 |
-| 103 | NULL |
-+-----+-------------+
-~~~
-
-When `id = 1` was updated to `id = 23` in `customers_3`, the referencing `customer_id` was set to `NULL`.
-
-Similarly, a deletion will set the referencing `customer_id` to `NULL`. Let's delete `id = 2` from `customers_3`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM customers_3 WHERE id = 2;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM customers_3;
-~~~
-~~~
-+----+
-| id |
-+----+
-| 3 |
-| 23 |
-+----+
-~~~
-
-Let's check to make sure the row in `orders_3` where `customer_id = 2` was updated to `NULL`:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM orders_3;
-~~~
-~~~
-+-----+-------------+
-| id | customer_id |
-+-----+-------------+
-| 100 | NULL |
-| 101 | NULL |
-| 102 | 3 |
-| 103 | NULL |
-+-----+-------------+
-~~~
-
-### Use a Foreign Key Constraint with `SET DEFAULT`
-
-In this example, we'll create a table with a `FOREIGN KEY` constraint with the [foreign key actions](#foreign-key-actions) `ON UPDATE SET DEFAULT` and `ON DELETE SET DEFAULT`.
- -First, create the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers_4 ( - id INT PRIMARY KEY - ); -~~~ - -Then, create the referencing table with the `DEFAULT` value for `customer_id` set to `9999`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders_4 ( - id INT PRIMARY KEY, - customer_id INT DEFAULT 9999 REFERENCES customers_4(id) ON UPDATE SET DEFAULT ON DELETE SET DEFAULT - ); -~~~ - -Insert a few records into the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO customers_4 VALUES (1), (2), (3), (9999); -~~~ - -Insert some records into the referencing table: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO orders_4 VALUES (100,1), (101,2), (102,3), (103,1); -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 1 | -| 101 | 2 | -| 102 | 3 | -| 103 | 1 | -+-----+-------------+ -~~~ - -Now, let's update an `id` in the referenced table: - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE customers_4 SET id = 23 WHERE id = 1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ -+------+ -| id | -+------+ -| 2 | -| 3 | -| 23 | -| 9999 | -+------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 9999 | -| 101 | 2 | -| 102 | 3 | -| 103 | 9999 | -+-----+-------------+ -~~~ - -When `id = 1` was updated to `id = 23` in `customers_4`, the referencing `customer_id` was set to `DEFAULT` (i.e., `9999`). You can see this in the first and last rows of `orders_4`, where `id = 100` and the `customer_id` is now `9999` - -Similarly, a deletion will set the referencing `customer_id` to the `DEFAULT` value. Let's delete `id = 2` from `customers_4`: - -{% include copy-clipboard.html %} -~~~ sql -> DELETE FROM customers_4 WHERE id = 2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers_4; -~~~ -~~~ -+------+ -| id | -+------+ -| 3 | -| 23 | -| 9999 | -+------+ -~~~ - -Let's check to make sure the corresponding `customer_id` value to `id = 101`, was updated to the `DEFAULT` value (i.e., `9999`) in `orders_4`: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders_4; -~~~ -~~~ -+-----+-------------+ -| id | customer_id | -+-----+-------------+ -| 100 | 9999 | -| 101 | 9999 | -| 102 | 3 | -| 103 | 9999 | -+-----+-------------+ -~~~ - -### Match composite foreign keys with `MATCH SIMPLE` and `MATCH FULL` - -The examples in this section show how composite foreign key matching works for both the `MATCH SIMPLE` and `MATCH FULL` algorithms. For a conceptual overview, see [Composite foreign key matching](#composite-foreign-key-matching). - -First, let's create some tables. 
`parent` is a table with a composite key: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE parent (x INT, y INT, z INT, UNIQUE (x, y, z)); -~~~ - -`full_test` has a foreign key on `parent` that uses the `MATCH FULL` algorithm: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE full_test ( - x INT, - y INT, - z INT, - FOREIGN KEY (x, y, z) REFERENCES parent (x, y, z) MATCH FULL ON DELETE CASCADE ON UPDATE CASCADE - ); -~~~ - -`simple_test` has a foreign key on `parent` that uses the `MATCH SIMPLE` algorithm (the default): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE simple_test ( - x INT, - y INT, - z INT, - FOREIGN KEY (x, y, z) REFERENCES parent (x, y, z) ON DELETE CASCADE ON UPDATE CASCADE - ); -~~~ - -Next, we populate `parent` with some values: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT - INTO parent - VALUES (1, 1, 1), - (2, 1, 1), - (1, 2, 1), - (1, 1, 2), - (NULL, NULL, NULL), - (1, NULL, NULL), - (NULL, 1, NULL), - (NULL, NULL, 1), - (1, 1, NULL), - (1, NULL, 1), - (NULL, 1, 1); -~~~ - -Now let's look at some `INSERT` statements to see how the different key matching algorithms work. - -- [MATCH SIMPLE](#match-simple) -- [MATCH FULL](#match-full) - -#### MATCH SIMPLE - -Inserting values into the table using the `MATCH SIMPLE` algorithm (described [above](#composite-foreign-key-matching)) gives the following results: - -| Statement | Can insert? | Throws error? | Notes | -|---------------------------------------------------+-------------+---------------+-------------------------------| -| `INSERT INTO simple_test VALUES (1,1,1)` | Yes | No | References `parent (1,1,1)`. | -| `INSERT INTO simple_test VALUES (NULL,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,1,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,NULL,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,1,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (1,NULL,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (NULL,1,1)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (2,2,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO simple_test VALUES (2,2,2)` | No | Yes | No `parent` reference exists. | - -#### MATCH FULL - -Inserting values into the table using the `MATCH FULL` algorithm (described [above](#composite-foreign-key-matching)) gives the following results: - -| Statement | Can insert? | Throws error? | Notes | -|-------------------------------------------------+-------------+---------------+-----------------------------------------------------| -| `INSERT INTO full_test VALUES (1,1,1)` | Yes | No | References `parent(1,1,1)`. | -| `INSERT INTO full_test VALUES (NULL,NULL,NULL)` | Yes | No | Does not reference `parent`. | -| `INSERT INTO full_test VALUES (1,NULL,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,1,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,NULL,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (1,1,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. 
| -| `INSERT INTO full_test VALUES (1,NULL,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (NULL,1,1)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (2,2,NULL)` | No | Yes | Can't mix null and non-null values in `MATCH FULL`. | -| `INSERT INTO full_test VALUES (2,2,2)` | No | Yes | No `parent` reference exists. | - -## See also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`CHECK` constraint](check.html) -- [`DEFAULT` constraint](default-value.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY KEY` constraint](primary-key.html) -- [`UNIQUE` constraint](unique.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v19.1/frequently-asked-questions.md b/src/current/v19.1/frequently-asked-questions.md deleted file mode 100644 index a7482494bdb..00000000000 --- a/src/current/v19.1/frequently-asked-questions.md +++ /dev/null @@ -1,184 +0,0 @@ ---- -title: Frequently Asked Questions -summary: CockroachDB FAQ - What is CockroachDB? How does it work? What makes it different from other databases? -tags: postgres, cassandra, google cloud spanner -toc: true ---- - -## What is CockroachDB? - -CockroachDB is a [distributed SQL](https://www.cockroachlabs.com/blog/what-is-distributed-sql/) database built on a transactional and strongly-consistent key-value store. It **scales** horizontally; **survives** disk, machine, rack, and even datacenter failures with minimal latency disruption and no manual intervention; supports **strongly-consistent** ACID transactions; and provides a familiar **SQL** API for structuring, manipulating, and querying data. - -CockroachDB is inspired by Google's [Spanner](http://research.google.com/archive/spanner.html) and [F1](http://research.google.com/pubs/pub38125.html) technologies, and the [source code](https://github.com/cockroachdb/cockroach) is freely available. - -## When is CockroachDB a good choice? - -CockroachDB is well suited for applications that require reliable, available, and correct data, and millisecond response times, regardless of scale. It is built to automatically replicate, rebalance, and recover with minimal configuration and operational overhead. Specific use cases include: - -- Distributed or replicated OLTP -- Multi-datacenter deployments -- Multi-region deployments -- Cloud migrations -- Infrastructure initiatives built for the cloud - -CockroachDB returns single-row reads in 2ms or less and single-row writes in 4ms or less, and supports a variety of [SQL and operational tuning practices](performance-tuning.html) for optimizing query performance. However, CockroachDB is not yet suitable for heavy analytics / OLAP. - -## How easy is it to install CockroachDB? - -It's as easy as downloading a binary or running our official Kubernetes configurations or Docker image. There are other simple install methods as well, such as running our Homebrew recipe on OS X or building from source files on both OS X and Linux. - -For more details, see [Install CockroachDB](install-cockroachdb.html). - -## How does CockroachDB scale? - -CockroachDB scales horizontally with minimal operator overhead. You can run it on your local computer, a single server, a corporate development cluster, or a private or public cloud. [Adding capacity](start-a-node.html) is as easy as pointing a new node at the running cluster. 
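-
-For example, on an insecure local cluster where an existing node is listening on port 26257, adding a node can be as simple as the following (the store path and ports here are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start --insecure --store=node4 --listen-addr=localhost:26260 --http-addr=localhost:8083 --join=localhost:26257
-~~~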
-
-At the key-value level, CockroachDB starts off with a single, empty range. As you put data in, this single range eventually reaches a threshold size (64MB by default). When that happens, the data splits into two ranges, each covering a contiguous segment of the entire key-value space. This process continues indefinitely; as new data flows in, existing ranges continue to split into new ranges, aiming to keep a relatively small and consistent range size.
-
-When your cluster spans multiple nodes (physical machines, virtual machines, or containers), newly split ranges are automatically rebalanced to nodes with more capacity. CockroachDB communicates opportunities for rebalancing using a peer-to-peer [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) by which nodes exchange network addresses, store capacity, and other information.
-
-## How does CockroachDB survive failures?
-
-CockroachDB is designed to survive software and hardware failures, from server restarts to datacenter outages. This is accomplished without confusing artifacts typical of other distributed systems (e.g., stale reads) using strongly-consistent replication as well as automated repair after failures.
-
-**Replication**
-
-CockroachDB replicates your data for availability and guarantees consistency between replicas using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. You can [define the location of replicas](configure-replication-zones.html) in various ways, depending on the types of failures you want to secure against and your network topology. You can locate replicas on:
-
-- Different servers within a rack to tolerate server failures
-- Different servers on different racks within a datacenter to tolerate rack power/network failures
-- Different servers in different datacenters to tolerate large-scale network or power outages
-
-In a CockroachDB cluster spread across multiple geographic regions, the round-trip latency between regions will have a direct effect on your database's performance. In such cases, it is important to think about the latency requirements of each table and then use the appropriate [data topologies](topology-patterns.html) to locate data for optimal performance and resiliency. For a step-by-step demonstration, see [Low Latency Multi-Region Deployment](demo-geo-partitioning.html).
-
-**Automated Repair**
-
-For short-term failures, such as a server restart, CockroachDB uses Raft to continue seamlessly as long as a majority of replicas remain available. Raft makes sure that a new “leader” for each group of replicas is elected if the former leader fails, so that transactions can continue and affected replicas can rejoin their group once they’re back online. For longer-term failures, such as a server/rack going down for an extended period of time or a datacenter outage, CockroachDB automatically rebalances replicas from the missing nodes, using the unaffected replicas as sources. Using capacity information from the gossip network, new locations in the cluster are identified and the missing replicas are re-replicated in a distributed fashion using all available nodes and the aggregate disk and network bandwidth of the cluster.
-
-## How is CockroachDB strongly-consistent?
-
-CockroachDB guarantees [serializable SQL transactions](demo-serializable.html), the highest isolation level defined by the SQL standard. It does so by combining the Raft consensus algorithm for writes and a custom time-based synchronization algorithm for reads.
- -- Stored data is versioned with MVCC, so [reads simply limit their scope to the data visible at the time the read transaction started](architecture/transaction-layer.html#time-and-hybrid-logical-clocks). - -- Writes are serviced using the [Raft consensus algorithm](https://raft.github.io/), a popular alternative to Paxos. A consensus algorithm guarantees that any majority of replicas together always agree on whether an update was committed successfully. Updates (writes) must reach a majority of replicas (2 out of 3 by default) before they are considered committed. - - To ensure that a write transaction does not interfere with read transactions that start after it, CockroachDB also uses a [timestamp cache](architecture/transaction-layer.html#timestamp-cache) which remembers when data was last read by ongoing transactions. - - This ensures that clients always observe serializable consistency with regards to other concurrent transactions. - -## How is CockroachDB both highly available and strongly consistent? - -The [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem) states that it is impossible for a distributed system to simultaneously provide more than two out of the following three guarantees: - -- Consistency -- Availability -- Partition Tolerance - -CockroachDB is a CP (consistent and partition tolerant) system. This means -that, in the presence of partitions, the system will become unavailable rather than do anything which might cause inconsistent results. For example, writes require acknowledgments from a majority of replicas, and reads require a lease, which can only be transferred to a different node when writes are possible. - -Separately, CockroachDB is also Highly Available, although "available" here means something different than the way it is used in the CAP theorem. In the CAP theorem, availability is a binary property, but for High Availability, we talk about availability as a spectrum (using terms like "five nines" for a system that is available 99.999% of the time). - -Being both CP and HA means that whenever a majority of replicas can talk to each other, they should be able to make progress. For example, if you deploy CockroachDB to three datacenters and the network link to one of them fails, the other two datacenters should be able to operate normally with only a few seconds' disruption. We do this by attempting to detect partitions and failures quickly and efficiently, transferring leadership to nodes that are able to communicate with the majority, and routing internal traffic away from nodes that are partitioned away. - -## Why is CockroachDB SQL? - -At the lowest level, CockroachDB is a distributed, strongly-consistent, transactional key-value store, but the external API is Standard SQL with extensions. This provides developers familiar relational concepts such as schemas, tables, columns, and indexes and the ability to structure, manipulate, and query data using well-established and time-proven tools and processes. Also, since CockroachDB supports the PostgreSQL wire protocol, it’s simple to get your application talking to Cockroach; just find your [PostgreSQL language-specific driver](install-client-drivers.html) and start building. - -For more details, learn our [basic CockroachDB SQL statements](learn-cockroachdb-sql.html), explore the [full SQL grammar](sql-grammar.html), and try it out via our [built-in SQL client](use-the-built-in-sql-client.html). 
Also, to understand how CockroachDB maps SQL table data to key-value storage and how CockroachDB chooses the best index for running a query, see [SQL in CockroachDB](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/) and [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -## Does CockroachDB support distributed transactions? - -Yes. CockroachDB distributes transactions across your cluster, whether it’s a few servers in a single location or many servers across multiple datacenters. Unlike with sharded setups, you do not need to know the precise location of data; you just talk to any node in your cluster and CockroachDB gets your transaction to the right place seamlessly. Distributed transactions proceed without downtime or additional latency while rebalancing is underway. You can even move tables – or entire databases – between data centers or cloud infrastructure providers while the cluster is under load. - -## Do transactions in CockroachDB guarantee ACID semantics? - -Yes. Every [transaction](transactions.html) in CockroachDB guarantees [ACID semantics](https://en.wikipedia.org/wiki/ACID) spanning arbitrary tables and rows, even when data is distributed. - -- **Atomicity:** Transactions in CockroachDB are “all or nothing.” If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. If a transaction succeeds, all mutations are applied together with virtual simultaneity. For a detailed discussion of atomicity in CockroachDB transactions, see [How CockroachDB Distributes Atomic Transactions](https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/). -- **Consistency:** SQL operations never see any intermediate states and move the database from one valid state to another, keeping indexes up to date. Operations always see the results of previously completed statements on overlapping data and maintain specified constraints such as unique columns. For a detailed look at how we've tested CockroachDB for correctness and consistency, see [CockroachDB Beta Passes Jepsen Testing](https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-jepsen-testing/). -- **Isolation:** Transactions in CockroachDB implement the strongest ANSI isolation level: serializable (`SERIALIZABLE`). This means that transactions will never result in anomalies. For more information about transaction isolation in CockroachDB, see [Transactions: Isolation Levels](transactions.html#isolation-levels). -- **Durability:** In CockroachDB, every acknowledged write has been persisted consistently on a majority of replicas (by default, at least 2) via the [Raft consensus algorithm](https://raft.github.io/). Power or disk failures that affect only a minority of replicas (typically 1) do not prevent the cluster from operating and do not lose any data. - -## Since CockroachDB is inspired by Spanner, does it require atomic clocks to synchronize time? - -No. CockroachDB was designed to work without atomic clocks or GPS clocks. It’s a database intended to be run on arbitrary collections of nodes, from physical servers in a corp development cluster to public cloud infrastructure using the flavor-of-the-month virtualization layer. It’d be a showstopper to require an external dependency on specialized hardware for clock synchronization. However, CockroachDB does require moderate levels of clock synchronization for correctness. 
If clocks drift past a maximum threshold, nodes will be taken offline. It's therefore highly recommended to run [NTP](http://www.ntp.org/) or other clock synchronization software on each node.
-
-For more details on how CockroachDB handles unsynchronized clocks, see [Clock Synchronization](recommended-production-settings.html#clock-synchronization). And for a broader discussion of clocks, and the differences between clocks in Spanner and CockroachDB, see [Living Without Atomic Clocks](https://www.cockroachlabs.com/blog/living-without-atomic-clocks/).
-
-## What languages can I use to work with CockroachDB?
-
-CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers. We've tested it from the following languages:
-
-- Go
-- Python
-- Ruby
-- Java
-- JavaScript (node.js)
-- C++/C
-- Clojure
-- PHP
-- Rust
-
-See [Install Client Drivers](install-client-drivers.html) for more details.
-
-## Why does CockroachDB use the PostgreSQL wire protocol instead of the MySQL protocol?
-
-CockroachDB uses the PostgreSQL wire protocol because it is better documented than the MySQL protocol, and because PostgreSQL has a liberal Open Source license, similar to BSD or MIT licenses, whereas MySQL has the more restrictive GNU General Public License.
-
-Note, however, that the protocol used doesn't significantly impact how easy it is to port applications. Swapping out SQL network drivers is rather straightforward in nearly every language. What makes it hard to move from one database to another is the dialect of SQL in use. CockroachDB's dialect is based on PostgreSQL as well.
-
-## What is CockroachDB’s security model?
-
-You can run a secure or insecure CockroachDB cluster. When secure, client/node and inter-node communication is encrypted, and SSL certificates authenticate the identity of both clients and nodes. When insecure, there's no encryption or authentication.
-
-Also, CockroachDB supports common SQL privileges on databases and tables. The `root` user has privileges for all databases, while unique users can be granted privileges for specific statements at the database and table levels.
-
-For more details, see our [Security Overview](security-overview.html).
-
-## How does CockroachDB compare to MySQL or PostgreSQL?
-
-While all of these databases support SQL syntax, CockroachDB is the only one that scales easily (without the manual complexity of sharding), rebalances and repairs itself automatically, and distributes transactions seamlessly across your cluster.
-
-For more insight, see [CockroachDB in Comparison](cockroachdb-in-comparison.html).
-
-## How does CockroachDB compare to Cassandra, HBase, MongoDB, or Riak?
-
-While all of these are distributed databases, only CockroachDB supports distributed transactions and provides strong consistency. Also, these other databases provide custom APIs, whereas CockroachDB offers standard SQL with extensions.
-
-For more insight, see [CockroachDB in Comparison](cockroachdb-in-comparison.html).
-
-## Can a PostgreSQL or MySQL application be migrated to CockroachDB?
-
-Yes. Most users should be able to follow the instructions in [Migrate from Postgres](migrate-from-postgres.html) or [Migrate from MySQL](migrate-from-mysql.html). Due to differences in available features and syntax, some features supported by these databases may require manual effort to port to CockroachDB. Check those pages for details.
-
-We also fully support [importing your data via CSV](migrate-from-csv.html).
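-
-For example, once your data is exported, a CSV import is a single statement (a minimal sketch; the table definition and file URL here are hypothetical placeholders, and the full syntax is covered in [`IMPORT`](import.html)):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id INT PRIMARY KEY,
-    name STRING
-)
-CSV DATA ('https://example.com/customers.csv');
-~~~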
-
-## Does Cockroach Labs offer a cloud database as a service?
-
-Yes. The CockroachCloud offering is currently in Limited Availability and accepting customers on a qualified basis. The offering provides a running CockroachDB cluster suited to your needs, fully managed by Cockroach Labs on GCP or AWS. Benefits include:
-
-- No provisioning or deployment efforts for you
-- Daily full backups and hourly incremental backups of your data
-- Upgrades to the latest stable release of CockroachDB
-- Monitoring to provide SLA-level support
-
-For more details, see the [CockroachCloud](../cockroachcloud/quickstart.html) docs.
-
-## Why did Cockroach Labs change the license for CockroachDB?
-
-Our past outlook on the right business model relied on a crucial norm in the OSS world: that companies could build a business around a strong open source core product without a much larger technology platform company coming along and offering the same product as a service.
-
-Recently, however, OSS companies have seen the rise of highly integrated providers take advantage of their unique position to offer “as-a-service” versions of OSS products, and offer a superior user experience as a consequence of their integrations. We’ve most recently seen it happen with Amazon’s forked version of Elasticsearch.
-
-To respond to this breed of competitor, we changed our software licensing terms. To learn more about our motivations, see the [Licensing FAQs](licensing-faqs.html) as well as our [blog post](https://www.cockroachlabs.com/blog/oss-relicensing-cockroachdb/) about the license change.
-
-## Have questions that weren’t answered?
-
-Try searching the rest of our docs for answers or using our other [support resources](support-resources.html), including:
-
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
diff --git a/src/current/v19.1/functions-and-operators.md b/src/current/v19.1/functions-and-operators.md
deleted file mode 100644
index e2da24348ea..00000000000
--- a/src/current/v19.1/functions-and-operators.md
+++ /dev/null
@@ -1,121 +0,0 @@
----
-title: Functions and Operators
-summary: CockroachDB supports many built-in functions, aggregate functions, and operators.
-toc: true
----
-
-CockroachDB supports the following SQL functions and operators for use in [scalar expressions](scalar-expressions.html).
- -{{site.data.alerts.callout_success}}In the built-in SQL shell, use \hf [function] to get inline help about a specific function.{{site.data.alerts.end}} - -## Special syntax forms - -The following syntax forms are recognized for compatibility with the -SQL standard and PostgreSQL, but are equivalent to regular built-in -functions: - -{% include {{ page.version.version }}/sql/function-special-forms.md %} - -## Conditional and function-like operators - -The following table lists the operators that look like built-in -functions but have special evaluation rules: - - Operator | Description -----------|------------- - `ANNOTATE_TYPE(...)` | [Explicitly Typed Expression](scalar-expressions.html#explicitly-typed-expressions) - `ARRAY(...)` | [Conversion of Subquery Results to An Array](scalar-expressions.html#conversion-of-subquery-results-to-an-array) - `ARRAY[...]` | [Conversion of Scalar Expressions to An Array](scalar-expressions.html#array-constructors) - `CAST(...)` | [Type Cast](scalar-expressions.html#explicit-type-coercions) - `COALESCE(...)` | [First non-NULL expression with Short Circuit](scalar-expressions.html#coalesce-and-ifnull-expressions) - `EXISTS(...)` | [Existence Test on the Result of Subqueries](scalar-expressions.html#existence-test-on-the-result-of-subqueries) - `IF(...)` | [Conditional Evaluation](scalar-expressions.html#if-expressions) - `IFNULL(...)` | Alias for `COALESCE` restricted to two operands - `NULLIF(...)` | [Return `NULL` conditionally](scalar-expressions.html#nullif-expressions) - `ROW(...)` | [Tuple Constructor](scalar-expressions.html#tuple-constructor) - -## Built-in functions - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/functions.md %} - -## Aggregate functions - -{{site.data.alerts.callout_success}} -For examples showing how to use aggregate functions, see [the `SELECT` clause documentation](select-clause.html#aggregate-functions). -{{site.data.alerts.end}} - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/aggregates.md %} - -## Window functions - -{% remote_include https://raw.githubusercontent.com/cockroachdb/cockroach/{{ page.release_info.crdb_branch_name }}/docs/generated/sql/window_functions.md %} - -## Operators - -The following table lists all CockroachDB operators from highest to lowest precedence, i.e., the order in which they will be evaluated within a statement. Operators with the same precedence are left associative. This means that those operators are grouped together starting from the left and moving right. 
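-
-For example, because operators at the same precedence level associate from the left, a chained subtraction groups as `(a - b) - c` (a quick illustration you can run in the [built-in SQL client](use-the-built-in-sql-client.html); any numbers work the same way):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT 10 - 4 - 2;  -- evaluated as (10 - 4) - 2 = 4, not 10 - (4 - 2) = 8
-~~~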
-
-| Order of Precedence | Operator | Name | Operator Arity |
-| ------------------- | -------- | ---- | -------------- |
-| 1 | `.` | Member field access operator | binary |
-| 2 | `::` | [Type cast](scalar-expressions.html#explicit-type-coercions) | binary |
-| 3 | `-` | Unary minus | unary (prefix) |
-| | `~` | Bitwise not | unary (prefix) |
-| 4 | `^` | Exponentiation | binary |
-| 5 | `*` | Multiplication | binary |
-| | `/` | Division | binary |
-| | `//` | Floor division | binary |
-| | `%` | Modulo | binary |
-| 6 | `+` | Addition | binary |
-| | `-` | Subtraction | binary |
-| 7 | `<<` | Bitwise left-shift | binary |
-| | `>>` | Bitwise right-shift | binary |
-| 8 | `&` | Bitwise AND | binary |
-| 9 | `#` | Bitwise XOR | binary |
-| 10 | <code>&#124;</code> | Bitwise OR | binary |
-| 11 | <code>&#124;&#124;</code> | Concatenation | binary |
-| | `< ANY`, `< SOME`, `< ALL` | [Multi-valued] "less than" comparison | binary |
-| | `> ANY`, `> SOME`, `> ALL` | [Multi-valued] "greater than" comparison | binary |
-| | `= ANY`, `= SOME`, `= ALL` | [Multi-valued] "equal" comparison | binary |
-| | `<= ANY`, `<= SOME`, `<= ALL` | [Multi-valued] "less than or equal" comparison | binary |
-| | `>= ANY`, `>= SOME`, `>= ALL` | [Multi-valued] "greater than or equal" comparison | binary |
-| | `<> ANY` / `!= ANY`, `<> SOME` / `!= SOME`, `<> ALL` / `!= ALL` | [Multi-valued] "not equal" comparison | binary |
-| | `[NOT] LIKE ANY`, `[NOT] LIKE SOME`, `[NOT] LIKE ALL` | [Multi-valued] `LIKE` comparison | binary |
-| | `[NOT] ILIKE ANY`, `[NOT] ILIKE SOME`, `[NOT] ILIKE ALL` | [Multi-valued] `ILIKE` comparison | binary |
-| 12 | `[NOT] BETWEEN` | Value is [not] within the range specified | binary |
-| | `[NOT] BETWEEN SYMMETRIC` | Like `[NOT] BETWEEN`, but in non-sorted order. For example, whereas `a BETWEEN b AND c` means `b <= a <= c`, `a BETWEEN SYMMETRIC b AND c` means `(b <= a <= c) OR (c <= a <= b)`. | binary |
-| | `[NOT] IN` | Value is [not] in the set of values specified | binary |
-| | `[NOT] LIKE` | Matches [or not] LIKE expression, case sensitive | binary |
-| | `[NOT] ILIKE` | Matches [or not] LIKE expression, case insensitive | binary |
-| | `[NOT] SIMILAR` | Matches [or not] SIMILAR TO regular expression | binary |
-| | `~` | Matches regular expression, case sensitive | binary |
-| | `!~` | Does not match regular expression, case sensitive | binary |
-| | `~*` | Matches regular expression, case insensitive | binary |
-| | `!~*` | Does not match regular expression, case insensitive | binary |
-| 13 | `=` | Equal | binary |
-| | `<` | Less than | binary |
-| | `>` | Greater than | binary |
-| | `<=` | Less than or equal to | binary |
-| | `>=` | Greater than or equal to | binary |
-| | `!=`, `<>` | Not equal | binary |
-| 14 | `IS [DISTINCT FROM]` | Equal, considering `NULL` as value | binary |
-| | `IS NOT [DISTINCT FROM]` | `a IS NOT b` equivalent to `NOT (a IS b)` | binary |
-| | `ISNULL`, `IS UNKNOWN`, `NOTNULL`, `IS NOT UNKNOWN` | Equivalent to `IS NULL` / `IS NOT NULL` | unary (postfix) |
-| | `IS NAN`, `IS NOT NAN` | [Comparison with the floating-point NaN value](scalar-expressions.html#comparison-with-nan) | unary (postfix) |
-| | `IS OF(...)` | Type predicate | unary (postfix) |
-| 15 | `NOT` | [Logical NOT](scalar-expressions.html#logical-operators) | unary |
-| 16 | `AND` | [Logical AND](scalar-expressions.html#logical-operators) | binary |
-| 17 | `OR` | [Logical OR](scalar-expressions.html#logical-operators) | binary |
-
-[Multi-valued]: scalar-expressions.html#multi-valued-comparisons
-
-### Supported operations
-
-{% include v19.1/sql/operators.md %}
-
-
diff --git a/src/current/v19.1/generate-cockroachdb-resources.md b/src/current/v19.1/generate-cockroachdb-resources.md
deleted file mode 100644
index 4dfd000e085..00000000000
--- a/src/current/v19.1/generate-cockroachdb-resources.md
+++ /dev/null
@@ -1,379 +0,0 @@
----
-title: Generate CockroachDB Resources
-summary: Use cockroach gen to generate command-line interface utilities, such as man pages, and example data.
-toc: true
----
-
-The `cockroach gen` command can generate command-line interface (CLI) utilities ([`man` pages](https://en.wikipedia.org/wiki/Man_page) and a `bash` autocompletion script), example SQL data suitable to populate test databases, and an HAProxy configuration file for load balancing a running cluster.
-
-## Subcommands
-
-Subcommand | Usage
------------|------
-`man` | Generate man pages for CockroachDB.
-`autocomplete` | Generate `bash` or `zsh` autocompletion script for CockroachDB.<br><br>**Default:** `bash`
-`example-data` | Generate example SQL datasets. You can also use the [`cockroach workload`](cockroach-workload.html) command to generate these sample datasets in a persistent cluster and the [`cockroach demo`](cockroach-demo.html) command to generate these datasets in a temporary, in-memory cluster.
-`haproxy` | Generate an HAProxy config file for a running CockroachDB cluster. The node addresses included in the config are those advertised by the nodes. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
-
-## Synopsis
-
-Generate man pages:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen man
-~~~
-
-Generate bash autocompletion script:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen autocomplete
-~~~
-
-Generate example SQL data:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data intro | cockroach sql
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data startrek | cockroach sql
-~~~
-
-Generate an HAProxy config file for a running cluster:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy
-~~~
-
-View help:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen --help
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen man --help
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen autocomplete --help
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data --help
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy --help
-~~~
-
-## Flags
-
-The `gen` subcommands support the following [general-use](#general) and [logging](#logging) flags.
-
-### General
-
-#### `man`
-
-Flag | Description
------|-----------
-`--path` | The path where man pages will be generated.<br><br>**Default:** `man/man1` under the current directory
-
-#### `autocomplete`
-
-Flag | Description
------|-----------
-`--out` | The path where the autocomplete file will be generated.<br><br>**Default:** `cockroach.bash` in the current directory
-
-#### `example-data`
-
-No flags are supported. See the [Generate Example Data](#generate-example-data) example for guidance.
-
-#### `haproxy`
-
-Flag | Description
------|------------
-`--host` | The server host and port number to connect to. This can be the address of any node in the cluster.<br><br>**Env Variable:** `COCKROACH_HOST`<br>**Default:** `localhost:26257`
-`--port`<br>`-p` | The server port to connect to. Note: The port number can also be specified via `--host`.<br><br>**Env Variable:** `COCKROACH_PORT`<br>**Default:** `26257`
-`--insecure` | Use an insecure connection.<br><br>**Env Variable:** `COCKROACH_INSECURE`<br>**Default:** `false`
-`--certs-dir` | The path to the [certificate directory](create-security-certificates.html) containing the CA and client certificates and client key.<br><br>**Env Variable:** `COCKROACH_CERTS_DIR`<br>**Default:** `${HOME}/.cockroach-certs/`
-`--url` | A [connection URL](connection-parameters.html#connect-using-a-url) to use instead of the other arguments.<br><br>**Env Variable:** `COCKROACH_URL`<br>**Default:** no URL
-`--out` | The path where the `haproxy.cfg` file will be generated. If an `haproxy.cfg` file already exists in the directory, it will be overwritten.<br><br>**Default:** `haproxy.cfg` in the current directory
-`--locality` | If nodes were started with [locality](start-a-node.html#locality) details, you can use the `--locality` flag here to filter the nodes included in the HAProxy config file, specifying the explicit locality tier(s) or a regular expression to match against. This is useful in cases where you want specific instances of HAProxy to route to specific nodes. See the [Generate an HAProxy configuration file](#generate-an-haproxy-config-file) example for more details.
-
-### Logging
-
-By default, the `gen` command logs errors to `stderr`.
-
-If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html).
-
-## Examples
-
-### Generate `man` pages
-
-Generate man pages:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen man
-~~~
-
-Move the man pages to the man directory:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ sudo mv man/man1/* /usr/share/man/man1
-~~~
-
-Access man pages:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ man cockroach
-~~~
-
-### Generate a `bash` autocompletion script
-
-Generate bash autocompletion script:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen autocomplete
-~~~
-
-Add the script to your `.bashrc` and `.bash_profile`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bashrc
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ printf "\n\n#cockroach bash autocomplete\nsource 'cockroach.bash'" >> ~/.bash_profile
-~~~
-
-You can now use `tab` to autocomplete `cockroach` commands.
-
-### Generate example data
-
-{{site.data.alerts.callout_success}}
-You can also use the [`cockroach workload`](cockroach-workload.html) command to generate these sample datasets in a persistent cluster and the [`cockroach demo`](cockroach-demo.html) command to generate these datasets in a temporary, in-memory cluster.
-{{site.data.alerts.end}}
-
-To test out CockroachDB, you can generate an example `startrek` database, which contains 2 tables, `episodes` and `quotes`.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data startrek | cockroach sql --insecure
-~~~
-
-~~~
-CREATE DATABASE
-SET
-DROP TABLE
-DROP TABLE
-CREATE TABLE
-INSERT 79
-CREATE TABLE
-INSERT 200
-~~~
-
-Launch the built-in SQL client to view it:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES FROM startrek;
-~~~
-~~~
-+------------+
-| table_name |
-+------------+
-| episodes   |
-| quotes     |
-+------------+
-(2 rows)
-~~~
-
-You can also generate an example `intro` database, which contains 1 table, `mytable`, with a hidden message:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen example-data intro | cockroach sql --insecure
-~~~
-
-~~~
-CREATE DATABASE
-SET
-DROP TABLE
-CREATE TABLE
-INSERT 1
-INSERT 1
-INSERT 1
-INSERT 1
-...
-~~~ - -{% include copy-clipboard.html %} -~~~ shell -# Launch the built-in SQL client to view it: -$ cockroach sql --insecure -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM intro; -~~~ - -~~~ -+-------------+ -| table_name | -+-------------+ -| mytable | -+-------------+ -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM intro.mytable WHERE (l % 2) = 0; -~~~ - -~~~ -+----+------------------------------------------------------+ -| l | v | -+----+------------------------------------------------------+ -| 0 | !__aaawwmqmqmwwwaas,,_ .__aaawwwmqmqmwwaaa,, | -| 2 | !"VT?!"""^~~^"""??T$Wmqaa,_auqmWBT?!"""^~~^^""??YV^ | -| 4 | ! "?##mW##?"- | -| 6 | ! C O N G R A T S _am#Z??A#ma, Y | -| 8 | ! _ummY" "9#ma, A | -| 10 | ! vm#Z( )Xmms Y | -| 12 | ! .j####mmm#####mm#m##6. | -| 14 | ! W O W ! jmm###mm######m#mmm##6 | -| 16 | ! ]#me*Xm#m#mm##m#m##SX##c | -| 18 | ! dm#||+*$##m#mm#m#Svvn##m | -| 20 | ! :mmE=|+||S##m##m#1nvnnX##; A | -| 22 | ! :m#h+|+++=Xmm#m#1nvnnvdmm; M | -| 24 | ! Y $#m>+|+|||##m#1nvnnnnmm# A | -| 26 | ! O ]##z+|+|+|3#mEnnnnvnd##f Z | -| 28 | ! U D 4##c|+|+|]m#kvnvnno##P E | -| 30 | ! I 4#ma+|++]mmhvnnvq##P` ! | -| 32 | ! D I ?$#q%+|dmmmvnnm##! | -| 34 | ! T -4##wu#mm#pw##7' | -| 36 | ! -?$##m####Y' | -| 38 | ! !! "Y##Y"- | -| 40 | ! | -+----+------------------------------------------------------+ -(21 rows) -~~~ - -### Generate an HAProxy config file - -[HAProxy](http://www.haproxy.org/) is one of the most popular open-source TCP load balancers, and CockroachDB includes a built-in command for generating a configuration file that is preset to work with your running cluster. - -
-To generate an HAProxy config file for an entire secure cluster, run the `cockroach gen haproxy` command, specifying the location of the [certificate directory](create-security-certificates.html) and the address of any instance running a CockroachDB node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---certs-dir=<path to certs directory> \
---host=<address of any node>
-~~~
-
-To limit the HAProxy config file to nodes matching specific ["localities"](start-a-node.html#locality), use the `--locality` flag, specifying the explicit locality tier(s) or a regular expression to match against:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---certs-dir=<path to certs directory> \
---host=<address of any node> \
---locality=region=us.*
-~~~
-
-To generate an HAProxy config file for an entire insecure cluster, run the `cockroach gen haproxy` command, specifying the address of any instance running a CockroachDB node:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---insecure \
---host=<address of any node>
-~~~
-
-To limit the HAProxy config file to nodes matching specific ["localities"](start-a-node.html#locality), use the `--locality` flag, specifying the explicit locality tier(s) or a regular expression to match against:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach gen haproxy \
---insecure \
---host=<address of any node> \
---locality=region=us.*
-~~~
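-
-Once generated, you can point HAProxy at the file to start load balancing (a minimal sketch; this assumes HAProxy is already installed on the machine where you run it):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ haproxy -f haproxy.cfg
-~~~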
-
-By default, the generated configuration file is called `haproxy.cfg` and looks as follows, with the `server` addresses pre-populated correctly:
-
-~~~
-global
-  maxconn 4096
-
-defaults
-    mode                tcp
-    # Timeout values should be configured for your specific use.
-    # See: https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4-timeout%20connect
-    timeout connect     10s
-    timeout client      1m
-    timeout server      1m
-    # TCP keep-alive on client side. Server already enables them.
-    option              clitcpka
-
-listen psql
-    bind :26257
-    mode tcp
-    balance roundrobin
-    option httpchk GET /health?ready=1
-    server cockroach1 <node1 address>:26257 check port 8080
-    server cockroach2 <node2 address>:26257 check port 8080
-    server cockroach3 <node3 address>:26257 check port 8080
-~~~
-
-The file is preset with the minimal [configurations](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html) needed to work with your running cluster:
-
-Field | Description
-------|------------
-`timeout connect`<br>`timeout client`<br>`timeout server` | Timeout values that should be suitable for most deployments.
-`bind` | The port that HAProxy listens on. This is the port clients will connect to and thus needs to be allowed by your network configuration.<br><br>This tutorial assumes HAProxy is running on a separate machine from CockroachDB nodes. If you run HAProxy on the same machine as a node (not recommended), you'll need to change this port, as `26257` is likely already being used by the CockroachDB node.
-`balance` | The balancing algorithm. This is set to `roundrobin` to ensure that connections get rotated amongst nodes (connection 1 on node 1, connection 2 on node 2, etc.). Check the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-balance) for details about this and other balancing algorithms.
-`option httpchk` | The HTTP endpoint that HAProxy uses to check node health. [`/health?ready=1`](monitoring-and-alerting.html#health-ready-1) ensures that HAProxy doesn't direct traffic to nodes that are live but not ready to receive requests.
-`server` | For each included node, this field specifies the address the node advertises to other nodes in the cluster, i.e., the address passed in the [`--advertise-addr` flag](start-a-node.html#networking) on node startup. Make sure hostnames are resolvable and IP addresses are routable from HAProxy.
-
-{{site.data.alerts.callout_info}}
-For full details on these and other configuration settings, see the [HAProxy Configuration Manual](http://cbonte.github.io/haproxy-dconv/1.7/configuration.html).
-{{site.data.alerts.end}}
-
-## See also
-
-- [Other Cockroach Commands](cockroach-commands.html)
-- [Deploy CockroachDB On-Premises](deploy-cockroachdb-on-premises.html) (using HAProxy for load balancing)
diff --git a/src/current/v19.1/get-started-with-enterprise-trial.md b/src/current/v19.1/get-started-with-enterprise-trial.md
deleted file mode 100644
index 0c8114a2c96..00000000000
--- a/src/current/v19.1/get-started-with-enterprise-trial.md
+++ /dev/null
@@ -1,55 +0,0 @@
----
-title: Enterprise Trial –– Get Started
-summary: Check out this page to get started with your CockroachDB Enterprise Trial
-toc: true
-license: true
----
-
-Congratulations on starting your CockroachDB Enterprise Trial! With it, you'll not only get access to CockroachDB's core capabilities like [high availability](frequently-asked-questions.html#how-does-cockroachdb-survive-failures) and [`SERIALIZABLE` isolation](frequently-asked-questions.html#how-is-cockroachdb-strongly-consistent), but also our Enterprise-only features like distributed [`BACKUP`](backup.html) & [`RESTORE`](restore.html), [geo-partitioning](partitioning.html), and [cluster visualization](enable-node-map.html).
-
-## Install CockroachDB
-
-If you haven't already, you'll need to [locally install](install-cockroachdb.html), [remotely deploy](manual-deployment.html), or [orchestrate](orchestration.html) CockroachDB.
-
-## Enable Enterprise features
-
-As the CockroachDB `root` user, open the [built-in SQL shell](use-the-built-in-sql-client.html) in insecure or secure mode, as per your CockroachDB setup. In the following example, we assume that CockroachDB is running in insecure mode.
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure
-~~~
-
-{{site.data.alerts.callout_info}}
-If you've secured your deployment, you'll need to [include the flags for your certificates](create-security-certificates.html) instead of the `--insecure` flag.
-{{site.data.alerts.end}}
-
-Now, use the `SET CLUSTER SETTING` command to set the name of your organization and the license key:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING cluster.organization = 'Acme Company'; SET CLUSTER SETTING enterprise.license = 'xxxxxxxxxxxx';
-~~~
-
-Then run the following query to verify that your organization was set correctly:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CLUSTER SETTING cluster.organization;
-~~~
-
-## Use Enterprise features
-
-Your cluster now has access to all of CockroachDB's enterprise features for the length of the trial:
-
-{% include {{ page.version.version }}/misc/enterprise-features.md %}
-
-## Getting help
-
-If you or your team need any help during your trial, our engineers are available on [CockroachDB Community Slack](https://cockroachdb.slack.com), [our forum](https://forum.cockroachlabs.com/), or [GitHub](https://github.com/cockroachdb/cockroach).

-
-## See also
-
-- [Enterprise Licensing](enterprise-licensing.html)
-- [`SET CLUSTER SETTING`](set-cluster-setting.html)
-- [`SHOW CLUSTER SETTING`](show-cluster-setting.html)
diff --git a/src/current/v19.1/grant-roles.md b/src/current/v19.1/grant-roles.md
deleted file mode 100644
index be7f1985c52..00000000000
--- a/src/current/v19.1/grant-roles.md
+++ /dev/null
@@ -1,89 +0,0 @@
----
-title: GRANT <roles>
-summary: The GRANT statement grants user privileges for interacting with specific databases and tables.
-toc: true
----
-
-The `GRANT <roles>` [statement](sql-statements.html) lets you add a [role](authorization.html#create-and-manage-roles) or [user](create-and-manage-users.html) as a member of a role.
-
-{{site.data.alerts.callout_info}}
-GRANT <roles> is no longer an enterprise feature and is now freely available in the core version of CockroachDB.
-{{site.data.alerts.end}}
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/grant_roles.html %}
-
-## Required privileges
-
-The user granting role membership must be a role admin (i.e., members with the `ADMIN OPTION`) or a superuser (i.e., a member of the `admin` role).
-
-## Considerations
-
-- Users and roles can be members of roles.
-- The `root` user is automatically created as an `admin` role and assigned the `ALL` privilege for new databases.
-- All privileges of a role are inherited by all its members.
-- Membership loops are not allowed (direct: `A is a member of B is a member of A` or indirect: `A is a member of B is a member of C ... is a member of A`).
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`role_name` | The name of the role to which you want to add members. To add members to multiple roles, use a comma-separated list of role names.
-`user_name` | The name of the [user](create-and-manage-users.html) or [role](authorization.html#create-and-manage-roles) to whom you want to grant membership. To add multiple members, use a comma-separated list of user and/or role names.
-`WITH ADMIN OPTION` | Designate the user as a role admin. Role admins can grant or revoke membership for the specified role.
-
-## Examples
-
-### Grant role membership
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT design TO ernie;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON ROLE design;
-~~~
-~~~
-+--------+---------+---------+
-|  role  | member  | isAdmin |
-+--------+---------+---------+
-| design | barkley | false   |
-| design | ernie   | false   |
-| design | lola    | false   |
-| design | lucky   | false   |
-+--------+---------+---------+
-~~~
-
-### Grant the admin option
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT design TO ERNIE WITH ADMIN OPTION;
-~~~
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON ROLE design;
-~~~
-~~~
-+--------+---------+---------+
-|  role  | member  | isAdmin |
-+--------+---------+---------+
-| design | barkley | false   |
-| design | ernie   | true    |
-| design | lola    | false   |
-| design | lucky   | false   |
-+--------+---------+---------+
-~~~
-
-## See also
-
-- [Authorization](authorization.html)
-- [`REVOKE <roles>`](revoke-roles.html)
-- [`GRANT <privileges>`](grant.html)
-- [`REVOKE <privileges>`](revoke.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [`SHOW ROLES`](show-roles.html)
-- [Manage Users](create-and-manage-users.html)
diff --git a/src/current/v19.1/grant.md b/src/current/v19.1/grant.md
deleted file mode 100644
index 4a1f5b27292..00000000000
--- a/src/current/v19.1/grant.md
+++ /dev/null
@@ -1,154 +0,0 @@
----
-title: GRANT <privileges>
-summary: The GRANT statement grants user privileges for interacting with specific databases and tables.
-toc: true
----
-
-The `GRANT <privileges>` [statement](sql-statements.html) lets you control each [role](authorization.html#create-and-manage-roles) or [user's](create-and-manage-users.html) SQL [privileges](authorization.html#assign-privileges) for interacting with specific databases and tables.
-
-For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html).
-
-
-## Synopsis
-
      {% include {{ page.version.version }}/sql/diagrams/grant_privileges.html %}
-
-## Required privileges
-
-The user granting privileges must have the `GRANT` privilege on the target databases or tables.
-
-## Supported privileges
-
-Roles and users can be granted the following privileges. Some privileges are applicable both for databases and tables, while others are applicable only for tables (see **Levels** in the table below).
-
-- When a role or user is granted privileges for a database, new tables created in the database will inherit the privileges, but the privileges can then be changed.
-- When a role or user is granted privileges for a table, the privileges are limited to the table.
-- The `root` user automatically belongs to the `admin` role and has the `ALL` privilege for new databases.
-- For privileges required by specific statements, see the documentation for the respective [SQL statement](sql-statements.html).
-
-Privilege | Levels
-----------|------------
-`ALL` | Database, Table
-`CREATE` | Database, Table
-`DROP` | Database, Table
-`GRANT` | Database, Table
-`SELECT` | Table
-`INSERT` | Table
-`DELETE` | Table
-`UPDATE` | Table
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`table_name` | A comma-separated list of table names. Alternately, to grant privileges to all tables, use `*`. `ON TABLE table.*` grants apply to all existing tables in a database but will not affect tables created after the grant.
-`database_name` | A comma-separated list of database names.<br><br>Privileges granted on databases will be inherited by any new tables created in the databases, but do not affect existing tables in the database.
-`user_name` | A comma-separated list of [users](create-and-manage-users.html) and/or [roles](authorization.html#create-and-manage-roles) to whom you want to grant privileges.
-
-## Examples
-
-### Grant privileges on databases
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT CREATE ON DATABASE db1, db2 TO maxroach, betsyroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON DATABASE db1, db2;
-~~~
-
-~~~
-+----------+------------+------------+
-| Database |    User    | Privileges |
-+----------+------------+------------+
-| db1      | betsyroach | CREATE     |
-| db1      | maxroach   | CREATE     |
-| db1      | root       | ALL        |
-| db2      | betsyroach | CREATE     |
-| db2      | maxroach   | CREATE     |
-| db2      | root       | ALL        |
-+----------+------------+------------+
-(6 rows)
-~~~
-
-### Grant privileges on specific tables in a database
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT DELETE ON TABLE db1.t1, db1.t2 TO betsyroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON TABLE db1.t1, db1.t2;
-~~~
-
-~~~
-+-------+------------+------------+
-| Table |    User    | Privileges |
-+-------+------------+------------+
-| t1    | betsyroach | DELETE     |
-| t1    | root       | ALL        |
-| t2    | betsyroach | DELETE     |
-| t2    | root       | ALL        |
-+-------+------------+------------+
-(4 rows)
-~~~
-
-### Grant privileges on all tables in a database
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT SELECT ON TABLE db2.* TO henryroach;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON TABLE db2.*;
-~~~
-
-~~~
-+-------+------------+------------+
-| Table |    User    | Privileges |
-+-------+------------+------------+
-| t1    | henryroach | SELECT     |
-| t1    | root       | ALL        |
-| t2    | henryroach | SELECT     |
-| t2    | root       | ALL        |
-+-------+------------+------------+
-(4 rows)
-~~~
-
-### Make a table readable to every user in the system
-
-{% include copy-clipboard.html %}
-~~~ sql
-> GRANT SELECT ON TABLE myTable TO public;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW GRANTS ON TABLE myTable;
-~~~
-
-~~~
-  database_name | schema_name | table_name | grantee | privilege_type
-+---------------+-------------+------------+---------+----------------+
-  defaultdb     | public      | mytable    | admin   | ALL
-  defaultdb     | public      | mytable    | public  | SELECT
-  defaultdb     | public      | mytable    | root    | ALL
-(3 rows)
-~~~
-
-
-## See also
-
-- [Authorization](authorization.html)
-- [`REVOKE <roles>`](revoke-roles.html)
-- [`GRANT <roles>`](grant-roles.html)
-- [`REVOKE <privileges>`](revoke.html)
-- [`SHOW GRANTS`](show-grants.html)
-- [`SHOW ROLES`](show-roles.html)
-- [Manage Users](create-and-manage-users.html)
diff --git a/src/current/v19.1/gssapi_authentication.md b/src/current/v19.1/gssapi_authentication.md
deleted file mode 100644
index 14a7959e8de..00000000000
--- a/src/current/v19.1/gssapi_authentication.md
+++ /dev/null
@@ -1,120 +0,0 @@
----
-title: GSSAPI Authentication (Enterprise)
-summary: Learn about the GSSAPI authentication features for secure CockroachDB clusters.
-toc: true
----
-
-CockroachDB supports the Generic Security Services API (GSSAPI) with Kerberos authentication.
-
-{{site.data.alerts.callout_info}}
-GSSAPI authentication is an [enterprise-only](enterprise-licensing.html) feature.
-{{site.data.alerts.end}} - -## Configuring KDC - -To use Kerberos authentication with CockroachDB, configure a Kerberos service principal name (SPN) for CockroachDB and generate a valid keytab file with the following specifications: - -- Set the SPN to the name specified by your client driver. For example, if you use the psql client, set SPN to `postgres`. -- Create SPNs for all DNS addresses that a user would use to connect to your CockroachDB cluster (including any TCP load balancers between the user and the CockroachDB node) and ensure that the keytab contains the keys for every SPN you create. - -## Configuring the CockroachDB node -1. Copy the keytab file to a location accessible by the `cockroach` binary. - -2. [Create certificates](create-security-certificates.html) for internode and `root` user authentication: - - {% include copy-clipboard.html %} - ~~~ shell - $ mkdir certs my-safe-directory - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-ca \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-node \ - localhost \ - $(hostname) \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach cert create-client \ - root \ - --certs-dir=certs \ - --ca-key=my-safe-directory/ca.key - ~~~ - -3. Provide the path to the keytab in the `KRB5_KTNAME` environment variable. - -4. Start a CockroachDB node: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --certs-dir=certs \ - --listen-addr=0.0.0.0 - ~~~ - -5. Connect to CockroachDB as `root` using the `root` client certificate generated above: - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sql --certs-dir=certs - ~~~ - -6. [Enable an enterprise license](enterprise-licensing.html#obtain-a-license). - {{site.data.alerts.callout_info}} You need the enterprise license if you want to use the GSSAPI feature. However, if you only want to test that the GSSAPI setup is working, you do not need to enable an enterprise license. {{site.data.alerts.end}} - -7. Enable GSSAPI authentication: - - {% include copy-clipboard.html %} - ~~~ sql - > SET cluster setting server.host_based_authentication.configuration = 'host all all all gss include_realm=0'; - ~~~ - - Setting the `server.host_based_authentication.configuration` [cluster setting](cluster-settings.html) makes it mandatory for all users (except `root`) to authenticate using GSSAPI. The `root` user is still required to authenticate using its client certificate. - - The `include_realm=0` option is required to tell CockroachDB to remove the `@DOMAIN.COM` realm information from the username. We do not support any advanced mapping of GSSAPI usernames to CockroachDB usernames right now. If you want to limit which realms' users can connect, you can also add one or more `krb_realm` parameters to the end of the line as an allowlist, as follows: `host all all all gss include_realm=0 krb_realm=domain.com krb_realm=corp.domain.com` - -8. Create CockroachDB users for every Kerberos user. Ensure the username does not have the `DOMAIN.COM` realm information. 
For example, if one of your Kerberos users has the username `carl@realm.com`, then you need to create a CockroachDB user with the username `carl`:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE USER carl;
-    ~~~
-
-    Grant privileges to the user:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > GRANT ALL ON DATABASE defaultdb TO carl;
-    ~~~
-
-## Configuring the client
-
-{{site.data.alerts.callout_info}}
-The `cockroach sql` shell does not yet support GSSAPI authentication. You need to use a GSSAPI-compatible Postgres client, such as Postgres's `psql` client.
-{{site.data.alerts.end}}
-
-1. Install and configure your Kerberos client.
-2. Install the Postgres client (for example, postgresql-client-10 Debian package from postgresql.org).
-3. Get a Kerberos TGT for the Kerberos user from the KDC using `kinit`.
-4. Use the `psql` client, which supports GSSAPI authentication, to connect to CockroachDB:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    > psql "postgresql://localhost:26257/defaultdb?sslmode=require" -U carl
-    ~~~
-
-5. If you specified an enterprise license earlier, you should now have a Postgres shell in CockroachDB, indicating that the GSSAPI authentication was successful. If you did not specify an enterprise license, you'll see a message like this: `psql: ERROR: use of GSS authentication requires an enterprise license.` If you see this message, GSSAPI authentication is set up correctly.
-
-## See also
-
-- [Authentication](authentication.html)
-- [Create Security Certificates](create-security-certificates.html)
diff --git a/src/current/v19.1/import.md b/src/current/v19.1/import.md
deleted file mode 100644
index 23ec852c2a5..00000000000
--- a/src/current/v19.1/import.md
+++ /dev/null
@@ -1,723 +0,0 @@
----
-title: IMPORT
-summary: The IMPORT statement imports various types of data into CockroachDB.
-toc: true
----
-
-The `IMPORT` [statement](sql-statements.html) imports the following types of data into CockroachDB:
-
-- [CSV/TSV][csv]
-- [Postgres dump files][postgres]
-- [MySQL dump files][mysql]
-- [CockroachDB dump files](sql-dump.html)
-
-{{site.data.alerts.callout_success}}
-This page has reference information about the `IMPORT` statement. For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview](migration-overview.html).
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}
-`IMPORT` only works for creating new tables. It does not support adding data to existing tables. Also, `IMPORT` cannot be used within a [transaction](transactions.html).
-{{site.data.alerts.end}}
-
-## Required privileges
-
-Only members of the `admin` role can run `IMPORT`. By default, the `root` user belongs to the `admin` role.
-
-## Synopsis
-
-**Import a table from CSV**
-
      - {% include {{ page.version.version }}/sql/diagrams/import_csv.html %} -
      - -**Import a database or table from dump file** - -
      - {% include {{ page.version.version }}/sql/diagrams/import_dump.html %} -
      - -## Parameters - -### For import from CSV - -Parameter | Description -----------|------------ -`table_name` | The name of the table you want to import/create. -`table_elem_list` | The table schema you want to use. -`CREATE USING file_location` | If not specifying the table schema inline via `table_elem_list`, this is the [URL](#import-file-urls) of a CSV file containing the table schema. -`file_location` | The [URL](#import-file-urls) of a CSV file containing the table data. This can be a comma-separated list of URLs to CSV files. For an example, see [Import a table from multiple CSV files](#import-a-table-from-multiple-csv-files) below. -`WITH kv_option_list` | Control your import's behavior with [these options](#import-options). - -### For import from dump file - -Parameter | Description -----------|------------ -`table_name` | The name of the table you want to import/create. Use this when the dump file contains a specific table. Leave out `TABLE table_name FROM` when the dump file contains an entire database. -`import_format` | [PGDUMP](#import-a-postgres-database-dump) or [MYSQLDUMP](#import-a-mysql-database-dump) -`file_location` | The [URL](#import-file-urls) of a dump file you want to import. -`WITH kv_option_list` | Control your import's behavior with [these options](#import-options). - -### Import file URLs - -URLs for the files you want to import must use the format shown below. For examples, see [Example file URLs](#example-file-urls). - -{% include {{ page.version.version }}/misc/external-urls.md %} - -### Import options - -You can control the `IMPORT` process's behavior using any of the following key-value pairs as a `kv_option`. - - - -| Key | Context | Value | Required? | Example | -|---------------------+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------+---------------------------------------------------------------------------------------------------| -| `delimiter` | CSV | The unicode character that delimits columns in your rows. **Default: `,`**. | No | To use tab-delimited values: `IMPORT TABLE foo (..) CSV DATA ('file.csv') WITH delimiter = e'\t'` | -| `comment` | CSV | The unicode character that identifies rows to skip. | No | `IMPORT TABLE foo (..) CSV DATA ('file.csv') WITH comment = '#'` | -| `nullif` | CSV | The string that should be converted to *NULL*. | No | To use empty columns as *NULL*: `IMPORT TABLE foo (..) CSV DATA ('file.csv') WITH nullif = ''` | -| `skip` | CSV | The number of rows to be skipped while importing a file. **Default: `'0'`**. | No | To import CSV files with column headers: `IMPORT ... CSV DATA ('file.csv') WITH skip = '1'` | -| `decompress` | General | The decompression codec to be used: `gzip`, `bzip`, `auto`, or `none`. **Default: `'auto'`**, which guesses based on file extension (`.gz`, `.bz`, `.bz2`). `none` disables decompression. | No | `IMPORT ... WITH decompress = 'bzip'` | -| `skip_foreign_keys` | Postgres, MySQL | Ignore foreign key constraints in the dump file's DDL. **Off by default**. May be necessary to import a table with unsatisfied foreign key constraints from a full database dump. | No | `IMPORT TABLE foo FROM MYSQLDUMP 'dump.sql' WITH skip_foreign_keys` | -| `max_row_size` | Postgres | Override limit on line size. **Default: 0.5MB**. 
This setting may need to be tweaked if your Postgres dump file has extremely long lines, for example as part of a `COPY` statement. | No | `IMPORT PGDUMP DATA ... WITH max_row_size = '5MB'` | - -For examples showing how to use these options, see the [Examples](#examples) section below. - -For instructions and working examples showing how to migrate data from other databases and formats, see the [Migration Overview](migration-overview.html). - -## Requirements - -### Prerequisites - -Before using `IMPORT`, you should have: - -- The schema of the table you want to import. -- The data you want to import, preferably hosted on cloud storage. This location must be equally accessible to all nodes using the same import file location. This is necessary because the `IMPORT` statement is issued once by the client, but is executed concurrently across all nodes of the cluster. For more information, see the [Import file location](#import-file-location) section below. - -### Import targets - -Imported tables must not exist and must be created in the `IMPORT` statement. If the table you want to import already exists, you must drop it with [`DROP TABLE`](drop-table.html). - -You can specify the target database in the table name in the `IMPORT` statement. If it's not specified there, the active database in the SQL session is used. - -### Create table - -Your `IMPORT` statement must reference a `CREATE TABLE` statement representing the schema of the data you want to import. You have several options: - -- Specify the table's columns explicitly from the [SQL client](use-the-built-in-sql-client.html). For an example, see [Import a table from a CSV file](#import-a-table-from-a-csv-file) below. - -- Load a file that already contains a `CREATE TABLE` statement. For an example, see [Import a Postgres database dump](#import-a-postgres-database-dump) below. - -We also recommend [specifying all secondary indexes you want to use in the `CREATE TABLE` statement](create-table.html#create-a-table-with-secondary-and-inverted-indexes). It is possible to [add secondary indexes later](create-index.html), but it is significantly faster to specify them during import. - -{{site.data.alerts.callout_info}} -By default, the [Postgres][postgres] and [MySQL][mysql] import formats support foreign keys. However, the most common dependency issues during import are caused by unsatisfied foreign key relationships that cause errors like `pq: there is no unique constraint matching given keys for referenced table tablename`. You can avoid these issues by adding the [`skip_foreign_keys`](#import-options) option to your `IMPORT` statement as needed. Ignoring foreign constraints will also speed up data import. -{{site.data.alerts.end}} - -### Available storage - -Each node in the cluster is assigned an equal part of the imported data, and so must have enough temp space to store it. In addition, data is persisted as a normal table, and so there must also be enough space to hold the final, replicated data. The node's first-listed/default [`store`](start-a-node.html#store) directory must have enough available storage to hold its portion of the data. - -On [`cockroach start`](start-a-node.html), if you set `--max-disk-temp-storage`, it must also be greater than the portion of the data a node will store in temp space. - -### Import file location - -We strongly recommend using cloud/remote storage (Amazon S3, Google Cloud Platform, etc.) for the data you want to import. 
-
-Local files are supported; however, they must be accessible to all nodes in the cluster using identical [Import file URLs](#import-file-urls).
-
-To import a local file, you have the following options:
-
-- Option 1. Run a [local file server](create-a-file-server.html) to make the file accessible from all nodes.
-
-- Option 2. Make the file accessible from each local node's store:
-    1. Create an `extern` directory on each node's store. The pathname will differ depending on the [`--store` flag passed to `cockroach start` (if any)](start-a-node.html#general), but will look something like `/path/to/cockroach-data/extern/`.
-    2. Copy the file to each node's `extern` directory.
-    3. Assuming the file is called `data.sql`, you can access it in your `IMPORT` statement using the following [import file URL](#import-file-urls): `'nodelocal:///data.sql'`.
-
-### Table users and privileges
-
-Imported tables are treated as new tables, so you must [`GRANT`](grant.html) privileges to them.
-
-## Performance
-
-All nodes are used during the import job, which means all nodes' CPU and RAM will be partially consumed by the `IMPORT` task in addition to serving normal traffic.
-
-## Viewing and controlling import jobs
-
-After CockroachDB successfully initiates an import, it registers the import as a job, which you can view with [`SHOW JOBS`](show-jobs.html).
-
-After the import has been initiated, you can control it with [`PAUSE JOB`](pause-job.html), [`RESUME JOB`](resume-job.html), and [`CANCEL JOB`](cancel-job.html).
-
-{{site.data.alerts.callout_info}}
-If initiated correctly, the statement returns when the import is finished or if it encounters an error. In some cases, the import can continue after an error has been returned (the error message will tell you that the import has resumed in the background).
-{{site.data.alerts.end}}
-
-{{site.data.alerts.callout_danger}}Pausing and then resuming an IMPORT job will cause it to restart from the beginning.{{site.data.alerts.end}}
-
-## Examples
-
-### Import a table from a CSV file
-
-To manually specify the table schema:
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]&AWS_SESSION_TOKEN=[placeholder]')
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv')
-;
-~~~
-
-To use a file to specify the table schema:
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers
-CREATE USING 's3://acme-co/customers-create-table.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]'
-CSV DATA ('s3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers
-CREATE USING 'azure://acme-co/customer-create-table.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers
-CREATE USING 'gs://acme-co/customers-create-table.sql'
-CSV DATA ('gs://acme-co/customers.csv')
-;
-~~~
-
-### Import a table from multiple CSV files
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA (
-    's3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]',
-    's3://acme-co/customers2.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]',
-    's3://acme-co/customers3.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]',
-    's3://acme-co/customers4.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]'
-);
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA (
-    'azure://acme-co/customer-import-data1.1.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co',
-    'azure://acme-co/customer-import-data1.2.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co',
-    'azure://acme-co/customer-import-data1.3.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co',
-    'azure://acme-co/customer-import-data1.4.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co',
-    'azure://acme-co/customer-import-data1.5.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co'
-);
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA (
-    'gs://acme-co/customers.csv',
-    'gs://acme-co/customers2.csv',
-    'gs://acme-co/customers3.csv',
-    'gs://acme-co/customers4.csv'
-);
-~~~
-
-### Import a table from a TSV file
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/customers.tsv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-WITH
-    delimiter = e'\t'
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.tsv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-WITH
-    delimiter = e'\t'
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.tsv')
-WITH
-    delimiter = e'\t'
-;
-~~~
-
-### Skip commented lines
-
-The `comment` option specifies the Unicode character that marks the rows in the data to be skipped.
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-WITH
-    comment = '#'
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-WITH
-    comment = '#'
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv')
-WITH
-    comment = '#'
-;
-~~~
-
-### Skip first *n* lines
-
-The `skip` option determines the number of header rows to skip when importing a file.
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/customers.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-WITH
-    skip = '2'
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-WITH
-    skip = '2'
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv')
-WITH
-    skip = '2'
-;
-~~~
-
-### Use blank characters as `NULL`
-
-The `nullif` option defines which string should be converted to `NULL`.
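-
-For example, given a CSV row like the one below, importing with `nullif = ''` stores the empty second field as `NULL` rather than as an empty string (a sketch of the intended effect, using the two-column `customers` schema from these examples):
-
-~~~
-8a144b63-d6e3-4d7a-b47c-0f5f6efae153,
-~~~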
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/employees.csv?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-WITH
-    nullif = ''
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-WITH
-    nullif = ''
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv')
-WITH
-    nullif = ''
-;
-~~~
-
-### Import a compressed CSV file
-
-CockroachDB chooses the decompression codec based on the filename (common extensions such as `.gz`, `.bz2`, and `.bz`) and uses the codec to decompress the file during import.
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/employees.csv.gz?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv.gz?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv.gz')
-;
-~~~
-
-Optionally, you can use the `decompress` option to specify the codec to be used for decompressing the file during import:
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('s3://acme-co/employees.csv.gz?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]')
-WITH
-    decompress = 'gzip'
-;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('azure://acme-co/customer-import-data.csv.gz.latest?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co')
-WITH
-    decompress = 'gzip'
-;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT,
-    INDEX name_idx (name)
-)
-CSV DATA ('gs://acme-co/customers.csv.gz')
-WITH
-    decompress = 'gzip'
-;
-~~~
-
-### Import a Postgres database dump
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 's3://your-external-storage/employees.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]';
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 'azure://acme-co/employees.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co';
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 'gs://acme-co/employees.sql';
-~~~
-
-For the commands above to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from Postgres][postgres].
-
-### Import a table from a Postgres database dump
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM PGDUMP 's3://your-external-storage/employees-full.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]' WITH skip_foreign_keys;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM PGDUMP 'azure://acme-co/employees.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' WITH skip_foreign_keys;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM PGDUMP 'gs://acme-co/employees.sql' WITH skip_foreign_keys;
-~~~
-
-If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option shown may be needed. For more information, see the list of [import options](#import-options).
-
-For the command above to succeed, you need to have created the dump file with specific flags to `pg_dump`. For more information, see [Migrate from Postgres][postgres].
-
-### Import a CockroachDB dump file
-
-CockroachDB dump files can be imported using the `IMPORT PGDUMP` statement.
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 's3://your-external-storage/employees-full.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]';
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 'azure://acme-co/employees.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co';
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT PGDUMP 'gs://acme-co/employees.sql';
-~~~
-
-For more information, see [SQL Dump (Export)](sql-dump.html).
-
-### Import a MySQL database dump
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT MYSQLDUMP 's3://your-external-storage/employees-full.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]';
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT MYSQLDUMP 'azure://acme-co/employees.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co';
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT MYSQLDUMP 'gs://acme-co/employees.sql';
-~~~
-
-For more detailed information about importing data from MySQL, see [Migrate from MySQL][mysql].
-
-### Import a table from a MySQL database dump
-
-Amazon S3:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM MYSQLDUMP 's3://your-external-storage/employees-full.sql?AWS_ACCESS_KEY_ID=[placeholder]&AWS_SECRET_ACCESS_KEY=[placeholder]' WITH skip_foreign_keys;
-~~~
-
-Azure:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM MYSQLDUMP 'azure://acme-co/employees.sql?AZURE_ACCOUNT_KEY=hash&AZURE_ACCOUNT_NAME=acme-co' WITH skip_foreign_keys;
-~~~
-
-Google Cloud:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE employees FROM MYSQLDUMP 'gs://acme-co/employees.sql' WITH skip_foreign_keys;
-~~~
-
-If the table schema specifies foreign keys into tables that do not exist yet, the `WITH skip_foreign_keys` option shown may be needed. For more information, see the list of [import options](#import-options).
-
-For more detailed information about importing data from MySQL, see [Migrate from MySQL][mysql].
-
-## Known limitation
-
-`IMPORT` can sometimes fail with a "context canceled" error, or can restart itself many times without ever finishing. If this is happening, it is likely due to a high amount of disk contention. This can be mitigated by setting the `kv.bulk_io_write.max_rate` [cluster setting](cluster-settings.html) to a value below your max disk write speed. For example, to set it to 10MB/s, execute:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET CLUSTER SETTING kv.bulk_io_write.max_rate = '10MB';
-~~~
-
-## See also
-
-- [Create a File Server](create-a-file-server.html)
-- [Migration Overview](migration-overview.html)
-- [Migrate from MySQL][mysql]
-- [Migrate from Postgres][postgres]
-- [Migrate from CSV][csv]
-
-
-
-[postgres]: migrate-from-postgres.html
-[mysql]: migrate-from-mysql.html
-[csv]: migrate-from-csv.html
diff --git a/src/current/v19.1/index.md b/src/current/v19.1/index.md
deleted file mode 100644
index 9aa658bec8c..00000000000
--- a/src/current/v19.1/index.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: CockroachDB Docs
-summary: CockroachDB is the SQL database for building global, scalable cloud services that survive disasters.
-toc: true
-homepage: true
-contribute: false
-cta: false
----
-
-CockroachDB is the SQL database for building global, scalable cloud services that survive disasters.
-
-
diff --git a/src/current/v19.1/indexes.md b/src/current/v19.1/indexes.md
deleted file mode 100644
index 65d502d969a..00000000000
--- a/src/current/v19.1/indexes.md
+++ /dev/null
@@ -1,132 +0,0 @@
----
-title: Indexes
-summary: Indexes improve your database's performance by helping SQL locate data without having to look through every row of a table.
-toc: true
-toc_not_nested: true
----
-
-Indexes improve your database's performance by helping SQL locate data without having to look through every row of a table.
-
-## How do indexes work?
-
-When you create an index, CockroachDB "indexes" the columns you specify, which creates a copy of the columns and then sorts their values (without sorting the values in the table itself).
-
-After a column is indexed, SQL can easily filter its values using the index instead of scanning each row one-by-one. On large tables, this greatly reduces the number of rows SQL has to consider, often making queries orders of magnitude faster.
-
-For example, if you index an `INT` column and then filter it with `WHERE <indexed column> = 10`, SQL can use the index to find values starting at 10 but less than 11. In contrast, without an index, SQL would have to evaluate _every_ row in the table for values equaling 10. This is also known as a "full table scan", and it can be very bad for query performance.
-
-### Creation
-
-Each table automatically has an index called `primary` created for it, which indexes either its [primary key](primary-key.html) or—if there is no primary key—a unique value for each row known as `rowid`. We recommend always defining a primary key because the index it creates provides much better performance than letting CockroachDB use `rowid`.
-
-The `primary` index helps filter a table's primary key but doesn't help SQL find values in any other columns. However, you can use secondary indexes to improve the performance of queries using columns not in a table's primary key. You can create them:
-
-- At the same time as the table with the `INDEX` clause of [`CREATE TABLE`](create-table.html#create-a-table-with-secondary-and-inverted-indexes). In addition to explicitly defined indexes, CockroachDB automatically creates secondary indexes for columns with the [`UNIQUE` constraint](unique.html).
-- For existing tables with [`CREATE INDEX`](create-index.html).
-- By applying the `UNIQUE` constraint to columns with [`ALTER TABLE`](alter-table.html), which automatically creates an index of the constrained columns.
-
-To create the most useful secondary indexes, you should also check out our [best practices](#best-practices).
-
-### Selection
-
-Because each query can use only a single index, CockroachDB selects the index it calculates will scan the fewest rows (i.e., the fastest). For more detail, check out our blog post [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/), which will show you how to use the [`EXPLAIN`](explain.html) statement for your query to see which index is being used.
-
-To override CockroachDB's index selection, you can also force [queries to use a specific index](table-expressions.html#force-index-selection) (also known as "index hinting").
-
-### Storage
-
-CockroachDB stores indexes directly in your key-value store. You can find more information in our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/).
-
-### Locking
-
-Tables are not locked during index creation thanks to CockroachDB's [schema change procedure](https://www.cockroachlabs.com/blog/how-online-schema-changes-are-possible-in-cockroachdb/).
-
-### Performance
-
-Indexes create a trade-off: they greatly improve the speed of queries, but may slightly slow down writes to an affected column (because new values have to be written for both the table _and_ the index).
-
-To maximize your indexes' performance, we recommend following a few [best practices](#best-practices).
-
-## Best practices
-
-We recommend creating indexes for all of your common queries. To design the most useful indexes, look at each query's `WHERE` and `SELECT` clauses, and create indexes that:
-
-- [Index all columns](#indexing-columns) in the `WHERE` clause.
-- [Store columns](#storing-columns) that are _only_ in the `SELECT` clause.
-
-{{site.data.alerts.callout_success}}
-For more information about how to tune CockroachDB's performance, see [SQL Performance Best Practices](performance-best-practices-overview.html) and the [Performance Tuning](performance-tuning.html) tutorial.
-{{site.data.alerts.end}}
-
-### Indexing columns
-
-When designing indexes, it's important to consider which columns you index and the order in which you list them. Here are a few guidelines to help you make the best choices:
-
-- Each table's [primary key](primary-key.html) (which we recommend always [defining](create-table.html#create-a-table-primary-key-defined)) is automatically indexed. The index it creates (called `primary`) cannot be changed, nor can you change the primary key of a table after it's been created, so this is a critical decision for every table.
-- Queries can benefit from an index even if they only filter a prefix of its columns. For example, if you create an index of columns `(A, B, C)`, queries filtering `(A)` or `(A, B)` can still use the index. However, queries that do not filter `(A)` will not benefit from the index.<br><br>This feature also lets you avoid using single-column indexes. Instead, use the column as the first column in a multiple-column index, which is useful for more queries.
-- Columns filtered in the `WHERE` clause with the equality operators (`=` or `IN`) should come first in the index, before those referenced with inequality operators (`<`, `>`).
-- Indexes of the same columns in different orders can produce different results for each query. For more information, see [our blog post on index selection](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/)—specifically the section "Restricting the search space."
-
-### Storing columns
-
-The `STORING` clause specifies columns that are not part of the index key but should be stored in the index. This optimizes queries that retrieve those columns without filtering on them, because it prevents the need to read the primary index.
-
-### Example
-
-Say we have a table with three columns, two of which are indexed:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE tbl (col1 INT, col2 INT, col3 INT, INDEX (col1, col2));
-~~~
-
-If we filter on the indexed columns but retrieve the unindexed column, this requires reading `col3` from the primary index via an "index join."
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT col3 FROM tbl WHERE col1 = 10 AND col2 > 1;
-~~~
-
-~~~
-       tree      |    field    |      description
-+----------------+-------------+-----------------------+
-  render         |             |
-  └── index-join |             |
-       │         | table       | tbl@primary
-       │         | key columns | rowid
-       └── scan  |             |
-                 | table       | tbl@tbl_col1_col2_idx
-                 | spans       | /10/2-/11
-~~~
-
-However, if we store `col3` in the index, the index join is no longer necessary. This means our query only needs to read from the secondary index, so it will be more efficient.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE tbl (col1 INT, col2 INT, col3 INT, INDEX (col1, col2) STORING (col3));
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT col3 FROM tbl WHERE col1 = 10 AND col2 > 1;
-~~~
-
-~~~
-    tree   |    field    |      description
-+----------+-------------+-----------------------+
-  render   |             |
-  └── scan |             |
-           | table       | tbl@tbl_col1_col2_idx
-           | spans       | /10/2-/11
-~~~
-
-## See also
-
-- [Inverted Indexes](inverted-indexes.html)
-- [SQL Performance Best Practices](performance-best-practices-overview.html)
-- [Select from a specific index](select-clause.html#select-from-a-specific-index)
-- [`CREATE INDEX`](create-index.html)
-- [`DROP INDEX`](drop-index.html)
-- [`RENAME INDEX`](rename-index.html)
-- [`SHOW INDEX`](show-index.html)
-- [SQL Statements](sql-statements.html)
diff --git a/src/current/v19.1/inet.md b/src/current/v19.1/inet.md
deleted file mode 100644
index 80f3381b741..00000000000
--- a/src/current/v19.1/inet.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-title: INET
-summary: The INET data type stores an IPv4 or IPv6 address.
-toc: true
----
-The `INET` [data type](data-types.html) stores an IPv4 or IPv6 address.
-
-
-## Syntax
-
-A constant value of type `INET` can be expressed using an [interpreted literal](sql-constants.html#interpreted-literals), or a string literal [annotated with](scalar-expressions.html#explicitly-typed-expressions) type `INET` or [coerced to](scalar-expressions.html#explicit-type-coercions) type `INET`.
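-
-For example, an `INET` value can be written with a type annotation or an explicit cast (a brief sketch of equivalent forms):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT INET '192.168.0.1', '192.168.0.1'::INET, CAST('192.168.0.1' AS INET);
-~~~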
-
-`INET` constants can be expressed using the following formats:
-
-Format | Description
--------|-------------
-IPv4 | Standard [RFC791](https://tools.ietf.org/html/rfc791)-specified format of 4 octets expressed individually in decimal numbers and separated by periods. Optionally, the address can be followed by a subnet mask.<br><br>Examples: `'190.0.0.0'`, `'190.0.0.0/24'`
-IPv6 | Standard [RFC8200](https://tools.ietf.org/html/rfc8200)-specified format of 8 colon-separated groups of 4 hexadecimal digits. An IPv6 address can be mapped to an IPv4 address. Optionally, the address can be followed by a subnet mask.<br><br>Examples: `'2001:4f8:3:ba:2e0:81ff:fe22:d1f1'`, `'2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120'`, `'::ffff:192.168.0.1/24'`
-
-{{site.data.alerts.callout_info}}IPv4 addresses will sort before IPv6 addresses, including IPv4-mapped IPv6 addresses.{{site.data.alerts.end}}
-
-## Size
-
-An `INET` value is 32 bits for IPv4 or 128 bits for IPv6.
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE computers (
-    ip INET PRIMARY KEY,
-    user_email STRING,
-    registration_date DATE
-  );
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM computers;
-~~~
-
-~~~
-+-------------------+-----------+-------------+----------------+-----------------------+-------------+
-|    column_name    | data_type | is_nullable | column_default | generation_expression |   indices   |
-+-------------------+-----------+-------------+----------------+-----------------------+-------------+
-| ip                | INET      | false       | NULL           |                       | {"primary"} |
-| user_email        | STRING    | true        | NULL           |                       | {}          |
-| registration_date | DATE      | true        | NULL           |                       | {}          |
-+-------------------+-----------+-------------+----------------+-----------------------+-------------+
-(3 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO computers
-  VALUES
-    ('192.168.0.1', 'info@cockroachlabs.com', '2018-01-31'),
-    ('192.168.0.2/10', 'lauren@cockroachlabs.com', '2018-01-31'),
-    ('2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120', 'test@cockroachlabs.com', '2018-01-31');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM computers;
-~~~
-~~~
-+--------------------------------------+--------------------------+---------------------------+
-|                  ip                  |        user_email        |     registration_date     |
-+--------------------------------------+--------------------------+---------------------------+
-| 192.168.0.1                          | info@cockroachlabs.com   | 2018-01-31 00:00:00+00:00 |
-| 192.168.0.2/10                       | lauren@cockroachlabs.com | 2018-01-31 00:00:00+00:00 |
-| 2001:4f8:3:ba:2e0:81ff:fe22:d1f1/120 | test@cockroachlabs.com   | 2018-01-31 00:00:00+00:00 |
-+--------------------------------------+--------------------------+---------------------------+
-~~~
-
-## Supported casting and conversion
-
-`INET` values can be [cast](data-types.html#data-type-conversions-and-casts) to the following data type:
-
-- `STRING` - Converts to format `'Address/subnet'`.
-
-## See also
-
-- [Data Types](data-types.html)
-- [Functions and Operators](functions-and-operators.html)
diff --git a/src/current/v19.1/information-schema.md b/src/current/v19.1/information-schema.md
deleted file mode 100644
index 39a513c20d2..00000000000
--- a/src/current/v19.1/information-schema.md
+++ /dev/null
@@ -1,358 +0,0 @@
----
-title: Information Schema
-summary: The information_schema database contains read-only views that you can use for introspection into your database's tables, columns, indexes, and views.
-toc: true
----
-
-CockroachDB provides a virtual schema called `information_schema` that contains information about your database's tables, columns, indexes, and views. This information can be used for introspection and reflection.
-
-The definition of `information_schema` is part of the SQL standard and can therefore be relied on to remain stable over time. This contrasts with CockroachDB's `SHOW` statements, which provide similar data and are meant to be stable in CockroachDB but not standardized. It also contrasts with the virtual schema `crdb_internal`, which reflects the internals of CockroachDB and may thus change across CockroachDB versions.
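-
-For example, the following two statements surface similar information about the current database's tables, but only the first is covered by the SQL standard (a minimal sketch):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT table_name FROM information_schema.tables;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW TABLES;
-~~~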
-
-{{site.data.alerts.callout_info}}
-The `information_schema` views typically represent objects that the current user has privilege to access. To ensure you can view all the objects in a database, access it as the `root` user.
-{{site.data.alerts.end}}
-
-## Data exposed by information_schema
-
-To perform introspection on objects, you can either read from the related `information_schema` table or use one of CockroachDB's `SHOW` statements.
-
-Object | Information Schema Table | Corresponding `SHOW` Statement
--------|--------------|--------
-Columns | [`columns`](#columns) | [`SHOW COLUMNS`](show-columns.html)
-Constraints | [`key_column_usage`](#key_column_usage), [`referential_constraints`](#referential_constraints), [`table_constraints`](#table_constraints)| [`SHOW CONSTRAINTS`](show-constraints.html)
-Databases | [`schemata`](#schemata)| [`SHOW DATABASE`](show-vars.html)
-Indexes | [`statistics`](#statistics)| [`SHOW INDEX`](show-index.html)
-Privileges | [`schema_privileges`](#schema_privileges), [`table_privileges`](#table_privileges)| [`SHOW GRANTS`](show-grants.html)
-Roles | [`role_table_grants`](#role_table_grants) | [`SHOW ROLES`](show-roles.html)
-Sequences | [`sequences`](#sequences) | [`SHOW CREATE SEQUENCE`](show-create.html)
-Tables | [`tables`](#tables)| [`SHOW TABLES`](show-tables.html)
-Views | [`tables`](#tables), [`views`](#views)| [`SHOW CREATE`](show-create.html)
-
-## Tables in information_schema
-
-The virtual schema `information_schema` contains virtual tables, also called "system views," representing the database's objects, each of which is detailed below.
-
-These differ from regular [SQL views](views.html) in that they do not show data created from the content of other tables. Instead, CockroachDB generates the data for virtual tables when they are accessed.
-
-Currently, there are some `information_schema` tables that are empty but provided for compatibility:
-
-- `routines`
-- `parameters`
-
-{{site.data.alerts.callout_info}}
-A query can specify a table name without a database name (e.g., `SELECT * FROM information_schema.sequences`). See [Name Resolution](sql-name-resolution.html) for more information.
-{{site.data.alerts.end}}
-
-### administrable_role_authorizations
-
-`administrable_role_authorizations` identifies all roles that the current user has the admin option for.
-
-Column | Description
--------|-----------
-`grantee` | The name of the user to which this role membership was granted (always the current user).
-
-### applicable_roles
-
-`applicable_roles` identifies all roles whose privileges the current user can use. This implies there is a chain of role grants from the current user to the role in question. The current user itself is also an applicable role, but is not listed.
-
-Column | Description
--------|-----------
-`grantee` | Name of the user to which this role membership was granted (always the current user).
-`role_name` | Name of a role.
-`is_grantable` | `YES` if the grantee has the admin option on the role; `NO` if not.
-
-### columns
-
-`columns` contains information about the columns in each table.
-
-Column | Description
--------|-----------
-`table_catalog` | Name of the database containing the table.
-`table_schema` | Name of the schema containing the table.
-`table_name` | Name of the table.
-`column_name` | Name of the column.
-`ordinal_position` | Ordinal position of the column in the table (begins at 1).
-`column_default` | Default value for the column.
-`is_nullable` | `YES` if the column accepts `NULL` values; `NO` if it doesn't (e.g., it has the [`NOT NULL` constraint](not-null.html)). -`data_type` | [Data type](data-types.html) of the column. -`character_maximum_length` | If `data_type` is `STRING`, the maximum length in characters of a value; otherwise `NULL`. -`character_octet_length` | If `data_type` is `STRING`, the maximum length in octets (bytes) of a value; otherwise `NULL`. -`numeric_precision` | If `data_type` is numeric, the declared or implicit precision (i.e., number of significant digits); otherwise `NULL`. -`numeric_precision_radix` | If `data_type` identifies a numeric type, the base in which the values in the columns `numeric_precision` and `numeric_scale` are expressed (either `2` or `10`). For all other data types, column is `NULL`. -`numeric_scale` | If `data_type` is an exact numeric type, the scale (i.e., number of digits to the right of the decimal point); otherwise `NULL`. -`datetime_precision` | Always `NULL` (unsupported by CockroachDB). -`character_set_catalog` | Always `NULL` (unsupported by CockroachDB). -`character_set_schema` | Always `NULL` (unsupported by CockroachDB). -`character_set_name` | Always `NULL` (unsupported by CockroachDB). -`domain_catalog` | Always `NULL` (unsupported by CockroachDB). -`domain_schema` | Always `NULL` (unsupported by CockroachDB). -`domain_name` | Always `NULL` (unsupported by CockroachDB). -`generation_expression` | The expression used for computing the column value in a computed column. -`is_hidden` | Whether or not the column is hidden. Possible values: `true` or `false`. -`crdb_sql_type` | [Data type](data-types.html) of the column. - -### column_privileges - -`column_privileges` identifies all privileges granted on columns to or by a currently enabled role. There is one row for each combination of `grantor`, `grantee`, and column (defined by `table_catalog`, `table_schema`, `table_name`, and `column_name`). - -Column | Description --------|----------- -`grantor` | Name of the role that granted the privilege. -`grantee` | Name of the role that was granted the privilege. -`table_catalog` | Name of the database containing the table that contains the column (always the current database). -`table_schema` | Name of the schema containing the table that contains the column. -`table_name` | Name of the table. -`column_name` | Name of the column. -`privilege_type` | Name of the [privilege](authorization.html#assign-privileges). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### constraint_column_usage - -`constraint_column_usage` identifies all columns in a database that are used by some [constraint](constraints.html). - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the table that contains the column that is used by some constraint. -`table_schema` | Name of the schema that contains the table that contains the column that is used by some constraint. -`table_name` | Name of the table that contains the column that is used by some constraint. -`column_name` | Name of the column that is used by some constraint. -`constraint_catalog` | Name of the database that contains the constraint. -`constraint_schema` | Name of the schema that contains the constraint. -`constraint_name` | Name of the constraint. - -### enabled_roles - -The `enabled_roles` view identifies enabled roles for the current user. This includes both direct and indirect roles. - -Column | Description --------|----------- -`role_name` | Name of a role. 
- -### key_column_usage - -`key_column_usage` identifies columns with [`PRIMARY KEY`](primary-key.html), [`UNIQUE`](unique.html), or [foreign key / `REFERENCES`](foreign-key.html) constraints. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`table_name` | Name of the constrained table. -`column_name` | Name of the constrained column. -`ordinal_position` | Ordinal position of the column within the constraint (begins at 1). -`position_in_unique_constraint` | For foreign key constraints, ordinal position of the referenced column within its uniqueness constraint (begins at 1). - -### referential_constraints - -`referential_constraints` identifies all referential ([Foreign Key](foreign-key.html)) constraints. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`unique_constraint_catalog` | Name of the database containing the `UNIQUE` or `PRIMARY KEY` constraint that the foreign key constraint references (always the current database). -`unique_constraint_schema` | Name of the schema containing the `UNIQUE` or `PRIMARY KEY` constraint that the foreign key constraint references. -`unique_constraint_name` | Name of the `UNIQUE` or `PRIMARY KEY` constraint. -`match_option` | Match option of the foreign key constraint: `FULL`, `PARTIAL`, or `NONE`. -`update_rule` | Update rule of the foreign key constraint: `CASCADE`, `SET NULL`, `SET DEFAULT`, `RESTRICT`, or `NO ACTION`. -`delete_rule` | Delete rule of the foreign key constraint: `CASCADE`, `SET NULL`, `SET DEFAULT`, `RESTRICT`, or `NO ACTION`. -`table_name` | Name of the table containing the constraint. -`referenced_table_name` | Name of the table containing the `UNIQUE` or `PRIMARY KEY` constraint that the foreign key constraint references. - -### role_table_grants - -`role_table_grants` identifies which [privileges](authorization.html#assign-privileges) have been granted on tables or views where the grantor -or grantee is a currently enabled role. This table is identical to [`table_privileges`](#table_privileges). - -Column | Description --------|----------- -`grantor` | Name of the role that granted the privilege. -`grantee` | Name of the role that was granted the privilege. -`table_catalog` | Name of the database containing the table. -`table_schema` | Name of the schema containing the table. -`table_name` | Name of the table. -`privilege_type` | Name of the [privilege](authorization.html#assign-privileges). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). -`with_hierarchy` | Always `NULL` (unsupported by CockroachDB). - -### schema_privileges - -`schema_privileges` identifies which [privileges](authorization.html#assign-privileges) have been granted to each user at the database level. - -Column | Description --------|----------- -`grantee` | Username of user with grant. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`privilege_type` | Name of the [privilege](authorization.html#assign-privileges). 
-`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### schemata - -`schemata` identifies the database's schemas. - -Column | Description --------|----------- -`table_catalog` | Name of the database. -`table_schema` | Name of the schema. -`default_character_set_name` | Always `NULL` (unsupported by CockroachDB). -`sql_path` | Always `NULL` (unsupported by CockroachDB). - -### sequences - -`sequences` identifies [sequences](create-sequence.html) defined in a database. - -Column | Description --------|----------- -`sequence_catalog` | Name of the database that contains the sequence. -`sequence_schema` | Name of the schema that contains the sequence. -`sequence_name` | Name of the sequence. -`data_type` | The data type of the sequence. -`numeric_precision` | The (declared or implicit) precision of the sequence `data_type`. -`numeric_precision_radix` | The base of the values in which the columns `numeric_precision` and `numeric_scale` are expressed. The value is either `2` or `10`. -`numeric_scale` | The (declared or implicit) scale of the sequence `data_type`. The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column `numeric_precision_radix`. -`start_value` | The first value of the sequence. -`minimum_value` | The minimum value of the sequence. -`maximum_value` | The maximum value of the sequence. -`increment` | The value by which the sequence is incremented. A negative number creates a descending sequence. A positive number creates an ascending sequence. -`cycle_option` | Currently, all sequences are set to `NO CYCLE` and the sequence will not wrap. - -### statistics - -`statistics` identifies table [indexes](indexes.html). - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the constrained table. -`table_schema` | Name of the schema that contains the constrained table. -`table_name` | Name of the table. -`non_unique` | `NO` if the index was created with the `UNIQUE` constraint; `YES` if the index was not created with `UNIQUE`. -`index_schema` | Name of the database that contains the index. -`index_name` | Name of the index. -`seq_in_index` | Ordinal position of the column within the index (begins at 1). -`column_name` | Name of the column being indexed. -`collation` | Always `NULL` (unsupported by CockroachDB). -`cardinality` | Always `NULL` (unsupported by CockroachDB). -`direction` | `ASC` (ascending) or `DESC` (descending) order. -`storing` | `YES` if column is [stored](create-index.html#store-columns); `NO` if it's indexed or implicit. -`implicit` | `YES` if column is implicit (i.e., it is not specified in the index and not stored); `NO` if it's indexed or stored. - -### table_constraints - -`table_constraints` identifies [constraints](constraints.html) applied to tables. - -Column | Description --------|----------- -`constraint_catalog` | Name of the database containing the constraint. -`constraint_schema` | Name of the schema containing the constraint. -`constraint_name` | Name of the constraint. -`table_catalog` | Name of the database containing the constrained table. -`table_schema` | Name of the schema containing the constrained table. -`table_name` | Name of the constrained table. -`constraint_type` | Type of [constraint](constraints.html): `CHECK`, foreign key, `PRIMARY KEY`, or `UNIQUE`. -`is_deferrable` | `YES` if the constraint can be deferred; `NO` if not. 
-`initially_deferred` | `YES` if the constraint is deferrable and initially deferred; `NO` if not. - -### table_privileges - -`table_privileges` identifies which [privileges](authorization.html#assign-privileges) have been granted to each user at the table level. - -Column | Description --------|----------- -`grantor` | Always `NULL` (unsupported by CockroachDB). -`grantee` | Username of the user with grant. -`table_catalog` | Name of the database that the grant applies to. -`table_schema` | Name of the schema that the grant applies to. -`table_name` | Name of the table that the grant applies to. -`privilege_type` | Type of [privilege](authorization.html#assign-privileges): `SELECT`, `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, `REFERENCES`, or `TRIGGER`. -`is_grantable` | Always `NULL` (unsupported by CockroachDB). -`with_hierarchy` | Always `NULL` (unsupported by CockroachDB). - -### tables - -`tables` identifies tables and views in the database. - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the table. -`table_schema` | Name of the schema that contains the table. -`table_name` | Name of the table. -`table_type` | Type of the table: `BASE TABLE` for a normal table, `VIEW` for a view, or `SYSTEM VIEW` for a view created by CockroachDB. -`version` | Version number of the table; versions begin at 1 and are incremented each time an `ALTER TABLE` statement is issued on the table. Note that this column is an experimental feature used for internal purposes inside CockroachDB and its definition is subject to change without notice. - -### user_privileges - -`user_privileges` identifies global [privileges](authorization.html#assign-privileges). - -{{site.data.alerts.callout_info}}Currently, CockroachDB does not support global privileges for non-root users. Therefore, this view contains global privileges only for root. -{{site.data.alerts.end}} - -Column | Description --------|----------- -`grantee` | Username of user with grant. -`table_catalog` | Name of the database that the privilege applies to. -`privilege_type` | Type of [privilege](authorization.html#assign-privileges). -`is_grantable` | Always `NULL` (unsupported by CockroachDB). - -### views - -`views` identifies [views](views.html) in the database. - -Column | Description --------|----------- -`table_catalog` | Name of the database that contains the view. -`table_schema` | Name of the schema that contains the view. -`table_name` | Name of the view. -`view_definition` | `AS` clause used to [create the view](views.html#creating-views). -`check_option` | Always `NULL` (unsupported by CockroachDB). -`is_updatable` | Always `NULL` (unsupported by CockroachDB). -`is_insertable_into` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_updatable` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_deletable` | Always `NULL` (unsupported by CockroachDB). -`is_trigger_insertable_into` | Always `NULL` (unsupported by CockroachDB). 
- -## Examples - -### Retrieve all columns from an information schema table - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM db_name.information_schema.table_constraints; -~~~ -~~~ -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -| constraint_catalog | constraint_schema | constraint_name | table_catalog | table_schema | table_name | constraint_type | is_deferrable | initially_deferred | -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -| jsonb_test | public | primary | jsonb_test | public | programming | PRIMARY KEY | NO | NO | -+--------------------+-------------------+-----------------+---------------+--------------+-------------+-----------------+---------------+--------------------+ -~~~ - -### Retrieve specific columns from an information schema table - -{% include copy-clipboard.html %} -~~~ sql -> SELECT table_name, constraint_name FROM db_name.information_schema.table_constraints; -~~~ -~~~ -+-------------+-----------------+ -| table_name | constraint_name | -+-------------+-----------------+ -| programming | primary | -+-------------+-----------------+ -~~~ - -## See also - -- [`SHOW`](show-vars.html) -- [`SHOW COLUMNS`](show-columns.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`SHOW CREATE`](show-create.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW GRANTS`](show-grants.html) -- [`SHOW INDEX`](show-index.html) -- [`SHOW TABLES`](show-tables.html) diff --git a/src/current/v19.1/initialize-a-cluster.md b/src/current/v19.1/initialize-a-cluster.md deleted file mode 100644 index 313951fe3f3..00000000000 --- a/src/current/v19.1/initialize-a-cluster.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -title: Initialize a Cluster -summary: Perform a one-time-only initialization of a CockroachDB cluster. -toc: true ---- - -This page explains the `cockroach init` [command](cockroach-commands.html), which you use to perform a one-time initialization of a new multi-node cluster. For a full walk-through of the cluster startup and initialization process, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -{{site.data.alerts.callout_info}} -When [starting a single-node cluster](start-a-node.html#start-a-single-node-cluster), you do not need to use the `cockroach init` command. You can simply run the `cockroach start` command without the `--join` flag to start and initialize the single-node cluster. -{{site.data.alerts.end}} - -## Synopsis - -Perform a one-time initialization of a cluster: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init -~~~ - -View help: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach init --help -~~~ - -## Flags - -The `cockroach init` command supports the following [client connection](#client-connection) and [logging](#logging) flags. - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for details. - -### Logging - -By default, the `init` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -These examples assume that nodes have already been started with [`cockroach start`](start-a-node.html) but are waiting to be initialized as a new cluster. 
For a more detailed walk-through, see one of the [Manual Deployment](manual-deployment.html) tutorials.
-
-### Initialize a cluster on a node's machine
-
-**Secure:**
-
-1. SSH to the machine where the node has been started.
-
-2. Make sure the `client.root.crt` and `client.root.key` files for the `root` user are on the machine.
-
-3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of the current node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --certs-dir=certs --host=<address of this node>
-    ~~~
-
-    At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
-
-**Insecure:**
-
-1. SSH to the machine where the node has been started.
-
-2. Run the `cockroach init` command with the `--host` flag set to the address of the current node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --insecure --host=<address of this node>
-    ~~~
-
-    At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
-
-### Initialize a cluster from another machine
-
-**Secure:**
-
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node.
-
-2. Create a `certs` directory and copy the CA certificate and the client certificate and key for the `root` user into the directory.
-
-3. Run the `cockroach init` command with the `--certs-dir` flag set to the directory containing the `ca.crt` file and the files for the `root` user, and with the `--host` flag set to the address of any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --certs-dir=certs --host=<address of any node>
-    ~~~
-
-    At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
-
-**Insecure:**
-
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node.
-
-2. Run the `cockroach init` command with the `--host` flag set to the address of any node:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init --insecure --host=<address of any node>
-    ~~~
-
-    At this point, all the nodes complete startup and print helpful details to the [standard output](start-a-node.html#standard-output), such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients.
-
-## See also
-
-- [Manual Deployment](manual-deployment.html)
-- [Orchestrated Deployment](orchestration.html)
-- [Local Deployment](start-a-local-cluster.html)
-- [`cockroach start`](start-a-node.html)
-- [Other Cockroach Commands](cockroach-commands.html)
diff --git a/src/current/v19.1/insert.md b/src/current/v19.1/insert.md
deleted file mode 100644
index 7ca6f7c1bb2..00000000000
--- a/src/current/v19.1/insert.md
+++ /dev/null
@@ -1,697 +0,0 @@
----
-title: INSERT
-summary: The INSERT statement inserts one or more rows into a table.
-toc: true
----
-
-The `INSERT` [statement](sql-statements.html) inserts one or more rows into a table. In cases where inserted values conflict with uniqueness constraints, the `ON CONFLICT` clause can be used to update rather than insert rows.
-
-
-## Performance best practices
-
-- To bulk-insert data into an existing table, batch multiple rows in one [multi-row `INSERT`](#insert-multiple-rows-into-an-existing-table) statement and do not include the `INSERT` statements within a transaction. Experimentally determine the optimal batch size for your application by monitoring the performance for different batch sizes (10 rows, 100 rows, 1000 rows).
-- To bulk-insert data into a brand new table, the [`IMPORT`](import.html) statement performs better than `INSERT`.
-- In traditional SQL databases, generating and retrieving unique IDs involves using `INSERT` with `SELECT`. In CockroachDB, use the `RETURNING` clause with `INSERT` instead. See [Insert and Return Values](#insert-and-return-values) for more details.
-
-## Required privileges
-
-The user must have the `INSERT` [privilege](authorization.html#assign-privileges) on the table.
-To use `ON CONFLICT`, the user must also have the `SELECT` privilege on the table.
-To use `ON CONFLICT DO UPDATE`, the user must additionally have the `UPDATE` privilege on the table.
-
-## Synopsis
-
-    {% include {{ page.version.version }}/sql/diagrams/insert.html %}
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`common_table_expr` | See [Common Table Expressions](common-table-expressions.html).
-`table_name` | The table you want to write data to.
-`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name.
-`column_name` | The name of a column to populate during the insert.
-`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column. Also, if column names are listed after `INTO`, values must be in corresponding order; otherwise, they must follow the declared order of the columns in the table.
-`DEFAULT VALUES` | To fill all columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `select_stmt`. To fill a specific column with its default value, leave the value out of the `select_stmt` or use `DEFAULT` at the appropriate position. See the [Insert Default Values](#insert-default-values) examples below.
-`RETURNING target_list` | Return values based on rows inserted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html). See the [Insert and Return Values](#insert-and-return-values) example below.
-
-### `ON CONFLICT` clause
-
-    {% include {{ page.version.version }}/sql/diagrams/on_conflict.html %}
-
      - -Normally, when inserted values -conflict with a `UNIQUE` constraint on one or more columns, CockroachDB -returns an error. To update the affected rows instead, use an `ON -CONFLICT` clause containing the column(s) with the unique constraint -and the `DO UPDATE SET` expression set to the column(s) to be updated -(any `SET` expression supported by the [`UPDATE`](update.html) -statement is also supported here, including those with `WHERE` -clauses). To prevent the affected rows from updating while allowing -new rows to be inserted, set `ON CONFLICT` to `DO NOTHING`. See the -[Update Values `ON CONFLICT`](#update-values-on-conflict) and [Do Not -Update Values `ON CONFLICT`](#do-not-update-values-on-conflict) -examples below. - -If the values in the `SET` expression cause uniqueness conflicts, -CockroachDB will return an error. - -As a short-hand alternative to the `ON -CONFLICT` clause, you can use the [`UPSERT`](upsert.html) -statement. However, `UPSERT` does not let you specify the column(s) with -the unique constraint; it always uses the column(s) from the primary -key. Using `ON CONFLICT` is therefore more flexible. - -## Examples - -All of the examples below assume you've already created a table `accounts`: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE accounts( - id INT DEFAULT unique_rowid() PRIMARY KEY, - balance DECIMAL -); -~~~ - -### Insert a single row - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (balance, id) VALUES (10000.50, 1); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance -+----+----------+ - 1 | 10000.50 -(1 row) -~~~ - -If you do not list column names, the statement will use the columns of the table in their declared order: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM accounts; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -+-------------+-----------+-------------+----------------+-----------------------+-----------+-----------+ - id | INT8 | false | unique_rowid() | | {primary} | false - balance | DECIMAL | true | NULL | | {} | false -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts VALUES (2, 20000.75); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance -+----+----------+ - 1 | 10000.50 - 2 | 20000.75 -(2 rows) -~~~ - -### Insert multiple rows into an existing table - -{{site.data.alerts.callout_success}} -Multi-row inserts are faster than multiple single-row `INSERT` statements. As a performance best practice, we recommend batching multiple rows in one multi-row `INSERT` statement instead of using multiple single-row `INSERT` statements. Experimentally determine the optimal batch size for your application by monitoring the performance for different batch sizes (10 rows, 100 rows, 1000 rows). -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES (3, 8100.73), (4, 9400.10); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance -+----+----------+ - 1 | 10000.50 - 2 | 20000.75 - 3 | 8100.73 - 4 | 9400.10 -(4 rows) -~~~ - -### Insert multiple rows into a new table - -The [`IMPORT`](import.html) statement performs better than `INSERT` when inserting rows into a new table. 
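-
-For example, loading a brand new table from a CSV file might look like the following (a minimal sketch using a hypothetical `new_customers.csv` placed in each node's `extern` directory; see [`IMPORT`](import.html) for the full syntax and options):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> IMPORT TABLE new_customers (
-    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    name TEXT
-)
-CSV DATA ('nodelocal:///new_customers.csv');
-~~~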
- -### Insert from a `SELECT` statement - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE other_accounts ( - id INT DEFAULT unique_rowid() PRIMARY KEY, - balance DECIMAL -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO other_accounts (id, balance) VALUES (5, 350.10), (6, 150), (7, 200.10); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) SELECT id, balance FROM other_accounts WHERE id > 4; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance -+----+----------+ - 1 | 10000.50 - 2 | 20000.75 - 3 | 8100.73 - 4 | 9400.10 - 5 | 350.10 - 6 | 150 - 7 | 200.10 -(7 rows) -~~~ - -### Insert default values - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id) VALUES (8); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) VALUES (9, DEFAULT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts WHERE id in (8, 9); -~~~ - -~~~ - id | balance -+----+---------+ - 8 | NULL - 9 | NULL -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts DEFAULT VALUES; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ - id | balance -+--------------------+----------+ - 1 | 10000.50 - 2 | 20000.75 - 3 | 8100.73 - 4 | 9400.10 - 5 | 350.10 - 6 | 150 - 7 | 200.10 - 8 | NULL - 9 | NULL - 454320296521498625 | NULL -(10 rows) -~~~ - -### Insert and return values - -In this example, the `RETURNING` clause returns the `id` values of the rows inserted, which are generated server-side by the `unique_rowid()` function. The language-specific versions assume that you have installed the relevant [client drivers](install-client-drivers.html). - -{{site.data.alerts.callout_success}}This use of RETURNING mirrors the behavior of MySQL's last_insert_id() function.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}When a driver provides a query() method for statements that return results and an exec() method for statements that do not (e.g., Go), it's likely necessary to use the query() method for INSERT statements with RETURNING.{{site.data.alerts.end}} - -
-**SQL:**
-
      - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO accounts (id, balance) - VALUES (DEFAULT, 1000), (DEFAULT, 250) - RETURNING id; -~~~ - -~~~ - id -+--------------------+ - 454320445012049921 - 454320445012082689 -(2 rows) -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ python -# Import the driver. -import psycopg2 - -# Connect to the "bank" database. -conn = psycopg2.connect( - database='bank', - user='root', - host='localhost', - port=26257 -) - -# Make each statement commit immediately. -conn.set_session(autocommit=True) - -# Open a cursor to perform database operations. -cur = conn.cursor() - -# Insert two rows into the "accounts" table -# and return the "id" values generated server-side. -cur.execute( - 'INSERT INTO accounts (id, balance) ' - 'VALUES (DEFAULT, 1000), (DEFAULT, 250) ' - 'RETURNING id' -) - -# Print out the returned values. -rows = cur.fetchall() -print('IDs:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() -~~~ - -The printed values would look like: - -~~~ -IDs: -['190019066706952193'] -['190019066706984961'] -~~~ - -
      - -
      - -{% include copy-clipboard.html %} -~~~ ruby -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'root', - dbname: 'bank', - host: 'localhost', - port: 26257 -) - -# Insert two rows into the "accounts" table -# and return the "id" values generated server-side. -conn.exec( - 'INSERT INTO accounts (id, balance) '\ - 'VALUES (DEFAULT, 1000), (DEFAULT, 250) '\ - 'RETURNING id' -) do |res| - -# Print out the returned values. -puts "IDs:" - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() -~~~ - -The printed values would look like: - -~~~ -IDs: -{"id"=>"190019066706952193"} -{"id"=>"190019066706984961"} -~~~ - -
      - -
-
-{% include copy-clipboard.html %}
-~~~ go
-package main
-
-import (
-    "database/sql"
-    "fmt"
-    "log"
-
-    _ "github.com/lib/pq"
-)
-
-func main() {
-    // Connect to the "bank" database.
-    db, err := sql.Open(
-        "postgres",
-        "postgresql://root@localhost:26257/bank?sslmode=disable",
-    )
-    if err != nil {
-        log.Fatal("error connecting to the database: ", err)
-    }
-    defer db.Close()
-
-    // Insert two rows into the "accounts" table
-    // and return the "id" values generated server-side.
-    rows, err := db.Query(
-        "INSERT INTO accounts (id, balance) " +
-            "VALUES (DEFAULT, 1000), (DEFAULT, 250) " +
-            "RETURNING id",
-    )
-    if err != nil {
-        log.Fatal(err)
-    }
-
-    // Print out the returned values.
-    defer rows.Close()
-    fmt.Println("IDs:")
-    for rows.Next() {
-        var id int
-        if err := rows.Scan(&id); err != nil {
-            log.Fatal(err)
-        }
-        fmt.Printf("%d\n", id)
-    }
-}
-~~~
-
-The printed values would look like:
-
-~~~
-IDs:
-190019066706952193
-190019066706984961
-~~~
-
      - -
      - -{% include copy-clipboard.html %} -~~~ js -var async = require('async'); - -// Require the driver. -var pg = require('pg'); - -// Connect to the "bank" database. -var config = { - user: 'root', - host: 'localhost', - database: 'bank', - port: 26257 -}; - -pg.connect(config, function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - async.waterfall([ - function (next) { - // Insert two rows into the "accounts" table - // and return the "id" values generated server-side. - client.query( - `INSERT INTO accounts (id, balance) - VALUES (DEFAULT, 1000), (DEFAULT, 250) - RETURNING id;`, - next - ); - } - ], - function (err, results) { - if (err) { - console.error('error inserting into and selecting from accounts', err); - finish(); - } - // Print out the returned values. - console.log('IDs:'); - results.rows.forEach(function (row) { - console.log(row); - }); - - finish(); - }); -}); -~~~ - -The printed values would look like: - -~~~ -IDs: -{ id: '190019066706952193' } -{ id: '190019066706984961' } -~~~ - -
-
-### Update values `ON CONFLICT`
-
-When a uniqueness conflict is detected, CockroachDB stores the row in a temporary table called `excluded`. This example demonstrates how you use the columns in the temporary `excluded` table to apply updates on conflict:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance)
-    VALUES (8, 500.50)
-    ON CONFLICT (id)
-    DO UPDATE SET balance = excluded.balance;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id = 8;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 |  500.50
-(1 row)
-~~~
-
-You can also update the row using an existing value:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance)
-    VALUES (8, 500.50)
-    ON CONFLICT (id)
-    DO UPDATE SET balance = accounts.balance + excluded.balance;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id = 8;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 | 1001.00
-(1 row)
-~~~
-
-You can also use a `WHERE` clause to apply the `DO UPDATE SET` expression conditionally. In this example, the inserted balance (700) is not greater than the current balance (1001.00), so the `WHERE` condition is not met and the row is left unchanged:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance)
-    VALUES (8, 700)
-    ON CONFLICT (id)
-    DO UPDATE SET balance = excluded.balance
-    WHERE excluded.balance > accounts.balance;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id = 8;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 | 1001.00
-(1 row)
-~~~
-
-### Do not update values `ON CONFLICT`
-
-In this example, we get an error from a uniqueness conflict:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id = 8;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 | 1001.00
-(1 row)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance) VALUES (8, 125.50);
-~~~
-
-~~~
-pq: duplicate key value (id)=(8) violates unique constraint "primary"
-~~~
-
-In this example, we use `ON CONFLICT DO NOTHING` to ignore the uniqueness error and prevent the affected row from being updated:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance)
-    VALUES (8, 125.50)
-    ON CONFLICT (id)
-    DO NOTHING;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id = 8;
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 | 1001.00
-(1 row)
-~~~
-
-In this example, `ON CONFLICT DO NOTHING` prevents the first row from updating while allowing the second row to be inserted:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO accounts (id, balance)
-    VALUES (8, 125.50), (10, 450)
-    ON CONFLICT (id)
-    DO NOTHING;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM accounts WHERE id in (8, 10);
-~~~
-
-~~~
- id | balance
-+----+---------+
-  8 | 1001.00
- 10 |     450
-(2 rows)
-~~~
-
-### Import data containing duplicate rows using `ON CONFLICT` and `DISTINCT ON`
-
-If the input data for `INSERT ON CONFLICT` contains duplicate rows,
-you must use [`DISTINCT ON`](select-clause.html#eliminate-duplicate-rows) to remove these
-duplicates.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH
-    -- the following data contains duplicates on the conflict column "id":
-    inputrows AS (VALUES (8, 130), (8, 140))
-
-  INSERT INTO accounts (id, balance)
-    (SELECT DISTINCT ON(id) id, balance FROM inputrows) -- de-duplicate the input rows
-    ON CONFLICT (id)
-    DO NOTHING;
-~~~
-
-The `DISTINCT ON` clause does not guarantee which of the duplicates is
-considered.
To force the selection of a particular duplicate, use an
-`ORDER BY` clause:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> WITH
-    -- the following data contains duplicates on the conflict column "id":
-    inputrows AS (VALUES (8, 130), (8, 140))
-
-  INSERT INTO accounts (id, balance)
-    (SELECT DISTINCT ON(id) id, balance
-     FROM inputrows
-     ORDER BY balance) -- pick the lowest balance as the value to insert for each account
-    ON CONFLICT (id)
-    DO NOTHING;
-~~~
-
-{{site.data.alerts.callout_info}}
-Using `DISTINCT ON` incurs a performance cost to search and eliminate duplicates.
-For best performance, avoid using it when the input is known not to contain duplicates.
-{{site.data.alerts.end}}
-
-## See also
-
-- [Selection Queries](selection-queries.html)
-- [`DELETE`](delete.html)
-- [`UPDATE`](update.html)
-- [`UPSERT`](upsert.html)
-- [`TRUNCATE`](truncate.html)
-- [`ALTER TABLE`](alter-table.html)
-- [`DROP TABLE`](drop-table.html)
-- [`DROP DATABASE`](drop-database.html)
-- [Other SQL Statements](sql-statements.html)
diff --git a/src/current/v19.1/install-client-drivers.md b/src/current/v19.1/install-client-drivers.md
deleted file mode 100644
index 808bdc93965..00000000000
--- a/src/current/v19.1/install-client-drivers.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Client Drivers
-summary: CockroachDB supports the PostgreSQL wire protocol, so you can use any available PostgreSQL client drivers.
-toc: false
----
-
-CockroachDB supports the PostgreSQL wire protocol, so most available PostgreSQL client drivers should work with CockroachDB.
-
-{{site.data.alerts.callout_success}}
-For code samples using these drivers, see [Build an App with CockroachDB](build-an-app-with-cockroachdb.html).
-{{site.data.alerts.end}}
-
-{% include {{page.version.version}}/misc/drivers.md %}
-
-## See also
-
-- [Build an App with CockroachDB](build-an-app-with-cockroachdb.html)
-- [Third party database tools](third-party-database-tools.html)
-- [Connection parameters](connection-parameters.html)
-- [Transactions](transactions.html)
-- [Performance best practices](performance-best-practices-overview.html)
diff --git a/src/current/v19.1/install-cockroachdb-linux.html b/src/current/v19.1/install-cockroachdb-linux.html
deleted file mode 100644
index 0343a39f6a9..00000000000
--- a/src/current/v19.1/install-cockroachdb-linux.html
+++ /dev/null
@@ -1,164 +0,0 @@
----
-title: Install CockroachDB on Linux
-summary: Install CockroachDB on Mac, Linux, or Windows. Sign up for product release notes.
-tags: download, binary, homebrew
-toc: true
-key: install-cockroachdb.html
----
-
      - - - -
      - -

      See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

      - -
      -

      Download the Binary

      - - {% include {{ page.version.version }}/misc/linux-binary-prereqs.md %} - -
        -
      1. -

        Download the CockroachDB archive for Linux, and extract the binary:

        - -
        - icon/buttons/copy - -
        -
        $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.linux-amd64.tgz | tar -xz
        -
      2. -
      3. -

        Copy the binary into your PATH so it's easy to execute cockroach commands from any shell:

        - - {% include copy-clipboard.html %}
        cp -i cockroach-{{ page.release_info.version }}.linux-amd64/cockroach /usr/local/bin/
        -

        If you get a permissions error, prefix the command with sudo.

        -
      4. -
      5. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="1" %} -
      6. -
      -
      - -
      -

      Use Kubernetes

      - -

      To orchestrate CockroachDB using Kubernetes, either with configuration files or the Helm package manager, use the following tutorials:

      - - -
      - -
      -

      Use Docker

      - - {{site.data.alerts.callout_danger}}Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker. Unless you are very experienced with Docker, we recommend starting with a different installation and deployment method.{{site.data.alerts.end}} - -
        -
      1. -

        Install Docker for Linux. Please carefully check that you meet all prerequisites.

        -
      2. -
      3. -

        Confirm that the Docker daemon is running in the background:

        - -
        - icon/buttons/copy - -
        -
        $ docker version
        -

        If you do not see the server listed, start the Docker daemon.

        - - {{site.data.alerts.callout_info}}On Linux, Docker needs sudo privileges.{{site.data.alerts.end}} -
      4. -
      5. -

        Pull the image for the {{page.release_info.version}} release of CockroachDB from Docker Hub:

        - -
        - icon/buttons/copy - -
        -
        $ sudo docker pull {{page.release_info.docker_image}}:{{page.release_info.version}}
        -
        -
      6. -
      7. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="2" %} -
      8. -
      -
      - -
      -

      Build from Source

      -
        -
      1. -

        Install the following prerequisites, as necessary:

        - - - - - - - - - - - - - - - - - - - - - - -
        C++ compilerMust support C++ 11. GCC prior to 6.0 does not work due to this issue. On macOS, Xcode should suffice.
        GoVersion 1.11.6+ is required, but versions 1.12 and above are not recommended. Older versions might work via make build IGNORE_GOVERS=1.
        BashVersions 4+ are preferred, but later releases from the 3.x series are also known to work.
        CMakeVersions 3.8+ are known to work.
        AutoconfVersion 2.68 or higher is required.
        -

        A 64-bit system is strongly recommended. Building or running CockroachDB on 32-bit systems has not been tested. You'll also need at least 2GB of RAM. If you plan to run our test suite, you'll need closer to 4GB of RAM.

        -
      2. -
      3. -

        Download the CockroachDB {{ page.release_info.version }} source archive, and extract the sources:

        - -
        - icon/buttons/copy - -
        -
        $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.src.tgz | tar -xz
        -
      4. -
      5. In the extracted directory, run make build:

        - - {% include copy-clipboard.html %}
        cd cockroach-{{ page.release_info.version }}
        - - {% include copy-clipboard.html %}
        make build
        - -

        The build process can take 10+ minutes, so please be patient.

        - - {{site.data.alerts.callout_info}}The default binary contains core open-source functionality covered by the Apache License 2 (APL2) and enterprise functionality covered by the CockroachDB Community License (CCL). To build a pure open-source (APL2) version excluding enterprise functionality, use make buildoss. See this blog post for more details.{{site.data.alerts.end}} -
      6. -
      7. - -

        Install the cockroach binary into /usr/local/bin/ so it's easy to execute cockroach commands from any directory:

        - - {% include copy-clipboard.html %}
        make install
        -

        If you get a permissions error, prefix the command with sudo.

        - -

        You can also execute the cockroach binary directly from its built location, ./src/github.com/cockroachdb/cockroach/cockroach, but the rest of the documentation assumes you have the binary on your PATH.

        -
      8. -
      9. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="3" %} -
      10. -
      -
      - -

      What's next?

      - -{% include {{ page.version.version }}/misc/install-next-steps.html %} - -{% include {{ page.version.version }}/misc/diagnostics-callout.html %} diff --git a/src/current/v19.1/install-cockroachdb-mac.html b/src/current/v19.1/install-cockroachdb-mac.html deleted file mode 100644 index 57c87bdfb56..00000000000 --- a/src/current/v19.1/install-cockroachdb-mac.html +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: Install CockroachDB on Mac -summary: Install CockroachDB on Mac, Linux, or Windows. Sign up for product release notes. -tags: download, binary, homebrew -toc: true -key: install-cockroachdb.html ---- - -
      - - - -
      - -

      See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

      - -
      -

      Download the binary

      -
        -
      1. -

        Download the CockroachDB archive for OS X, and extract the binary:

        - -
        - icon/buttons/copy - -
        -
        curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz | tar -xz
        -
      2. -
      3. -

        Copy the binary into your PATH so you can execute cockroach commands from any shell:

        - - {% include copy-clipboard.html %}
        cp -i cockroach-{{ page.release_info.version }}.darwin-10.9-amd64/cockroach /usr/local/bin/
        -

        If you get a permissions error, prefix the command with sudo.

        -
      4. -
      5. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="1" %} -
      6. -
      -
      - -
      -

      Use Kubernetes

      - -

      To orchestrate CockroachDB locally using Kubernetes, either with configuration files or the Helm package manager, see Orchestrate CockroachDB Locally with Minikube.

      -
      - -
      -

      Use Docker

      - - {{site.data.alerts.callout_danger}}Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker. Unless you are very experienced with Docker, we recommend starting with a different installation and deployment method.{{site.data.alerts.end}} - -
        -
      1. -

        Install Docker for Mac. Please carefully check that you meet all prerequisites.

        -
      2. -
      3. -

        Confirm that the Docker daemon is running in the background:

        - -
        - icon/buttons/copy - -
        -
        $ docker version
        -

        If you do not see the server listed, start the Docker daemon.

        -
      4. -
      5. -

        Pull the image for the {{page.release_info.version}} release of CockroachDB from Docker Hub:

        - -
        - icon/buttons/copy - -
        -
        $ docker pull {{page.release_info.docker_image}}:{{page.release_info.version}}
        -
        -
      6. -
      7. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="2" %} -
      8. -
      -
      - -
      -

      Build from source

      -
        -
      1. -

        Install the following prerequisites, as necessary:

        - - - - - - - - - - - - - - - - - - - - - - -
        C++ compilerMust support C++ 11. GCC prior to 6.0 does not work due to this issue. On macOS, Xcode should suffice.
        GoVersion 1.11.6+ is required, but versions 1.12 and above are not recommended. Older versions might work via make build IGNORE_GOVERS=1.
        BashVersions 4+ are preferred, but later releases from the 3.x series are also known to work.
        CMakeVersions 3.8+ are known to work.
        AutoconfVersion 2.68 or higher is required.
        -

        A 64-bit system is strongly recommended. Building or running CockroachDB on 32-bit systems has not been tested. You'll also need at least 2GB of RAM. If you plan to run our test suite, you'll need closer to 4GB of RAM.

        -
      2. -
      3. -

        Download the CockroachDB {{ page.release_info.version }} source archive, and extract the sources:

        - -
        - icon/buttons/copy - -
        -
        $ curl https://binaries.cockroachdb.com/cockroach-{{ page.release_info.version }}.src.tgz | tar -xz
        -
      4. -
      5. In the extracted directory, run make build:

        - - {% include copy-clipboard.html %}
        cd cockroach-{{ page.release_info.version }}
        - - {% include copy-clipboard.html %}
        make build
        - -

        The build process can take 10+ minutes, so please be patient.

        - - {{site.data.alerts.callout_info}}The default binary contains core open-source functionality covered by the Apache License 2 (APL2) and enterprise functionality covered by the CockroachDB Community License (CCL). To build a pure open-source (APL2) version excluding enterprise functionality, use make buildoss. See this blog post for more details.{{site.data.alerts.end}} -
      6. -
      7. -

        Install the cockroach binary into /usr/local/bin/ so it's easy to execute cockroach commands from any directory:

        - - {% include copy-clipboard.html %}
        make install
        -

        If you get a permissions error, prefix the command with sudo.

        - -

        You can also execute the cockroach binary directly from its built location, ./src/github.com/cockroachdb/cockroach/cockroach, but the rest of the documentation assumes you have the binary on your PATH.

        -
      8. -
      9. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="3" %} -
      10. -
      -
      - -

      What's next?

      - -{% include {{ page.version.version }}/misc/install-next-steps.html %} - -{% include {{ page.version.version }}/misc/diagnostics-callout.html %} diff --git a/src/current/v19.1/install-cockroachdb-windows.html b/src/current/v19.1/install-cockroachdb-windows.html deleted file mode 100644 index f1af8a56a6b..00000000000 --- a/src/current/v19.1/install-cockroachdb-windows.html +++ /dev/null @@ -1,95 +0,0 @@ ---- -title: Install CockroachDB on Windows -summary: Install CockroachDB on Mac, Linux, or Windows. Sign up for product release notes. -tags: download, binary, homebrew -toc: true -key: install-cockroachdb.html ---- - -
      - - - -
      - -

      See Release Notes for what's new in the latest release, {{ page.release_info.version }}. To upgrade to this release from an older version, see Cluster Upgrade.

      - -
      -

      Download the executable

      - - {% include windows_warning.md %} - -
        -
      1. -

        Download and extract the CockroachDB {{ page.release_info.version }} archive for Windows.

        -
      2. -
      3. -

        To ensure that CockroachDB can use location-based names as time zone identifiers, download Go's official zoneinfo.zip and set the ZONEINFO environment variable to point to the zip file.

        -
      4. -
      5. -

        Open PowerShell, navigate to the directory containing the executable, and make sure it works:

        - -
        - icon/buttons/copy - -
        -
        PS C:\cockroach-{{ page.release_info.version }}.windows-6.2-amd64> .\cockroach.exe version
        -
      6. -
      7. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="1" %} -
      8. -
      -
      - -
      -

      Use Kubernetes

      - -

      To orchestrate CockroachDB locally using Kubernetes, either with configuration files or the Helm package manager, see Orchestrate CockroachDB Locally with Minikube.

      -
      - -
      -

      Use Docker

      - - {{site.data.alerts.callout_danger}}Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker. Unless you are very experienced with Docker, we recommend starting with a different installation and deployment method.{{site.data.alerts.end}} - -
        -
      1. -

        Install Docker for Windows.

        -
        Docker for Windows requires 64-bit Windows 10 Pro and Microsoft Hyper-V. Please see the official documentation for more details. Note that if your system does not satisfy the stated requirements, you can try using Docker Toolbox.
        -
      2. -
      3. -

        Open PowerShell and confirm that the Docker daemon is running in the background:

        - -
        - icon/buttons/copy - -
        -
        PS C:\Users\username> docker version
        - -

        If you do not see the server listed, start Docker for Windows.

        -
      4. -
      5. -

        Share your local drives. This makes it possible to mount local directories as data volumes to persist node data after containers are stopped or deleted.

        -
      6. -
      7. -

        Pull the image for the {{page.release_info.version}} release of CockroachDB from Docker Hub:

        - -
        - icon/buttons/copy - -
        -
        PS C:\Users\username> docker pull {{page.release_info.docker_image}}:{{page.release_info.version}}
        -
      8. -
      9. -

        Keep up-to-date with CockroachDB releases and best practices:

        -{% include marketo-install.html uid="2" %} -
      10. -
      -
      - -

      What's next?

      - -{% include {{ page.version.version }}/misc/install-next-steps.html %} - -{% include {{ page.version.version }}/misc/diagnostics-callout.html %} diff --git a/src/current/v19.1/install-cockroachdb.html b/src/current/v19.1/install-cockroachdb.html deleted file mode 100644 index 6e742bb3164..00000000000 --- a/src/current/v19.1/install-cockroachdb.html +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Install CockroachDB -summary: Install CockroachDB on Mac, Linux, or Windows. Sign up for product release notes. -toc: false -feedback: false ---- - - diff --git a/src/current/v19.1/int.md b/src/current/v19.1/int.md deleted file mode 100644 index 31d125b8783..00000000000 --- a/src/current/v19.1/int.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -title: INT -summary: CockroachDB supports various signed integer data types. -toc: true ---- - -CockroachDB supports various signed integer [data types](data-types.html). - -{{site.data.alerts.callout_info}} -For instructions showing how to auto-generate integer values (e.g., to auto-number rows in a table), see [this FAQ entry](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb). -{{site.data.alerts.end}} - -## Names and Aliases - -| Name | Allowed Width | Aliases | Range | -|--------+---------------+--------------------------------------------------+----------------------------------------------| -| `INT` | 64-bit | `INTEGER`
      `INT8`
      `INT64`
`BIGINT` | -9223372036854775807 to +9223372036854775807 |
-| `INT2` | 16-bit | `SMALLINT` | -32768 to +32767 |
-| `INT4` | 32-bit | None | -2147483648 to +2147483647 |
-| `INT8` | 64-bit | `INT` | -9223372036854775807 to +9223372036854775807 |
-
-## Syntax
-
-A constant value of type `INT` can be entered as a [numeric literal](sql-constants.html#numeric-literals).
-For example: `42`, `-1234`, or `0xCAFE`.
-
-## Size
-
-The different integer types place different constraints on the range of allowable values, but all integers are stored in the same way regardless of type. Smaller values take up less space than larger ones (based on the numeric value, not the data type).
-
-### Considerations for 64-bit signed integers
-
-By default, `INT` is an alias for `INT8`, which creates 64-bit signed integers. This differs from the Postgres default for `INT`, [which is 32 bits](https://www.postgresql.org/docs/9.6/datatype-numeric.html), and may cause issues for your application if it is not written to handle 64-bit integers, whether due to the language your application is written in, or the ORM/framework it uses to generate SQL (if any).
-
-For example, JavaScript language runtimes represent numbers as 64-bit floats, which means that the JS runtime [can only represent 53 bits of numeric accuracy](http://2ality.com/2012/04/number-encoding.html) and thus has a maximum safe integer value of 2^53 - 1, or 9007199254740991. This means that the maximum value of a default `INT` in CockroachDB is much larger than JavaScript can represent as an integer. Visually, the size difference is as follows:
-
-```
-9223372036854775807 # INT default max value
-   9007199254740991 # JS integer max value
-```
-
-Given the above, if a table contains a column with a default-sized `INT` value, and you are reading from it or writing to it via JavaScript, you will not be able to read and write values to that column correctly. This issue can pop up in a surprising way if you are using a framework that autogenerates both frontend and backend code (such as [twirp](https://github.com/twitchtv/twirp)). In such cases, you may find that your backend code can handle 64-bit signed integers, but the generated client/frontend code cannot.
-
-If your application needs to use an integer size that is different from the CockroachDB default (for these or other reasons), you can change one or both of the settings below. For example, you can set either of the below to `4` to cause `INT` and `SERIAL` to become aliases for `INT4` and `SERIAL4`, which use 32-bit integers.
-
-1. The `default_int_size` [session variable](set-vars.html).
-2. The `sql.defaults.default_int_size` [cluster setting](cluster-settings.html).
-
-{{site.data.alerts.callout_success}}
-If your application requires arbitrary precision numbers, use the [`DECIMAL`](decimal.html) data type.
-{{site.data.alerts.end}}
-
-## Examples
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE ints (a INT PRIMARY KEY, b SMALLINT);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM ints;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression |   indices   |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| a           | INT       |    false    | NULL           |                       | {"primary"} |
-| b           | SMALLINT  |    true     | NULL           |                       | {}          |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO ints VALUES (1, 32);
-~~~
-
-~~~
-INSERT 1
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM ints;
-~~~
-
-~~~
-+---+----+
-| a | b  |
-+---+----+
-| 1 | 32 |
-+---+----+
-(1 row)
-~~~
-
-## Supported casting and conversion
-
-`INT` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types:
-
-Type | Details
------|--------
-`DECIMAL` | ––
-`FLOAT` | Loses precision if the `INT` value is larger than 2^53 in magnitude.
-`BIT` | Converts to the binary representation of the integer value. If the value is negative, the sign bit is replicated on the left to fill the entire bit array.
-`BOOL` | **0** converts to `false`; all other values convert to `true`.
-`DATE` | Converts to days since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`TIMESTAMP` | Converts to seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice.
-`INTERVAL` | Converts to seconds.
-`STRING` | ––
-
-## See also
-
-- [Data Types](data-types.html)
-- [`FLOAT`](float.html)
-- [`DECIMAL`](decimal.html)
diff --git a/src/current/v19.1/intellij-idea.md b/src/current/v19.1/intellij-idea.md
deleted file mode 100644
index 759dc3ad1d5..00000000000
--- a/src/current/v19.1/intellij-idea.md
+++ /dev/null
@@ -1,94 +0,0 @@
----
-title: IntelliJ IDEA
-summary: Learn how to use IntelliJ IDEA with a CockroachDB cluster.
-toc: true
----
-
-You can use CockroachDB in [IntelliJ IDEA](https://www.jetbrains.com/idea/) as a [database data source](https://www.jetbrains.com/help/idea/managing-data-sources.html#data_sources), which lets you accomplish tasks like managing your database's schema from within your IDE.
-
-## Support
-
-As of CockroachDB {{page.version.version}}, IntelliJ IDEA has only **partial support**. This means that the application is mostly functional, but its integration still has a few rough edges.
-
-### Versions
-
-The level of support in this document was tested as of the following versions:
-
-- CockroachDB v19.1.0-beta.20190225
-- IntelliJ IDEA Ultimate 18.1.3
-- PostgreSQL JDBC 41.1
-
-{{site.data.alerts.callout_info}}
-This feature should also work with other JetBrains IDEs, such as PyCharm, but Cockroach Labs has not yet tested its integration.
-{{site.data.alerts.end}}
-
-### Warnings & Errors
-
-Users can expect to encounter the following behaviors when using CockroachDB within IntelliJ IDEA.
-
-- **Warnings** do not require any action on the user's end and can be ignored. Note that even if a message indicates that it is an "error", it can still be treated as a warning by this definition.
-- **Errors** require the user to take action to resolve the problem and cannot be ignored. - -#### Warnings - -##### [XXUUU] ERROR: could not decorrelate subquery... - -DBeaver - Select CockroachDB - -Displays once per load of schema. - -
-
-##### [42883] ERROR: unknown function: pg_function_is_visible() Failed to retrieve...
-
-DBeaver - Select CockroachDB
-
-Displays periodically. Does not impact functionality.
-
-#### Errors
-
-##### [42703] org.postgresql.util.PSQLException: ERROR: column "n.xmin" does not exist
-
-DBeaver - Select CockroachDB
-
-Requires setting **Introspect using JDBC metadata** ([details below](#set-cockroachdb-as-a-data-source-in-intellij)).
-
      - -## Set CockroachDB as a Data Source in IntelliJ - -1. Launch the **Database** tool window. (**View** > **Tool Windows** > **Database**) DBeaver - Select CockroachDB -1. Add a PostgreSQL data source. (**New (+)** > **Data Source** > **PostgreSQL**)DBeaver - Select CockroachDB -1. On the **General** tab, enter your database's connection string: - - Field | Value - ------|------- - **Host** | Your CockroachDB cluster's hostname - **Port** | Your CockroachDB cluster's port. By default, CockroachDB uses port **26257**. - **Database** | The database you want to connect to. Note that CockroachDB's notion of database differs from PostgreSQL's; you can see your cluster's databases through the [`SHOW DATABASES`](show-databases.html) command. - **User** | The user to connect as. By default, you can use **root**. - **Password** | If your cluster uses password authentication, enter the password. - **Driver** | Select or install **PostgreSQL** using a version greater than or equal to 41.1. (Older drivers have not been tested.) - - DBeaver - Select CockroachDB -1. Install or select a **PostgreSQL** driver. We recommend a version greater than or equal to 41.1. -1. If your cluster uses SSL authentication, go to the **SSH/SSL** tab, select **Use SSL** and provide the location of your certificate files. -1. Go to the **Options** tab, and then select **Introspect using JDBC metadata**.DBeaver - Select CockroachDB -1. Click **OK**. - -You can now use IntelliJ's [database tool window](https://www.jetbrains.com/help/idea/working-with-the-database-tool-window.html) to interact with your CockroachDB cluster. - -## Report Issues with IntelliJ IDEA & CockroachDB - -If you encounter issues other than those outlined above, please [file an issue on the `cockroachdb/cockroach` GitHub repo](https://github.com/cockroachdb/cockroach/issues/new?template=bug_report.md), including the following details about the environment where you encountered the issue: - -- CockroachDB version ([`cockroach version`](view-version-details.html)) -- IntelliJ IDEA version -- Operating system -- Steps to reproduce the behavior -- If possible, a trace of the SQL statements sent to CockroachDB while the error is being reproduced using [SQL query logging](query-behavior-troubleshooting.html#sql-logging). - -## See Also - -+ [Client connection parameters](connection-parameters.html) -+ [Third-Party Database Tools](third-party-database-tools.html) diff --git a/src/current/v19.1/interleave-in-parent.md b/src/current/v19.1/interleave-in-parent.md deleted file mode 100644 index 12ac66c3308..00000000000 --- a/src/current/v19.1/interleave-in-parent.md +++ /dev/null @@ -1,255 +0,0 @@ ---- -title: INTERLEAVE IN PARENT -summary: Interleaving tables improves query performance by optimizing the key-value structure of closely related table's data. -toc: true -toc_not_nested: true ---- - -Interleaving tables improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same [key-value range](frequently-asked-questions.html#how-does-cockroachdb-scale) if it's likely to be read and written together. - -{{site.data.alerts.callout_info}}Interleaving tables does not affect their behavior within SQL.{{site.data.alerts.end}} - - -## How interleaved tables work - -When tables are interleaved, data written to one table (known as the **child**) is inserted directly into another (known as the **parent**) in the key-value store. 
This is accomplished by matching the child table's Primary Key to the parent's. - -### Interleave prefix - -For interleaved tables to have Primary Keys that can be matched, the child table must use the parent table's entire Primary Key as a prefix of its own Primary Key––these matching columns are referred to as the **interleave prefix**. It's easiest to think of these columns as representing the same data, which is usually implemented with Foreign Keys. - -{{site.data.alerts.callout_success}}To formally enforce the relationship between each table's interleave prefix columns, we recommend using Foreign Key constraints.{{site.data.alerts.end}} - -For example, if you want to interleave `orders` into `customers` and the Primary Key of customers is `id`, you need to create a column representing `customers.id` as the first column in the Primary Key of `orders`—e.g., with a column called `customer`. So the data representing `customers.id` is the interleave prefix, which exists in the `orders` table as the `customer` column. - -### Key-value structure - -When you write data into the child table, it is inserted into the key-value store immediately after the parent table's key matching the interleave prefix. - -For example, if you interleave `orders` into `customers`, the `orders` data is written directly within the `customers` table in the key-value store. The following is a crude, illustrative example of what the keys would look like in this structure: - -~~~ -/customers/1 -/customers/1/orders/1000 -/customers/1/orders/1002 -/customers/2 -/customers/2/orders/1001 -/customers/2/orders/1003 -... -/customers/n/ -/customers/n/orders/ -~~~ - -By writing data in this way, related data is more likely to remain on the same key-value range, which can make it much faster to read from and write to. Using the above example, all of customer 1's data is going to be written to the same range, including its representation in both the `customers` and `orders` tables. - -## When to interleave tables - -{% include {{ page.version.version }}/faq/when-to-interleave-tables.html %} - -### Interleaved hierarchy - -Interleaved tables typically work best when the tables form a hierarchy. For example, you could interleave the table `orders` (as the child) into the table `customers` (as the parent, which represents the people who placed the orders). You can extend this example by also interleaving the tables `invoices` (as a child) and `packages` (as a child) into `orders` (as the parent). - -The entire set of these relationships is referred to as the **interleaved hierarchy**, which contains all of the tables related through [interleave prefixes](#interleave-prefix). - -### Benefits - -In general, reads, writes, and joins of values related through the interleave prefix are *much* faster. However, you can also improve performance with any of the following: - -- Filtering more columns in the interleave prefix (from left to right). - - For example, if the interleave prefix of `packages` is `(customer, order)`, filtering on `customer` would be fast, but filtering on `customer` *and* `order` would be faster. - -- Using only tables in the interleaved hierarchy. - - - -Fast deletes are available for interleaved tables that use [`ON DELETE CASCADE`](add-constraint.html#add-the-foreign-key-constraint-with-cascade). Deleting rows from such tables will use an optimized code path and run much faster, as long as the following conditions are met: - -- The table or any of its interleaved tables do not have any secondary indices. 
-- The table or any of its interleaved tables are not referenced by any other table outside of them by foreign key. -- All of the interleaved relationships use `ON DELETE CASCADE` clauses. - -The performance boost when using this fast path is several orders of magnitude, potentially reducing delete times from seconds to nanoseconds. - -For an example showing how to create tables that meet these criteria, see [Interleaved fast path deletes](#interleaved-fast-path-deletes) below. - -### Tradeoffs - -- In general, reads and deletes over ranges of table values (e.g., `WHERE column > value`) in interleaved tables are slower. - - However, an exception to this is performing operations on ranges of table values in the greatest descendant in the interleaved hierarchy that filters on all columns of the interleave prefix with constant values. - - For example, if the interleave prefix of `packages` is `(customer, order)`, filtering on the entire interleave prefix with constant values while calculating a range of table values on another column, like `WHERE customer = 1 AND order = 1001 AND delivery_date > DATE '2016-01-25'`, would still be fast. - - Another exception is the [fast path delete optimization](#fast-path-deletes), which is available if you set up your tables according to certain criteria. - -- If the amount of interleaved data stored for any Primary Key value of the root table is larger than [a key-value range's maximum size](configure-replication-zones.html#replication-zone-variables) (64MB by default), the interleaved optimizations will be diminished. - - For example, if one customer has 200MB of order data, their data is likely to be spread across multiple key-value ranges and CockroachDB will not be able to access it as quickly, despite it being interleaved. - -## Syntax - -
      - {% include {{ page.version.version }}/sql/diagrams/interleave.html %} -
      - -## Parameters - - Parameter | Description ------------|------------- - `CREATE TABLE ...` | For help with this section of the syntax, [`CREATE TABLE`](create-table.html). - `INTERLEAVE IN PARENT table_name` | The name of the parent table you want to interleave the new child table into. - `name_list` | A comma-separated list of columns from the child table's Primary Key that represent the parent table's Primary Key (i.e., the interleave prefix). - -## Requirements - -- You can only interleave tables when creating the child table. - -- Each child table's Primary Key must contain its parent table's Primary Key as a prefix (known as the **interleave prefix**). - - For example, if the parent table's primary key is `(a INT, b STRING)`, the child table's primary key could be `(a INT, b STRING, c DECIMAL)`. - - {{site.data.alerts.callout_info}}This requirement is enforced only by ensuring that the columns use the same data types. However, we recommend ensuring the columns refer to the same values by using the Foreign Key constraint.{{site.data.alerts.end}} - -- Interleaved tables cannot be the child of more than 1 parent table. However, each parent table can have many children tables. Children tables can also be parents of interleaved tables. - -## Recommendations - -- Use interleaved tables when your schema forms a hierarchy, and the Primary Key of the root table (for example, a "user ID" or "account ID") is a parameter to most of your queries. - -- To enforce the relationship between the parent and children table's Primary Keys, use [Foreign Key constraints](foreign-key.html) on the child table. - -- In cases where you're uncertain if interleaving tables will improve your queries' performance, test how tables perform under load when they're interleaved and when they aren't. - -## Examples - -### Interleaving tables - -This example creates an interleaved hierarchy between `customers`, `orders`, and `packages`, as well as the appropriate Foreign Key constraints. You can see that each child table uses its parent table's Primary Key as a prefix of its own Primary Key (the **interleave prefix**). - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE customers ( - id INT PRIMARY KEY, - name STRING(50) - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE orders ( - customer INT, - id INT, - total DECIMAL(20, 5), - PRIMARY KEY (customer, id), - CONSTRAINT fk_customer FOREIGN KEY (customer) REFERENCES customers - ) INTERLEAVE IN PARENT customers (customer); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE packages ( - customer INT, - "order" INT, - id INT, - address STRING(50), - delivered BOOL, - delivery_date DATE, - PRIMARY KEY (customer, "order", id), - CONSTRAINT fk_order FOREIGN KEY (customer, "order") REFERENCES orders - ) INTERLEAVE IN PARENT orders (customer, "order"); -~~~ - -### Interleaved fast path deletes - -This example shows how to create interleaved tables that enable our SQL engine to use a code path optimized to run much faster when deleting rows from these tables. For more information about the criteria for enabling this optimization, see [fast path deletes](#fast-path-deletes) above. 
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE items (id INT PRIMARY KEY);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE IF NOT EXISTS bundles (
-      id INT,
-      item_id INT,
-      PRIMARY KEY (item_id, id),
-      FOREIGN KEY (item_id) REFERENCES items (id) ON DELETE CASCADE ON UPDATE CASCADE
-  )
-  INTERLEAVE IN PARENT items (item_id);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE IF NOT EXISTS suppliers (
-      id INT,
-      item_id INT,
-      PRIMARY KEY (item_id, id),
-      FOREIGN KEY (item_id) REFERENCES items (id) ON DELETE CASCADE ON UPDATE CASCADE
-  )
-  INTERLEAVE IN PARENT items (item_id);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE IF NOT EXISTS orders (
-      id INT,
-      item_id INT,
-      bundle_id INT,
-      FOREIGN KEY (item_id, bundle_id) REFERENCES bundles (item_id, id) ON DELETE CASCADE ON UPDATE CASCADE,
-      PRIMARY KEY (item_id, bundle_id, id)
-  )
-  INTERLEAVE IN PARENT bundles (item_id, bundle_id);
-~~~
-
-The following statement deletes some rows from the parent table, `items`, very quickly:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DELETE FROM items WHERE id <= 5;
-~~~
-
-### Key-value storage example
-
-It can be easier to understand what interleaving tables does by seeing what it looks like in the key-value store. For example, using the above example of interleaving `orders` in `customers`, we could insert the following values:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO customers (id, name) VALUES
-    (1, 'Ha-Yun'),
-    (2, 'Emanuela');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO orders (customer, id, total) VALUES
-    (1, 1000, 100.00),
-    (2, 1001, 90.00),
-    (1, 1002, 80.00),
-    (2, 1003, 70.00);
-~~~
-
-Using an illustrative format of the key-value store (keys are on the left; values are represented by `-> value`), the data would be written like this:
-
-~~~
-/customers/1 -> 'Ha-Yun'
-/customers/1/orders/1000 -> 100.00
-/customers/1/orders/1002 -> 80.00
-/customers/2 -> 'Emanuela'
-/customers/2/orders/1001 -> 90.00
-/customers/2/orders/1003 -> 70.00
-~~~
-
-You'll notice that `customers.id` and `orders.customer` are written into the same position in the key-value store. This is how CockroachDB relates the two tables' data for the interleaved structure. By storing data this way, accessing any of the `orders` data alongside the `customers` is much faster.
-
-{{site.data.alerts.callout_info}}If we didn't set Foreign Key constraints between customers.id and orders.customer and inserted orders.customer = 3, the data would still get written into the key-value store in the expected location next to the customers table identifier, but SELECT * FROM customers WHERE id = 3 would not return any values.{{site.data.alerts.end}}
-
-To better understand how CockroachDB writes key-value data, see our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/).
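-
-Because related rows share a common key prefix, a query that filters on the interleave prefix reads one contiguous span of keys. As a quick illustration using the tables above (the output shown is illustrative only):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT id, total FROM orders WHERE customer = 1;
-~~~
-
-~~~
-  id  |   total
-+------+-----------+
- 1000 | 100.00000
- 1002 |  80.00000
-(2 rows)
-~~~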
- -## See also - -- [`CREATE TABLE`](create-table.html) -- [Foreign Keys](foreign-key.html) -- [Column Families](column-families.html) diff --git a/src/current/v19.1/internal/version-switcher-page-data.json b/src/current/v19.1/internal/version-switcher-page-data.json deleted file mode 100644 index 5ec30bf893f..00000000000 --- a/src/current/v19.1/internal/version-switcher-page-data.json +++ /dev/null @@ -1,17 +0,0 @@ ---- -layout: none ---- - -{%- capture page_folder -%}/{{ page.version.version }}/{%- endcapture -%} -{%- assign pages = site.pages | where_exp: "pages", "pages.url contains page_folder" | where_exp: "pages", "pages.name != '404.md'" -%} -{ -{%- for x in pages -%} -{%- assign key = x.url | replace: page_folder, "" -%} -{%- if x.key -%} - {%- assign key = x.key -%} -{%- endif %} - {{ key | jsonify }}: { - "url": {{ x.url | jsonify }} - }{% unless forloop.last %},{% endunless -%} -{% endfor %} -} \ No newline at end of file diff --git a/src/current/v19.1/interval.md b/src/current/v19.1/interval.md deleted file mode 100644 index 086ecb15922..00000000000 --- a/src/current/v19.1/interval.md +++ /dev/null @@ -1,105 +0,0 @@ ---- -title: INTERVAL -summary: The INTERVAL data type stores a value that represents a span of time. -toc: true ---- - -The `INTERVAL` [data type](data-types.html) stores a value that represents a span of time. - -## Aliases - -There are no aliases for the interval type. However, CockroachDB supports using uninterpreted [string literals](sql-constants.html#string-literals) in contexts where an `INTERVAL` value is otherwise expected. - -## Syntax - -A constant value of type `INTERVAL` can be expressed using an -[interpreted literal](sql-constants.html#interpreted-literals), or a -string literal -[annotated with](scalar-expressions.html#explicitly-typed-expressions) -type `INTERVAL` or -[coerced to](scalar-expressions.html#explicit-type-coercions) type -`INTERVAL`. - -`INTERVAL` constants can be expressed using the following formats: - -Format | Description --------|-------- -SQL Standard | `INTERVAL 'Y-M D H:M:S'`

      `Y-M D`: Using a single value defines days only; using two values defines years and months. Values must be integers.

      `H:M:S`: Using a single value defines seconds only; using two values defines hours and minutes. Values can be integers or floats.

      Note that each side is optional. -ISO 8601 | `INTERVAL 'P1Y2M3DT4H5M6S'` -Traditional PostgreSQL | `INTERVAL '1 year 2 months 3 days 4 hours 5 minutes 6 seconds'` -Abbreviated PostgreSQL | `INTERVAL '1 yr 2 mons 3 d 4 hrs 5 mins 6 secs'` - -CockroachDB also supports using uninterpreted -[string literals](sql-constants.html#string-literals) in contexts -where an `INTERVAL` value is otherwise expected. - -## Size - -An `INTERVAL` column supports values up to 24 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. Intervals are stored internally as months, days, and microseconds. - -## Precision - -New in v19.1: Intervals are stored with microsecond precision instead of nanoseconds, and it is no longer possible to create intervals with nanosecond precision. As a result, parsing from a [string](string.html) or converting from a [float](float.html) or [decimal](decimal.html) will round to the nearest microsecond, as will any arithmetic [operation](functions-and-operators.html#supported-operations) (add, sub, mul, div) on intervals. CockroachDB rounds (instead of truncating) to match the behavior of Postgres. - -{{site.data.alerts.callout_danger}} -When upgrading to 19.1, existing intervals with nanoseconds will no longer be able to return their nanosecond part. An existing table `t` with nanoseconds in intervals of column `s` can round them to the nearest microsecond with `UPDATE t SET s = s + '0s'`. Note that this could cause uniqueness problems if the interval is being used as a [primary key](primary-key.html). -{{site.data.alerts.end}} - -## Example - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE intervals (a INT PRIMARY KEY, b INTERVAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM intervals; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden --------------+-----------+-------------+----------------+-----------------------+-----------+----------- - a | INT8 | f | | | {primary} | f - b | INTERVAL | t | | | {} | f -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO - intervals - VALUES (1, INTERVAL '1 year 2 months 3 days 4 hours 5 minutes 6 seconds'), - (2, INTERVAL '1-2 3 4:5:6'), - (3, '1-2 3 4:5:6'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM intervals; -~~~ - -~~~ - a | b ----+------------------------------- - 1 | 1 year 2 mons 3 days 04:05:06 - 2 | 1 year 2 mons 3 days 04:05:06 - 3 | 1 year 2 mons 3 days 04:05:06 -(3 rows) -~~~ - -## Supported casting and conversion - -`INTERVAL` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`INT` | Converts to number of seconds (second precision) -`DECIMAL` | Converts to number of seconds (microsecond precision) -`FLOAT` | Converts to number of seconds (microsecond precision) -`STRING` | Converts to `h-m-s` format (microsecond precision) -`TIME` | Converts to `HH:MM:SS.SSSSSS`, the time equivalent to the interval after midnight (microsecond precision) - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/inverted-indexes.md b/src/current/v19.1/inverted-indexes.md deleted file mode 100644 index f44c197a753..00000000000 --- a/src/current/v19.1/inverted-indexes.md +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: Inverted Indexes -summary: Inverted indexes improve your database's performance and usefulness by helping SQL locate schemaless data 
in a JSONB column. -toc: true ---- - -Inverted indexes improve your database's performance by helping SQL locate the schemaless data in a [`JSONB`](jsonb.html) column. - -{{site.data.alerts.callout_success}}For a hands-on demonstration of using an inverted index to improve query performance on a JSONB column, see the JSON tutorial.{{site.data.alerts.end}} - - -## How do inverted indexes work? - -Standard [indexes](indexes.html) work well for searches based on prefixes of sorted data. However, schemaless data like [`JSONB`](jsonb.html) cannot be queried without a full table scan, since it does not adhere to ordinary value prefix comparison operators. `JSONB` needs to be indexed in a more detailed way than what a standard index provides. This is where inverted indexes prove useful. - -Inverted indexes filter on components of tokenizable data. The `JSONB` data type is built on two structures that can be tokenized: - -- **Objects** - Collections of key-value pairs where each key-value pair is a token. -- **Arrays** - Ordered lists of values where every value in the array is a token. - -For example, take the following `JSONB` value in column `person`: - -~~~ json -{ - "firstName": "John", - "lastName": "Smith", - "age": 25, - "address": { - "state": "NY", - "postalCode": "10021" - }, - "cars": [ - "Subaru", - "Honda" - ] -} -~~~ - -An inverted index for this object would have an entry per component, mapping it back to the original object: - -~~~ -"firstName": "John" -"lastName": "Smith" -"age": 25 -"address": "state": "NY" -"address": "postalCode": "10021" -"cars" : "Subaru" -"cars" : "Honda" -~~~ - -This lets you to search based on subcomponents. - -### Creation - -You can use inverted indexes to improve the performance of queries using `JSONB` columns. You can create them: - -- At the same time as the table with the `INVERTED INDEX` clause of [`CREATE TABLE`](create-table.html#create-a-table-with-secondary-and-inverted-indexes). -- For existing tables with [`CREATE INVERTED INDEX`](create-index.html). -- Using the following PostgreSQL-compatible syntax: - - ~~~ sql - > CREATE INDEX ON USING GIN (); - ~~~ - -### Selection - -If a query contains a filter against an indexed `JSONB` column that uses any of the supported operators, the inverted index is added to the set of index candidates. - -Because each query can use only a single index, CockroachDB selects the index it calculates will scan the fewest rows (i.e., the fastest). For more detail, check out our blog post [Index Selection in CockroachDB](https://www.cockroachlabs.com/blog/index-selection-cockroachdb-2/). - -To override CockroachDB's index selection, you can also force [queries to use a specific index](table-expressions.html#force-index-selection) (also known as "index hinting"). - -### Storage - -CockroachDB stores indexes directly in your key-value store. You can find more information in our blog post [Mapping Table Data to Key-Value Storage](https://www.cockroachlabs.com/blog/sql-in-cockroachdb-mapping-table-data-to-key-value-storage/). - -### Locking - -Tables are not locked during index creation thanks to CockroachDB's [schema change procedure](https://www.cockroachlabs.com/blog/how-online-schema-changes-are-possible-in-cockroachdb/). - -### Performance - -Indexes create a trade-off: they greatly improve the speed of queries, but slightly slow down writes (because new values have to be copied and sorted). The first index you create has the largest impact, but additional indexes only introduce marginal overhead. 
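-
-If you are weighing this trade-off, it can help to verify which index a query actually uses (for example, with [`EXPLAIN`](explain.html)). You can also force the use of an inverted index with the hint syntax mentioned in [Selection](#selection). The following is a sketch, assuming the `users` table and its `user_details` index from the [example](#example) below:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users@user_details WHERE user_profile @> '{"location":"NYC"}';
-~~~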
-
-### Comparisons
-
-Currently, inverted indexes only support equality comparisons using the `=` operator. If you require comparisons using `>`, `<=`, etc., you can create a computed column from your JSON payload, and then create a regular index on that column. So if you wanted to write a query where the value of "foo" is greater than three, you would:
-
-1. Create your table with a computed column:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE TABLE test (
-        id INT,
-        data JSONB,
-        foo INT AS ((data->>'foo')::INT) STORED
-      );
-    ~~~
-
-2. Create an index on your computed column:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > CREATE INDEX test_idx ON test (foo);
-    ~~~
-
-3. Execute your query with your comparison:
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SELECT * FROM test where foo > 3;
-    ~~~
-
-## Example
-
-In this example, let's create a table with a `JSONB` column and an inverted index:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (
-    profile_id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
-    last_updated TIMESTAMP DEFAULT now(),
-    user_profile JSONB,
-    INVERTED INDEX user_details (user_profile)
-  );
-~~~
-
-Then, insert a few rows of data:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO users (user_profile) VALUES
-    ('{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'),
-    ('{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'),
-    ('{"first_name": "Carl", "last_name": "Kimball", "location": "NYC", "breed": "Boston Terrier"}'
-  );
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *, jsonb_pretty(user_profile) FROM users;
-~~~
-~~~
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-| profile_id | last_updated | user_profile | jsonb_pretty |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-| 81330a51-80b2-44aa-b793-1b8d84ba69c9 | 2018-03-13 18:26:24.521541+00:00 | {"breed": "Boston Terrier", "first_name": "Carl", "last_name": | { |
-| | | "Kimball", "location": "NYC"} | |
-| | | | "breed": "Boston Terrier", |
-| | | | "first_name": "Carl", |
-| | | | "last_name": "Kimball", |
-| | | | "location": "NYC" |
-| | | | } |
-| 81c87adc-a49c-4bed-a59c-3ac417756d09 | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Ernie", "location": "Brooklyn", "status": "Looking for | { |
-| | | treats"} | |
-| | | | "first_name": "Ernie", |
-| | | | "location": "Brooklyn", |
-| | | | "status": "Looking for treats" |
-| | | | } |
-| ec0a4942-b0aa-4a04-80ae-591b3f57721e | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location": | { |
-| | | "NYC", "online": true} | |
-| | | | "first_name": "Lola", |
-| | | | "friends": 547, |
-| | | | "last_name": "Dog", |
-| | | | "location": "NYC", |
-| | | | "online": true |
-| | | | } |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+------------------------------------+
-~~~
-
-Now, run a query that filters on the `JSONB` column:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM users where user_profile @> '{"location":"NYC"}';
-~~~
-~~~
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-| profile_id | last_updated | user_profile |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-| 81330a51-80b2-44aa-b793-1b8d84ba69c9 | 2018-03-13 18:26:24.521541+00:00 | {"breed": "Boston Terrier", "first_name": "Carl", "last_name": |
-| | | "Kimball", "location": "NYC"} |
-| ec0a4942-b0aa-4a04-80ae-591b3f57721e | 2018-03-13 18:26:24.521541+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location": |
-| | | "NYC", "online": true} |
-+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+
-(2 rows)
-~~~
-
-## See also
-
-- [`JSONB`](jsonb.html)
-- [JSON tutorial](demo-json-support.html)
-- [Computed Columns](computed-columns.html)
-- [`CREATE INDEX`](create-index.html)
-- [`DROP INDEX`](drop-index.html)
-- [`RENAME INDEX`](rename-index.html)
-- [`SHOW INDEX`](show-index.html)
-- [Indexes](indexes.html)
-- [SQL Statements](sql-statements.html)
diff --git a/src/current/v19.1/joins.md b/src/current/v19.1/joins.md
deleted file mode 100644
index c773b0d27fe..00000000000
--- a/src/current/v19.1/joins.md
+++ /dev/null
@@ -1,166 +0,0 @@
----
-title: Join Expressions
-summary: Join expressions combine data from two or more table expressions.
-toc: true
----
-
-Join expressions, also called "joins", combine the results of two or more table expressions based on conditions on the values of particular columns (i.e., equality columns).
-
-Join expressions define a data source in the `FROM` sub-clause of [simple `SELECT` clauses](select-clause.html), or as a parameter to [`TABLE`](selection-queries.html#table-clause). Joins are a particular kind of [table expression](table-expressions.html).
-
-{{site.data.alerts.callout_success}}
-New in v19.1: The [cost-based optimizer](cost-based-optimizer.html) supports hint syntax to force the use of a specific join algorithm. For more information, see [Join hints](cost-based-optimizer.html#join-hints).
-{{site.data.alerts.end}}
-
-## Synopsis
-
      - {% include {{ page.version.version }}/sql/diagrams/joined_table.html %} -
      - -
-
-## Parameters
-
-Parameter | Description
-----------|------------
-`joined_table` | Another join expression.
-`table_ref` | A [table expression](table-expressions.html).
-`a_expr` | A [scalar expression](scalar-expressions.html) to use as the [`ON` join condition](#supported-join-conditions).
-`name` | A column name to use as the [`USING` join condition](#supported-join-conditions).
-
-## Supported join types
-
-CockroachDB supports the following join types:
-
-- [Inner joins](#inner-joins)
-- [Left outer joins](#left-outer-joins)
-- [Right outer joins](#right-outer-joins)
-- [Full outer joins](#full-outer-joins)
-
-### Inner joins
-
-Only the rows from the left and right operands that match the condition are returned.
-
-~~~
-<table expr> [ INNER ] JOIN <table expr> ON <val expr>
-<table expr> [ INNER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL [ INNER ] JOIN <table expr>
-<table expr> CROSS JOIN <table expr>
      -~~~ - -### Left outer joins - -For every left row where there is no match on the right, `NULL` values are returned for the columns on the right. - -~~~ -
-<table expr> LEFT [ OUTER ] JOIN <table expr> ON <val expr>
-<table expr> LEFT [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL LEFT [ OUTER ] JOIN <table expr>
      -~~~ - -### Right outer joins - -For every right row where there is no match on the left, `NULL` values are returned for the columns on the left. - -~~~ -
-<table expr> RIGHT [ OUTER ] JOIN <table expr> ON <val expr>
-<table expr> RIGHT [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL RIGHT [ OUTER ] JOIN <table expr>
      -~~~ - -### Full outer joins - -For every row on one side of the join where there is no match on the other side, `NULL` values are returned for the columns on the non-matching side. - -~~~ -
-<table expr> FULL [ OUTER ] JOIN <table expr> ON <val expr>
-<table expr> FULL [ OUTER ] JOIN <table expr> USING(<colname>, <colname>, ...)
-<table expr> NATURAL FULL [ OUTER ] JOIN <table expr>
      -~~~ - -## Supported join conditions - -CockroachDB supports the following conditions to match rows in a join: - -- No condition with `CROSS JOIN`: each row on the left is considered - to match every row on the right. -- `ON` predicates: a Boolean [scalar expression](scalar-expressions.html) - is evaluated to determine whether the operand rows match. -- `USING`: the named columns are compared pairwise from the left and - right rows; left and right rows are considered to match if the - columns are equal pairwise. -- `NATURAL`: generates an implicit `USING` condition using all the - column names that are present in both the left and right table - expressions. - -
      {{site.data.alerts.callout_danger}}NATURAL is supported for compatibility with PostgreSQL; its use in new applications is discouraged because its results can silently change in unpredictable ways when new columns are added to one of the join operands.{{site.data.alerts.end}}
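-
-For example, the `ON` and `USING` conditions can express the same equality join. A minimal sketch; the table and column names (`orders`, `customers`, `customer_id`) are assumed for illustration:
-
-~~~ sql
-> -- ON: the join predicate is written out explicitly.
-> SELECT * FROM orders JOIN customers ON orders.customer_id = customers.customer_id;
-
-> -- USING: the named column is compared pairwise and appears once in the results.
-> SELECT * FROM orders JOIN customers USING (customer_id);
-~~~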
-
-## Join algorithms
-
-CockroachDB supports the following algorithms for performing a join:
-
-- [Merge joins](#merge-joins)
-- [Hash joins](#hash-joins)
-- [Lookup joins](#lookup-joins)
-
-### Merge joins
-
-By default, CockroachDB uses [merge joins](https://en.wikipedia.org/wiki/Sort-merge_join) whenever possible because they are more performant than [hash joins](#hash-joins), computationally and in terms of memory. A merge join requires both tables to be indexed on the equality columns, and the indexes must have the same ordering. When these conditions are not met, CockroachDB resorts to the slower hash joins. Merge joins can be used only with [distributed query processing](https://www.cockroachlabs.com/blog/local-and-distributed-processing-in-cockroachdb/).
-
-Merge joins are performed on the indexed columns of two tables as follows:
-
-1. CockroachDB checks for indexes on the equality columns and that they are ordered the same (i.e., `ASC` or `DESC`).
-2. CockroachDB takes one row from each table and compares them.
-    - For inner joins:
-        - If the rows are equal, CockroachDB returns the rows.
-        - If there are multiple matches, the cartesian product of the matches is returned.
-        - If the rows are not equal, CockroachDB discards the lower-value row and repeats the process with the next row until all rows are processed.
-    - For outer joins:
-        - If the rows are equal, CockroachDB returns the rows.
-        - If there are multiple matches, the cartesian product of the matches is returned.
-        - If the rows are not equal, CockroachDB returns `NULL` for the non-matching column and repeats the process with the next row until all rows are processed.
-
-### Hash joins
-
-If a merge join cannot be used, CockroachDB uses a [hash join](https://en.wikipedia.org/wiki/Hash_join). Hash joins are computationally expensive and require additional memory.
-
-Hash joins are performed on two tables as follows:
-
-1. CockroachDB reads both tables and attempts to pick the smaller table.
-2. CockroachDB creates an in-memory [hash table](https://en.wikipedia.org/wiki/Hash_table) on the smaller table. If the hash table is too large, it will spill over to disk storage (which could affect performance).
-3. CockroachDB then scans the large table, looking up each row in the hash table.
-
-### Lookup joins
-
-The [cost-based optimizer](cost-based-optimizer.html) decides when it would be beneficial to use a lookup join. Lookup joins are used when there is a large imbalance in size between the two tables, as it only reads the smaller table and then looks up matches in the larger table. A lookup join requires that the right-hand (i.e., larger) table is indexed on the equality column.
-
-Lookup joins are performed on two tables as follows:
-
-1. CockroachDB reads each row in the small table.
-2. CockroachDB then scans (or "looks up") the larger table for matches to the smaller table and outputs the matching rows.
-
-You can override the use of lookup joins using [join hints](cost-based-optimizer.html#join-hints).
-
-## Performance best practices
-
-{{site.data.alerts.callout_info}}CockroachDB is currently undergoing major changes to evolve and improve the performance of queries using joins. The restrictions and workarounds listed in this section will be lifted or made unnecessary over time.{{site.data.alerts.end}}
-
-- Joins over [interleaved tables](interleave-in-parent.html) are usually (but not always) processed more effectively than over non-interleaved tables.
-
-- When no indexes can be used to satisfy a join, CockroachDB may load all the rows in memory that satisfy the condition on one of the join operands before starting to return result rows. This may cause joins to fail if the join condition or other `WHERE` clauses are insufficiently selective.
-- Outer joins (i.e., [left outer joins](#left-outer-joins), [right outer joins](#right-outer-joins), and [full outer joins](#full-outer-joins)) are generally processed less efficiently than [inner joins](#inner-joins). Use inner joins whenever possible. Full outer joins are the least optimized.
-- Use [`EXPLAIN`](explain.html) over queries containing joins to verify that indexes are used.
-- Use [indexes](indexes.html) for faster joins.
-
-## See also
-
-- [Cost-based Optimizer: Join Hints](cost-based-optimizer.html#join-hints)
-- [Scalar Expressions](scalar-expressions.html)
-- [Table Expressions](table-expressions.html)
-- [Simple `SELECT` Clause](select-clause.html)
-- [Selection Queries](selection-queries.html)
-- [`EXPLAIN`](explain.html)
-- [Performance Best Practices - Overview](performance-best-practices-overview.html)
-- [SQL join operation (Wikipedia)](https://en.wikipedia.org/wiki/Join_(SQL))
-- [CockroachDB's first implementation of SQL joins (CockroachDB Blog)](https://www.cockroachlabs.com/blog/cockroachdbs-first-join/)
-- [On the Way to Better SQL Joins in CockroachDB (CockroachDB Blog)](https://www.cockroachlabs.com/blog/better-sql-joins-in-cockroachdb/)
diff --git a/src/current/v19.1/jsonb.md b/src/current/v19.1/jsonb.md
deleted file mode 100644
index 81c87e898a5..00000000000
--- a/src/current/v19.1/jsonb.md
+++ /dev/null
@@ -1,202 +0,0 @@
----
-title: JSONB
-summary: The JSONB data type stores JSON (JavaScript Object Notation) data.
-toc: true
----
-
-The `JSONB` [data type](data-types.html) stores JSON (JavaScript Object Notation) data as a binary representation of the `JSONB` value, which eliminates whitespace, duplicate keys, and key ordering. `JSONB` supports [inverted indexes](inverted-indexes.html).
-
-{{site.data.alerts.callout_success}}For a hands-on demonstration of storing and querying JSON data from a third-party API, see the JSON tutorial.{{site.data.alerts.end}}
-
-## Alias
-
-In CockroachDB, `JSON` is an alias for `JSONB`.
-
-{{site.data.alerts.callout_info}}In PostgreSQL, JSONB and JSON are two different data types. In CockroachDB, the JSONB / JSON data type is similar in behavior to the JSONB data type in PostgreSQL.
-{{site.data.alerts.end}}
-
-## Considerations
-
-- The [primary key](primary-key.html), [foreign key](foreign-key.html), and [unique](unique.html) [constraints](constraints.html) cannot be used on `JSONB` values.
-- A standard [index](indexes.html) cannot be created on a `JSONB` column; you must use an [inverted index](inverted-indexes.html).
-
-## Syntax
-
-The syntax for the `JSONB` data type follows the format specified in [RFC8259](https://tools.ietf.org/html/rfc8259). A constant value of type `JSONB` can be expressed using an
-[interpreted literal](sql-constants.html#interpreted-literals) or a
-string literal
-[annotated with](scalar-expressions.html#explicitly-typed-expressions)
-type `JSONB`.
-
-There are six types of `JSONB` values:
-
-- `null`
-- Boolean
-- String
-- Number (i.e., [`decimal`](decimal.html), **not** the standard `int64`)
-- Array (i.e., an ordered sequence of `JSONB` values)
-- Object (i.e., a mapping from strings to `JSONB` values)
-
-Examples:
-
-- `'{"type": "account creation", "username": "harvestboy93"}'`
-- `'{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'`
-
-{{site.data.alerts.callout_info}}If duplicate keys are included in the input, only the last value is kept.{{site.data.alerts.end}}
-
-## Size
-
-The size of a `JSONB` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
-
-## `JSONB` Functions
-
-Function | Description | Example |
----------|-------------|---------|
-`jsonb_array_elements()` | Expands a `JSONB` array to a set of `JSONB` values. | `SELECT jsonb_array_elements('[1,true, 2,false]');`
-`jsonb_build_object(...)` | Builds a `JSONB` object out of a variadic argument list that alternates between keys and values. | `SELECT jsonb_build_object('Zoo',1,'Enter',2);`
-`jsonb_each()` | Expands the outermost `JSONB` object into a set of key-value pairs. | `SELECT * from jsonb_each('{"a":"Apple", "b":"ball"}');`
-`jsonb_object_keys()` | Returns the sorted set of keys in the outermost `JSONB` object. | `SELECT * from jsonb_object_keys('{"fb1":"abc123","fb2":{"fb3":"ant", "f4":"ball"}}');`
-`jsonb_pretty()` | Returns the given `JSONB` value as a `STRING` indented and with newlines. | See the [example](#retrieve-formatted-jsonb-data) below.
-
-For the full list of supported `JSONB` functions, see [Functions and Operators](functions-and-operators.html#jsonb-functions).
-
-## `JSONB` Operators
-
-Operator | Description | Example |
----------|-------------|---------|
-`->` | Access a `JSONB` field, returning a `JSONB` value. | `SELECT '[{"foo":"bar"}]'::JSONB->0->'foo' = '"bar"'::JSONB;`
-`->>` | Access a `JSONB` field, returning a string. | `SELECT '{"foo":"bar"}'::JSONB->>'foo' = 'bar'::STRING;`
-`@>` | Tests whether the left `JSONB` field contains the right `JSONB` field. | `SELECT ('{"foo": {"baz": 3}, "bar": 2}'::JSONB @> '{"foo": {"baz":3}}'::JSONB ) = true;`
-
-For the full list of supported `JSONB` operators, see [Functions and Operators](functions-and-operators.html).
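-
-For instance, the set-returning functions from the table above can be tried directly, without creating a table first. A quick sketch using the table's own example values:
-
-~~~ sql
-> -- Expand a JSONB array into one row per element.
-> SELECT jsonb_array_elements('[1, true, 2, false]');
-
-> -- Expand an object into (key, value) rows.
-> SELECT * FROM jsonb_each('{"a": "Apple", "b": "ball"}');
-~~~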
- -## Examples - -### Create a Table with a `JSONB` Column - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - profile_id UUID PRIMARY KEY DEFAULT gen_random_uuid(), - last_updated TIMESTAMP DEFAULT now(), - user_profile JSONB - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM users; -~~~ - -~~~ -+--------------+-----------+-------------+-------------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+--------------+-----------+-------------+-------------------+-----------------------+-------------+ -| profile_id | UUID | false | gen_random_uuid() | | {"primary"} | -| last_updated | TIMESTAMP | true | now() | | {} | -| user_profile | JSON | true | NULL | | {} | -+--------------+-----------+-------------+-------------------+-----------------------+-------------+ -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO users (user_profile) VALUES - ('{"first_name": "Lola", "last_name": "Dog", "location": "NYC", "online" : true, "friends" : 547}'), - ('{"first_name": "Ernie", "status": "Looking for treats", "location" : "Brooklyn"}'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -~~~ -~~~ -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -| profile_id | last_updated | user_profile | -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -| 33c0a5d8-b93a-4161-a294-6121ee1ade93 | 2018-02-27 16:39:28.155024+00:00 | {"first_name": "Lola", "friends": 547, "last_name": "Dog", "location": | -| | | "NYC", "online": true} | -| 6a7c15c9-462e-4551-9e93-f389cf63918a | 2018-02-27 16:39:28.155024+00:00 | {"first_name": "Ernie", "location": "Brooklyn", "status": "Looking for | -| | | treats"} | -+--------------------------------------+----------------------------------+--------------------------------------------------------------------------+ -~~~ - -### Retrieve formatted `JSONB` data - -To retrieve `JSONB` data with easier-to-read formatting, use the `jsonb_pretty()` function. For example, retrieve data from the table you created in the [first example](#create-a-table-with-a-jsonb-column): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT profile_id, last_updated, jsonb_pretty(user_profile) FROM users; -~~~ -~~~ -+--------------------------------------+----------------------------------+------------------------------------+ -| profile_id | last_updated | jsonb_pretty | -+--------------------------------------+----------------------------------+------------------------------------+ -| 33c0a5d8-b93a-4161-a294-6121ee1ade93 | 2018-02-27 16:39:28.155024+00:00 | { | -| | | "first_name": "Lola", | -| | | "friends": 547, | -| | | "last_name": "Dog", | -| | | "location": "NYC", | -| | | "online": true | -| | | } | -| 6a7c15c9-462e-4551-9e93-f389cf63918a | 2018-02-27 16:39:28.155024+00:00 | { | -| | | "first_name": "Ernie", | -| | | "location": "Brooklyn", | -| | | "status": "Looking for treats" | -| | | } | -+--------------------------------------+----------------------------------+------------------------------------+ -~~~ - -### Retrieve specific fields from a `JSONB` value - -To retrieve a specific field from a `JSONB` value, use the `->` operator. 
For example, retrieve a field from the table you created in the [first example](#create-a-table-with-a-jsonb-column): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT user_profile->'first_name',user_profile->'location' FROM users; -~~~ -~~~ -+----------------------------+--------------------------+ -| user_profile->'first_name' | user_profile->'location' | -+----------------------------+--------------------------+ -| "Lola" | "NYC" | -| "Ernie" | "Brooklyn" | -+----------------------------+--------------------------+ -~~~ - -You can also use the `->>` operator to return `JSONB` field values as `STRING` values: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT user_profile->>'first_name', user_profile->>'location' FROM users; -~~~ -~~~ -+-----------------------------+---------------------------+ -| user_profile->>'first_name' | user_profile->>'location' | -+-----------------------------+---------------------------+ -| Lola | NYC | -| Ernie | Brooklyn | -+-----------------------------+---------------------------+ -~~~ - -For the full list of functions and operators we support, see [Functions and Operators](functions-and-operators.html). - -### Create a table with a `JSONB` column and a computed column - -{% include {{ page.version.version }}/computed-columns/jsonb.md %} - -## Supported casting and conversion - -`JSONB` values can be [cast](data-types.html#data-type-conversions-and-casts) to the following data type: - -- `STRING` - -## See also - -- [JSON tutorial](demo-json-support.html) -- [Inverted Indexes](inverted-indexes.html) -- [Computed Columns](computed-columns.html) -- [Data Types](data-types.html) -- [Functions and Operators](functions-and-operators.html) diff --git a/src/current/v19.1/keywords-and-identifiers.md b/src/current/v19.1/keywords-and-identifiers.md deleted file mode 100644 index 918ed8fee49..00000000000 --- a/src/current/v19.1/keywords-and-identifiers.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Keywords & Identifiers -toc: false ---- - -SQL statements consist of two fundamental components: - -- [__Keywords__](#keywords): Words with specific meaning in SQL like `CREATE`, `INDEX`, and `BOOL` -- [__Identifiers__](#identifiers): Names for things like databases and some functions - -## Keywords - -Keywords make up SQL's vocabulary and can have specific meaning in statements. Each SQL keyword that CockroachDB supports is on one of four lists: - -- [Reserved Keywords](sql-grammar.html#reserved_keyword) -- [Type Function Name Keywords](sql-grammar.html#type_func_name_keyword) -- [Column Name Keywords](sql-grammar.html#col_name_keyword) -- [Unreserved Keywords](sql-grammar.html#unreserved_keyword) - -Reserved keywords have fixed meanings and are not typically allowed as identifiers. All other types of keywords are considered non-reserved; they have special meanings in certain contexts and can be used as identifiers in other contexts. - -### Keyword uses - -Most users asking about keywords want to know more about them in terms of: - -- __Names of objects__, covered on this page in [Identifiers](#identifiers) -- __Syntax__, covered in our pages [SQL Statements](sql-statements.html) and [SQL Grammar](sql-grammar.html) - -## Identifiers - -Identifiers are most commonly used as names of objects like databases, tables, or columns—because of this, the terms "name" and "identifier" are often used interchangeably. However, identifiers also have less-common uses, such as changing column labels with `SELECT`. 
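-
-For example, an identifier can relabel a column in query output, and double-quoting (described under the rules below) lets an identifier preserve case or use an otherwise-reserved word. The `accounts` table and its `balance` column are assumed for illustration:
-
-~~~ sql
-> -- "Account Balance" is an identifier used as a column label.
-> SELECT balance AS "Account Balance" FROM accounts;
-
-> -- Double quotes allow a reserved keyword as a table name.
-> CREATE TABLE "select" (x INT);
-~~~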
- -### Rules for Identifiers - -In our [SQL grammar](sql-grammar.html), all values that accept an `identifier` must: - -- Begin with a Unicode letter or an underscore (_). Subsequent characters can be letters, underscores, digits (0-9), or dollar signs ($). -- Not equal any [SQL keyword](#keywords) unless the keyword is accepted by the element's syntax. For example, [`name`](sql-grammar.html#name) accepts Unreserved or Column Name keywords. - -To bypass either of these rules, simply surround the identifier with double-quotes ("). You can also use double-quotes to preserve case-sensitivity in database, table, view, and column names. However, all references to such identifiers must also include double-quotes. - -{{site.data.alerts.callout_info}}Some statements have additional requirements for identifiers. For example, each table in a database must have a unique name. These requirements are documented on each statement's page.{{site.data.alerts.end}} - -## See also - -- [SQL Statements](sql-statements.html) -- [Full SQL Grammar](sql-grammar.html) diff --git a/src/current/v19.1/known-limitations.md b/src/current/v19.1/known-limitations.md deleted file mode 100644 index 745ef216a3d..00000000000 --- a/src/current/v19.1/known-limitations.md +++ /dev/null @@ -1,418 +0,0 @@ ---- -title: Known Limitations in CockroachDB v19.1 -summary: Learn about newly identified limitations in CockroachDB as well as unresolved limitations identified in earlier releases. -toc: true ---- - -This page describes newly identified limitations in the CockroachDB {{page.release_info.version}} release as well as unresolved limitations identified in earlier releases. - -## New limitations - -### Enterprise `BACKUP` does not capture database/table/column comments - -The [`COMMENT ON`](comment-on.html) statement associates comments to databases, tables, or columns. However, the internal table (`system.comments`) in which these comments are stored is not captured by enterprise [`BACKUP`](backup.html). - -As a workaround, alongside a `BACKUP`, run the [`cockroach dump`](sql-dump.html) command with `--dump-mode=schema` for each table in the backup. This will emit `COMMENT ON` statements alongside `CREATE` statements. - -[Tracking Github Issue](https://github.com/cockroachdb/cockroach/issues/44396) - -## Unresolved limitations - -### Filtering by `now()` results in a full table scan - -When filtering a query by `now()`, the [cost-based optimizer](cost-based-optimizer.html) currently cannot constrain an index on the filtered timestamp column. This results in a full table scan. 
For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE bydate (a TIMESTAMP NOT NULL, INDEX (a));
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT * FROM bydate WHERE a > (now() - '1h'::interval);
-~~~
-
-~~~
- tree | field | description
--------+-------------+---------------------------
- | distributed | true
- | vectorized | false
- scan | |
- | table | bydate@primary
- | spans | FULL SCAN
- | filter | a > (now() - '01:00:00')
-(6 rows)
-~~~
-
-As a workaround, pass the correct date into the query as a parameter to a prepared query with a placeholder, which will allow the optimizer to constrain the index correctly:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> PREPARE q AS SELECT * FROM bydate WHERE a > ($1::timestamp - '1h'::interval);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXECUTE q ('2020-05-12 00:00:00');
-~~~
-
-[Tracking Github Issue](https://github.com/cockroachdb/cockroach/issues/18836)
-
-### Adding stores to a node
-
-{% include {{ page.version.version }}/known-limitations/adding-stores-to-node.md %}
-
-### Cold starts of large clusters may require manual intervention
-
-If a cluster contains a large amount of data (>500GiB / node), and all nodes are stopped and then started at the same time, clusters can enter a state where they're unable to start up without manual intervention. In this state, logs fill up rapidly with messages like `refusing gossip from node x; forwarding to node y`, and data and metrics may become inaccessible.
-
-To exit this state, you should:
-
-1. Stop all nodes.
-2. Set the following environment variables: `COCKROACH_SCAN_INTERVAL=60m`, and `COCKROACH_SCAN_MIN_IDLE_TIME=1s`.
-3. Restart the cluster.
-
-Once restarted, monitor the Replica Quiescence graph on the [**Replication Dashboard**](admin-ui-replication-dashboard.html). When >90% of the replicas have become quiescent, conduct a rolling restart and remove the environment variables. Make sure that under-replicated ranges do not increase between restarts.
-
-Once in a stable state, the risk of this issue recurring can be mitigated by increasing your [`range_max_bytes`](configure-zone.html#variables) to 134217728 (128MiB). We always recommend testing changes to `range_max_bytes` in a development environment before making changes on production.
-
-[Tracking Github Issue](https://github.com/cockroachdb/cockroach/issues/39117)
-
-### Requests to restarted node in need of snapshots may hang
-
-When a node is offline, the [Raft logs](architecture/replication-layer.html#raft-logs) for the ranges on the node get truncated. When the node comes back online, it therefore often needs [Raft snapshots](architecture/replication-layer.html#snapshots) to get many of its ranges back up-to-date. While in this state, requests to a range will hang until its snapshot has been applied, which can take a long time.
-
-To work around this limitation, you can adjust the `kv.snapshot_recovery.max_rate` [cluster setting](cluster-settings.html) to temporarily relax the throughput rate limiting applied to snapshots.
For example, changing the rate limiting from the default 8 MB/s, at which 1 GB of snapshots takes at least 2 minutes, to 64 MB/s can result in an 8x speedup in snapshot transfers and, therefore, a much shorter interruption of requests to an impacted node: - -{% include copy-clipboard.html %} -~~~ sql -> SET CLUSTER SETTING kv.snapshot_recovery.max_rate = '64mb'; -~~~ - -Before increasing this value, however, verify that you will not end up saturating your network interfaces, and once the problem has resolved, be sure to reset to the original value. - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/37906) - -### Location-based time zone names - -When the machine running a CockroachDB node is missing time zone data, the node will be unable to resolve location-based time zone names. - -To resolve this issue on Linux, install the [`tzdata`](https://www.iana.org/time-zones) library (sometimes called `tz` or `zoneinfo`). - -To resolve this issue on Windows, download Go's official [zoneinfo.zip](https://github.com/golang/go/raw/master/lib/time/zoneinfo.zip) and set the `ZONEINFO` environment variable to point to the zip file. For step-by-step guidance on setting environment variables on Windows, see this [external article](https://www.techjunkie.com/environment-variables-windows-10/). - -Make sure to do this across all nodes in the cluster and to keep this time zone data up-to-date. - -[Tracking GitHub Issue](https://github.com/cockroachdb/cockroach/issues/32415) - -### Database and table renames are not transactional - -Database and table renames using [`RENAME DATABASE`](rename-database.html) and [`RENAME TABLE`](rename-table.html) are not transactional. - -Specifically, when run inside a [`BEGIN`](begin-transaction.html) ... [`COMMIT`](commit-transaction.html) block, it’s possible for a rename to be half-done - not persisted in storage, but visible to other nodes or other transactions. For more information, see [Table renaming considerations](rename-table.html#table-renaming-considerations). For an issue tracking this limitation, see [cockroach#12123](https://github.com/cockroachdb/cockroach/issues/12123). - -### Change data capture - -Change data capture (CDC) provides efficient, distributed, row-level change feeds into Apache Kafka for downstream processing such as reporting, caching, or full-text indexing. - -{% include {{ page.version.version }}/known-limitations/cdc.md %} - -### Admin UI may become inaccessible for secure clusters - -Accessing the Admin UI for a secure cluster now requires login information (i.e., username and password). This login information is stored in a system table that is replicated like other data in the cluster. If a majority of the nodes with the replicas of the system table data go down, users will be locked out of the Admin UI. - -### `AS OF SYSTEM TIME` in `SELECT` statements - -`AS OF SYSTEM TIME` can only be used in a top-level `SELECT` statement. That is, we do not support statements like `INSERT INTO t SELECT * FROM t2 AS OF SYSTEM TIME
<time>`.
-
-Dump just the data of specific tables to stdout:
-
-~~~ shell
-$ cockroach dump <database> <table> <table...>
--dump-mode=data
-~~~
-
-Dump just the schemas of specific tables to stdout:
-
-~~~ shell
-$ cockroach dump <database> <table> <table...>
--dump-mode=schema
-~~~
-
-Dump the schemas and data of all tables in a database to stdout:
-
-~~~ shell
-$ cockroach dump <database>
-~~~
-
-Dump just the schemas of all tables in a database to stdout:
-
-~~~ shell
-$ cockroach dump <database> --dump-mode=schema
-~~~
-
-Dump just the data of all tables in a database to stdout:
-
-~~~ shell
-$ cockroach dump <database> --dump-mode=data
-~~~
-
-Dump to a file:
-
-~~~ shell
-$ cockroach dump <database> <table>
      > dump-file.sql -~~~ - -View help: - -~~~ shell -$ cockroach dump --help -~~~ - -## Flags - -The `dump` command supports the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--as-of` | Dump table schema and/or data as they appear at the specified [timestamp](timestamp.html). See this [example](#dump-table-data-as-of-a-specific-time) for a demonstration.

Note that historical data is available only within the garbage collection window, which is determined by the [`ttlseconds`](configure-replication-zones.html) replication setting for the table (25 hours by default). If this timestamp is earlier than that window, the dump will fail.<br><br>**Default:** Current time
-`--dump-mode` | Whether to dump table and view schemas, table data, or both.<br><br>To dump just table and view schemas, set this to `schema`. To dump just table data, set this to `data`. To dump both table and view schemas and table data, leave this flag out or set it to `both`.<br><br>Table and view schemas are dumped in the order in which they can successfully be recreated. For example, if a database includes a table, a second table with a foreign key dependency on the first, and a view that depends on the second table, the dump will list the schema for the first table, then the schema for the second table, and then the schema for the view.<br><br>
      **Default:** `both` -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -{{site.data.alerts.callout_info}} -The user specified with `--user` must have the `SELECT` privilege on the target tables. -{{site.data.alerts.end}} - -### Logging - -By default, the `dump` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -{{site.data.alerts.callout_info}} -These examples use our sample `startrek` database, which you can add to a cluster via the [`cockroach gen`](generate-cockroachdb-resources.html#generate-example-data) command. Also, the examples assume that the `maxroach` user has been [granted](grant.html) the `SELECT` privilege on all target tables. -{{site.data.alerts.end}} - -### Dump a table's schema and data - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach > backup.sql -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); - -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... -~~~ - -### Dump just a table's schema - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach --dump-mode=schema > backup.sql -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); -~~~ - -### Dump just a table's data - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump startrek episodes --insecure --user=maxroach --dump-mode=data > backup.sql -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat backup.sql -~~~ - -~~~ -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... 
-~~~ - -### Dump all tables in a database - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump startrek --insecure --user=maxroach > backup.sql -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE episodes ( - id INT NOT NULL, - season INT NULL, - num INT NULL, - title STRING NULL, - stardate DECIMAL NULL, - CONSTRAINT "primary" PRIMARY KEY (id), - FAMILY "primary" (id, season, num), - FAMILY fam_1_title (title), - FAMILY fam_2_stardate (stardate) -); - -CREATE TABLE quotes ( - quote STRING NULL, - characters STRING NULL, - stardate DECIMAL NULL, - episode INT NULL, - INDEX quotes_episode_idx (episode), - FAMILY "primary" (quote, rowid), - FAMILY fam_1_characters (characters), - FAMILY fam_2_stardate (stardate), - FAMILY fam_3_episode (episode) -); - -INSERT INTO episodes (id, season, num, title, stardate) VALUES - (1, 1, 1, 'The Man Trap', 1531.1), - (2, 1, 2, 'Charlie X', 1533.6), - (3, 1, 3, 'Where No Man Has Gone Before', 1312.4), - (4, 1, 4, 'The Naked Time', 1704.2), - (5, 1, 5, 'The Enemy Within', 1672.1), - (6, 1, 6, e'Mudd\'s Women', 1329.8), - (7, 1, 7, 'What Are Little Girls Made Of?', 2712.4), - (8, 1, 8, 'Miri', 2713.5), - (9, 1, 9, 'Dagger of the Mind', 2715.1), - (10, 1, 10, 'The Corbomite Maneuver', 1512.2), - ... - -INSERT INTO quotes (quote, characters, stardate, episode) VALUES - ('"... freedom ... is a worship word..." "It is our worship word too."', 'Cloud William and Kirk', NULL, 52), - ('"Beauty is transitory." "Beauty survives."', 'Spock and Kirk', NULL, 72), - ('"Can you imagine how life could be improved if we could do away with jealousy, greed, hate ..." "It can also be improved by eliminating love, tenderness, sentiment -- the other side of the coin"', 'Dr. Roger Corby and Kirk', 2712.4, 7), - ... -~~~ - -### Dump fails (user does not have `SELECT` privilege) - -In this example, the `dump` command fails for a user that does not have the `SELECT` privilege on the `episodes` table. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump startrek episodes --insecure --user=leslieroach > backup.sql -~~~ - -~~~ -Error: pq: user leslieroach has no privileges on table episodes -Failed running "dump" -~~~ - -### Restore a table from a backup file - -In this example, a user that has the `CREATE` privilege on the `startrek` database uses the [`cockroach sql`](use-the-built-in-sql-client.html) command to recreate a table, based on a file created by the `dump` command. - -{% include copy-clipboard.html %} -~~~ shell -$ cat backup.sql -~~~ - -~~~ -CREATE TABLE quotes ( - quote STRING NULL, - characters STRING NULL, - stardate DECIMAL NULL, - episode INT NULL, - INDEX quotes_episode_idx (episode), - FAMILY "primary" (quote, rowid), - FAMILY fam_1_characters (characters), - FAMILY fam_2_stardate (stardate), - FAMILY fam_3_episode (episode) -); - -INSERT INTO quotes (quote, characters, stardate, episode) VALUES - ('"... freedom ... is a worship word..." "It is our worship word too."', 'Cloud William and Kirk', NULL, 52), - ('"Beauty is transitory." "Beauty survives."', 'Spock and Kirk', NULL, 72), - ('"Can you imagine how life could be improved if we could do away with jealousy, greed, hate ..." "It can also be improved by eliminating love, tenderness, sentiment -- the other side of the coin"', 'Dr. Roger Corby and Kirk', 2712.4, 7), - ... 
-~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --database=startrek --user=maxroach < backup.sql -~~~ - -~~~ -CREATE TABLE -INSERT 100 -INSERT 100 -~~~ - -### Dump table data as of a specific time - -In this example, we assume there were several inserts into a table both before and after `2017-03-07 19:55:00`. - -First, let's use the built-in SQL client to view the table at the current time: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.dump_test" -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 225594758537183233 | a | -| 225594758537248769 | b | -| 225594758537281537 | c | -| 225594758537314305 | d | -| 225594758537347073 | e | -| 225594758537379841 | f | -| 225594758537412609 | g | -| 225594758537445377 | h | -| 225594991654174721 | i | -| 225594991654240257 | j | -| 225594991654273025 | k | -| 225594991654305793 | l | -| 225594991654338561 | m | -| 225594991654371329 | n | -| 225594991654404097 | o | -| 225594991654436865 | p | -+--------------------+------+ -(16 rows) -~~~ - -Next, let's use a [time-travel query](select-clause.html#select-historical-data-time-travel) to view the contents of the table as of `2017-03-07 19:55:00`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.dump_test AS OF SYSTEM TIME '2017-03-07 19:55:00'" -~~~ - -~~~ -+--------------------+------+ -| id | name | -+--------------------+------+ -| 225594758537183233 | a | -| 225594758537248769 | b | -| 225594758537281537 | c | -| 225594758537314305 | d | -| 225594758537347073 | e | -| 225594758537379841 | f | -| 225594758537412609 | g | -| 225594758537445377 | h | -+--------------------+------+ -(8 rows) -~~~ - -Finally, let's use `cockroach dump` with the `--as-of` flag set to dump the contents of the table as of `2017-03-07 19:55:00`. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach dump db1 dump_test --insecure --dump-mode=data --as-of='2017-03-07 19:55:00' -~~~ - -~~~ -INSERT INTO dump_test (id, name) VALUES - (225594758537183233, 'a'), - (225594758537248769, 'b'), - (225594758537281537, 'c'), - (225594758537314305, 'd'), - (225594758537347073, 'e'), - (225594758537379841, 'f'), - (225594758537412609, 'g'), - (225594758537445377, 'h'); -~~~ - -As you can see, the results of the dump are identical to the earlier time-travel query. - -## Known limitations - -### Dumping a table with no user-visible columns - -{% include {{page.version.version}}/known-limitations/dump-table-with-no-columns.md %} - -### Importing an interleaved table from a `cockroach dump` output - -{% include {{page.version.version}}/known-limitations/import-interleaved-table.md %} - -## See also - -- [Import Data](migration-overview.html) -- [`IMPORT`](import.html) -- [Use the Built-in SQL Client](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/sql-faqs.md b/src/current/v19.1/sql-faqs.md deleted file mode 100644 index c0df4c2da33..00000000000 --- a/src/current/v19.1/sql-faqs.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -title: SQL FAQs -summary: Get answers to frequently asked questions about CockroachDB SQL. -toc: true -toc_not_nested: true ---- - - -## How do I bulk insert data into CockroachDB? - -Currently, you can bulk insert data with batches of [`INSERT`](insert.html) statements not exceeding a few MB. 
The size of your rows determines how many you can use, but 1,000 - 10,000 rows typically works best. For more details, see [Import Data](migration-overview.html).
-
-## How do I auto-generate unique row IDs in CockroachDB?
-
-{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %}
-
-## How do I generate unique, slowly increasing sequential numbers in CockroachDB?
-
-{% include {{ page.version.version }}/faq/sequential-numbers.md %}
-
-## What are the differences between `UUID`, sequences, and `unique_rowid()`?
-
-{% include {{ page.version.version }}/faq/differences-between-numberings.md %}
-
-## How do I order writes to a table to closely follow time in CockroachDB?
-
-{% include {{ page.version.version }}/faq/sequential-transactions.md %}
-
-## How do I get the last ID/SERIAL value inserted into a table?
-
-There’s no function in CockroachDB for returning last inserted values, but you can use the [`RETURNING` clause](insert.html#insert-and-return-values) of the `INSERT` statement.
-
-For example, this is how you’d use `RETURNING` to return a value auto-generated via `unique_rowid()` or [`SERIAL`](serial.html):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE users (id INT DEFAULT unique_rowid(), name STRING);
-
-> INSERT INTO users (name) VALUES ('mike') RETURNING id;
-~~~
-
-## What is transaction contention?
-
-Transaction contention occurs when transactions issued from multiple
-clients at the same time operate on the same data.
-This can cause transactions to wait on each other and decrease
-performance, like when many people try to check out with the same
-cashier at a store.
-
-For more information about contention, see [Understanding and Avoiding
-Transaction
-Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).
-
-## Does CockroachDB support `JOIN`?
-
-[CockroachDB supports SQL joins](joins.html).
-
-At this time `LATERAL` joins are not yet supported. For details, see [this Github issue](https://github.com/cockroachdb/cockroach/issues/24560).
-
-## When should I use interleaved tables?
-
-[Interleaving tables](interleave-in-parent.html) improves query performance by optimizing the key-value structure of closely related tables, attempting to keep data on the same key-value range if it's likely to be read and written together.
-
-{% include {{ page.version.version }}/faq/when-to-interleave-tables.html %}
-
-## Does CockroachDB support JSON or Protobuf datatypes?
-
-Yes, as of v2.0, the [`JSONB`](jsonb.html) data type is supported.
-
-## How do I know which index CockroachDB will select for a query?
-
-To see which indexes CockroachDB is using for a given query, you can use the [`EXPLAIN`](explain.html) statement, which will print out the query plan, including any indexes that are being used:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> EXPLAIN SELECT col1 FROM tbl1;
-~~~
-
-If you'd like to tell the query planner which index to use, you can do so via some [special syntax for index hints](table-expressions.html#force-index-selection):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT col1 FROM tbl1@idx1;
-~~~
-
-## How do I log SQL queries?
-
-{% include {{ page.version.version }}/faq/sql-query-logging.md %}
-
-## Does CockroachDB support a UUID type?
-
-Yes. For more details, see [`UUID`](uuid.html).
-
-## How does CockroachDB sort results when `ORDER BY` is not used?
- -When an [`ORDER BY`](query-order.html) clause is not used in a query, rows are processed or returned in a -non-deterministic order. "Non-deterministic" means that the actual order -can depend on the logical plan, the order of data on disk, the topology -of the CockroachDB cluster, and is generally variable over time. - -## Why are my `INT` columns returned as strings in JavaScript? - -In CockroachDB, all `INT`s are represented with 64 bits of precision, but JavaScript numbers only have 53 bits of precision. This means that large integers stored in CockroachDB are not exactly representable as JavaScript numbers. For example, JavaScript will round the integer `235191684988928001` to the nearest representable value, `235191684988928000`. Notice that the last digit is different. This is particularly problematic when using the `unique_rowid()` [function](functions-and-operators.html), since `unique_rowid()` nearly always returns integers that require more than 53 bits of precision to represent. - -To avoid this loss of precision, Node's [`pg` driver](https://github.com/brianc/node-postgres) will, by default, return all CockroachDB `INT`s as strings. - -{% include copy-clipboard.html %} -~~~ javascript -// Schema: CREATE TABLE users (id INT DEFAULT unique_rowid(), name STRING); -pgClient.query("SELECT id FROM users WHERE name = 'Roach' LIMIT 1", function(err, res) { - var idString = res.rows[0].id; - // idString === '235191684988928001' - // typeof idString === 'string' -}); -~~~ - -To perform another query using the value of `idString`, you can simply use `idString` directly, even where an `INT` type is expected. The string will automatically be coerced into a CockroachDB `INT`. - -{% include copy-clipboard.html %} -~~~ javascript -pgClient.query("UPDATE users SET name = 'Ms. Roach' WHERE id = $1", [idString], function(err, res) { - // All should be well! -}); -~~~ - -If you instead need to perform arithmetic on `INT`s in JavaScript, you will need to use a big integer library like [Long.js](https://www.npmjs.com/package/long). Do _not_ use the built-in `parseInt` function. - -{% include copy-clipboard.html %} -~~~ javascript -parseInt(idString, 10) + 1; // WRONG: returns 235191684988928000 -require('long').fromString(idString).add(1).toString(); // GOOD: returns '235191684988928002' -~~~ - -## Can I use CockroachDB as a key-value store? - -{% include {{ page.version.version }}/faq/simulate-key-value-store.html %} - -## Why are my deletes getting slower over time? - -> I need to delete a large amount of data. I'm iteratively deleting a certain number of rows using a [`DELETE`](delete.html) statement with a [`LIMIT`](limit-offset.html) clause, but it's getting slower over time. Why? - -CockroachDB relies on [multi-version concurrency control (MVCC)](architecture/storage-layer.html#mvcc) to process concurrent requests while guaranteeing [strong consistency](frequently-asked-questions.html#how-is-cockroachdb-strongly-consistent). As such, when you delete a row, it is not immediately removed from disk. The MVCC values for the row will remain until the garbage collection period defined by the [`gc.ttlseconds`](configure-replication-zones.html#gc-ttlseconds) variable in the applicable [zone configuration](show-zone-configurations.html) has passed. By default, this period is 25 hours. - -This means that with the default settings, each iteration of your `DELETE` statement must scan over all of the rows previously marked for deletion within the last 25 hours. 
This means that if you try to delete 10,000 rows 10 times within the same 25 hour period, the 10th command will have to scan over the 90,000 rows previously marked for deletion. - -If you need to iteratively delete rows in constant time, you can [alter your zone configuration](configure-replication-zones.html#overview) and change `gc.ttlseconds` to a low value like 5 minutes (i.e., `300`), and run your `DELETE` statement once per GC interval. We strongly recommend returning `gc.ttlseconds` to the default value after your large deletion is completed. - -For instructions showing how to delete specific rows, see [Delete specific rows](delete.html#delete-specific-rows). - -## See also - -- [Product FAQs](frequently-asked-questions.html) -- [Operational FAQS](operational-faqs.html) diff --git a/src/current/v19.1/sql-feature-support.md b/src/current/v19.1/sql-feature-support.md deleted file mode 100644 index 9bcfcbb397b..00000000000 --- a/src/current/v19.1/sql-feature-support.md +++ /dev/null @@ -1,171 +0,0 @@ ---- -title: SQL Feature Support in CockroachDB v19.1 -summary: Summary of CockroachDB's conformance to the SQL standard and which common extensions it supports. -toc: true ---- - -Making CockroachDB easy to use is a top priority for us, so we chose to implement SQL. However, even though SQL has a standard, no database implements all of it, nor do any of them have standard implementations of all features. - -To understand which standard SQL features we support (as well as common extensions to the standard), use the table below. - -- **Component** lists the components that are commonly considered part of SQL. -- **Supported** shows CockroachDB's level of support for the component. -- **Type** indicates whether the component is part of the SQL *Standard* or is an *Extension* created by ourselves or others. -- **Details** provides greater context about the component. - - - -## Features - -### Row values - - Component | Supported | Type | Details ------------|-----------|------|--------- - Identifiers | ✓ | Standard | [Identifiers documentation](keywords-and-identifiers.html#identifiers) - `INT` | ✓ | Standard | [`INT` documentation](int.html) - `FLOAT`, `REAL` | ✓ | Standard | [`FLOAT` documentation](float.html) - `BOOLEAN` | ✓ | Standard | [`BOOL` documentation](bool.html) - `DECIMAL`, `NUMERIC` | ✓ | Standard | [`DECIMAL` documentation](decimal.html) - `NULL` | ✓ | Standard | [*NULL*-handling documentation](null-handling.html) - `BYTES` | ✓ | CockroachDB Extension | [`BYTES` documentation](bytes.html) - Automatic key generation | ✓ | Common Extension | [Automatic key generation FAQ](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) - `STRING`, `CHARACTER` | ✓ | Standard | [`STRING` documentation](string.html) - `COLLATE` | ✓ | Standard | [`COLLATE` documentation](collate.html) - `AUTO INCREMENT` | Alternative | Common Extension | [Automatic key generation FAQ](sql-faqs.html#how-do-i-auto-generate-unique-row-ids-in-cockroachdb) - Key-value pairs | Alternative | Extension | [Key-Value FAQ](sql-faqs.html#can-i-use-cockroachdb-as-a-key-value-store) - `ARRAY` | ✓ | Standard | [`ARRAY` documentation](array.html) - `UUID` | ✓ | PostgreSQL Extension | [`UUID` documentation](uuid.html) - JSON | ✓ | Common Extension | [`JSONB` documentation](jsonb.html) - `TIME` | ✓ | Standard | [`TIME` documentation](time.html) - XML | ✗ | Standard | XML data can be stored as `BYTES`, but we do not offer XML parsing. 
- `UNSIGNED INT` | ✗ | Common Extension | `UNSIGNED INT` causes numerous casting issues, so we do not plan to support it.
- `SET`, `ENUM` | ✗ | MySQL, PostgreSQL Extension | Only allow rows to contain values from a defined set of terms.
- `INET` | ✓ | PostgreSQL Extension | [`INET` documentation](inet.html)
-
-### Constraints
-
- Component | Supported | Type | Details
-----------|-----------|------|---------
- Not Null | ✓ | Standard | [Not Null documentation](not-null.html)
- Unique | ✓ | Standard | [Unique documentation](unique.html)
- Primary Key | ✓ | Standard | [Primary Key documentation](primary-key.html)
- Check | ✓ | Standard | [Check documentation](check.html)
- Foreign Key | ✓ | Standard | [Foreign Key documentation](foreign-key.html)
- Default Value | ✓ | Standard | [Default Value documentation](default-value.html)
-
-### Transactions
-
- Component | Supported | Type | Details
-----------|-----------|------|---------
- Transactions (ACID semantics) | ✓ | Standard | [Transactions documentation](transactions.html)
- `BEGIN` | ✓ | Standard | [`BEGIN` documentation](begin-transaction.html)
- `COMMIT` | ✓ | Standard | [`COMMIT` documentation](commit-transaction.html)
- `ROLLBACK` | ✓ | Standard | [`ROLLBACK` documentation](rollback-transaction.html)
- `SAVEPOINT` | ✓ | CockroachDB Extension | While `SAVEPOINT` is part of the SQL standard, we only support [our extension of it](transactions.html#transaction-retries).
-
-### Indexes
-
- Component | Supported | Type | Details
-----------|-----------|------|---------
- Indexes | ✓ | Common Extension | [Indexes documentation](indexes.html)
- Multi-column indexes | ✓ | Common Extension | We do not limit the number of columns indexes can include.
- Covering indexes | ✓ | Common Extension | [Storing Columns documentation](create-index.html#store-columns)
- Inverted indexes | ✓ | Common Extension | [Inverted Indexes documentation](inverted-indexes.html)
- Multiple indexes per query | Planned | Common Extension | Use multiple indexes to filter the table's values for a single query
- Full-text indexes | Planned | Common Extension | [GitHub issue tracking full-text index support](https://github.com/cockroachdb/cockroach/issues/7821)
- Prefix/Expression Indexes | Potential | Common Extension | Apply expressions (such as `LOWER()`) to values before indexing them
- Geospatial indexes | Potential | Common Extension | Improves performance of queries calculating geospatial data
- Hash indexes | ✗ | Common Extension | Improves performance of queries looking for single, exact values
- Partial indexes | ✗ | Common Extension | Only index specific rows from indexed columns
-
-### Schema changes
-
- Component | Supported | Type | Details
-----------|-----------|------|---------
- `ALTER TABLE` | ✓ | Standard | [`ALTER TABLE` documentation](alter-table.html)
- Database renames | ✓ | Standard | [`RENAME DATABASE` documentation](rename-database.html)
- Table renames | ✓ | Standard | [`RENAME TABLE` documentation](rename-table.html)
- Column renames | ✓ | Standard | [`RENAME COLUMN` documentation](rename-column.html)
- Adding columns | ✓ | Standard | [`ADD COLUMN` documentation](add-column.html)
- Removing columns | ✓ | Standard | [`DROP COLUMN` documentation](drop-column.html)
- Adding constraints | ✓ | Standard | [`ADD CONSTRAINT` documentation](add-constraint.html)
- Removing constraints | ✓ | Standard | [`DROP CONSTRAINT` documentation](drop-constraint.html)
- Index renames | ✓ | Standard | [`RENAME INDEX` documentation](rename-index.html)
- Adding 
indexes | ✓ | Standard | [`CREATE INDEX` documentation](create-index.html) - Removing indexes | ✓ | Standard | [`DROP INDEX` documentation](drop-index.html) - -### Statements - - Component | Supported | Type | Details ------------|-----------|------|--------- - Common statements | ✓ | Standard | [SQL Statements documentation](sql-statements.html) - `UPSERT` | ✓ | PostgreSQL, MSSQL Extension | [`UPSERT` documentation](upsert.html) - `EXPLAIN` | ✓ | Common Extension | [`EXPLAIN` documentation](explain.html) - `SELECT INTO` | Alternative | Common Extension | You can replicate similar functionality using [`CREATE TABLE`](create-table.html) and then `INSERT INTO ... SELECT ...`. - -### Clauses - - Component | Supported | Type | Details ------------|-----------|------|--------- - Common clauses | ✓ | Standard | [SQL Grammar documentation](sql-grammar.html) - `LIMIT` | ✓ | Common Extension | Limit the number of rows a statement returns. - `LIMIT` with `OFFSET` | ✓ | Common Extension | Skip a number of rows, and then limit the size of the return set. - `RETURNING` | ✓ | Common Extension | Retrieve a table of rows statements affect. - -### Table expressions - - Component | Supported | Type | Details ------------|-----------|------|--------- - Table and View references | ✓ | Standard | [Table expressions documentation](table-expressions.html#table-or-view-names) - `AS` in table expressions | ✓ | Standard | [Aliased table expressions documentation](table-expressions.html#aliased-table-expressions) - `JOIN` (`INNER`, `LEFT`, `RIGHT`, `FULL`, `CROSS`) | [Functional](https://www.cockroachlabs.com/blog/better-sql-joins-in-cockroachdb/) | Standard | [Join expressions documentation](table-expressions.html#join-expressions) - Sub-queries as table expressions | Partial | Standard | Non-correlated subqueries are [supported](table-expressions.html#subqueries-as-table-expressions), as are most [correlated subqueries](subqueries.html#correlated-subqueries). 
- Table generator functions | Partial | PostgreSQL Extension | [Table generator functions documentation](table-expressions.html#table-generator-functions) - `WITH ORDINALITY` | ✓ | CockroachDB Extension | [Ordinality annotation documentation](table-expressions.html#ordinality-annotation) - -### Scalar expressions and boolean formulas - - Component | Supported | Type | Details ------------|-----------|------|--------- - Common functions | ✓ | Standard | [Functions calls and SQL special forms documentation](scalar-expressions.html#function-calls-and-sql-special-forms) - Common operators | ✓ | Standard | [Operators documentation](scalar-expressions.html#unary-and-binary-operations) - `IF`/`CASE`/`NULLIF` | ✓ | Standard | [Conditional expressions documentation](scalar-expressions.html#conditional-expressions) - `COALESCE`/`IFNULL` | ✓ | Standard | [Conditional expressions documentation](scalar-expressions.html#conditional-expressions) -`AND`/`OR` | ✓ | Standard | [Conditional expressions documentation](scalar-expressions.html#conditional-expressions) - `LIKE`/`ILIKE` | ✓ | Standard | [String pattern matching documentation](scalar-expressions.html#string-pattern-matching) - `SIMILAR TO` | ✓ | Standard | [SQL regexp pattern matching documentation](scalar-expressions.html#string-matching-using-sql-regular-expressions) - Matching using POSIX regular expressions | ✓ | Common Extension | [POSIX regexp pattern matching documentation](scalar-expressions.html#string-matching-using-posix-regular-expressions) - `EXISTS` | Partial | Standard | Non-correlated subqueries are [supported](scalar-expressions.html#existence-test-on-the-result-of-subqueries), as are most [correlated subqueries](subqueries.html#correlated-subqueries). Currently works only with small data sets. - Scalar subqueries | Partial | Standard | Non-correlated subqueries are [supported](scalar-expressions.html#scalar-subqueries), as are most [correlated subqueries](subqueries.html#correlated-subqueries). Currently works only with small data sets. 
- Bitwise arithmetic | ✓ | Common Extension | [Operators documentation](scalar-expressions.html#unary-and-binary-operations) - Array constructors and subscripting | Partial | PostgreSQL Extension | Array expression documentation: [Constructor syntax](scalar-expressions.html#array-constructors) and [Subscripting](scalar-expressions.html#subscripted-expressions) - `COLLATE`| ✓ | Standard | [Collation expressions documentation](scalar-expressions.html#collation-expressions) - Column ordinal references | ✓ | CockroachDB Extension | [Column references documentation](scalar-expressions.html#column-references) - Type annotations | ✓ | CockroachDB Extension | [Type annotations documentation](scalar-expressions.html#explicitly-typed-expressions) - -### Permissions - - Component | Supported | Type | Details ------------|-----------|------|--------- - Users | ✓ | Standard | [`GRANT` documentation](grant.html) - Privileges | ✓ | Standard | [Privileges documentation](authorization.html#assign-privileges) - -### Miscellaneous - - Component | Supported | Type | Details ------------|-----------|------|--------- - Column families | ✓ | CockroachDB Extension | [Column Families documentation](column-families.html) - Interleaved tables | ✓ | CockroachDB Extension | [Interleaved Tables documentation](interleave-in-parent.html) - Parallel Statement Execution | ✓ | CockroachDB Extension | [Parallel Statement Execution documentation](parallel-statement-execution.html) - Information Schema | ✓ | Standard | [Information Schema documentation](information-schema.html) - Views | ✓ | Standard | [Views documentation](views.html) - Window functions | ✓ | Standard | [Window Functions documentation](window-functions.html) - Common Table Expressions | Partial | Common Extension | [Common Table Expressions documentation](common-table-expressions.html) - Stored Procedures | Planned | Common Extension | Execute a procedure explicitly. - Cursors | ✗ | Standard | Traverse a table's rows. - Triggers | ✗ | Standard | Execute a set of commands whenever a specified event occurs. - Sequences | ✓ | Common Extension | [`CREATE SEQUENCE` documentation](create-sequence.html) diff --git a/src/current/v19.1/sql-grammar.md b/src/current/v19.1/sql-grammar.md deleted file mode 100644 index f0dec490c08..00000000000 --- a/src/current/v19.1/sql-grammar.md +++ /dev/null @@ -1,44 +0,0 @@ ---- -title: SQL Grammar -summary: The full SQL grammar for CockroachDB, generated automatically from the CockroachDB code. -toc: true -back_to_top: true ---- - - - -{{site.data.alerts.callout_success}} -This page describes the full CockroachDB SQL grammar. However, as a starting point, it's best to reference our [SQL statements pages](sql-statements.html) first, which provide detailed explanations and examples. -{{site.data.alerts.end}} - -{% comment %} -TODO: clean up the SQL diagrams not to link to these missing nonterminals. -{% endcomment %} - - - - - - - - - - - - - - -
      - {% include {{ page.version.version }}/sql/diagrams/stmt_block.html %} -
      diff --git a/src/current/v19.1/sql-name-resolution.md b/src/current/v19.1/sql-name-resolution.md deleted file mode 100644 index 3b80a2033ae..00000000000 --- a/src/current/v19.1/sql-name-resolution.md +++ /dev/null @@ -1,261 +0,0 @@ ---- -title: Name Resolution -summary: Table and function names can exist in multiple places. Resolution decides which one to use. -toc: true ---- - -This page documents how CockroachDB performs **name resolution**. - -To reference an object (e.g., a table) in a query, you can specify a database, a schema, both, or neither. To resolve which object a query references, CockroachDB scans the [appropriate namespaces](#naming-hierarchy), following [a set of rules outlined below](#how-name-resolution-works). - -## Naming hierarchy - -For compatibility with PostgreSQL, CockroachDB supports a **three-level structure for names**. This is called the "naming hierarchy". - -In the naming hierarchy, the path to a stored object has three components: - -- database name (also called "catalog") -- schema name -- object name - -In CockroachDB versions < v20.2, user-defined schemas are not supported, and the only schema available for stored objects is the preloaded `public` schema. As a result, CockroachDB effectively supports a two-level storage structure: databases and objects. To provide a multi-level structure for stored objects, we recommend using database namespaces in the same way as [schema namespaces are used in PostgreSQL](https://www.postgresql.org/docs/current/ddl-schemas.html). A CockroachDB cluster can store multiple databases, and each database can store multiple tables/views/sequences. The list of all databases can be obtained with [`SHOW DATABASES`](show-databases.html). - -In addition to the `public` schema, CockroachDB supports a fixed set of virtual schemas, available in every database, that provide ancillary, non-stored data to client applications. For example, [`information_schema`](information-schema.html) is provided for compatibility with the SQL standard. The list of all schemas for a given database can be obtained with [`SHOW SCHEMAS`](show-schemas.html). The list of all objects for a given schema can be obtained with other `SHOW` statements. - -## How name resolution works - -Name resolution occurs separately to **look up existing objects** and to -**decide the full name of a new object**. - -The rules to look up an existing object are as follows: - -1. If the name already fully specifies the database and schema, use that information. -2. If the name has a single component prefix, try to find a schema with the prefix name in the [current database](#current-database). If that fails, try to find the object in the `public` schema of a database with the prefix name. -3. If the name has no prefix, use the [search path](#search-path) with the [current database](#current-database). - -Similarly, the rules to decide the full name of a new object are as follows: - -1. If the name already fully specifies the database and schema, use that. -2. If the name has a single component prefix, try to find a schema with that name. If no such schema exists, use the `public` schema in the database with the prefix name. -3. If the name has no prefix, use the [current schema](#current-schema) in the [current database](#current-database). - -## Parameters for name resolution - -### Current database - -The current database is used when a name is unqualified or has only one component prefix. It is the current value of the `database` session variable. 
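-
-As a minimal sketch of the statements described in the bullets below (assuming a database named `mydb` exists, as in the examples later on this page):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW database;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET database = mydb;
-~~~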
-
-- You can view the current value of the `database` session variable with [`SHOW
-database`](show-vars.html) and change it with [`SET database`](set-vars.html).
-
-- You can inspect the list of valid database names that can be specified in `database` with [`SHOW DATABASES`](show-databases.html).
-
-- For client apps that connect to CockroachDB using a URL of the form `postgres://...`, the initial value of the `database` session variable can be set using the path component of the URL. For example, `postgres://node/mydb` sets `database` to `mydb` when the connection is established.
-
-### Search path
-
-The search path is used when a name is unqualified (has no prefix). It lists the schemas where objects are looked up. Its first element is also the [current schema](#current-schema) where new objects are created.
-
-- You can set the current search path with [`SET search_path`](set-vars.html) and inspect it with [`SHOW
-search_path`](show-vars.html).
-
-- You can inspect the list of valid schemas that can be listed in `search_path` with [`SHOW SCHEMAS`](show-schemas.html).
-
-- By default, the search path contains `public` and `pg_catalog`. For compatibility with PostgreSQL, `pg_catalog` is forced to be present in `search_path` at all times, even when not specified with
-`SET search_path`.
-
-### Current schema
-
-The current schema is used as the target schema when creating a new object if the name is unqualified (has no prefix).
-
-- The current schema is always the first value of `search_path`, for compatibility with PostgreSQL.
-
-- You can inspect the current schema using the special built-in function/identifier `current_schema()`.
-
-## Index name resolution
-
-CockroachDB supports the following ways to specify an index name for statements that require one (e.g., [`DROP INDEX`](drop-index.html), [`ALTER INDEX ... RENAME`](alter-index.html), etc.):
-
-1. Index names are resolved relative to a table name using the `@` character, e.g., `DROP INDEX tbl@idx;`. This is the default and most common syntax.
-2. Index names are resolved by searching all tables in the current schema to find a table with an index named `idx`, e.g., `DROP INDEX idx;` or (with optional schema prefix) `DROP INDEX public.idx;`. This syntax is necessary for Postgres compatibility because Postgres index names live in the schema namespace such that e.g., `public.idx` will resolve to the index `idx` of some table in the public schema. This capability is used by some ORMs.
-
-The name resolution algorithm for index names supports both partial and complete qualification, using the same [name resolution rules](#how-name-resolution-works) as other objects.
-
-## Examples
-
-The examples below use the following logical schema as a starting point:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE DATABASE mydb;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE mydb.mytable(x INT);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET database = mydb;
-~~~
-
-### Lookup with unqualified names
-
-An unqualified name is a name with no prefix, that is, a simple identifier.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mytable;
-~~~
-
-This uses the search path over the current database. The search path
-is `public` by default, in the current database. The resolved name is
-`mydb.public.mytable`.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET database = system;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mytable;
-~~~
-
-~~~
-pq: relation "mytable" does not exist
-~~~
-
-This uses the search path over the current database, which is now
-`system`. No schema in the search path contains table `mytable`, so the
-lookup fails with an error.
-
-### Lookup with fully qualified names
-
-A fully qualified name is a name with two prefix components, that is,
-three identifiers separated by periods.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mydb.public.mytable;
-~~~
-
-Both the database and schema components are specified. The lookup
-succeeds if and only if the object exists at that specific location.
-
-### Lookup with partially qualified names
-
-A partially qualified name is a name with one prefix component, that is, two identifiers separated by a period. When a name is partially qualified, CockroachDB will try to use the prefix as a schema name first and, if that fails, use it as a database name.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM public.mytable;
-~~~
-
-This looks up `mytable` in the `public` schema of the current
-database. If the current database is `mydb`, the lookup succeeds.
-
-For compatibility with CockroachDB 1.x, and to ease development in
-multi-database scenarios, CockroachDB also allows queries to specify
-a database name in a partially qualified name. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mydb.mytable;
-~~~
-
-In that case, CockroachDB will first attempt to find a schema called
-`mydb` in the current database. When no such schema exists (which is
-the case with the starting point in this section), it then tries to
-find a database called `mydb` and uses the `public` schema in that. In
-this example, this rule applies and the fully resolved name is
-`mydb.public.mytable`.
-
-### Using the search path to use tables across schemas
-
-Suppose that a client frequently accesses a stored table as well as a virtual table in the [Information Schema](information-schema.html). Because `information_schema` is not in the search path by default, all queries that need to access it must mention it explicitly.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM mydb.information_schema.schemata; -- valid
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM information_schema.schemata; -- valid; uses mydb implicitly
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM schemata; -- invalid; information_schema not in search_path
-~~~
-
-For clients that use `information_schema` often, you can add it to the
-search path to simplify queries. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET search_path = public, information_schema;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM schemata; -- now valid, uses search_path
-~~~
-
-## Databases with special names
-
-When resolving a partially qualified name with just one component
-prefix, CockroachDB will look up a schema with the given prefix name
-first, and only look up a database with that name if the schema lookup
-fails. This matters in the (likely uncommon) case where you wish your
-database to be called `information_schema`, `public`, `pg_catalog`
-or `crdb_internal`.
- -For example: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE public; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SET database = mydb; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE public.mypublictable (x INT); -~~~ - -The [`CREATE TABLE`](create-table.html) statement in this example uses a partially -qualified name. Because the `public` prefix designates a valid schema -in the current database, the full name of `mypublictable` becomes -`mydb.public.mypublictable`. The table is created in database `mydb`. - -To create the table in database `public`, one would instead use a -fully qualified name, as follows: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE public; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE public.public.mypublictable (x INT); -~~~ - -## See also - -- [`SET`](set-vars.html) -- [`SHOW`](show-vars.html) -- [`SHOW DATABASES`](show-databases.html) -- [`SHOW SCHEMAS`](show-schemas.html) -- [Information Schema](information-schema.html) diff --git a/src/current/v19.1/sql-statements.md b/src/current/v19.1/sql-statements.md deleted file mode 100644 index d3c78a66cfc..00000000000 --- a/src/current/v19.1/sql-statements.md +++ /dev/null @@ -1,177 +0,0 @@ ---- -title: SQL Statements -summary: Overview of SQL statements supported by CockroachDB. -toc: true ---- - -CockroachDB supports the following SQL statements. Click a statement for more details. - -{{site.data.alerts.callout_success}} -In the [built-in SQL shell](use-the-built-in-sql-client.html#help), use `\h [statement]` to get inline help about a specific statement. -{{site.data.alerts.end}} - - -## Data manipulation statements - -Statement | Usage -----------|------------ -[`CREATE TABLE AS`](create-table-as.html) | Create a new table in a database using the results from a [selection query](selection-queries.html). -[`DELETE`](delete.html) | Delete specific rows from a table. -[`EXPORT`](export.html) | Export an entire table's data, or the results of a `SELECT` statement, to CSV files. This statement is available only to [enterprise](https://www.cockroachlabs.com/product/cockroachdb/) users. -[`IMPORT`](import.html) | Import an entire table's data via CSV files. -[`INSERT`](insert.html) | Insert rows into a table. -[`SELECT`](select-clause.html) | Select specific rows and columns from a table and optionally compute derived values. -[`TABLE`](selection-queries.html#table-clause) | Select all rows and columns from a table. -[`TRUNCATE`](truncate.html) | Delete all rows from specified tables. -[`UPDATE`](update.html) | Update rows in a table. -[`UPSERT`](upsert.html) | Insert rows that do not violate uniqueness constraints; update rows that do. -[`VALUES`](selection-queries.html#values-clause) | Return rows containing specific values. - -## Data definition statements - -Statement | Usage -----------|------------ -[`ADD COLUMN`](add-column.html) | Add columns to a table. -[`ADD CONSTRAINT`](add-constraint.html) | Add a constraint to a column. -[`ALTER COLUMN`](alter-column.html) | Change a column's [Default constraint](default-value.html) or drop the [`NOT NULL` constraint](not-null.html). -[`ALTER DATABASE`](alter-database.html) | Apply a schema change to a database. -[`ALTER INDEX`](alter-index.html) | Apply a schema change to an index. -[`ALTER RANGE`](alter-range.html) | Change an existing system range. -[`ALTER SEQUENCE`](alter-sequence.html) | Apply a schema change to a sequence. 
-[`ALTER TABLE`](alter-table.html) | Apply a schema change to a table. -[`ALTER TYPE`](alter-type.html) | Change a column's [data type](data-types.html). -[`ALTER USER`](alter-user.html) | Add or change a user's password. -[`ALTER VIEW`](alter-view.html) | Rename a view. -[`COMMENT ON`](comment-on.html) | Associate a comment to a database, table, or column. -[`CONFIGURE ZONE`](configure-zone.html) | Add, modify, reset, and remove [replication zones](configure-replication-zones.html). -[`CREATE DATABASE`](create-database.html) | Create a new database. -[`CREATE INDEX`](create-index.html) | Create an index for a table. -[`CREATE SEQUENCE`](create-sequence.html) | Create a new sequence. -[`CREATE TABLE`](create-table.html) | Create a new table in a database. -[`CREATE TABLE AS`](create-table-as.html) | Create a new table in a database using the results from a [selection query](selection-queries.html). -[`CREATE VIEW`](create-view.html) | Create a new [view](views.html) in a database. -[`DROP COLUMN`](drop-column.html) | Remove columns from a table. -[`DROP CONSTRAINT`](drop-constraint.html) | Remove constraints from a column. -[`DROP DATABASE`](drop-database.html) | Remove a database and all its objects. -[`DROP INDEX`](drop-index.html) | Remove an index for a table. -[`DROP SEQUENCE`](drop-sequence.html) | Remove a sequence. -[`DROP TABLE`](drop-table.html) | Remove a table. -[`DROP VIEW`](drop-view.html)| Remove a view. -[`EXPERIMENTAL_AUDIT`](experimental-audit.html) | Turn SQL audit logging on or off for a table. -[`RENAME COLUMN`](rename-column.html) | Rename a column in a table. -[`RENAME CONSTRAINT`](rename-constraint.html) | Rename a constraint on a column. -[`RENAME DATABASE`](rename-database.html) | Rename a database. -[`RENAME INDEX`](rename-index.html) | Rename an index for a table. -[`RENAME SEQUENCE`](rename-sequence.html) | Rename a sequence. -[`RENAME TABLE`](rename-table.html) | Rename a table or move a table between databases. -[`SHOW COLUMNS`](show-columns.html) | View details about columns in a table. -[`SHOW CONSTRAINTS`](show-constraints.html) | List constraints on a table. -[`SHOW CREATE`](show-create.html) | View the `CREATE` statement for a table, view, or sequence. -[`SHOW DATABASES`](show-databases.html) | List databases in the cluster. -[`SHOW INDEX`](show-index.html) | View index information for a table. -[`SHOW SCHEMAS`](show-schemas.html) | List the schemas in a database. -[`SHOW SEQUENCES`](show-sequences.html) | List the sequences in a database. -[`SHOW TABLES`](show-tables.html) | List tables or views in a database or virtual schema. -[`SHOW EXPERIMENTAL_RANGES`](show-experimental-ranges.html) | Show range information about a specific table or index. -[`SHOW ZONE CONFIGURATIONS`](show-zone-configurations.html) | List details about existing [replication zones](configure-replication-zones.html). -[`SPLIT AT`](split-at.html) | Force a key-value layer range split at the specified row in the table or index. -[`VALIDATE CONSTRAINT`](validate-constraint.html) | Check whether values in a column match a [constraint](constraints.html) on the column. - -## Transaction management statements - -Statement | Usage -----------|------------ -[`BEGIN`](begin-transaction.html)| Initiate a [transaction](transactions.html). -[`COMMIT`](commit-transaction.html) | Commit the current [transaction](transactions.html). 
-[`RELEASE SAVEPOINT`](release-savepoint.html) | When using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), commit the transaction's changes once there are no retry errors.
-[`ROLLBACK`](rollback-transaction.html) | Discard all updates made by the current [transaction](transactions.html) or, when using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), roll back to the savepoint and retry the transaction.
-[`SAVEPOINT`](savepoint.html) | When using the CockroachDB-provided function for client-side [transaction retries](transactions.html#transaction-retries), start a retryable transaction.
-[`SET TRANSACTION`](set-transaction.html) | Set the priority for the session or for an individual [transaction](transactions.html).
-[`SHOW`](show-vars.html) | View the current [transaction settings](transactions.html).
-
-## Access management statements
-
-Statement | Usage
-----------|------------
-[`CREATE ROLE`](create-role.html) | Create SQL [roles](authorization.html#create-and-manage-roles), which are groups containing any number of roles and users as members.
-[`CREATE USER`](create-user.html) | Create SQL users, which lets you control [privileges](authorization.html#assign-privileges) on your databases and tables.
-[`DROP ROLE`](drop-role.html) | Remove one or more SQL [roles](authorization.html#create-and-manage-roles).
-[`DROP USER`](drop-user.html) | Remove one or more SQL users.
-[`GRANT <privileges>`](grant.html) | Grant privileges to [users](create-and-manage-users.html) or [roles](authorization.html#create-and-manage-roles).
-[`GRANT <roles>`](grant-roles.html) | Add a [role](authorization.html#create-and-manage-roles) or [user](create-and-manage-users.html) as a member to a role.
-[`REVOKE <privileges>`](revoke.html) | Revoke privileges from [users](create-and-manage-users.html) or [roles](authorization.html#create-and-manage-roles).
-[`REVOKE <roles>`](revoke-roles.html) | Revoke a [role](authorization.html#create-and-manage-roles) or [user's](create-and-manage-users.html) membership to a role.
-[`SHOW GRANTS`](show-grants.html) | View privileges granted to users.
-[`SHOW ROLES`](show-roles.html) | List the roles for all databases.
-[`SHOW USERS`](show-users.html) | List the users for all databases.
-
-## Session management statements
-
-Statement | Usage
-----------|------------
-[`RESET`](reset-vars.html) | Reset a session variable to its default value.
-[`SET`](set-vars.html) | Set a current session variable.
-[`SET TRANSACTION`](set-transaction.html) | Set the priority for an individual [transaction](transactions.html).
-[`SHOW TRACE FOR SESSION`](show-trace.html) | Return details about how CockroachDB executed a statement or series of statements recorded during a session.
-[`SHOW`](show-vars.html) | List the current session or transaction settings.
-
-## Cluster management statements
-
-Statement | Usage
-----------|------------
-[`RESET CLUSTER SETTING`](reset-cluster-setting.html) | Reset a cluster setting to its default value.
-[`SET CLUSTER SETTING`](set-cluster-setting.html) | Set a cluster-wide setting.
-[`SHOW ALL CLUSTER SETTINGS`](show-cluster-setting.html) | List the current cluster-wide settings.
-[`SHOW SESSIONS`](show-sessions.html) | List details about currently active sessions.
-[`CANCEL SESSION`](cancel-session.html) | Cancel a long-running session.
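-
-The client-side retry protocol referenced by the `SAVEPOINT`, `RELEASE SAVEPOINT`, and `ROLLBACK` entries in the transaction management table above can be sketched as follows. This is a minimal outline with an illustrative `UPDATE` (it assumes a `products` table exists); `cockroach_restart` is the special savepoint name used by CockroachDB's [transaction retry](transactions.html#transaction-retries) extension:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> BEGIN;
-> SAVEPOINT cockroach_restart;
-> UPDATE products SET inventory = 0 WHERE sku = '8675309';
-> RELEASE SAVEPOINT cockroach_restart;
-> COMMIT;
-~~~
-
-If any statement between `SAVEPOINT` and `RELEASE SAVEPOINT` returns a retry error (SQLSTATE `40001`), issue [`ROLLBACK TO SAVEPOINT cockroach_restart`](rollback-transaction.html) and reissue the statements.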
-
-## Query management statements
-
-Statement | Usage
-----------|------------
-[`CANCEL QUERY`](cancel-query.html) | Cancel a running SQL query.
-[`SHOW QUERIES`](show-queries.html) | List details about currently active SQL queries.
-
-## Query planning statements
-
-Statement | Usage
-----------|------------
-[`CREATE STATISTICS`](create-statistics.html) | Create table statistics for the [cost-based optimizer](cost-based-optimizer.html) to use.
-[`EXPLAIN`](explain.html) | View debugging and analysis details for a statement that operates over tabular data.
-[`EXPLAIN ANALYZE`](explain-analyze.html) | Execute the query and generate a physical query plan with execution statistics.
-[`SHOW STATISTICS`](show-statistics.html) | List table statistics used by the [cost-based optimizer](cost-based-optimizer.html).
-
-
-## Job management statements
-
-Jobs in CockroachDB represent tasks that might not complete immediately, such as schema changes or enterprise backups or restores.
-
-Statement | Usage
-----------|------------
-[`CANCEL JOB`](cancel-job.html) | Cancel a `BACKUP`, `RESTORE`, `IMPORT`, or `CHANGEFEED` job.
-[`PAUSE JOB`](pause-job.html) | Pause a `BACKUP`, `RESTORE`, `IMPORT`, or `CHANGEFEED` job.
-[`RESUME JOB`](resume-job.html) | Resume a paused `BACKUP`, `RESTORE`, `IMPORT`, or `CHANGEFEED` job.
-[`SHOW JOBS`](show-jobs.html) | View information on jobs.
-
-## Backup and restore statements (Enterprise)
-
-The following statements are available only to [enterprise](https://www.cockroachlabs.com/product/cockroachdb/) users.
-
-{{site.data.alerts.callout_info}}
-For non-enterprise users, see [Back up Data](backup.html) and [Restore Data](restore.html).
-{{site.data.alerts.end}}
-
-Statement | Usage
-----------|------------
-[`BACKUP`](backup.html) | Create disaster recovery backups of databases and tables.
-[`RESTORE`](restore.html) | Restore databases and tables using your backups.
-[`SHOW BACKUP`](show-backup.html) | List the contents of a backup.
-
-## Changefeed statements (Enterprise)
-
-[Change data capture](change-data-capture.html) (CDC) provides an enterprise and core version of row-level change subscriptions for downstream processing.
-
-Statement | Usage
-----------|------------
-[`CREATE CHANGEFEED`](create-changefeed.html) | _(Enterprise)_ Create a new changefeed to stream row-level changes in a configurable format to a configurable sink (Kafka or a cloud storage sink).
-[`EXPERIMENTAL CHANGEFEED FOR`](changefeed-for.html) | _(Core)_ Create a new changefeed to stream row-level changes to the client indefinitely until the underlying connection is closed or the changefeed is canceled.
diff --git a/src/current/v19.1/start-a-local-cluster-in-docker.md b/src/current/v19.1/start-a-local-cluster-in-docker.md
deleted file mode 100644
index 37baff1868b..00000000000
--- a/src/current/v19.1/start-a-local-cluster-in-docker.md
+++ /dev/null
@@ -1,281 +0,0 @@
----
-title: Start a Cluster in Docker (Insecure)
-summary: Run an insecure multi-node CockroachDB cluster across multiple Docker containers on a single host.
-toc: true
-allowed_hashes: [os-mac, os-linux, os-windows]
----
-
-
      - -Once you've [installed the official CockroachDB Docker image](install-cockroachdb.html), it's simple to run an insecure multi-node cluster across multiple Docker containers on a single host, using Docker volumes to persist node data. - -{{site.data.alerts.callout_danger}} -Running a stateful application like CockroachDB in Docker is more complex and error-prone than most uses of Docker and is not recommended for production deployments. To run a physically distributed cluster in containers, use an orchestration tool like Kubernetes or Docker Swarm. See [Orchestration](orchestration.html) for more details. -{{site.data.alerts.end}} - - - -
      -{% include {{ page.version.version }}/start-in-docker/mac-linux-steps.md %} - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the Admin UI metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080`, and click **Metrics** on the left-hand navigation bar. - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}}For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our replication, rebalancing, fault tolerance demos.{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -{% include copy-clipboard.html %} -~~~ shell -$ docker stop roach1 roach2 roach3 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker rm roach1 roach2 roach3 -~~~ - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data -~~~ -
      - -
      -{% include {{ page.version.version }}/start-in-docker/mac-linux-steps.md %} - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the Admin UI metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080` and click **Metrics** on the left. - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}} -For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our [replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), [fault tolerance](demo-fault-tolerance-and-recovery.html) demos. -{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -{% include copy-clipboard.html %} -~~~ shell -$ docker stop roach1 roach2 roach3 -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ docker rm roach1 roach2 roach3 -~~~ - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data -~~~ -
      - -
      -## Before you begin - -If you have not already installed the official CockroachDB Docker image, go to [Install CockroachDB](install-cockroachdb.html) and follow the instructions under **Use Docker**. - -## Step 1. Create a bridge network - -Since you'll be running multiple Docker containers on a single host, with one CockroachDB node per container, you need to create what Docker refers to as a [bridge network](https://docs.docker.com/engine/userguide/networking/#/a-bridge-network). The bridge network will enable the containers to communicate as a single cluster while keeping them isolated from external networks. - -
      PS C:\Users\username> docker network create -d bridge roachnet
      - -We've used `roachnet` as the network name here and in subsequent steps, but feel free to give your network any name you like. - -## Step 2. Start the first node - -{{site.data.alerts.callout_info}}Be sure to replace <username> in the -v flag with your actual username.{{site.data.alerts.end}} - -
      PS C:\Users\username> docker run -d `
      ---name=roach1 `
      ---hostname=roach1 `
      ---net=roachnet `
      --p 26257:26257 -p 8080:8080 `
      --v "//c/Users/<username>/cockroach-data/roach1:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure
-
-This command creates a container and starts the first CockroachDB node inside it. Let's look at each part:
-
-- `docker run`: The Docker command to start a new container.
-- `-d`: This flag runs the container in the background so you can continue the next steps in the same shell.
-- `--name`: The name for the container. This is optional, but a custom name makes it significantly easier to reference the container in other commands, for example, when opening a Bash session in the container or stopping the container.
-- `--hostname`: The hostname for the container. You will use this to join other containers/nodes to the cluster.
-- `--net`: The bridge network for the container to join. See step 1 for more details.
-- `-p 26257:26257 -p 8080:8080`: These flags map the default port for inter-node and client-node communication (`26257`) and the default port for HTTP requests to the Admin UI (`8080`) from the container to the host. This enables inter-container communication and makes it possible to call up the Admin UI from a browser.
-- `-v "//c/Users/<username>/cockroach-data/roach1:/cockroach/cockroach-data"`: This flag mounts a host directory as a data volume. This means that data and logs for this node will be stored in `Users/<username>/cockroach-data/roach1` on the host and will persist after the container is stopped or deleted. For more details, see Docker's Bind Mounts topic.
-- `{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure`: The CockroachDB command to [start a node](start-a-node.html) in the container in insecure mode.
-
-## Step 3. Add nodes to the cluster
-
-At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities.
-
-To simulate a real deployment, scale your cluster by adding two more nodes:
-
-{{site.data.alerts.callout_info}}Again, be sure to replace <username> in the -v flag with your actual username.{{site.data.alerts.end}}
-
      # Start the second container/node:
      -PS C:\Users\username> docker run -d `
      ---name=roach2 `
      ---hostname=roach2 `
      ---net=roachnet `
      --v "//c/Users/<username>/cockroach-data/roach2:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
      -
      -# Start the third container/node:
      -PS C:\Users\username> docker run -d `
      ---name=roach3 `
      ---hostname=roach3 `
      ---net=roachnet `
      --v "//c/Users/<username>/cockroach-data/roach3:/cockroach/cockroach-data" `
      -{{page.release_info.docker_image}}:{{page.release_info.version}} start --insecure --join=roach1
-
-These commands add two more containers and start CockroachDB nodes inside them, joining them to the first node. There are only a few differences to note from step 2:
-
-- `-v`: This flag mounts a host directory as a data volume. Data and logs for these nodes will be stored in `Users/<username>/cockroach-data/roach2` and `Users/<username>/cockroach-data/roach3` on the host and will persist after the containers are stopped or deleted.
-- `--join`: This flag joins the new nodes to the cluster, using the first container's `hostname`. Note that since each node is in a unique container, using identical default ports will not cause conflicts.
-
-## Step 4. Test the cluster
-
-Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, use the `docker exec` command to start the [built-in SQL shell](use-the-built-in-sql-client.html) in the first container:
-
      PS C:\Users\username> docker exec -it roach1 ./cockroach sql --insecure
      -# Welcome to the cockroach SQL interface.
      -# All statements must be terminated by a semicolon.
      -# To exit: CTRL + D.
      - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then start the SQL shell in the second container: - -
      PS C:\Users\username> docker exec -it roach2 ./cockroach sql --insecure
      -# Welcome to the cockroach SQL interface.
      -# All statements must be terminated by a semicolon.
      -# To exit: CTRL + D.
      - -Now run the same `SELECT` query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -When you're done, exit the SQL shell on node 2: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 5. Monitor the cluster - -When you started the first container/node, you mapped the node's default HTTP port `8080` to port `8080` on the host. To check out the [Admin UI](admin-ui-overview.html) metrics for your cluster, point your browser to that port on `localhost`, i.e., `http://localhost:8080` and click **Metrics** on the left. - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_success}} -For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our [replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), [fault tolerance](demo-fault-tolerance-and-recovery.html) demos. -{{site.data.alerts.end}} - -## Step 6. Stop the cluster - -Use the `docker stop` and `docker rm` commands to stop and remove the containers (and therefore the cluster): - -
      # Stop the containers:
      -PS C:\Users\username> docker stop roach1 roach2 roach3
      -
      -# Remove the containers:
      -PS C:\Users\username> docker rm roach1 roach2 roach3
      - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -
PS C:\Users\username> Remove-Item cockroach-data -recurse
      - -
      - -## What's next? - -- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html) -- [Install the client driver](install-client-drivers.html) for your preferred language -- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html) -- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, and fault tolerance diff --git a/src/current/v19.1/start-a-local-cluster.md b/src/current/v19.1/start-a-local-cluster.md deleted file mode 100644 index cc36877211e..00000000000 --- a/src/current/v19.1/start-a-local-cluster.md +++ /dev/null @@ -1,281 +0,0 @@ ---- -title: Start a Local Cluster (Insecure) -summary: Run an insecure multi-node CockroachDB cluster locally with each node listening on a different port. -toc: true -toc_not_nested: true ---- - - - -Once you’ve [installed CockroachDB](install-cockroachdb.html), it’s simple to start an insecure multi-node cluster locally. - -{{site.data.alerts.callout_info}} -Running multiple nodes on a single host is useful for testing out CockroachDB, but it's not recommended for production deployments. To run a physically distributed cluster in production, see [Manual Deployment](manual-deployment.html) or [Orchestrated Deployment](orchestration.html). Also be sure to review the [Production Checklist](recommended-production-settings.html). -{{site.data.alerts.end}} - - -## Before you begin - -Make sure you have already [installed CockroachDB](install-cockroachdb.html). - -## Step 1. Start the first node - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start --insecure --listen-addr=localhost -~~~ - -~~~ -CockroachDB node starting at 2018-09-13 01:25:57.878119479 +0000 UTC (took 0.3s) -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -webui: http://localhost:8080 -sql: postgresql://root@localhost:26257?sslmode=disable -client flags: cockroach --host=localhost:26257 --insecure -logs: cockroach/cockroach-data/logs -temp dir: cockroach-data/cockroach-temp998550693 -external I/O path: cockroach-data/extern -store[0]: path=cockroach-data -status: initialized new cluster -clusterID: 2711b3fa-43b3-4353-9a23-20c9fb3372aa -nodeID: 1 -~~~ - -This command starts a node in insecure mode, accepting most [`cockroach start`](start-a-node.html) defaults. - -- The `--insecure` flag makes communication unencrypted. -- Since this is a purely local cluster, `--listen-addr=localhost` tells the node to listen only on `localhost`, with default ports used for internal and client traffic (`26257`) and for HTTP requests from the Admin UI (`8080`). -- Node data is stored in the `cockroach-data` directory. -- The [standard output](start-a-node.html#standard-output) gives you helpful details such as the CockroachDB version, the URL for the Admin UI, and the SQL URL for clients. - -## Step 2. Add nodes to the cluster - -At this point, your cluster is live and operational. With just one node, you can already connect a SQL client and start building out your database. In real deployments, however, you'll always want 3 or more nodes to take advantage of CockroachDB's [automatic replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), and [fault tolerance](demo-fault-tolerance-and-recovery.html) capabilities. This step helps you simulate a real deployment locally. 
- -In a new terminal, add the second node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257 -~~~ - -In a new terminal, add the third node: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node3 \ ---listen-addr=localhost:26259 \ ---http-addr=localhost:8082 \ ---join=localhost:26257 -~~~ - -The main difference in these commands is that you use the `--join` flag to connect the new nodes to the cluster, specifying the address and port of the first node, in this case `localhost:26257`. Since you're running all nodes on the same machine, you also set the `--store`, `--listen-addr`, and `--http-addr` flags to locations and ports not used by other nodes, but in a real deployment, with each node on a different machine, the defaults would suffice. - -## Step 3. Test the cluster - -Now that you've scaled to 3 nodes, you can use any node as a SQL gateway to the cluster. To demonstrate this, open a new terminal and connect the [built-in SQL client](use-the-built-in-sql-client.html) to node 1: - -{{site.data.alerts.callout_info}} -The SQL client is built into the `cockroach` binary, so nothing extra is needed. -{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26257 -~~~ - -Run some basic [CockroachDB SQL statements](learn-cockroachdb-sql.html): - -{% include copy-clipboard.html %} -~~~ sql -> CREATE DATABASE bank; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO bank.accounts VALUES (1, 1000.50); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell on node 1: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Then connect the SQL shell to node 2, this time specifying the node's non-default port: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26258 -~~~ - -{{site.data.alerts.callout_info}} -In a real deployment, all nodes would likely use the default port `26257`, and so you wouldn't need to set the port portion of `--host`. -{{site.data.alerts.end}} - -Now run the same `SELECT` query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -As you can see, node 1 and node 2 behaved identically as SQL gateways. - -Exit the SQL shell on node 2: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -## Step 4. Monitor the cluster - -Access the [Admin UI](admin-ui-overview.html) for your cluster by pointing a browser to `http://localhost:8080`, or to the address in the `admin` field in the standard output of any node on startup. Then click **Metrics** on the left-hand navigation bar. - -CockroachDB Admin UI - -As mentioned earlier, CockroachDB automatically replicates your data behind-the-scenes. 
To verify that data written in the previous step was replicated successfully, scroll down to the **Replicas per Node** graph and hover over the line: - -CockroachDB Admin UI - -The replica count on each node is identical, indicating that all data in the cluster was replicated 3 times (the default). - -{{site.data.alerts.callout_info}} -Capacity metrics can be incorrect when running multiple nodes on a single machine. For more details, see this [limitation](known-limitations.html#available-capacity-metric-in-the-admin-ui). -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -For more insight into how CockroachDB automatically replicates and rebalances data, and tolerates and recovers from failures, see our [replication](demo-data-replication.html), [rebalancing](demo-automatic-rebalancing.html), [fault tolerance](demo-fault-tolerance-and-recovery.html) demos. -{{site.data.alerts.end}} - -## Step 5. Stop the cluster - -Once you're done with your test cluster, switch to the terminal running the first node and press **CTRL-C** to stop the node. - -At this point, with 2 nodes still online, the cluster remains operational because a majority of replicas are available. To verify that the cluster has tolerated this "failure", connect the built-in SQL shell to nodes 2 or 3. You can do this in the same terminal or in a new terminal. - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --host=localhost:26258 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+---------+ -| id | balance | -+----+---------+ -| 1 | 1000.5 | -+----+---------+ -(1 row) -~~~ - -Exit the SQL shell: - -{% include copy-clipboard.html %} -~~~ sql -> \q -~~~ - -Now stop nodes 2 and 3 by switching to their terminals and pressing **CTRL-C**. - -{{site.data.alerts.callout_success}} -For node 3, the shutdown process will take longer (about a minute) and will eventually force stop the node. This is because, with only 1 of 3 nodes left, a majority of replicas are not available, and so the cluster is no longer operational. To speed up the process, press **CTRL-C** a second time. -{{site.data.alerts.end}} - -If you do not plan to restart the cluster, you may want to remove the nodes' data stores: - -{% include copy-clipboard.html %} -~~~ shell -$ rm -rf cockroach-data node2 node3 -~~~ - -## Step 6. Restart the cluster - -If you decide to use the cluster for further testing, you'll need to restart at least 2 of your 3 nodes from the directories containing the nodes' data stores. - -Restart the first node from the parent directory of `cockroach-data/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---listen-addr=localhost -~~~ - -{{site.data.alerts.callout_info}} -With only 1 node back online, the cluster will not yet be operational, so you will not see a response to the above command until after you restart the second node. -{{site.data.alerts.end}} - -In a new terminal, restart the second node from the parent directory of `node2/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node2 \ ---listen-addr=localhost:26258 \ ---http-addr=localhost:8081 \ ---join=localhost:26257 -~~~ - -In a new terminal, restart the third node from the parent directory of `node3/`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach start \ ---insecure \ ---store=node3 \ ---listen-addr=localhost:26259 \ ---http-addr=localhost:8082 \ ---join=localhost:26257 -~~~ - -## What's next? 
- -- Learn more about [CockroachDB SQL](learn-cockroachdb-sql.html) and the [built-in SQL client](use-the-built-in-sql-client.html) -- [Install the client driver](install-client-drivers.html) for your preferred language -- [Build an app with CockroachDB](build-an-app-with-cockroachdb.html) -- [Explore core CockroachDB features](demo-data-replication.html) like automatic replication, rebalancing, fault tolerance, and cloud migration. diff --git a/src/current/v19.1/start-a-node.md b/src/current/v19.1/start-a-node.md deleted file mode 100644 index 4a4c85e7e88..00000000000 --- a/src/current/v19.1/start-a-node.md +++ /dev/null @@ -1,406 +0,0 @@ ---- -title: Start a Node -summary: To start a new CockroachDB cluster, or add a node to an existing cluster, run the cockroach start command. -toc: true ---- - -This page explains the `cockroach start` [command](cockroach-commands.html), which you use to start nodes as a new cluster or add nodes to an existing cluster. For a full walk-through of the cluster startup and initialization process, see one of the [Manual Deployment](manual-deployment.html) tutorials. - -{{site.data.alerts.callout_info}} -Node-level settings are defined by [flags](#flags) passed to the `cockroach start` command and cannot be changed without stopping and restarting the node. In contrast, some cluster-wide settings are defined via SQL statements and can be updated anytime after a cluster has been started. For more details, see [Cluster Settings](cluster-settings.html). -{{site.data.alerts.end}} - -## Synopsis - -Start a single-node cluster: - -~~~ shell -$ cockroach start -~~~ - -Start a node to be part of a new multi-node cluster: - -~~~ shell -$ cockroach start -~~~ - -Initialize a new multi-node cluster: - -~~~ shell -$ cockroach init -~~~ - -Add a node to an existing cluster: - -~~~ shell -$ cockroach start -~~~ - -View help: - -~~~ shell -$ cockroach start --help -~~~ - -## Flags - -The `start` command supports the following [general-use](#general), [networking](#networking), [security](#security), and [logging](#logging) flags. - -Many flags have useful defaults that can be overridden by specifying the flags explicitly. If you specify flags explicitly, however, be sure to do so each time the node is restarted, as they will not be remembered. The one exception is the `--join` flag, which is stored in a node's data directory, but even for `--join`, it's best practices to specify the flag every time, as that will allow restarted nodes to join the cluster even if their data directory was destroyed. - -{{site.data.alerts.callout_success}} -When adding a node to an existing cluster, include the `--join` flag. -{{site.data.alerts.end}} - -### General - -Flag | Description ------|----------- -`--attrs` | Arbitrary strings, separated by colons, specifying node capability, which might include specialized hardware or number of cores, for example:

`--attrs=ram:64gb`<br><br>These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details.
-`--background` | Set this to start the node in the background. This is better than appending `&` to the command because control is returned to the shell only once the node is ready to accept requests.<br><br>**Note:** `--background` is suitable for writing automated test suites or maintenance procedures that need a temporary server process running in the background. It is not intended to be used to start a long-running server, because it does not fully detach from the controlling terminal. Consider using a service manager or a tool like [daemon(8)](https://www.freebsd.org/cgi/man.cgi?query=daemon&sektion=8) instead.
-`--cache` | The total size for caches, shared evenly if there are multiple storage devices. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:<br><br>`--cache=.25`<br>`--cache=25%`<br>`--cache=1000000000 ----> 1000000000 bytes`<br>`--cache=1GB ----> 1000000000 bytes`<br>`--cache=1GiB ----> 1073741824 bytes`<br><br>Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead.<br><br>**Default:** `128MiB`<br><br>The default cache size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. See [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size) for more details.
-`--external-io-dir` | The path of the external IO directory with which the local file access paths are prefixed while performing backup and restore operations using local node directories or NFS drives. If set to `disabled`, backups and restores using local node directories and NFS drives are disabled.<br><br>**Default:** `extern` subdirectory of the first configured [`store`](#store).<br><br>To set the `--external-io-dir` flag to the locations you want to use without needing to restart nodes, create symlinks to the desired locations from within the `extern` directory.
-`--listening-url-file` | The file to which the node's SQL connection URL will be written on successful startup, in addition to being printed to the [standard output](#standard-output).<br><br>This is particularly helpful in identifying the node's port when an unused port is assigned automatically (`--port=0`).
-`--locality` | Arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. For more details, see [Locality](#locality) below.
-`--max-disk-temp-storage` | The maximum on-disk storage capacity available to store temporary data for SQL queries that exceed the memory budget (see `--max-sql-memory`). This ensures that JOINs, sorts, and other memory-intensive SQL operations are able to spill intermediate results to disk. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit (e.g., `.25`, `25%`, `500GB`, `1TB`, `1TiB`).<br><br>Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. Also, if expressed as a percentage, this value is interpreted relative to the size of the first store. However, the temporary space usage is never counted towards any store usage; therefore, when setting this value, it's important to ensure that the size of this temporary storage plus the size of the first store doesn't exceed the capacity of the storage device.<br><br>The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.<br><br>**Default:** `32GiB`
-`--max-offset` | The maximum allowed clock offset for the cluster. If observed clock offsets exceed this limit, servers will crash to minimize the likelihood of reading inconsistent data. Increasing this value will increase the time to recovery of failures as well as the frequency of uncertainty-based read restarts.<br><br>Note that this value must be the same on all nodes in the cluster and cannot be changed with a [rolling upgrade](upgrade-cockroach-version.html). In order to change it, first stop every node in the cluster. Then once the entire cluster is offline, restart each node with the new value.

      **Default:** `500ms` -`--max-sql-memory` | The maximum in-memory storage capacity available to store temporary data for SQL queries, including prepared queries and intermediate data rows during query execution. This can be a percentage (notated as a decimal or with `%`) or any bytes-based unit, for example:

      `--max-sql-memory=.25`
      `--max-sql-memory=25%`
      `--max-sql-memory=10000000000 ----> 10000000000 bytes`
      `--max-sql-memory=1GB ----> 1000000000 bytes`
      `--max-sql-memory=1GiB ----> 1073741824 bytes`

      The temporary files are located in the path specified by the `--temp-dir` flag, or in the subdirectory of the first store (see `--store`) by default.

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead.

      **Default:** `128MiB`

      The default SQL memory size is reasonable for local development clusters. For production deployments, this should be increased to 25% or higher. See [Recommended Production Settings](recommended-production-settings.html#cache-and-sql-memory-size) for more details. -`--pid-file` | The file to which the node's process ID will be written on successful startup. When this flag is not set, the process ID is not written to file. -`--store`
      `-s` | The file path to a storage device and, optionally, store attributes and maximum size. When using multiple storage devices for a node, this flag must be specified separately for each device, for example:

      `--store=/mnt/ssd01 --store=/mnt/ssd02`

      For more details, see [Store](#store) below. -`--temp-dir` | The path of the node's temporary store directory. On node start up, the location for the temporary files is printed to the standard output.

      **Default:** Subdirectory of the first [store](#store) - -### Networking - -Flag | Description ------|----------- -`--advertise-addr` | The IP address/hostname and port to tell other nodes to use. If using a hostname, it must be resolvable from all nodes. If using an IP address, it must be routable from all nodes; for IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

      This flag's effect depends on how it is used in combination with `--listen-addr`. For example, if the port number is different than the one used in `--listen-addr`, port forwarding is required. For more details, see [Networking](recommended-production-settings.html#networking).

      **Default:** The value of `--listen-addr`; if `--listen-addr` is not specified, advertises the node's canonical hostname and port `26257` -`--listen-addr` | The IP address/hostname and port to listen on for connections from other nodes and clients. For IPv6, use the notation `[...]`, e.g., `[::1]` or `[fe80::f6f2:::]`.

      This flag's effect depends on how it is used in combination with `--advertise-addr`. For example, the node will also advertise itself to other nodes using this value if `--advertise-addr` is not specified. For more details, see [Networking](recommended-production-settings.html#networking).

      **Default:** Listen on all IP addresses on port `26257`; if `--advertise-addr` is not specified, also advertise the node's canonical hostname to other nodes -`--http-addr` | The IP address/hostname and port to listen on for Admin UI HTTP requests. For IPv6, use the notation `[...]`, e.g., `[::1]:8080` or `[fe80::f6f2:::]:8080`.

      **Default:** Listen on the address part of `--listen-addr` on port `8080` -`--locality-advertise-addr` | The IP address/hostname and port to tell other nodes in specific localities to use. This flag is useful when running a cluster across multiple networks, where nodes in a given network have access to a private or local interface while nodes outside the network do not. In this case, you can use `--locality-advertise-addr` to tell nodes within the same network to prefer the private or local address to improve performance and use `--advertise-addr` to tell nodes outside the network to use another address that is reachable from them.

      This flag relies on nodes being started with the [`--locality`](#locality) flag and uses the `locality@address` notation, for example:

      `--locality-advertise-addr=region=us-west@10.0.0.0:26257`

      See the [example](#start-a-multi-node-cluster-across-private-networks) below for more details. -`--join`
      `-j` | The addresses for connecting the node to a cluster.

      When starting a multi-node cluster for the first time, set this flag to the addresses of 3-5 of the initial nodes. Then run the [`cockroach init`](initialize-a-cluster.html) command against any of the nodes to complete cluster startup. See the [example](#start-a-multi-node-cluster) below for more details.

      When starting a single-node cluster, leave this flag out. This will cause the node to initialize a new single-node cluster without needing to run the `cockroach init` command. See the [example](#start-a-single-node-cluster) below for more details.

      When adding a node to an existing cluster, set this flag to 3-5 of the nodes already in the cluster; it's easiest to use the same list of addresses that was used to start the initial nodes. -`--advertise-host` | **Deprecated.** Use `--advertise-addr` instead. -`--host` | **Deprecated.** Use `--listen-addr` instead. -`--port`
      `-p` | **Deprecated.** Specify port in `--advertise-addr` and/or `--listen-addr` instead. -`--http-host` | **Deprecated.** Use `--http-addr` instead. -`--http-port` | **Deprecated.** Specify port in `--http-addr` instead. - -### Security - -Flag | Description ------|----------- -`--certs-dir` | The path to the [certificate directory](create-security-certificates.html). The directory must contain valid certificates if running in secure mode.

      **Default:** `${HOME}/.cockroach-certs/` -`--insecure` | Run in insecure mode. If this flag is not set, the `--certs-dir` flag must point to valid certificates.

      Note the following risks: An insecure cluster is open to any client that can access any node's IP addresses; any user, even `root`, can log in without providing a password; any user, connecting as `root`, can read or write any data in your cluster; and there is no network encryption or authentication, and thus no confidentiality.

      **Default:** `false` -`--enterprise-encryption` | This optional flag specifies the encryption options for one of the stores on the node. If multiple stores exist, the flag must be specified for each store.

      This flag takes a number of options. For a complete list of options, and usage instructions, see [Encryption at Rest](encryption.html).

      Note that this is an [enterprise feature](enterprise-licensing.html). - -### Locality - -The `--locality` flag accepts arbitrary key-value pairs that describe the location of the node. Locality might include country, region, datacenter, rack, etc. The key-value pairs should be ordered from most to least inclusive, and the keys and order of key-value pairs must be the same on all nodes. It's typically better to include more pairs than fewer. - -- CockroachDB spreads the replicas of each piece of data across as diverse a set of localities as possible, with the order determining the priority. However, locality can also be used to influence the location of data replicas in various ways using [replication zones](configure-replication-zones.html#replication-constraints). - -- When there is high latency between nodes (e.g., cross-datacenter deployments), CockroachDB uses locality to move range leases closer to the current workload, reducing network round trips and improving read performance, also known as ["follow-the-workload"](demo-follow-the-workload.html). In a deployment across more than 3 datacenters, however, to ensure that all data benefits from "follow-the-workload", you must increase your replication factor to match the total number of datacenters. - -- Locality is also a prerequisite for using the [table partitioning](partitioning.html) and [**Node Map**](enable-node-map.html) enterprise features. - -#### Example - -~~~ shell -# Locality flag for nodes in US East datacenter: ---locality=region=us,datacenter=us-east - -# Locality flag for nodes in US Central datacenter: ---locality=region=us,datacenter=us-central - -# Locality flag for nodes in US West datacenter: ---locality=region=us,datacenter=us-west -~~~ - -### Store - -The `--store` flag supports the following fields. Note that commas are used to separate fields, and so are forbidden in all field values. - -{{site.data.alerts.callout_info}} -In-memory storage is not suitable for production deployments at this time. -{{site.data.alerts.end}} - -{% include {{ page.version.version }}/misc/multi-store-nodes.md %} - -Field | Description -------|------------ -`type` | For in-memory storage, set this field to `mem`; otherwise, leave this field out. The `path` field must not be set when `type=mem`. -`path` | The file path to the storage device. When not setting `attr` or `size`, the `path` field label can be left out:

      `--store=/mnt/ssd01`

      When either of those fields are set, however, the `path` field label must be used:

      `--store=path=/mnt/ssd01,size=20GB`

      **Default:** `cockroach-data` -`attrs` | Arbitrary strings, separated by colons, specifying disk type or capability. These can be used to influence the location of data replicas. See [Configure Replication Zones](configure-replication-zones.html#replication-constraints) for full details.

      In most cases, node-level `--locality` or `--attrs` are preferable to store-level attributes, but this field can be used to match capabilities for storage of individual databases or tables. For example, an OLTP database would probably want to allocate space for its tables only on solid state devices, whereas append-only time series might prefer cheaper spinning drives. Typical attributes include whether the store is flash (`ssd`) or spinning disk (`hdd`), as well as speeds and other specs, for example:

      `--store=path=/mnt/hda1,attrs=hdd:7200rpm` -`size` | The maximum size allocated to the node. When this size is reached, CockroachDB attempts to rebalance data to other nodes with available capacity. When there's no capacity elsewhere, this limit will be exceeded. Also, data may be written to the node faster than the cluster can rebalance it away; in this case, as long as capacity is available elsewhere, CockroachDB will gradually rebalance data down to the store limit.

      The `size` can be specified either in a bytes-based unit or as a percentage of hard drive space (notated as a decimal or with `%`), for example:

      `--store=path=/mnt/ssd01,size=10000000000 ----> 10000000000 bytes`
      `--store=path=/mnt/ssd01,size=20GB ----> 20000000000 bytes`
      `--store=path=/mnt/ssd01,size=20GiB ----> 21474836480 bytes`
      `--store=path=/mnt/ssd01,size=0.02TiB ----> 21474836480 bytes`
      `--store=path=/mnt/ssd01,size=20% ----> 20% of available space`
      `--store=path=/mnt/ssd01,size=0.2 ----> 20% of available space`
      `--store=path=/mnt/ssd01,size=.2 ----> 20% of available space`

      **Default:** 100%

      For an in-memory store, the `size` field is required and must be set to the true maximum bytes or percentage of available memory, for example:

      `--store=type=mem,size=20GB`
      `--store=type=mem,size=90%`

      Note: If you use the `%` notation, you might need to escape the `%` sign, for instance, while configuring CockroachDB through `systemd` service files. For this reason, it's recommended to use the decimal notation instead. - -### Logging - -By default, `cockroach start` writes all messages to log files, and prints nothing to `stderr`. However, you can control the process's [logging](debug-and-error-logs.html) behavior with the following flags: - -{% include {{ page.version.version }}/misc/logging-flags.md %} - -#### Defaults - -`cockroach start` uses the equivalent values for these logging flags by default: - -- `--log-dir=/logs` -- `--logtostderr=NONE` - -This means, by default, CockroachDB writes all messages to log files, and never prints to `stderr`. - -## Standard output - -When you run `cockroach start`, some helpful details are printed to the standard output: - -~~~ shell -CockroachDB node starting at {{ now | date: "%Y-%m-%d %H:%M:%S.%6 +0000 UTC" }} -build: CCL {{page.release_info.version}} @ {{page.release_info.build_time}} -webui: http://localhost:8080 -sql: postgresql://root@localhost:26257?sslmode=disable -client flags: cockroach --listen-addr=localhost:26257 --insecure -logs: /cockroach-data/logs -temp dir: /cockroach-data/cockroach-temp430873933 -external I/O path: /cockroach-data/extern -attrs: ram:64gb -locality: datacenter=us-east1 -store[0]: path=cockroach-data,attrs=ssd -status: initialized new cluster -clusterID: 7b9329d0-580d-4035-8319-53ba8b74b213 -nodeID: 1 -~~~ - -{{site.data.alerts.callout_success}} -These details are also written to the `INFO` log in the `/logs` directory in case you need to refer to them at a later time. -{{site.data.alerts.end}} - -Field | Description -------|------------ -`build` | The version of CockroachDB you are running. -`webui` | The URL for accessing the Admin UI. -`sql` | The connection URL for your client. -`client flags` | The flags to use when connecting to the node via [`cockroach` client commands](cockroach-commands.html). -`logs` | The directory containing debug log data. -`temp dir` | The temporary store directory of the node. -`external I/O path` | The external IO directory with which the local file access paths are prefixed while performing [backup](backup.html) and [restore](restore.html) operations using local node directories or NFS drives. -`attrs` | If node-level attributes were specified in the `--attrs` flag, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`locality` | If values describing the locality of the node were specified in the `--locality` field, they are listed in this field. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`store[n]` | The directory containing store data, where `[n]` is the index of the store, e.g., `store[0]` for the first store, `store[1]` for the second store.

      If store-level attributes were specified in the `attrs` field of the [`--store`](#store) flag, they are listed in this field as well. These details are potentially useful for [configuring replication zones](configure-replication-zones.html). -`status` | Whether the node is the first in the cluster (`initialized new cluster`), joined an existing cluster for the first time (`initialized new node, joined pre-existing cluster`), or rejoined an existing cluster (`restarted pre-existing node`). -`clusterID` | The ID of the cluster.

      When trying to join a node to an existing cluster, if this ID is different than the ID of the existing cluster, the node has started a new cluster. This may be due to conflicting information in the node's data directory. For additional guidance, see the [troubleshooting](common-errors.html#node-belongs-to-cluster-cluster-id-but-is-attempting-to-connect-to-a-gossip-network-for-cluster-another-cluster-id) docs. -`nodeID` | The ID of the node. - -## Known limitations - -{% include {{ page.version.version }}/known-limitations/adding-stores-to-node.md %} - -## Examples - -### Start a single-node cluster - -
      - - -
      - -To start a single-node cluster, run the `cockroach start` command without the `--join` flag: - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---advertise-addr=<node address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
      - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---advertise-addr=<node address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
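-
-To check that the node is accepting SQL connections, one quick test (a sketch, assuming the insecure node above is listening on `localhost` on the default port `26257`; for a secure node, replace `--insecure` with `--certs-dir=certs`) is to run a statement through the [built-in SQL client](use-the-built-in-sql-client.html):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sql --insecure --host=localhost -e "SHOW DATABASES;"
-~~~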
      - -### Start a multi-node cluster - -
      - - -
      - -To start a multi-node cluster, run the `cockroach start` command for each node, setting the `--join` flag to the addresses of 3-5 of the initial nodes: - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---advertise-addr=<node1 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---advertise-addr=<node2 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---advertise-addr=<node3 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
      - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---advertise-addr=<node1 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---advertise-addr=<node2 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---advertise-addr=<node3 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
      - -Then run the [`cockroach init`](initialize-a-cluster.html) command against any node to perform a one-time cluster initialization: - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---certs-dir=certs \
---host=<address of any node>
      -~~~ -
      - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach init \
---insecure \
---host=<address of any node>
      -~~~ -
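-
-Once initialization completes, you can confirm that all nodes have joined the cluster (a sketch, assuming the insecure cluster above; for a secure cluster, replace `--insecure` with `--certs-dir=certs`). Each node started earlier should appear as a row in the output:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --insecure --host=<address of any node>
-~~~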
-
-### Start a multi-node cluster across private networks
-
-**Scenario:**
-
-- You have a cluster that spans GCE and AWS.
-- The nodes on each cloud can reach each other on private addresses, but the private addresses aren't reachable from the other cloud.
-
-**Approach:**
-
-1. Start each node on GCE with `--locality` set to describe its location, `--locality-advertise-addr` set to advertise its private address to other nodes on GCE, `--advertise-addr` set to advertise its public address to nodes on AWS, and `--join` set to the public addresses of 3-5 of the initial nodes:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --certs-dir=certs \
-    --locality=cloud=gce \
-    --locality-advertise-addr=cloud=gce@<private address> \
-    --advertise-addr=<public address> \
-    --join=<node1 public address>,<node2 public address>,<node3 public address> \
-    --cache=.25 \
-    --max-sql-memory=.25
-    ~~~
-
-2. Start each node on AWS with `--locality` set to describe its location, `--locality-advertise-addr` set to advertise its private address to other nodes on AWS, `--advertise-addr` set to advertise its public address to nodes on GCE, and `--join` set to the public addresses of 3-5 of the initial nodes:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach start \
-    --certs-dir=certs \
-    --locality=cloud=aws \
-    --locality-advertise-addr=cloud=aws@<private address> \
-    --advertise-addr=<public address> \
-    --join=<node1 public address>,<node2 public address>,<node3 public address> \
-    --cache=.25 \
-    --max-sql-memory=.25
-    ~~~
-
-3. Run the [`cockroach init`](initialize-a-cluster.html) command against any node to perform a one-time cluster initialization:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach init \
-    --certs-dir=certs \
-    --host=<address of any node>
      - ~~~ - -### Add a node to a cluster - -
      - - -
      - -To add a node to an existing cluster, run the `cockroach start` command, setting the `--join` flag to the addresses of 3-5 of the nodes already in the cluster: - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---certs-dir=certs \
---advertise-addr=<node4 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
      - -
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach start \
---insecure \
---advertise-addr=<node4 address> \
---join=<node1 address>,<node2 address>,<node3 address> \
---cache=.25 \
---max-sql-memory=.25
-~~~
-
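-
-After the new node joins, the cluster automatically rebalances existing replicas onto it over time. One rough way to watch this happen (a sketch, assuming the insecure cluster above; for a secure cluster, replace `--insecure` with `--certs-dir=certs`) is to poll the per-node range counts:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --ranges --insecure --host=<address of any node>
-~~~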
      - -## See also - -- [Initialize a Cluster](initialize-a-cluster.html) -- [Manual Deployment](manual-deployment.html) -- [Orchestrated Deployment](orchestration.html) -- [Local Deployment](start-a-local-cluster.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/stop-a-node.md b/src/current/v19.1/stop-a-node.md deleted file mode 100644 index 28a6857d261..00000000000 --- a/src/current/v19.1/stop-a-node.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: Stop a Node -summary: This page shows you how to use the cockroach quit command to temporarily stop a node that you plan to restart. -toc: true ---- - -This page shows you how to use the `cockroach quit` [command](cockroach-commands.html) to temporarily stop a node that you plan to restart. - -You might do this, for example, during the process of [upgrading your cluster's version of CockroachDB](upgrade-cockroach-version.html) or to perform planned maintenance (e.g., upgrading system software). - -{{site.data.alerts.callout_info}} -In other scenarios, such as when downsizing a cluster or reacting to hardware failures, it's best to remove nodes from your cluster entirely. For information about this, see [Decommission Nodes](remove-nodes.html). -{{site.data.alerts.end}} - -## Overview - -### How it works - -When you stop a node, it performs the following steps: - -- Finishes in-flight requests. Note that this is a best effort that times out after the duration specified by the `server.shutdown.query_wait` [cluster setting](cluster-settings.html). -- Gossips its draining state to the cluster, so that other nodes do not try to distribute query planning to the draining node. Note that this is a best effort that times out after the duration specified by the `server.shutdown.drain_wait` [cluster setting](cluster-settings.html), so other nodes may not receive the gossip info in time. -- No new ranges are transferred to the draining node, to avoid a possible loss of quorum after the node shuts down. - -If the node then stays offline for a certain amount of time (5 minutes by default), the cluster considers the node dead and starts to transfer its **range replicas** to other nodes as well. - -After that, if the node comes back online, its range replicas will determine whether or not they are still valid members of replica groups. If a range replica is still valid and any data in its range has changed, it will receive updates from another replica in the group. If a range replica is no longer valid, it will be removed from the node. - -Basic terms: - -- **Range**: CockroachDB stores all user data and almost all system data in a giant sorted map of key value pairs. This keyspace is divided into "ranges", contiguous chunks of the keyspace, so that every key can always be found in a single range. -- **Range Replica:** CockroachDB replicates each range (3 times by default) and stores each replica on a different node. - -### Considerations - -{% include {{ page.version.version }}/faq/planned-maintenance.md %} - -## Synopsis - -Temporarily stop a node: - -~~~ shell -$ cockroach quit -~~~ - -View help: - -~~~ shell -$ cockroach quit --help -~~~ - -## Flags - -The `quit` command supports the following [general-use](#general), [client connection](#client-connection), and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--decommission` | If specified, the node will be removed from the cluster instead of temporarily stopped. See [Remove Nodes](remove-nodes.html) for more details. 
-`--drain-wait` | Amount of time to wait for the node to drain before stopping the node. See [`cockroach node drain`](view-node-details.html) for more details.

      **Default:** `10m` - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `quit` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Examples - -### Stop a node from the machine where it's running - -1. SSH to the machine where the node is running. - -2. If the node is running in the background and you are using a process manager for automatic restarts, use the process manager to stop the `cockroach` process without restarting it. - - If the node is running in the background and you are not using a process manager, send a kill signal to the `cockroach` process, for example: - - {% include copy-clipboard.html %} - ~~~ shell - $ pkill cockroach - ~~~ - - If the node is running in the foreground, press `CTRL-C`. - -3. Verify that the `cockroach` process has stopped: - - {% include copy-clipboard.html %} - ~~~ shell - $ ps aux | grep cockroach - ~~~ - - Alternately, you can check the node's logs for the message `server drained and shutdown completed`. - -### Stop a node from another machine - -
      - - -
      - -
-
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node.
-
-2. Create a `certs` directory and copy the CA certificate and the client certificate and key for the `root` user into the directory.
-
-3. Run the `cockroach quit` command without the `--decommission` flag:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach quit --certs-dir=certs --host=<address of node to stop>
      - ~~~ -
      - -
-
-1. [Install the `cockroach` binary](install-cockroachdb.html) on a machine separate from the node.
-
-2. Run the `cockroach quit` command without the `--decommission` flag:
-
-    {% include copy-clipboard.html %}
-    ~~~ shell
-    $ cockroach quit --insecure --host=<address of node to stop>
      - ~~~ -
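-
-To confirm from the separate machine that the node has stopped, one option (a sketch, assuming at least one other node is still running and an insecure cluster; for a secure cluster, replace `--insecure` with `--certs-dir=certs`) is to run `cockroach node status` against a live node and check that the stopped node's `is_live` column is `false`:
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node status --insecure --host=<address of a live node>
-~~~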
-
-## See also
-
-- [Other Cockroach Commands](cockroach-commands.html)
-- [Decommission Nodes](remove-nodes.html)
-- [Upgrade a Cluster's Version](upgrade-cockroach-version.html)
diff --git a/src/current/v19.1/string.md b/src/current/v19.1/string.md
deleted file mode 100644
index ddeeab42df2..00000000000
--- a/src/current/v19.1/string.md
+++ /dev/null
@@ -1,165 +0,0 @@
----
-title: STRING
-summary: The STRING data type stores a string of Unicode characters.
-toc: true
----
-
-The `STRING` [data type](data-types.html) stores a string of Unicode characters.
-
-## Aliases
-
-In CockroachDB, the following are aliases for `STRING`:
-
-- `CHARACTER`
-- `CHAR`
-- `VARCHAR`
-- `TEXT`
-
-And the following are aliases for `STRING(n)`:
-
-- `CHARACTER(n)`
-- `CHARACTER VARYING(n)`
-- `CHAR(n)`
-- `CHAR VARYING(n)`
-- `VARCHAR(n)`
-
-## Length
-
-To limit the length of a string column, use `STRING(n)`, where `n` is the maximum number of Unicode code points (normally thought of as "characters") allowed.
-
-When inserting a string:
-
-- If the value exceeds the column's length limit, CockroachDB gives an error.
-- If the value is cast as a string with a length limit (e.g., `CAST('hello world' AS STRING(5))`), CockroachDB truncates to the limit.
-- If the value is under the column's length limit, CockroachDB does **not** add padding. This applies to `STRING(n)` and all its aliases.
-
-## Syntax
-
-A value of type `STRING` can be expressed using a variety of formats.
-See [string literals](sql-constants.html#string-literals) for more details.
-
-When printing out a `STRING` value in the [SQL shell](use-the-built-in-sql-client.html), the shell uses the simple
-SQL string literal format if the value doesn't contain special characters,
-or the escaped format otherwise.
-
-### Collations
-
-`STRING` values accept [collations](collate.html), which let you sort strings according to language- and country-specific rules.
-
-## Size
-
-The size of a `STRING` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification](https://en.wikipedia.org/wiki/Write_amplification) and other considerations may cause significant performance degradation.
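-
-For example, the length-limit rules described in [Length](#length) above can be observed directly (a sketch; the table name is illustrative and the exact error wording may vary):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE short_strings (s STRING(5));
-> INSERT INTO short_strings VALUES ('hello world');                     -- fails: value too long for type STRING(5)
-> INSERT INTO short_strings VALUES (CAST('hello world' AS STRING(5)));  -- the cast truncates, so 'hello' is inserted
-~~~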
- -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE strings (a STRING PRIMARY KEY, b STRING(4), c TEXT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM strings; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -+-------------+-----------+-------------+----------------+-----------------------+-------------+-----------+ - a | STRING | false | NULL | | {"primary"} | false - b | STRING(4) | true | NULL | | {} | false - c | STRING | true | NULL | | {} | false -(3 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO strings VALUES ('a1b2c3d4', 'e5f6', 'g7h8i9'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM strings; -~~~ - -~~~ - a | b | c -+----------+------+--------+ - a1b2c3d4 | e5f6 | g7h8i9 -(1 row) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE aliases (a STRING PRIMARY KEY, b VARCHAR, c CHAR); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM aliases; -~~~ - -~~~ - column_name | data_type | is_nullable | column_default | generation_expression | indices | is_hidden -+-------------+-----------+-------------+----------------+-----------------------+-------------+-----------+ - a | STRING | false | NULL | | {"primary"} | false - b | VARCHAR | true | NULL | | {} | false - c | CHAR | true | NULL | | {} | false -(3 rows) -~~~ - -## Supported casting and conversion - -`STRING` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`BIT` | Requires supported [`BIT`](bit.html) string format, e.g., `'101001'`. -`BOOL` | Requires supported [`BOOL`](bool.html) string format, e.g., `'true'`. -`BYTES` | For more details, [see here](bytes.html#supported-conversions). -`DATE` | Requires supported [`DATE`](date.html) string format, e.g., `'2016-01-25'`. -`DECIMAL` | Requires supported [`DECIMAL`](decimal.html) string format, e.g., `'1.1'`. -`FLOAT` | Requires supported [`FLOAT`](float.html) string format, e.g., `'1.1'`. -`INET` | Requires supported [`INET`](inet.html) string format, e.g, `'192.168.0.1'`. -`INT` | Requires supported [`INT`](int.html) string format, e.g., `'10'`. -`INTERVAL` | Requires supported [`INTERVAL`](interval.html) string format, e.g., `'1h2m3s4ms5us6ns'`. -`TIME` | Requires supported [`TIME`](time.html) string format, e.g., `'01:22:12'` (microsecond precision). -`TIMESTAMP` | Requires supported [`TIMESTAMP`](timestamp.html) string format, e.g., `'2016-01-25 10:10:10.555555'`. - -### `STRING` vs. `BYTES` - -While both `STRING` and `BYTES` can appear to have similar behavior in many situations, one should understand their nuance before casting one into the other. - -`STRING` treats all of its data as characters, or more specifically, Unicode code points. `BYTES` treats all of its data as a byte string. This difference in implementation can lead to dramatically different behavior. 
For example, let's take a complex Unicode character such as ☃ ([the snowman emoji](https://emojipedia.org/snowman/)):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT length('☃'::string);
-~~~
-
-~~~
-  length
-+--------+
-       1
-~~~
-
-~~~ sql
-> SELECT length('☃'::bytes);
-~~~
-~~~
-  length
-+--------+
-       3
-~~~
-
-In this case, [`LENGTH(string)`](functions-and-operators.html#string-and-byte-functions) measures the number of Unicode code points present in the string, whereas [`LENGTH(bytes)`](functions-and-operators.html#string-and-byte-functions) measures the number of bytes required to store that value. Each character (or Unicode code point) can be encoded using multiple bytes, hence the difference in output between the two.
-
-#### Translating literals to `STRING` vs. `BYTES`
-
-A literal entered through a SQL client will be translated into a different value based on the type:
-
-+ `BYTES` give a special meaning to the pair `\x` at the beginning, and translates the rest by substituting pairs of hexadecimal digits to a single byte. For example, `\xff` is equivalent to a single byte with the value of 255. For more information, see [SQL Constants: String literals with character escapes](sql-constants.html#string-literals-with-character-escapes).
-+ `STRING` does not give a special meaning to `\x`, so all characters are treated as distinct Unicode code points. For example, `\xff` is treated as a `STRING` with length 4 (`\`, `x`, `f`, and `f`).
-
-## See also
-
-[Data Types](data-types.html)
diff --git a/src/current/v19.1/subqueries.md b/src/current/v19.1/subqueries.md
deleted file mode 100644
index 375500cd129..00000000000
--- a/src/current/v19.1/subqueries.md
+++ /dev/null
@@ -1,114 +0,0 @@
----
-title: Subqueries
-summary: Subqueries enable the use of the results from a query within another query.
-toc: true
----
-
-SQL subqueries enable reuse of the results from a [selection query](selection-queries.html) within another query.
-
-
-## Overview
-
-CockroachDB supports two kinds of subqueries:
-
-- **Relational** subqueries which appear as an operand in [selection queries](selection-queries.html) or [table expressions](table-expressions.html).
-- **Scalar** subqueries which appear as an operand in a [scalar expression](scalar-expressions.html).
-
-## Data writes in subqueries
-
-When a subquery contains a data-modifying statement (`INSERT`,
-`DELETE`, etc.), the data modification is always executed to
-completion even if the surrounding query only uses a subset of the
-result rows.
-
-This is true both for subqueries defined using the `(...)` or `[...]`
-notations, and those defined using
-[`WITH`](common-table-expressions.html).
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT *
-  FROM [INSERT INTO t(x) VALUES (1), (2), (3) RETURNING x]
- LIMIT 1;
-~~~
-
-This query always inserts 3 rows into `t`, even though the surrounding
-query only observes 1 row using [`LIMIT`](limit-offset.html).
-
-## Correlated subqueries
-
-New in v19.1: CockroachDB's [cost-based optimizer](cost-based-optimizer.html) supports most correlated subqueries.
-
-A subquery is said to be "correlated" when it uses table or column names defined in the surrounding query.
-
-For example, to find every customer with at least one order, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT
-      c.name
-  FROM
-      customers AS c
-  WHERE
-      EXISTS(
-          SELECT * FROM orders AS o WHERE o.customer_id = c.id
-      );
-~~~
-
-The subquery is correlated because it uses `c` defined in the surrounding query.
-
-### Limitations
-
-The [cost-based optimizer](cost-based-optimizer.html) supports most correlated subqueries, with the following exceptions:
-
-- Correlated subqueries that generate side effects inside a `CASE` statement.
-
-- Correlated subqueries that result in implicit `LATERAL` joins. Given a cross-join expression `a,b`, if `b` is an application of a [set-returning function](functions-and-operators.html#set-returning-functions) that references a variable defined in the surrounding query, the `LATERAL` keyword is assumed as shown below.
-
-    {% include copy-clipboard.html %}
-    ~~~ sql
-    > SELECT
-          e.last_name, s.salary, noise
-      FROM
-          employees AS e,
-          salaries AS s,
-          -- Join with a set-returning function implies LATERAL below
-          generate_series(0, s.salary, 10000) AS noise
-      WHERE
-          e.emp_no = s.emp_no
-      ORDER BY
-          s.salary DESC
-      LIMIT
-          10;
-    ~~~
-
-    ~~~
-    ERROR: no data source matches prefix: s
-    ~~~
-
-    For more information, see [the GitHub issue tracking `LATERAL` join implementation](https://github.com/cockroachdb/cockroach/issues/24560).
-
-    Note that the example above uses the [employees data set](https://github.com/datacharmer/test_db) that is also used in our [Migrate from MySQL](migrate-from-mysql.html) instructions (and the [MySQL docs](https://dev.mysql.com/doc/employee/en/)).
-
-{{site.data.alerts.callout_info}}
-If you come across an unsupported correlated subquery other than those described above, please [file a GitHub issue](file-an-issue.html).
-{{site.data.alerts.end}}
-
-## Performance best practices
-
-{{site.data.alerts.callout_info}}
-CockroachDB is currently undergoing major changes to evolve and improve the performance of subqueries. The restrictions and workarounds listed in this section will be lifted or made unnecessary over time.
-{{site.data.alerts.end}}
-
-- Scalar subqueries currently disable the distribution of the execution of a query. To ensure maximum performance on queries that process a large number of rows, make the client application compute the subquery results ahead of time and pass these results directly in the surrounding query.
-
-- The results of scalar subqueries are currently loaded entirely into memory when the execution of the surrounding query starts. To prevent execution errors due to memory exhaustion, ensure that subqueries return as few results as possible.
-
-## See also
-
-- [Selection Queries](selection-queries.html)
-- [Scalar Expressions](scalar-expressions.html)
-- [Table Expressions](table-expressions.html)
-- [Performance Best Practices - Overview](performance-best-practices-overview.html)
diff --git a/src/current/v19.1/support-resources.md b/src/current/v19.1/support-resources.md
deleted file mode 100644
index c44f9406bbc..00000000000
--- a/src/current/v19.1/support-resources.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Support Resources
-summary: There are various ways to reach out for support from Cockroach Labs and our community.
-toc: false
----
-
-For each major release of CockroachDB, Cockroach Labs provides maintenance support for at least 365 days and assistance support for at least an additional 180 days. For more details, see the [Release Support Policy](../releases/release-support-policy.html).
-
-If you're having an issue with CockroachDB, you can reach out for support from Cockroach Labs and our community:
-
-- [Troubleshooting documentation](troubleshooting-overview.html)
-- [CockroachDB Community Forum](https://forum.cockroachlabs.com)
-- [CockroachDB Community Slack](https://cockroachdb.slack.com)
-- [StackOverflow](http://stackoverflow.com/questions/tagged/cockroachdb)
-- [File a GitHub issue](file-an-issue.html)
-- [CockroachDB Support Portal](https://support.cockroachlabs.com)
-
-We also rely on contributions from users like you. If you know how to help users who might be struggling with a problem, we hope you will!
diff --git a/src/current/v19.1/table-expressions.md b/src/current/v19.1/table-expressions.md
deleted file mode 100644
index e9bbad097e0..00000000000
--- a/src/current/v19.1/table-expressions.md
+++ /dev/null
@@ -1,397 +0,0 @@
----
-title: Table Expressions
-summary: Table expressions define a data source in selection clauses.
-toc: true
----
-
-Table expressions define a data source in the `FROM` sub-clause of
-[simple `SELECT` clauses](select-clause.html), or as a parameter to
-[`TABLE`](selection-queries.html#table-clause).
-
-[SQL Joins](joins.html) are a particular kind of table
-expression.
-
-
-## Synopsis
-
-
      - {% include {{ page.version.version }}/sql/diagrams/table_ref.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | A [table or view name](#table-or-view-names). -`table_alias_name` | A name to use in an [aliased table expression](#aliased-table-expressions). -`name` | One or more aliases for the column names, to use in an [aliased table expression](#aliased-table-expressions). -`index_name` | Optional syntax to [force index selection](#force-index-selection). -`func_application` | [Results from a function](#results-from-a-function). -`preparable_stmt` | [Use the result rows](#using-the-output-of-other-statements) of a [preparable statement](sql-grammar.html#preparable_stmt). -`select_stmt` | A [selection query](selection-queries.html) to use as [subquery](#subqueries-as-table-expressions). -`joined_table` | A [join expression](joins.html). - -## Table expressions language - -The synopsis above really defines a mini-language to construct -complex table expressions from simpler parts. - -Construct | Description | Examples -----------|-------------|------------ -`table_name [@ scan_parameters]` | [Access a table or view](#access-a-table-or-view). | `accounts`, `accounts@name_idx` -`function_name ( exprs ... )` | Generate tabular data using a [scalar function](#scalar-function-as-data-source) or [table generator function](#table-generator-functions). | `sin(1.2)`, `generate_series(1,10)` -`
<table expr> [AS] name [( name [, ...] )]` | [Rename a table and optionally columns](#aliased-table-expressions). | `accounts a`, `accounts AS a`, `accounts AS a(id, b)`
-`<table expr> WITH ORDINALITY` | [Enumerate the result rows](#ordinality-annotation). | `accounts WITH ORDINALITY`
-`<table expr> JOIN <table expr> ON ...` | [Join expression](joins.html). | `orders o JOIN customers c ON o.customer_id = c.id`
-`(... subquery ...)` | A [selection query](selection-queries.html) used as a [subquery](#subqueries-as-table-expressions). | `(SELECT * FROM customers c)`
-`[... statement ...]` | [Use the result rows](#using-the-output-of-other-statements) of an [explainable statement](sql-grammar.html#preparable_stmt).

      This is a CockroachDB extension. | `[SHOW COLUMNS FROM accounts]` - -The following sections provide details on each of these options. - -## Table expressions that generate data - -The following sections describe primary table expressions that produce -data. - -### Access a table or view - -#### Table or view names - -Syntax: - -~~~ -identifier -identifier.identifier -identifier.identifier.identifier -~~~ - -A single SQL identifier in a table expression context designates -the contents of the table, [view](views.html), or sequence with that name -in the current database, as configured by [`SET DATABASE`](set-vars.html). - -If the name is composed of two or more identifiers, [name resolution](sql-name-resolution.html) rules apply. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM users; -- uses table `users` in the current database -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM mydb.users; -- uses table `users` in database `mydb` -~~~ - -#### Force index selection - -{% include {{page.version.version}}/misc/force-index-selection.md %} - -### Access a common table expression - -A single identifier in a table expression context can refer to a -[common table expression](common-table-expressions.html) defined -earlier. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH a AS (SELECT * FROM users) - SELECT * FROM a; -- "a" refers to "WITH a AS .." -~~~ - -### Results from a function - -A table expression can use the results from a function application as -a data source. - -Syntax: - -~~~ -name ( arguments... ) -~~~ - -The name of a function, followed by an opening parenthesis, followed -by zero or more [scalar expressions](scalar-expressions.html), followed by -a closing parenthesis. - -The resolution of the function name follows the same rules as the -resolution of table names. See [Name -Resolution](sql-name-resolution.html) for more details. - -#### Scalar function as data source - -When a [function returning a single -value](scalar-expressions.html#function-calls-and-sql-special-forms) is -used as a table expression, it is interpreted as tabular data with a -single column and single row containing the function results. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM sin(3.2) -~~~ -~~~ -+-----------------------+ -| sin | -+-----------------------+ -| -0.058374143427580086 | -+-----------------------+ -~~~ - -{{site.data.alerts.callout_info}}CockroachDB only supports this syntax for compatibility with PostgreSQL. The canonical syntax to evaluate scalar functions is as a direct target of SELECT, for example SELECT sin(3.2).{{site.data.alerts.end}} - - -#### Table generator functions - -Some functions directly generate tabular data with multiple rows from -a single function application. This is also called a "set-returning -function". - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM generate_series(1, 3); -~~~ -~~~ -+-----------------+ -| generate_series | -+-----------------+ -| 1 | -| 2 | -| 3 | -+-----------------+ -~~~ - -Set-returning functions (SRFs) can now be accessed using `(SRF).x` where `x` is one of the following: - -- The name of a column returned from the function -- `*` to denote all columns. 
- -For example (note that the output of queries against [`information_schema`](information-schema.html) will vary per database): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT (i.keys).* FROM (SELECT information_schema._pg_expandarray(indkey) AS keys FROM pg_index) AS i; -~~~ - -~~~ - x | n ----+--- - 1 | 1 - 2 | 1 -(2 rows) -~~~ - -{{site.data.alerts.callout_info}} -Currently CockroachDB only supports a small set of generator functions compatible with [the PostgreSQL set-generating functions with the same -names](https://www.postgresql.org/docs/9.6/static/functions-srf.html). -{{site.data.alerts.end}} - -## Operators that extend a table expression - -The following sections describe table expressions that change the -metadata around tabular data, or add more data, without modifying the -data of the underlying operand. - -### Aliased table expressions - -Aliased table expressions rename tables and columns temporarily in -the context of the current query. - -Syntax: - -~~~ -
<table expr> AS <name>
-<table expr> AS <name>(<colname>, <colname>, ...)
-~~~
-
-In the first form, the table expression is equivalent to its left operand
-with a new name for the entire table, and where columns retain their original name.
-
-In the second form, the columns are also renamed.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c.x FROM (SELECT COUNT(*) AS x FROM users) AS c;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c.x FROM (SELECT COUNT(*) FROM users) AS c(x);
-~~~
-
-### Ordinality annotation
-
-Syntax:
-
-~~~
-<table expr> WITH ORDINALITY
-~~~
-
-Designates a data source equivalent to the table expression operand with
-an extra "Ordinality" column that enumerates every row in the data source.
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (VALUES('a'),('b'),('c'));
-~~~
-~~~
-+---------+
-| column1 |
-+---------+
-| a |
-| b |
-| c |
-+---------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (VALUES ('a'), ('b'), ('c')) WITH ORDINALITY;
-~~~
-~~~
-+---------+------------+
-| column1 | ordinality |
-+---------+------------+
-| a | 1 |
-| b | 2 |
-| c | 3 |
-+---------+------------+
-~~~
-
-{{site.data.alerts.callout_info}}
-`WITH ORDINALITY` necessarily prevents some optimizations of the surrounding query. Use it sparingly if performance is a concern, and always check the output of [`EXPLAIN`](explain.html) in case of doubt.
-{{site.data.alerts.end}}
-
-## Join expressions
-
-Join expressions combine the results of two or more table expressions
-based on conditions on the values of particular columns.
-
-See [Join Expressions](joins.html) for more details.
-
-## Using other queries as table expressions
-
-The following sections describe how to use the results produced by
-another SQL query or statement as a table expression.
-
-### Subqueries as table expressions
-
-Any [selection
-query](selection-queries.html) enclosed
-between parentheses can be used as a table expression, including
-[simple `SELECT` clauses](select-clause.html). This is called a
-"[subquery](subqueries.html)".
-
-Syntax:
-
-~~~
-( ... subquery ... )
-~~~
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT c+2 FROM (SELECT COUNT(*) AS c FROM users);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (VALUES(1), (2), (3));
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT firstname || ' ' || lastname FROM (TABLE employees);
-~~~
-
-{{site.data.alerts.callout_info}}
-- See also [Subqueries](subqueries.html) for more details and performance best practices.
-- To use other statements that produce data in a table expression, for example `SHOW`, use the [square bracket notation](#using-the-output-of-other-statements).
-{{site.data.alerts.end}}
-
-
-### Using the output of other statements
-
-Syntax:
-
-~~~
-[ <explainable statement> ]
-~~~
-
-An [explainable statement](sql-grammar.html#preparable_stmt)
-between square brackets in a table expression context designates the
-output of executing said statement.
-
-{{site.data.alerts.callout_info}}
-This is a CockroachDB extension. This syntax complements the [subquery syntax using parentheses](#subqueries-as-table-expressions), which is restricted to [selection queries](selection-queries.html). It was introduced to enable use of any [explainable statement](sql-grammar.html#preparable_stmt) as a subquery, including `SHOW` and other non-query statements.
-{{site.data.alerts.end}}
-
-For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT "column_name" FROM [SHOW COLUMNS FROM customer];
-~~~
-
-~~~
-+-------------+
-| column_name |
-+-------------+
-| id |
-| name |
-| address |
-+-------------+
-(3 rows)
-~~~
-
-The following statement inserts Albert in the `employee` table and
-immediately creates a matching entry in the `management` table with the
-auto-generated employee ID, without requiring a round trip with the SQL
-client:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO management(manager, reportee)
-  VALUES ((SELECT id FROM employee WHERE name = 'Diana'),
-          (SELECT id FROM [INSERT INTO employee(name) VALUES ('Albert') RETURNING id]));
-~~~
-
-## Composability
-
-Table expressions are used in the [`SELECT`](select-clause.html) and
-[`TABLE`](selection-queries.html#table-clause) variants of [selection
-clauses](selection-queries.html#selection-clauses), and thus can appear everywhere where
-a selection clause is possible. For example:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT ... FROM <table expr>, <table expr>, ...
-> TABLE <table expr>
-> INSERT INTO ... SELECT ... FROM <table expr>, <table expr>, ...
-> INSERT INTO ... TABLE <table expr>
-> CREATE TABLE ... AS SELECT ... FROM <table expr>, <table expr>, ...
-> UPSERT INTO ... SELECT ... FROM <table expr>, <table expr>
      , ... -~~~ - -For more options to compose query results, see [Selection Queries](selection-queries.html). - -## See also - -- [Constants](sql-constants.html) -- [Selection Queries](selection-queries.html) - - [Selection Clauses](selection-queries.html#selection-clauses) -- [Explainable Statements](sql-grammar.html#preparable_stmt) -- [Scalar Expressions](scalar-expressions.html) -- [Data Types](data-types.html) -- [Subqueries](subqueries.html) diff --git a/src/current/v19.1/third-party-database-tools.md b/src/current/v19.1/third-party-database-tools.md deleted file mode 100644 index 881afc05957..00000000000 --- a/src/current/v19.1/third-party-database-tools.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Third-Party Database Tools -summary: Learn about third-party software that works with CockroachDB. -toc: true ---- - -CockroachDB's support of the PostgreSQL wire protocol enables support for many [drivers](build-an-app-with-cockroachdb.html), [ORMs](build-an-app-with-cockroachdb.html), and other types of third-party database tools. - -## Support - -We offer the following levels of support with third-party tools: - -- **Comprehensive support** indicates that the vast majority of the tool's features should work without issue with CockroachDB. -- **Partial support** indicates that the tool works with CockroachDB, but its integration might require additional steps, lack support for all features, or exhibit undesirable behavior. - -## Graphical User Interface - -- [DBeaver](dbeaver.html) _(comprehensive support)_ - -## Integrated Development Environment (IDE) - -- [Intellij IDEA (Java)](intellij-idea.html) _(partial support)_ - -## See Also - -- [Build an App with CockroachDB](build-an-app-with-cockroachdb.html) diff --git a/src/current/v19.1/time.md b/src/current/v19.1/time.md deleted file mode 100644 index ba0a198b569..00000000000 --- a/src/current/v19.1/time.md +++ /dev/null @@ -1,110 +0,0 @@ ---- -title: TIME -summary: CockroachDB's TIME data type stores a time of day in UTC. -toc: true ---- -The `TIME` [data type](data-types.html) stores the time of day in UTC. - -## Aliases - -In CockroachDB, the following are aliases: - -`TIME WITHOUT TIME ZONE` - -## Syntax - -A constant value of type `TIME` can be expressed using an -[interpreted literal](sql-constants.html#interpreted-literals), or a -string literal -[annotated with](scalar-expressions.html#explicitly-typed-expressions) -type `TIME` or -[coerced to](scalar-expressions.html#explicit-type-coercions) type -`TIME`. - -The string format for time is `HH:MM:SS.SSSSSS`. For example: `TIME '05:40:00.000001'`. - -When it is unambiguous, a simple unannotated [string literal](sql-constants.html#string-literals) can also -be automatically interpreted as type `TIME`. - -Note that the fractional portion of `TIME` is optional and is rounded to microseconds (i.e., six digits after the decimal) for compatibility with the [PostgreSQL wire protocol](https://www.postgresql.org/docs/current/static/protocol.html). - -## Size - -A `TIME` column supports values up to 8 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. 
-
-## Example
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE time (time_id INT PRIMARY KEY, time_val TIME);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW COLUMNS FROM time;
-~~~
-
-~~~
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| column_name | data_type | is_nullable | column_default | generation_expression | indices |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-| time_id | INT | false | NULL | | {"primary"} |
-| time_val | TIME | true | NULL | | {} |
-+-------------+-----------+-------------+----------------+-----------------------+-------------+
-(2 rows)
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO time VALUES (1, TIME '05:40:00'), (2, TIME '05:41:39');
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM time;
-~~~
-
-~~~
-+---------+---------------------------+
-| time_id | time_val |
-+---------+---------------------------+
-| 1 | 0000-01-01 05:40:00+00:00 |
-| 2 | 0000-01-01 05:41:39+00:00 |
-+---------+---------------------------+
-(2 rows)
-~~~
-
-{{site.data.alerts.callout_info}}The cockroach sql shell displays the date and time zone due to the Go SQL driver it uses. Other client drivers may behave similarly. In such cases, however, the date and time zone are not relevant and are not stored in the database.{{site.data.alerts.end}}
-
-Comparing `TIME` values:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT (SELECT time_val FROM time WHERE time_id = 1) < (SELECT time_val FROM time WHERE time_id = 2);
-~~~
-
-~~~
-+--------------------------------+
-| (SELECT time_val FROM "time" |
-| WHERE time_id = 1) < (SELECT |
-| time_val FROM "time" WHERE |
-| time_id = 2) |
-+--------------------------------+
-| true |
-+--------------------------------+
-(1 row)
-~~~
-
-## Supported casting & conversion
-
-`TIME` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types:
-
-Type | Details
------|--------
-`INTERVAL` | Converts to the span of time since midnight (00:00)
-`STRING` | Converts to format `'HH:MM:SS.SSSSSS'` (microsecond precision)
-
-## See also
-
-- [Data Types](data-types.html)
-- [SQL Feature Support](sql-feature-support.html)
diff --git a/src/current/v19.1/timestamp.md b/src/current/v19.1/timestamp.md
deleted file mode 100644
index 8347e63cd59..00000000000
--- a/src/current/v19.1/timestamp.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: TIMESTAMP / TIMESTAMPTZ
-summary: The TIMESTAMP and TIMESTAMPTZ data types store a date and time pair in UTC.
-toc: true
----
-
-The `TIMESTAMP` and `TIMESTAMPTZ` [data types](data-types.html) store a date and time pair in UTC.
-
-
-## Variants
-
-`TIMESTAMP` has two variants:
-
-- `TIMESTAMP` presents all `TIMESTAMP` values in UTC.
-
-- `TIMESTAMPTZ` converts `TIMESTAMP` values from UTC to the client's session time zone (unless another time zone is specified for the value). However, it is conceptually important to note that `TIMESTAMPTZ` **does not** store any time zone data.
-
-  {{site.data.alerts.callout_info}}
-  The default session time zone is UTC, which means that by default `TIMESTAMPTZ` values display in UTC.
-  {{site.data.alerts.end}}
-
-The difference between these two variants is that `TIMESTAMPTZ` uses the client's session time zone, while the other simply does not. This behavior extends to functions like `now()` and `extract()` on `TIMESTAMPTZ` values.
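-
-One way to see the difference in practice (a sketch; the exact output depends on your session settings) is to set a session time zone and compare how each variant displays the same moment:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SET TIME ZONE 'America/New_York';
-> SELECT now()::TIMESTAMPTZ AS with_session_zone, now()::TIMESTAMP AS in_utc;
-~~~
-
-The `TIMESTAMPTZ` value is displayed with the session's UTC offset applied, while the `TIMESTAMP` value shows the same moment in UTC.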
- -## Best practices - -We recommend always using the `TIMESTAMPTZ` variant because the `TIMESTAMP` variant can sometimes lead to unexpected behaviors when it ignores a session offset. However, we also recommend you avoid setting a session time for your database. - -## Aliases - -In CockroachDB, the following are aliases: - -- `TIMESTAMP`, `TIMESTAMP WITHOUT TIME ZONE` -- `TIMESTAMPTZ`, `TIMESTAMP WITH TIME ZONE` - -## Syntax - -A constant value of type `TIMESTAMP`/`TIMESTAMPTZ` can be expressed using an -[interpreted literal](sql-constants.html#interpreted-literals), or a -string literal -[annotated with](scalar-expressions.html#explicitly-typed-expressions) -type `TIMESTAMP`/`TIMESTAMPTZ` or -[coerced to](scalar-expressions.html#explicit-type-coercions) type -`TIMESTAMP`/`TIMESTAMPTZ`. - -`TIMESTAMP` constants can be expressed using the -following string literal formats: - -Format | Example --------|-------- -Date only | `TIMESTAMP '2016-01-25'` -Date and Time | `TIMESTAMP '2016-01-25 10:10:10.555555'` -ISO 8601 | `TIMESTAMP '2016-01-25T10:10:10.555555'` - -To express a `TIMESTAMPTZ` value (with time zone offset from UTC), use -the following format: `TIMESTAMPTZ '2016-01-25 10:10:10.555555-05:00'` - -When it is unambiguous, a simple unannotated [string literal](sql-constants.html#string-literals) can also -be automatically interpreted as type `TIMESTAMP` or `TIMESTAMPTZ`. - -Note that the fractional portion is optional and is rounded to -microseconds (6 digits after decimal) for compatibility with the -PostgreSQL wire protocol. - -## Size - -A `TIMESTAMP`/`TIMESTAMPTZ` column supports values up to 12 bytes in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE timestamps (a INT PRIMARY KEY, b TIMESTAMPTZ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW COLUMNS FROM timestamps; -~~~ - -~~~ -+-------------+--------------------------+-------------+----------------+-----------------------+-------------+ -| column_name | data_type | is_nullable | column_default | generation_expression | indices | -+-------------+--------------------------+-------------+----------------+-----------------------+-------------+ -| a | INT | false | NULL | | {"primary"} | -| b | TIMESTAMP WITH TIME ZONE | true | NULL | | {} | -+-------------+--------------------------+-------------+----------------+-----------------------+-------------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO timestamps VALUES (1, TIMESTAMPTZ '2016-03-26 10:10:10-05:00'), (2, TIMESTAMPTZ '2016-03-26'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM timestamps; -~~~ - -~~~ -+---+---------------------------+ -| a | b | -+---+---------------------------+ -| 1 | 2016-03-26 15:10:10+00:00 | -| 2 | 2016-03-26 00:00:00+00:00 | -+---+---------------------------+ -# Note that the first timestamp is UTC-05:00, which is the equivalent of EST. -~~~ - -## Supported casting and conversion - -`TIMESTAMP` values can be [cast](data-types.html#data-type-conversions-and-casts) to any of the following data types: - -Type | Details ------|-------- -`DECIMAL` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice. -`FLOAT` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice. 
-`TIME` | Converts to the time portion (HH:MM:SS) of the timestamp -`INT` | Converts to number of seconds since the Unix epoch (Jan. 1, 1970). This is a CockroachDB experimental feature which may be changed without notice. -`DATE` | -- -`STRING` | -- - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/topology-basic-production.md b/src/current/v19.1/topology-basic-production.md deleted file mode 100644 index 1201214e22f..00000000000 --- a/src/current/v19.1/topology-basic-production.md +++ /dev/null @@ -1,89 +0,0 @@ ---- -title: Basic Production Topology -summary: Guidance for a single-region production deployment. -toc: true ---- - -When you're ready to run CockroachDB in production in a single region, it's important to deploy at least 3 CockroachDB nodes to take advantage of CockroachDB's automatic replication, distribution, rebalancing, and resiliency capabilities. - -{{site.data.alerts.callout_success}} -If you haven't already, [review the full range of topology patterns](topology-patterns.html) to ensure you choose the right one for your use case. -{{site.data.alerts.end}} - -## Prerequisites - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -## Configuration - -Basic production topology - -1. Provision hardware as follows: - - 1 region with 3 AZs - - 3+ VMs evenly distributed across AZs; add more VMs to increase throughput - - App and load balancer in same region as VMs for CockroachDB - - The load balancer redirects to CockroachDB nodes in the region - -2. Start each node on a separate VM, setting the [`--locality`](start-a-node.html#locality) flag to the node's region and AZ combination. For example, the following command starts a node in the east1 availability zone of the us-east region: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach start \ - --locality=region=us-east,zone=east1 \ - --certs-dir=certs \ - --advertise-addr= \ - --join=:26257,:26257,:26257 \ - --cache=.25 \ - --max-sql-memory=.25 \ - --background - ~~~ - -With the default 3-way replication factor and `--locality` set as described above, CockroachDB balances each range of table data across AZs, one replica per AZ. System data is replicated 5 times by default and also balanced across AZs, thus increasing the [resiliency of the cluster](configure-replication-zones.html#create-a-replication-zone-for-a-system-range) as a whole. - -## Characteristics - -### Latency - -#### Reads - -Since all ranges, including leaseholder replicas, are in a single region, read latency is very low. - -For example, in the animation below: - -1. The read request reaches the load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the relevant leaseholder. -4. The leaseholder retrieves the results and returns to the gateway node. -5. The gateway node returns the results to the client. - -Basic production topology - -#### Writes - -Since all ranges are in a single region, writes achieve consensus without leaving the region and, thus, write latency is very low as well. - -For example, in the animation below: - -1. The write request reaches the load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replicas for the relevant table and secondary index. -4. While each leaseholder appends the write to its Raft log, it notifies its follower replicas. -5. 
In each case, as soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholders then return acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. - -Leaseholder preferences topology - -### Resiliency - -Because each range is balanced across AZs, one AZ can fail without interrupting access to any data: - -Basic production topology - -However, if an additional AZ fails at the same time, the ranges that lose consensus become unavailable for reads and writes: - -Basic production topology - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-development.md b/src/current/v19.1/topology-development.md deleted file mode 100644 index c146685df9a..00000000000 --- a/src/current/v19.1/topology-development.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: Development Topology -summary: Guidance for a single-node cluster for local development. -toc: true ---- - -While developing an application against CockroachDB, it's sufficient to deploy a single-node cluster close to your test application, whether that's on a single VM or on your laptop. - -{{site.data.alerts.callout_success}} -If you haven't already, [review the full range of topology patterns](topology-patterns.html) to ensure you choose the right one for your use case. -{{site.data.alerts.end}} - -## Prerequisites - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -## Configuration - -Development topology - -For this pattern, you can either [run CockroachDB locally](start-a-local-cluster.html) or [deploy a single-node cluster on a cloud VM](manual-deployment.html). - -## Characteristics - -### Latency - -With the CockroachDB node in the same region as your client, and without the overhead of replication, both read and write latency are very low: - -Development topology - -### Resiliency - -In a single-node cluster, CockroachDB does not replicate data and, therefore, is not resilient to failures. If the machine where the node is running fails, or if the region or availability zone containing the machine fails, the cluster becomes unavailable: - -Development topology - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-duplicate-indexes.md b/src/current/v19.1/topology-duplicate-indexes.md deleted file mode 100644 index ac0e9b85570..00000000000 --- a/src/current/v19.1/topology-duplicate-indexes.md +++ /dev/null @@ -1,152 +0,0 @@ ---- -title: Duplicate Indexes Topology -summary: Guidance on using the duplicate indexes topology in a multi-region deployment. -toc: true ---- - -In a multi-region deployment, the duplicate indexes pattern is a good choice for tables with the following requirements: - -- Read latency must be low, but write latency can be much higher. -- Reads must be up-to-date for business reasons or because the table is reference by [foreign keys](foreign-key.html). -- Rows in the table, and all latency-sensitive queries, **cannot** be tied to specific geographies. -- Table data must remain available during a region failure. - -In general, this pattern is suited well for immutable/reference tables that are rarely or never updated. 
- -{% include_cached youtube.html video_id="xde_Oz-dJxM" %} - -{{site.data.alerts.callout_success}} -**See It In Action** - Read about how a [financial software company](https://www.cockroachlabs.com/guides/banking-guide-to-the-cloud/) is using the Duplicate Indexes topology for low latency reads in their identity access management layer. -{{site.data.alerts.end}} - -## Prerequisites - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -{{site.data.alerts.callout_info}} -Pinning secondary indexes requires an [Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). -{{site.data.alerts.end}} - -### Summary - -Using this pattern, you tell CockroachDB to put the leaseholder for the table itself (also called the primary index) in one region, create 2 secondary indexes on the table, and tell CockroachDB to put the leaseholder for each secondary index in one of the other regions. This means that reads will access the local leaseholder (either for the table itself or for one of the secondary indexes). Writes, however, will still leave the region to get consensus for the table and its secondary indexes. - -Duplicate Indexes topology - -### Steps - -Assuming you have a [cluster deployed across three regions](#cluster-setup) and a table like the following: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE postal_codes ( - id INT PRIMARY KEY, - code STRING -); -~~~ - -1. If you do not already have one, [request a trial Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). - -2. [Create a replication zone](configure-zone.html) for the table and set a leaseholder preference telling CockroachDB to put the leaseholder for the table in one of the regions, for example `us-west`: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE postal_codes - CONFIGURE ZONE USING - num_replicas = 3, - constraints = '{"+region=us-west":1}', - lease_preferences = '[[+region=us-west]]'; - ~~~ - -3. [Create secondary indexes](create-index.html) on the table for each of your other regions, including all of the columns you wish to read either in the key or in the key and a [`STORING`](create-index.html#store-columns) clause: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE INDEX idx_central ON postal_codes (id) - STORING (code); - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE INDEX idx_east ON postal_codes (id) - STORING (code); - ~~~ - -4. [Create a replication zone](configure-zone.html) for each secondary index, in each case setting a leaseholder preference telling CockroachDB to put the leaseholder for the index in a distinct region: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX postal_codes@idx_central - CONFIGURE ZONE USING - constraints = '{"+region=us-central":1}', - lease_preferences = '[[+region=us-central]]'; - ~~~ - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX postal_codes@idx_east - CONFIGURE ZONE USING - constraints = '{"+region=us-east":1}', - lease_preferences = '[[+region=us-east]]'; - ~~~ - -## Characteristics - -### Latency - -#### Reads - -Reads access the local leaseholder and, therefore, never leave the region. This makes read latency very low. - -For example, in the animation below: - -1. The read request in `us-central` reaches the regional load balancer. -2. 
The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the relevant leaseholder. In `us-west`, the leaseholder is for the table itself. In the other regions, the leaseholder is for the relevant index, which the [cost-based optimizer](cost-based-optimizer.html) uses due to the leaseholder preferences. -4. The leaseholder retrieves the results and returns to the gateway node. -5. The gateway node returns the results to the client. - -Pinned secondary indexes topology - -#### Writes - -The replicas for the table and its secondary indexes are spread across all 3 regions, so writes involve multiple network hops across regions to achieve consensus. This increases write latency significantly. It's also important to understand that the replication of extra indexes can reduce throughput and increase storage cost. - -For example, in the animation below: - -1. The write request in `us-central` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replicas for the table and its secondary indexes. -4. While each leaseholder appends the write to its Raft log, it notifies its follower replicas. -5. In each case, as soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholders then return acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. - -Duplicate Indexes topology - -### Resiliency - -Because this pattern balances the replicas for the table and its secondary indexes across regions, one entire region can fail without interrupting access to the table: - -Pinned Secondary Indexes topology - - - -## Alternatives - -- If reads from a table can be historical (48 seconds or more in the past), consider the [Follower Reads](topology-follower-reads.html) pattern. -- If rows in the table, and all latency-sensitive queries, can be tied to specific geographies, consider the [Geo-Partitioned Leaseholders](topology-geo-partitioned-leaseholders.html) pattern. Both patterns avoid extra secondary indexes, which increase data replication and, therefore, higher throughput and less storage. - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-follow-the-workload.md b/src/current/v19.1/topology-follow-the-workload.md deleted file mode 100644 index dc78d48bb1d..00000000000 --- a/src/current/v19.1/topology-follow-the-workload.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: Follow-the-Workload Topology -summary: Guidance on using the follow-the-workload topology in a multi-region deployment. -toc: true ---- - -In a multi-region deployment, follow-the-workload is the default pattern for tables that use no other pattern. In general, this default pattern is a good choice only for tables with the following requirements: - -- The table is active mostly in one region at a time, e.g., following the sun. -- In the active region, read latency must be low, but write latency can be higher. -- In non-active regions, both read and write latency can be higher. -- Table data must remain available during a region failure. 
- -{{site.data.alerts.callout_success}} -If read performance is your main focus for a table, but you want low-latency reads everywhere instead of just in the most active region, consider the [Duplicate Indexes](topology-duplicate-indexes.html) or [Follower Reads](topology-follower-reads.html) pattern. -{{site.data.alerts.end}} - -## Prerequisites - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -Aside from [deploying a cluster across three regions](#cluster-setup) properly, with each node started with the [`--locality`](start-a-node.html#locality) flag specifying its region and AZ combination, this pattern requires no extra configuration. CockroachDB will balance the replicas for a table across the three regions and will assign the range lease to the replica in the region with the greatest demand at any given time (the [follow-the-workload](demo-follow-the-workload.html) feature). This means that read latency in the active region will be low while read latency in other regions will be higher due to having to leave the region to reach the leaseholder. Write latency will be higher as well due to always involving replicas in multiple regions. - -Follower reads topology - -{{site.data.alerts.callout_info}} -This pattern is also used by [system ranges containing important internal data](configure-replication-zones.html#create-a-replication-zone-for-a-system-range). -{{site.data.alerts.end}} - -## Characteristics - -### Latency - -#### Reads - -Reads in the region with the most demand will access the local leaseholder and, therefore, never leave the region. This makes read latency very low in the currently most active region. Reads in other regions, however, will be routed to the leaseholder in a different region and, thus, read latency will be higher. - -For example, in the animation below, the most active region is `us-east` and, thus, the table's leaseholder is in that region: - -1. The read request in `us-east` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replica. -4. The leaseholder retrieves the results and returns to the gateway node. -5. The gateway node returns the results to the client. In this case, reads in the `us-east` remain in the region and are lower-latency than reads in other regions. - -Follow-the-workload topology - -#### Writes - -The replicas for the table are spread across all 3 regions, so writes involve multiple network hops across regions to achieve consensus. This increases write latency significantly. - -For example, in the animation below, assuming the most active region is still `us-east`: - -1. The write request in `us-east` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replica. -4. While the leaseholder appends the write to its Raft log, it notifies its follower replicas. -5. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholders then return acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. 
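To see which node currently holds a table's range leases, and therefore which region the workload is being followed to, you can inspect range information. A sketch, assuming a hypothetical table named `kv`:

{% include copy-clipboard.html %}
~~~ sql
> SHOW EXPERIMENTAL_RANGES FROM TABLE kv;
~~~

The `lease_holder` column returns a node ID, which you can correlate with the region and AZ localities you assigned to each node via `--locality`.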
- -Follow-the-workload topology - -### Resiliency - -Because this pattern balances the replicas for the table across regions, one entire region can fail without interrupting access to the table: - -Follow-the-workload topology - - - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-follower-reads.md b/src/current/v19.1/topology-follower-reads.md deleted file mode 100644 index 18e943af65f..00000000000 --- a/src/current/v19.1/topology-follower-reads.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: Follower Reads Topology -summary: Guidance on using the follower reads topology in a multi-region deployment. -toc: true ---- - -In a multi-region deployment, the follower reads pattern is a good choice for tables with the following requirements: - -- Read latency must be low, but write latency can be higher. -- Reads can be historical (48 seconds or more in the past). -- Rows in the table, and all latency-sensitive queries, **cannot** be tied to specific geographies (e.g., a reference table). -- Table data must remain available during a region failure. - -{{site.data.alerts.callout_success}} -This pattern is compatible with all of the other multi-region patterns except [Geo-Partitioned Replicas](topology-geo-partitioned-replicas.html). However, if reads from a table must be exactly up-to-date, use the [Duplicate Indexes](topology-duplicate-indexes.html) or [Geo-Partitioned Leaseholders](topology-geo-partitioned-leaseholders.html) pattern instead. Up-to-date reads are required by tables referenced by [foreign keys](foreign-key.html), for example. -{{site.data.alerts.end}} - -## Prerequisites - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -{{site.data.alerts.callout_info}} -Follower reads requires an [Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). -{{site.data.alerts.end}} - -### Summary - -Using this pattern, you configure your application to use the [follower reads](follower-reads.html) feature by adding an `AS OF SYSTEM TIME` clause when reading from the table. This tells CockroachDB to read slightly historical data (at least 48 seconds in the past) from the closest replica so as to avoid being routed to the leaseholder, which may be in an entirely different region. Writes, however, will still leave the region to get consensus for the table. - -### Steps - -Follower reads topology - -Assuming you have a [cluster deployed across three regions](#cluster-setup) and a table like the following: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE postal_codes ( - id INT PRIMARY KEY, - code STRING -); -~~~ - -Insert some data: - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO postal_codes (ID, code) VALUES (1, '10001'), (2, '10002'), (3, '10003'), (4,'60601'), (5,'60602'), (6,'60603'), (7,'90001'), (8,'90002'), (9,'90003'); -~~~ - -1. If you do not already have one, [request a trial Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). - -2. 
Configure your app to use `AS OF SYSTEM TIME experimental_follower_read_timestamp()` whenever reading from the table: - - {{site.data.alerts.callout_info}} - The `experimental_follower_read_timestamp()` [function](functions-and-operators.html) will set the [`AS OF SYSTEM TIME`](as-of-system-time.html) value to the minimum required for follower reads. - {{site.data.alerts.end}} - - {% include copy-clipboard.html %} - ~~~ sql - > SELECT code FROM postal_codes - AS OF SYSTEM TIME experimental_follower_read_timestamp() - WHERE id = 5; - ~~~ - - Alternately, instead of modifying individual read queries on the table, you can set the `AS OF SYSTEM TIME` value for all operations in a read-only transaction: - - {% include copy-clipboard.html %} - ~~~ sql - > BEGIN AS OF SYSTEM TIME experimental_follower_read_timestamp(); - - SELECT code FROM postal_codes - WHERE id = 5; - - SELECT code FROM postal_codes - WHERE id = 6; - - COMMIT; - ~~~ - -## Characteristics - -### Latency - -#### Reads - -Reads retrieve historical data from the closest replica and, therefore, never leave the region. This makes read latency very low but slightly stale. - -For example, in the animation below: - -1. The read request in `us-central` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the closest replica for the table. In this case, the replica is *not* the leaseholder. -4. The replica retrieves the results as of at least 48 seconds in the past and returns to the gateway node. -5. The gateway node returns the results to the client. - -Follower reads topology - -#### Writes - -The replicas for the table are spread across all 3 regions, so writes involve multiple network hops across regions to achieve consensus. This increases write latency significantly. - -For example, in the animation below: - -1. The write request in `us-central` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replica for the table in `us-east`. -4. Once the leaseholder has appended the write to its Raft log, it notifies its follower replicas. -5. As soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholder then returns acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. - -Follower reads topology - -### Resiliency - -Because this pattern balances the replicas for the table across regions, one entire region can fail without interrupting access to the table: - -Follower reads topology - - - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-geo-partitioned-leaseholders.md b/src/current/v19.1/topology-geo-partitioned-leaseholders.md deleted file mode 100644 index fe56ca63ed5..00000000000 --- a/src/current/v19.1/topology-geo-partitioned-leaseholders.md +++ /dev/null @@ -1,180 +0,0 @@ ---- -title: Geo-Partitioned Leaseholders Topology -summary: Common cluster topology patterns with setup examples and performance considerations. 
-toc: true ---- - -In a multi-region deployment, the geo-partitioned [leaseholders](architecture/replication-layer.html#leases) topology is a good choice for tables with the following requirements: - -- Read latency must be low, but write latency can be higher. -- Reads must be up-to-date for business reasons or because the table is referenced by [foreign keys](foreign-key.html). -- Rows in the table, and all latency-sensitive queries, can be tied to specific geographies, e.g., city, state, region. -- Table data must remain available during a region failure. - -{{site.data.alerts.callout_success}} -**See It In Action** - Read about how a [large telecom provider](https://www.cockroachlabs.com/case-studies/telecom-provider-replaces-amazon-aurora-with-cockroachdb-to-attain-analways-on-customer-experience/) with millions of customers across the United States is using the Geo-Partitioned Leaseholders topology in production for strong resiliency and performance. -{{site.data.alerts.end}} - -## Prerequisites - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -{{site.data.alerts.callout_info}} -Geo-partitioning requires an [Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). -{{site.data.alerts.end}} - -### Summary - -Using this pattern, you design your table schema to allow for [partitioning](partitioning.html#table-creation), with a column identifying geography as the first column in the table's compound primary key (e.g., city/id). You tell CockroachDB to partition the table and all of its secondary indexes by that geography column, each partition becoming its own range of 3 replicas. You then tell CockroachDB to put the leaseholder for each partition in the relevant region (e.g., LA partitions in `us-west`, NY partitions in `us-east`). The other replicas of a partition remain balanced across the other regions. This means that reads in each region will access local leaseholders and, therefore, will have low, intra-region latencies. Writes, however, will leave the region to get consensus and, therefore, will have higher, cross-region latencies. - -Geo-partitioned leaseholders topology - -### Steps - -Assuming you have a [cluster deployed across three regions](#cluster-setup) and a table and secondary index like the following: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - first_name STRING NOT NULL, - last_name STRING NOT NULL, - address STRING NOT NULL, - PRIMARY KEY (city ASC, id ASC) -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX users_last_name_index ON users (city, last_name); -~~~ - -1. If you do not already have one, [request a trial Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). - -2. Partition the table by `city`. For example, assuming there are three possible `city` values, `los angeles`, `chicago`, and `new york`: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE users PARTITION BY LIST (city) ( - PARTITION la VALUES IN ('los angeles'), - PARTITION chicago VALUES IN ('chicago'), - PARTITION ny VALUES IN ('new york') - ); - ~~~ - - This creates distinct ranges for each partition of the table. - -3. 
Partition the secondary index by `city` as well: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX users_last_name_index PARTITION BY LIST (city) ( - PARTITION la_idx VALUES IN ('los angeles'), - PARTITION chicago_idx VALUES IN ('chicago'), - PARTITION ny_idx VALUES IN ('new york') - ); - ~~~ - - This creates distinct ranges for each partition of the secondary index. - -4. For each partition of the table, [create a replication zone](configure-zone.html) that tells CockroachDB to put the partition's leaseholder in the relevant region: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION la OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-west":1}', - lease_preferences = '[[+region=us-west]]'; - ALTER PARTITION chicago OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-central":1}', - lease_preferences = '[[+region=us-central]]'; - ALTER PARTITION ny OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-east":1}', - lease_preferences = '[[+region=us-east]]'; - ~~~ - -5. For each partition of the secondary index, [create a replication zone](configure-zone.html) that tells CockroachDB to put the partition's leaseholder in the relevant region: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION la_idx OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-west":1}', - lease_preferences = '[[+region=us-west]]'; - ALTER PARTITION chicago_idx OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-central":1}', - lease_preferences = '[[+region=us-central]]'; - ALTER PARTITION ny_idx OF TABLE users - CONFIGURE ZONE USING - constraints = '{"+region=us-east":1}', - lease_preferences = '[[+region=us-east]]'; - ~~~ - -{{site.data.alerts.callout_success}} -As you scale and add more cities, you can repeat steps 2 and 3 with the new complete list of cities to re-partition the table and its secondary indexes, and then repeat steps 4 and 5 to create replication zones for the new partitions. -{{site.data.alerts.end}} - -## Characteristics - -### Latency - -#### Reads - -Because each partition's leaseholder is constrained to the relevant region (e.g., the `la` and `la_idx` partitions' leaseholders are located in the `us-west` region), reads that specify the local region key access the relevant leaseholder locally. This makes read latency very low, with the exception of reads that do not specify a region key or that refer to a partition in another region. - -For example, in the animation below: - -1. The read request in `us-west` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder for the relevant partition. -4. The leaseholder retrieves the results and returns to the gateway node. -5. The gateway node returns the results to the client. - -Geo-partitioned leaseholders topology - -#### Writes - -Just like for reads, because each partition's leaseholder is constrained to the relevant region (e.g., the `la` and `la_idx` partitions' leaseholders are located in the `us-west` region), writes that specify the local region key access the relevant leaseholder replicas locally. However, a partition's other replicas are spread across the other regions, so writes involve multiple network hops across regions to achieve consensus. This increases write latency significantly. - -For example, in the animation below: - -1. The write request in `us-west` reaches the regional load balancer. -2. 
The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replicas for the relevant table and secondary index partitions. -4. While each leaseholder appends the write to its Raft log, it notifies its follower replicas, which are in the other regions. -5. In each case, as soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholders then return acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. - -Geo-partitioned leaseholders topology - -### Resiliency - -Because this pattern balances the replicas for each partition across regions, one entire region can fail without interrupting access to any partitions. In this case, if any range loses its leaseholder in the region-wide outage, CockroachDB makes one of the range's other replicas the leaseholder: - -Geo-partitioning topology - - - -## Alternatives - -- If reads from a table can be historical (48 seconds or more in the past), consider the [Follower Reads](topology-follower-reads.html) pattern. -- If rows in the table, and all latency-sensitive queries, **cannot** be tied to specific geographies, consider the [Duplicate Indexes](topology-duplicate-indexes.html) pattern. - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-geo-partitioned-replicas.md b/src/current/v19.1/topology-geo-partitioned-replicas.md deleted file mode 100644 index 711eb3c9004..00000000000 --- a/src/current/v19.1/topology-geo-partitioned-replicas.md +++ /dev/null @@ -1,170 +0,0 @@ ---- -title: Geo-Partitioned Replicas Topology -summary: Guidance on using the geo-partitioned replicas topology in a multi-region deployment. -toc: true ---- - -In a multi-region deployment, the geo-partitioned replicas topology is a good choice for tables with the following requirements: - -- Read and write latency must be low. -- Rows in the table, and all latency-sensitive queries, can be tied to specific geographies, e.g., city, state, region. -- Regional data must remain available during an AZ failure, but it's OK for regional data to become unavailable during a region-wide failure. - -{{site.data.alerts.callout_success}} -**See It In Action** - Read about how an [electronic lock manufacturer](https://www.cockroachlabs.com/case-studies/european-electronic-lock-manufacturer-modernizes-iam-system-with-managed-cockroachdb/) and [multi-national bank](https://www.cockroachlabs.com/case-studies/top-five-multinational-bank-modernizes-its-european-core-banking-services-migrating-from-oracle-to-cockroachdb/) are using the Geo-Partitioned Replicas topology in production for improved performance and regulatory compliance. -{{site.data.alerts.end}} - -## Prerequisites - -### Fundamentals - -{% include {{ page.version.version }}/topology-patterns/fundamentals.md %} - -### Cluster setup - -{% include {{ page.version.version }}/topology-patterns/multi-region-cluster-setup.md %} - -## Configuration - -{{site.data.alerts.callout_info}} -Geo-partitioning requires an [Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). 
-{{site.data.alerts.end}} - -### Summary - -Using this pattern, you design your table schema to allow for [partitioning](partitioning.html#table-creation), with a column identifying geography as the first column in the table's compound primary key (e.g., city/id). You tell CockroachDB to partition the table and all of its secondary indexes by that geography column, each partition becoming its own range of 3 replicas. You then tell CockroachDB to pin each partition (all of its replicas) to the relevant region (e.g., LA partitions in `us-west`, NY partitions in `us-east`). This means that reads and writes in each region will always have access to the relevant replicas and, therefore, will have low, intra-region latencies. - -Geo-partitioning topology - -### Steps - -Assuming you have a [cluster deployed across three regions](#cluster-setup) and a table and secondary index like the following: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE users ( - id UUID NOT NULL DEFAULT gen_random_uuid(), - city STRING NOT NULL, - first_name STRING NOT NULL, - last_name STRING NOT NULL, - address STRING NOT NULL, - PRIMARY KEY (city ASC, id ASC) -); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE INDEX users_last_name_index ON users (city, last_name); -~~~ - -{{site.data.alerts.callout_info}} -A geo-partitioned table does not require a secondary index. However, if the table does have one or more secondary indexes, each index must be partitioned as well. This means that the indexes must start with the column identifying geography, like the table itself, which impacts the queries they'll be useful for. If you cannot partition all secondary indexes on a table you want to geo-partition, consider the [Geo-Partitioned Leaseholders](topology-geo-partitioned-leaseholders.html) pattern instead. -{{site.data.alerts.end}} - -1. If you do not already have one, [request a trial Enterprise license](https://www.cockroachlabs.com/get-started-cockroachdb/). - -2. Partition the table by `city`. For example, assuming there are three possible `city` values, `los angeles`, `chicago`, and `new york`: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER TABLE users PARTITION BY LIST (city) ( - PARTITION la VALUES IN ('los angeles'), - PARTITION chicago VALUES IN ('chicago'), - PARTITION ny VALUES IN ('new york') - ); - ~~~ - - This creates distinct ranges for each partition of the table. - -3. Partition the secondary index by `city` as well: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER INDEX users_last_name_index PARTITION BY LIST (city) ( - PARTITION la_idx VALUES IN ('los angeles'), - PARTITION chicago_idx VALUES IN ('chicago'), - PARTITION ny_idx VALUES IN ('new york') - ); - ~~~ - - This creates distinct ranges for each partition of the secondary index. - -4. For each partition of the table, [create a replication zone](configure-zone.html) that constrains the partition's replicas to nodes in the relevant region: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION la OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-west]'; - ALTER PARTITION chicago OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-central]'; - ALTER PARTITION ny OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-east]'; - ~~~ - -5. 
For each partition of the secondary index, [create a replication zone](configure-zone.html) that constrains the partition's replicas to nodes in the relevant region: - - {% include copy-clipboard.html %} - ~~~ sql - > ALTER PARTITION la_idx OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-west]'; - ALTER PARTITION chicago_idx OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-central]'; - ALTER PARTITION ny_idx OF TABLE users - CONFIGURE ZONE USING constraints = '[+region=us-east]'; - ~~~ - -{{site.data.alerts.callout_success}} -As you scale and add more cities, you can repeat steps 2 and 3 with the new complete list of cities to re-partition the table and its secondary indexes, and then repeat steps 4 and 5 to create replication zones for the new partitions. -{{site.data.alerts.end}} - -## Characteristics - -### Latency - -#### Reads - -Because each partition is constrained to the relevant region (e.g., the `la` and `la_idx` partitions are located in the `us-west` region), reads that specify the local region key access the relevant leaseholder locally. This makes read latency very low, with the exception of reads that do not specify a region key or that refer to a partition in another region; such reads will be transactionally consistent but will not have local latencies. - -For example, in the animation below: - -1. The read request in `us-central` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder for the relevant partition. -4. The leaseholder retrieves the results and returns to the gateway node. -5. The gateway node returns the results to the client. - -Geo-partitioning topology - -#### Writes - -Just like for reads, because each partition is constrained to the relevant region (e.g., the `la` and `la_idx` partitions are located in the `us-west` region), writes that specify the local region key access the relevant replicas without leaving the region. This makes write latency very low, with the exception of writes that do not specify a region key or that refer to a partition in another region; such writes will be transactionally consistent but will not have local latencies. - -For example, in the animation below: - -1. The write request in `us-central` reaches the regional load balancer. -2. The load balancer routes the request to a gateway node. -3. The gateway node routes the request to the leaseholder replicas for the relevant table and secondary index partitions. -4. While each leaseholder appends the write to its Raft log, it notifies its follower replicas, which are in the same region. -5. In each case, as soon as one follower has appended the write to its Raft log (and thus a majority of replicas agree based on identical Raft logs), it notifies the leaseholder and the write is committed on the agreeing replicas. -6. The leaseholders then return acknowledgement of the commit to the gateway node. -7. The gateway node returns the acknowledgement to the client. 
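For example, a write that includes the partition key stays within the local region. A sketch against the `users` table defined above (values illustrative; `id` is filled by its default):

{% include copy-clipboard.html %}
~~~ sql
> INSERT INTO users (city, first_name, last_name, address)
  VALUES ('los angeles', 'Ada', 'Lovelace', '123 Main St');
~~~

Because `city = 'los angeles'` routes the row to the `la` partition, all replicas involved in consensus live in `us-west`.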
- -Geo-partitioning topology - -### Resiliency - -Because each partition is constrained to the relevant region and balanced across the 3 AZs in the region, one AZ can fail per region without interrupting access to the partitions in that region: - -Geo-partitioning topology - -However, if an entire region fails, the partitions in that region become unavailable for reads and writes, even if your load balancer can redirect requests to a different region: - -Geo-partitioning topology - -## Tutorial - -For a step-by-step demonstration of how this pattern gets you low-latency reads and writes in a broadly distributed cluster, see the [Geo-Partitioning tutorial](demo-geo-partitioning.html). - -## See also - -{% include {{ page.version.version }}/topology-patterns/see-also.md %} diff --git a/src/current/v19.1/topology-patterns.md b/src/current/v19.1/topology-patterns.md deleted file mode 100644 index 758b9e5a6a7..00000000000 --- a/src/current/v19.1/topology-patterns.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: Topology Patterns -summary: Recommended topology patterns for running CockroachDB in a cloud environment. -toc: true -key: cluster-topology-patterns.html ---- - -This section provides recommended topology patterns for running CockroachDB in a cloud environment, each with required configurations and latency and resiliency characteristics. - -## Single-region patterns - -When your clients are in a single geographic region, choosing a topology is straightforward. - -Pattern | Latency | Resiliency | Configuration ---------|---------|------------|-------------- -[Development](topology-development.html) |
• Fast reads and writes | • None | • 1 node<br>• No replication
-[Basic Production](topology-basic-production.html) | • Fast reads and writes | • 1 AZ failure | • 1 region<br>• 3 AZs<br>• 3+ nodes across AZs
      - -## Multi-region patterns - -When your clients are in multiple geographic regions, it is important to deploy your cluster across regions properly and then carefully choose the right topology for each of your tables. Not doing so can result in unexpected latency and resiliency. - -{{site.data.alerts.callout_info}} -Multi-region patterns are almost always table-specific. For example, you might use the [Geo-Partitioning Replicas](topology-geo-partitioned-replicas.html) pattern for frequently updated tables that are geographically specific and the [Duplicate Indexes](topology-duplicate-indexes.html) pattern for reference tables that are not tied to geography and that are read frequently but updated infrequently. -{{site.data.alerts.end}} - -Pattern | Latency | Resiliency | Configuration ---------|---------|------------|-------------- -[Geo-Partitioned Replicas](topology-geo-partitioned-replicas.html) |
• Fast regional reads and writes | • 1 AZ failure per partition | • Geo-partitioned table<br>• Partition replicas pinned to regions
-[Geo-Partitioned Leaseholders](topology-geo-partitioned-leaseholders.html) | • Fast regional reads<br>• Slower cross-region writes | • 1 region failure | • Geo-partitioned table<br>• Partition replicas spread across regions<br>• Partition leaseholders pinned to regions
-[Duplicate Indexes](topology-duplicate-indexes.html) | • Fast regional reads (current)<br>• Much slower cross-region writes | • 1 region failure | • Multiple identical indexes<br>• Index replicas spread across regions<br>• Index leaseholders pinned to regions
-[Follower Reads](topology-follower-reads.html) | • Fast regional reads (historical)<br>• Slower cross-region writes | • 1 region failure | • App configured to use follower reads
-[Follow-the-Workload](topology-follow-the-workload.html) | • Fast regional reads (active region)<br>• Slower cross-region reads (elsewhere)<br>• Slower cross-region writes | • 1 region failure | • None
        - -## Anti-patterns - -The following anti-patterns are ineffective or risky: - -- Single-region deployments using 2 AZs, or multi-region deployments using 2 regions. In these cases, the cluster would be unable to survive the loss of a single AZ or a single region, respectively. -- Broadly distributed multi-region deployments (e.g., `us-west`, `asia`, and `europe`) using only the default [Follow-the-Workload](topology-follow-the-workload.html) pattern. In this case, latency will likely be unacceptably high. -- [Geo-partitioned tables](topology-geo-partitioned-replicas.html) with non-partitioned secondary indexes. In this case, writes will incur cross-region latency to achieve consensus on the non-partitioned indexes. diff --git a/src/current/v19.1/transactions.md b/src/current/v19.1/transactions.md deleted file mode 100644 index 7061051e7db..00000000000 --- a/src/current/v19.1/transactions.md +++ /dev/null @@ -1,257 +0,0 @@ ---- -title: Transactions -summary: CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. -toc: true ---- - -CockroachDB supports bundling multiple SQL statements into a single all-or-nothing transaction. Each transaction guarantees [ACID semantics](https://en.wikipedia.org/wiki/ACID) spanning arbitrary tables and rows, even when data is distributed. If a transaction succeeds, all mutations are applied together with virtual simultaneity. If any part of a transaction fails, the entire transaction is aborted, and the database is left unchanged. CockroachDB guarantees that while a transaction is pending, it is isolated from other concurrent transactions with serializable [isolation](#isolation-levels). - -{{site.data.alerts.callout_info}} -For a detailed discussion of CockroachDB transaction semantics, see [How CockroachDB Does Distributed Atomic Transactions](https://www.cockroachlabs.com/blog/how-cockroachdb-distributes-atomic-transactions/) and [Serializable, Lockless, Distributed: Isolation in CockroachDB](https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/). Note that the explanation of the transaction model described in this blog post is slightly out of date. See the [Transaction Retries](#transaction-retries) section for more details. -{{site.data.alerts.end}} - -## SQL statements - -Each of the following SQL statements control transactions in some way. - -| Statement | Function | -|------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [`BEGIN`](begin-transaction.html) | Initiate a transaction, as well as control its [priority](#transaction-priorities). | -| [`SET TRANSACTION`](set-transaction.html) | Control a transaction's [priority](#transaction-priorities). | -| [`COMMIT`](commit-transaction.html) | Commit a regular transaction, or clear the connection after committing a transaction using the [advanced retry protocol](advanced-client-side-transaction-retries.html). | -| [`ROLLBACK`](rollback-transaction.html) | Abort a transaction and roll the database back to its state before the transaction began. | -| [`SHOW`](show-vars.html) | Display the current transaction settings. 
| -| [`SAVEPOINT`](savepoint.html) | (**Advanced**) Used to implement [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), which can improve performance and avoid starvation when transactions are retried. | -| [`RELEASE SAVEPOINT`](release-savepoint.html) | (**Advanced**) Commit a [retryable transaction](advanced-client-side-transaction-retries.html). | -| [`ROLLBACK TO SAVEPOINT`](rollback-transaction.html) | (**Advanced**) Handle [retry errors](#error-handling) by rolling back a transaction's changes and increasing its priority. | - -{{site.data.alerts.callout_info}} -The **Advanced** statements above are used to implement [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), and are mostly of use to driver and ORM authors. - -Application developers who are using a framework or library that does not have advanced retry logic built in should implement an application-level retry loop with exponential backoff as shown in [Client-side intervention](#client-side-intervention). -{{site.data.alerts.end}} - -## Syntax - -In CockroachDB, a transaction is set up by surrounding SQL statements with the [`BEGIN`](begin-transaction.html) and [`COMMIT`](commit-transaction.html) statements. - -To use [advanced client-side transaction retries](advanced-client-side-transaction-retries.html), you should also include the [`SAVEPOINT`](savepoint.html), [`ROLLBACK TO SAVEPOINT`](rollback-transaction.html) and [`RELEASE SAVEPOINT`](release-savepoint.html) statements. - -{% include copy-clipboard.html %} -~~~ sql -> BEGIN; - -> SAVEPOINT cockroach_restart; - - - -> RELEASE SAVEPOINT cockroach_restart; - -> COMMIT; -~~~ - -At any time before it's committed, you can abort the transaction by executing the [`ROLLBACK`](rollback-transaction.html) statement. - -Clients using transactions must also include logic to handle [retries](#transaction-retries). - -## Error handling - -To handle errors in transactions, you should check for the following types of server-side errors: - -Type | Description ------|------------ -**Retry Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client as described in [client-side intervention](#client-side-intervention). -**Ambiguous Errors** | Errors with the code `40003` which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. For information about how to handle ambiguous errors, see [here](common-errors.html#result-is-ambiguous). -**SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the `UNIQUE` constraint generates a `23505` error. After encountering these errors, you can either issue a [`COMMIT`][commit] or [`ROLLBACK`][rollback] to abort the transaction and revert the database to its state before the transaction began.

        If you want to attempt the same set of statements again, you must begin a completely new transaction. - -## Transaction retries - -Transactions may require retries if they experience deadlock or [read/write contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention) with other concurrent transactions which cannot be resolved without allowing potential [serializable anomalies](https://en.wikipedia.org/wiki/Serializability). (However, it's possible to mitigate read-write conflicts by performing reads using [`AS OF SYSTEM TIME`](performance-best-practices-overview.html#use-as-of-system-time-to-decrease-conflicts-with-long-running-queries).) - -There are two cases in which transaction retries occur: - -1. [Automatic retries](#automatic-retries), which CockroachDB processes for you. -2. [Client-side intervention](#client-side-intervention), which your application must handle. - -### Automatic retries - -CockroachDB automatically retries individual statements (implicit transactions) and transactions sent from the client as a single batch, as long as the size of the results being produced for the client, including protocol overhead, is less than 16KiB by default. Once that buffer overflows, CockroachDB starts streaming results back to the client, at which point automatic retries cannot be performed any more. As long as the results of a single statement or batch of statements are known to stay clear of this limit, the client does not need to worry about transaction retries. - -{{site.data.alerts.callout_success}} -You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` [cluster setting](cluster-settings.html), or for a specific session using the `results_buffer_size` [session variable](set-vars.html). Note, however, that decreasing the buffer size can increase the number of transaction retry errors a client receives, whereas increasing the buffer size can increase the delay until the client receives the first result row. -{{site.data.alerts.end}} - -In future versions of CockroachDB, we plan on providing stronger guarantees for read-only queries that return at most one row, regardless of the size of that row. - -#### Individual statements - -Individual statements are treated as implicit transactions, and so they fall -under the rules described above. If the results are small enough, they will be -automatically retried. In particular, `INSERT/UPDATE/DELETE` statements without -a `RETURNING` clause are guaranteed to have minuscule result sizes. -For example, the following statement would be automatically retried by CockroachDB: - -~~~ sql -> DELETE FROM customers WHERE id = 1; -~~~ - -#### Batched statements - -Transactions can be sent from the client as a single batch. Batching implies that CockroachDB receives multiple statements without being asked to return results in between them; instead, CockroachDB returns results after executing all of the statements, except when the accumulated results overflow the buffer mentioned above, in which case they are returned sooner and automatic retries can no longer be performed. - -Batching is generally controlled by your driver or client's behavior. Technically, it can be achieved in two ways, both supporting automatic retries: - -1. 
When the client/driver is using the [PostgreSQL Extended Query protocol](https://www.postgresql.org/docs/10/static/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY), a batch is made up of all queries sent in between two `Sync` messages. Many drivers support such batches through explicit batching constructs.

2. When the client/driver is using the [PostgreSQL Simple Query protocol](https://www.postgresql.org/docs/10/static/protocol-flow.html#id-1.10.5.7.4), a batch is made up of semicolon-separated strings sent as a unit to CockroachDB. For example, in Go, this code would send a single batch (which would be automatically retried):

    ~~~ go
    db.Exec(
        `BEGIN;

        DELETE FROM customers WHERE id = 1;

        DELETE FROM orders WHERE customer = 1;

        COMMIT;`,
    )
    ~~~

{{site.data.alerts.callout_info}}
Within a batch of statements, CockroachDB infers that the statements are not conditional on the results of previous statements, so it can retry all of them. Of course, if the transaction relies on conditional logic (e.g., statement 2 is executed only for some results of statement 1), then the transaction cannot all be sent to CockroachDB as a single batch. In these common cases, CockroachDB cannot retry, say, statement 2 in isolation. Since results for statement 1 have already been delivered to the client by the time statement 2 forces the transaction to retry, the client needs to be involved in retrying the whole transaction, so you should write such transactions to use [client-side intervention](#client-side-intervention).
{{site.data.alerts.end}}

### Client-side intervention

Your application should include client-side retry handling when the statements are sent individually, such as:

{% include copy-clipboard.html %}
~~~ sql
> BEGIN;

> UPDATE products SET inventory = 0 WHERE sku = '8675309';

> INSERT INTO orders (customer, status) VALUES (1, 'new');

> COMMIT;
~~~

To indicate that a transaction must be retried, CockroachDB signals an error with the code `40001` and an error message that begins with the string `"retry transaction"`.

To handle these types of errors, you have the following options:

1. If your database library or framework provides a method for retryable transactions (it will often be documented as a tool for handling deadlocks), use it. If you're building an application in the following languages, we have code to make client-side retries simpler:
    - **Go** developers can use the [`github.com/cockroachdb/cockroach-go/crdb`](https://github.com/cockroachdb/cockroach-go/tree/master/crdb) package, which handles retries automatically. For more information, see [Build a Go App with CockroachDB](build-a-go-app-with-cockroachdb.html#transaction-with-retry-logic).
    - **Python** developers can use [SQLAlchemy](https://www.sqlalchemy.org) with the [`sqlalchemy-cockroachdb` adapter](https://github.com/cockroachdb/sqlalchemy-cockroachdb). For more information, see [Build a Python App with CockroachDB](build-a-python-app-with-cockroachdb-sqlalchemy.html).
    - **Java** developers accessing the database with [JDBC](https://jdbc.postgresql.org) can re-use the example code implementing retry logic shown in [Build a Java app with CockroachDB](build-a-java-app-with-cockroachdb.html).
2. **Most users, such as application authors**: Abort the transaction using the [`ROLLBACK`](rollback-transaction.html) statement, and then reissue all of the statements in the transaction.
For an example, see the [Client-side intervention example](#client-side-intervention-example).
3. **Advanced users, such as library authors**: Use the [`SAVEPOINT`](savepoint.html) statement to create retryable transactions. Retryable transactions can improve performance because their priority is increased each time they are retried, making them more likely to succeed the longer they're in your system. For instructions showing how to do this, see [Advanced Client-Side Transaction Retries](advanced-client-side-transaction-retries.html).

#### Client-side intervention example

The Python-style pseudocode below shows how to implement an application-level retry loop; it does not require your driver or ORM to implement [advanced retry handling logic](advanced-client-side-transaction-retries.html), so it can be used from any programming language or environment. In particular, your retry loop must:

- Raise an error if the `max_retries` limit is reached
- Retry on `40001` error codes
- [`COMMIT`](commit-transaction.html) at the end of the `try` block
- Implement [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) logic as shown below for best performance

~~~ python
import random
import time

max_retries = 3  # for example
n = 0
while True:
    n += 1
    if n == max_retries:
        raise Exception("did not succeed within %d retries" % max_retries)
    try:
        # Run all of the statements in your transaction here,
        # then commit at the end of the try block.
        conn.exec('COMMIT')
        break  # The transaction committed, so exit the retry loop.
    except Exception as error:
        # Your driver exposes the SQLSTATE code somewhere;
        # for example, psycopg2 calls it "pgcode".
        if error.code != "40001":
            raise error
        # This is a retry error, so we roll back the current transaction
        # and sleep for a bit before retrying. The sleep time increases
        # for each failed transaction. Adapted from
        # https://colintemple.com/2017/03/java-exponential-backoff/
        conn.exec('ROLLBACK')
        sleep_ms = (2**n) * 100 + random.randint(1, 100)
        time.sleep(sleep_ms / 1000.0)  # time.sleep() takes seconds
~~~

## Transaction contention

Transactions in CockroachDB lock data resources that are written during their execution. When a pending write from one transaction conflicts with a write of a concurrent transaction, the concurrent transaction must wait for the earlier transaction to complete before proceeding. When a dependency cycle is detected between transactions, the transaction with the higher priority aborts the lower-priority transaction to avoid deadlock, and the aborted transaction must be [retried](#client-side-intervention).

For more details about transaction contention and best practices for avoiding contention, see [Understanding and Avoiding Transaction Contention](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention).

## Transaction priorities

Every transaction in CockroachDB is assigned an initial **priority**. By default, that priority is `NORMAL`, but for transactions that should be given preference in [high-contention scenarios](performance-best-practices-overview.html#understanding-and-avoiding-transaction-contention), the client can set the priority within the [`BEGIN`](begin-transaction.html) statement:

~~~ sql
> BEGIN PRIORITY <LOW | NORMAL | HIGH>;
~~~

Alternately, the client can set the priority immediately after the transaction is started as follows:

~~~ sql
> SET TRANSACTION PRIORITY <LOW | NORMAL | HIGH>;
~~~

The client can also display the current priority of the transaction with [`SHOW TRANSACTION PRIORITY`](show-vars.html).

{{site.data.alerts.callout_info}}
When two transactions contend for the same resources indirectly, they may create a dependency cycle leading to a deadlock situation, where both transactions are waiting on the other to finish. In these cases, CockroachDB allows the transaction with higher priority to abort the other, which must then retry. On retry, the transaction inherits the higher priority. This means that each retry makes a transaction more likely to succeed in the event it again experiences deadlock.
{{site.data.alerts.end}}
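For example, a client that wants a transaction to win such contests might open it at `HIGH` priority and verify the setting before issuing writes; this is a minimal sketch, and the `accounts` table and values are illustrative:

{% include copy-clipboard.html %}
~~~ sql
> BEGIN PRIORITY HIGH;

> SHOW TRANSACTION PRIORITY;

> UPDATE accounts SET balance = balance - 100 WHERE id = 1;

> COMMIT;
~~~

`SHOW TRANSACTION PRIORITY` should report `high` here; if the transaction deadlocks against another and is retried, it keeps at least that priority.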
## Isolation levels

CockroachDB executes all transactions at the strongest ANSI transaction isolation level: `SERIALIZABLE`. All other isolation levels (e.g., `SNAPSHOT`, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ`) are automatically upgraded to `SERIALIZABLE`. Weaker isolation levels have historically been used to maximize transaction throughput. However, [recent research](http://www.bailis.org/papers/acidrain-sigmod2017.pdf) has demonstrated that the use of weak isolation levels results in substantial vulnerability to concurrency-based attacks.

{{site.data.alerts.callout_info}}
For a detailed discussion of isolation in CockroachDB transactions, see [Serializable, Lockless, Distributed: Isolation in CockroachDB](https://www.cockroachlabs.com/blog/serializable-lockless-distributed-isolation-cockroachdb/).
{{site.data.alerts.end}}

### Serializable isolation

With `SERIALIZABLE` isolation, a transaction behaves as though it has the entire database all to itself for the duration of its execution. This means that no concurrent writers can affect the transaction unless they commit before it starts, and no concurrent readers can be affected by the transaction until it has successfully committed. This is the strongest level of isolation provided by CockroachDB and it's the default.

`SERIALIZABLE` isolation permits no anomalies. To prevent [write skew](https://en.wikipedia.org/wiki/Snapshot_isolation) anomalies, `SERIALIZABLE` isolation may require transaction restarts. For a demonstration of `SERIALIZABLE` preventing write skew, see [Serializable Transactions](demo-serializable.html).

### Comparison to ANSI SQL isolation levels

CockroachDB uses slightly different isolation levels than [ANSI SQL isolation levels](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Isolation_levels).

#### Aliases

`SNAPSHOT`, `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` are aliases for `SERIALIZABLE`.

#### Comparison

The CockroachDB `SERIALIZABLE` level is stronger than the ANSI SQL `READ UNCOMMITTED`, `READ COMMITTED`, and `REPEATABLE READ` levels and equivalent to the ANSI SQL `SERIALIZABLE` level.

For more information about the relationship between these levels, see [A Critique of ANSI SQL Isolation Levels](https://arxiv.org/ftp/cs/papers/0701/0701157.pdf).

## See also

- [`BEGIN`](begin-transaction.html)
- [`COMMIT`](commit-transaction.html)
- [`ROLLBACK`](rollback-transaction.html)
- [`SAVEPOINT`](savepoint.html)
- [`RELEASE SAVEPOINT`](release-savepoint.html)
- [`SHOW`](show-vars.html)
- [Retryable transaction example code in Java using JDBC](build-a-java-app-with-cockroachdb.html)
- [CockroachDB Architecture: Transaction Layer](architecture/transaction-layer.html)

[commit]: commit-transaction.html
[rollback]: rollback-transaction.html

diff --git a/src/current/v19.1/troubleshooting-overview.md b/src/current/v19.1/troubleshooting-overview.md
deleted file mode 100644
index 07eb30887ce..00000000000
--- a/src/current/v19.1/troubleshooting-overview.md
+++ /dev/null
@@ -1,21 +0,0 @@
---
title: Troubleshooting Overview
summary: Initial steps to take if you run into issues with CockroachDB.
-toc: false ---- - -If you run into issues with CockroachDB, there are a few initial steps you can always take: - -1. Check your [logs](debug-and-error-logs.html) for errors related to your issue. - - Logs are generated on a per-node basis, so you must either identify the node where the issue occurred or [collect the logs from all active nodes in your cluster](debug-zip.html). - - Alternately, you can [stop](stop-a-node.html) and [restart](start-a-node.html) problematic nodes with the `--logtostderr` flag to print logs to your terminal through `stderr`, letting you see all cluster activities as it occurs. - -2. Check our list of [common errors](common-errors.html) for a solution. - -3. If the problem doesn't match a common error, try the following pages: - - [Troubleshoot Cluster Setup](cluster-setup-troubleshooting.html) helps start your cluster and scale it by adding nodes. - - [Troubleshoot Query Behavior](query-behavior-troubleshooting.html) helps with unexpected query results. - -4. If you cannot resolve the issue easily yourself, the following tools can help you get unstuck: - - [Support Resources](support-resources.html) identifies ways you can get help with troubleshooting. - - [File an Issue](file-an-issue.html) provides details about filing issues that you're unable to resolve. diff --git a/src/current/v19.1/truncate.md b/src/current/v19.1/truncate.md deleted file mode 100644 index e8ad7e0aec3..00000000000 --- a/src/current/v19.1/truncate.md +++ /dev/null @@ -1,164 +0,0 @@ ---- -title: TRUNCATE -summary: The TRUNCATE statement deletes all rows from specified tables. -toc: true ---- - -The `TRUNCATE` [statement](sql-statements.html) removes all rows from a table. At a high level, it works by dropping the table and recreating a new table with the same name. - -{{site.data.alerts.callout_info}} -For smaller tables (with less than 1000 rows), using a [`DELETE` statement without a `WHERE` clause](delete.html#delete-all-rows) will be more performant than using `TRUNCATE`. -{{site.data.alerts.end}} - -{% include {{{ page.version.version }}/misc/schema-change-stmt-note.md %} - -## Synopsis - -
        - {% include {{ page.version.version }}/sql/diagrams/truncate.html %} -
        - -## Required privileges - -The user must have the `DROP` [privilege](authorization.html#assign-privileges) on the table. - -## Parameters - -Parameter | Description -----------|------------ -`table_name` | The name of the table to truncate. -`CASCADE` | Truncate all tables with [Foreign Key](foreign-key.html) dependencies on the table being truncated.

        `CASCADE` does not list dependent tables it truncates, so should be used cautiously. -`RESTRICT` | _(Default)_ Do not truncate the table if any other tables have [Foreign Key](foreign-key.html) dependencies on it. - -## Limitations - -`TRUNCATE` is a schema change, and as such is not transactional. For more information about how schema changes work, see [Online Schema Changes](online-schema-changes.html). - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -### Truncate a table (no foreign key dependencies) - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t1; -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -| 1 | foo | -| 2 | bar | -+----+------+ -(2 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> TRUNCATE t1; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM t1; -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -+----+------+ -(0 rows) -~~~ - -### Truncate a table and dependent tables - -In these examples, the `orders` table has a [Foreign Key](foreign-key.html) relationship to the `customers` table. Therefore, it's only possible to truncate the `customers` table while simultaneously truncating the dependent `orders` table, either using `CASCADE` or explicitly. - -#### Truncate dependent tables using `CASCADE` - -{{site.data.alerts.callout_danger}}CASCADE truncates all dependent tables without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend truncating tables explicitly in most cases. See Truncate Dependent Tables Explicitly for more details.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> TRUNCATE customers; -~~~ - -~~~ -pq: "customers" is referenced by foreign key from table "orders" -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> TRUNCATE customers CASCADE; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ - -~~~ -+----+-------+ -| id | email | -+----+-------+ -+----+-------+ -(0 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -~~~ -+----+----------+------------+ -| id | customer | orderTotal | -+----+----------+------------+ -+----+----------+------------+ -(0 rows) -~~~ - -#### Truncate dependent tables explicitly - -{% include copy-clipboard.html %} -~~~ sql -> TRUNCATE customers, orders; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM customers; -~~~ - -~~~ -+----+-------+ -| id | email | -+----+-------+ -+----+-------+ -(0 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM orders; -~~~ - -~~~ -+----+----------+------------+ -| id | customer | orderTotal | -+----+----------+------------+ -+----+----------+------------+ -(0 rows) -~~~ - -## See also - -- [`DELETE`](delete.html) -- [`SHOW JOBS`](show-jobs.html) -- [Foreign Key constraint](foreign-key.html) -- [Online Schema Changes](online-schema-changes.html) diff --git a/src/current/v19.1/unique.md b/src/current/v19.1/unique.md deleted file mode 100644 index 873876ed915..00000000000 --- a/src/current/v19.1/unique.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -title: Unique Constraint -summary: The UNIQUE constraint specifies that each non-NULL value in the constrained column must be unique. -toc: true ---- - -The `UNIQUE` [constraint](constraints.html) specifies that each non-`NULL` value in the constrained column must be unique. 
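For example, inserting the same value twice into a `UNIQUE` column fails; this is a minimal sketch, and the `users` table and values are illustrative:

{% include copy-clipboard.html %}
~~~ sql
> CREATE TABLE users (
    id INT PRIMARY KEY,
    email STRING UNIQUE
  );

> INSERT INTO users (id, email) VALUES (1, 'max@example.com');

> INSERT INTO users (id, email) VALUES (2, 'max@example.com');
~~~

The second `INSERT` fails with an error along these lines:

~~~
pq: duplicate key value (email)=('max@example.com') violates unique constraint "users_email_key"
~~~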
## Details

- You can insert `NULL` values into columns with the `UNIQUE` constraint because `NULL` is the absence of a value, so it is never equal to other `NULL` values and not considered a duplicate value. This means that it's possible to insert rows that appear to be duplicates if one of the values is `NULL`.

    If you need to strictly enforce uniqueness, use the [`NOT NULL` constraint](not-null.html) in addition to the `UNIQUE` constraint. You can also achieve the same behavior through the table's [Primary Key](primary-key.html).

- Columns with the `UNIQUE` constraint automatically have an [index](indexes.html) created with the name `<table name>_<column name>_key`. To avoid having two identical indexes, you should not create indexes that exactly match the `UNIQUE` constraint's columns and order.

    The `UNIQUE` constraint depends on the automatically created index, so dropping the index also drops the `UNIQUE` constraint.

- When using the `UNIQUE` constraint on multiple columns, the collective values of the columns must be unique. This *does not* mean that each value in each column must be unique, as if you had applied the `UNIQUE` constraint to each column individually.

- You can define the `UNIQUE` constraint when [creating a table](#syntax), or you can add it to existing tables through [`ADD CONSTRAINT`](add-constraint.html#add-the-unique-constraint).

## Syntax

`UNIQUE` constraints can be defined at the [table level](#table-level). However, if you only want the constraint to apply to a single column, it can be applied at the [column level](#column-level).

### Column level

<div>
      - {% include {{ page.version.version }}/sql/diagrams/unique_column_level.html %} -
      - -Parameter | Description -----------|------------ -`table_name` | The name of the table you're creating. -`column_name` | The name of the constrained column. -`column_type` | The constrained column's [data type](data-types.html). -`column_constraints` | Any other column-level [constraints](constraints.html) you want to apply to this column. -`column_def` | Definitions for any other columns in the table. -`table_constraints` | Any table-level [constraints](constraints.html) you want to apply. - -**Example** - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE warehouses ( - warehouse_id INT PRIMARY KEY NOT NULL, - warehouse_name STRING(35) UNIQUE, - location_id INT - ); -~~~ - -### Table level - -
      - {% include {{ page.version.version }}/sql/diagrams/unique_table_level.html %} -
      - -Parameter | Description -----------|------------ -`table_name` | The name of the table you're creating. -`column_def` | Definitions for any other columns in the table. -`name` | The name you want to use for the constraint, which must be unique to its table and follow these [identifier rules](keywords-and-identifiers.html#identifiers). -`column_name` | The name of the column you want to constrain. -`table_constraints` | Any other table-level [constraints](constraints.html) you want to apply. - -**Example** - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE logon ( - login_id INT PRIMARY KEY, - customer_id INT, - logon_date TIMESTAMP, - UNIQUE (customer_id, logon_date) - ); -~~~ - -## Usage example - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE IF NOT EXISTS logon ( - login_id INT PRIMARY KEY, - customer_id INT NOT NULL, - sales_id INT, - UNIQUE (customer_id, sales_id) - ); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (1, 2, 1); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (2, 2, 1); -~~~ - -~~~ -duplicate key value (customer_id,sales_id)=(2,1) violates unique constraint "logon_customer_id_sales_id_key" -~~~ - -As mentioned in the [details](#details) above, it is possible when using the `UNIQUE` constraint alone to insert *NULL* values in a way that causes rows to appear to have rows with duplicate values. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (3, 2, NULL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO logon (login_id, customer_id, sales_id) VALUES (4, 2, NULL); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT customer_id, sales_id FROM logon; -~~~ - -~~~ -+-------------+----------+ -| customer_id | sales_id | -+-------------+----------+ -| 2 | 1 | -| 2 | NULL | -| 2 | NULL | -+-------------+----------+ -~~~ - -## See also - -- [Constraints](constraints.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`CHECK` constraint](check.html) -- [`DEFAULT` value constraint](default-value.html) -- [Foreign key constraint](foreign-key.html) -- [`NOT NULL` constraint](not-null.html) -- [`PRIMARY` key constraint](primary-key.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) diff --git a/src/current/v19.1/update.md b/src/current/v19.1/update.md deleted file mode 100644 index c57d70bb221..00000000000 --- a/src/current/v19.1/update.md +++ /dev/null @@ -1,463 +0,0 @@ ---- -title: UPDATE -summary: The UPDATE statement updates one or more rows in a table. -toc: true ---- - -The `UPDATE` [statement](sql-statements.html) updates rows in a table. - -{{site.data.alerts.callout_danger}} -If you update a row that contains a column referenced by a [foreign key constraint](foreign-key.html) and has an [`ON UPDATE` action](foreign-key.html#foreign-key-actions), all of the dependent rows will also be updated. -{{site.data.alerts.end}} - - -## Required privileges - -The user must have the `SELECT` and `UPDATE` [privileges](authorization.html#assign-privileges) on the table. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/update.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`table_name` | The name of the table that contains the rows you want to update. -`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`column_name` | The name of the column whose values you want to update. -`a_expr` | The new value you want to use, the [aggregate function](functions-and-operators.html#aggregate-functions) you want to perform, or the [scalar expression](scalar-expressions.html) you want to use. -`DEFAULT` | To fill columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `a_expr`. To fill a specific column with its default value, leave the value out of the `a_expr` or use `DEFAULT` at the appropriate position. -`column_name` | The name of a column to update. -`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column on the left side of `=`. -`WHERE a_expr`| `a_expr` must be a [scalar expression](scalar-expressions.html) that returns Boolean values using columns (e.g., ` = `). Update rows that return `TRUE`.

      **Without a `WHERE` clause in your statement, `UPDATE` updates all rows in the table.** -`sort_clause` | An `ORDER BY` clause. See [Ordering Query Results](query-order.html) for more details. -`limit_clause` | A `LIMIT` clause. See [Limiting Query Results](limit-offset.html) for more details. -`RETURNING target_list` | Return values based on rows updated, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

      To return nothing in the response, not even the number of rows updated, use `RETURNING NOTHING`. - -## Examples - -### Update a single column in a single row - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 4000.0 | Julian | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET balance = 5000.0 WHERE id = 2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 5000.0 | Julian | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -### Update multiple columns in a single row - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET (balance, customer) = (9000.0, 'Kelly') WHERE id = 2; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 8700.0 | Dario | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET balance = 6300.0, customer = 'Stanley' WHERE id = 3; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 6300.0 | Stanley | -| 4 | 3400.0 | Nitin | -+----+----------+----------+ -(4 rows) -~~~ - -### Update using `SELECT` statement - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET (balance, customer) = - (SELECT balance, customer FROM accounts WHERE id = 2) - WHERE id = 4; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | 6300.0 | Stanley | -| 4 | 9000.0 | Kelly | -+----+----------+----------+ -(4 rows) -~~~ - -### Update with default values - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET balance = DEFAULT where customer = 'Stanley'; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+----------+ -| id | balance | customer | -+----+----------+----------+ -| 1 | 10000.50 | Ilya | -| 2 | 9000.0 | Kelly | -| 3 | NULL | Stanley | -| 4 | 9000.0 | Kelly | -+----+----------+----------+ -(4 rows) -~~~ - -### Update all rows - -{{site.data.alerts.callout_danger}} -If you do not use the `WHERE` clause to specify the rows to be updated, the values for all rows will be updated. 
-{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET balance = 5000.0; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+---------+----------+ -| id | balance | customer | -+----+---------+----------+ -| 1 | 5000.0 | Ilya | -| 2 | 5000.0 | Kelly | -| 3 | 5000.0 | Stanley | -| 4 | 5000.0 | Kelly | -+----+---------+----------+ -(4 rows) -~~~ - -### Update and return values - -In this example, the `RETURNING` clause returns the `id` value of the row updated. The language-specific versions assume that you have installed the relevant [client drivers](install-client-drivers.html). - -{{site.data.alerts.callout_success}}This use of RETURNING mirrors the behavior of MySQL's last_insert_id() function.{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}}When a driver provides a query() method for statements that return results and an exec() method for statements that do not (e.g., Go), it's likely necessary to use the query() method for UPDATE statements with RETURNING.{{site.data.alerts.end}} - -
      - - - - - -
      - -
      -

      - -{% include copy-clipboard.html %} -~~~ sql -> UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id; -~~~ - -~~~ -+----+ -| id | -+----+ -| 1 | -+----+ -(1 row) -~~~ - -
      - -
      -

      - -{% include copy-clipboard.html %} -~~~ python -# Import the driver. -import psycopg2 - -# Connect to the "bank" database. -conn = psycopg2.connect( - database='bank', - user='root', - host='localhost', - port=26257 -) - -# Make each statement commit immediately. -conn.set_session(autocommit=True) - -# Open a cursor to perform database operations. -cur = conn.cursor() - -# Update a row in the "accounts" table -# and return the "id" value. -cur.execute( - 'UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id' -) - -# Print out the returned value. -rows = cur.fetchall() -print('ID:') -for row in rows: - print([str(cell) for cell in row]) - -# Close the database connection. -cur.close() -conn.close() -~~~ - -The printed value would look like: - -~~~ -ID: -['1'] -~~~ - -
      - -
      -

      - -{% include copy-clipboard.html %} -~~~ ruby -# Import the driver. -require 'pg' - -# Connect to the "bank" database. -conn = PG.connect( - user: 'root', - dbname: 'bank', - host: 'localhost', - port: 26257 -) - -# Update a row in the "accounts" table -# and return the "id" value. -conn.exec( - 'UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id' -) do |res| - -# Print out the returned value. -puts "ID:" - res.each do |row| - puts row - end -end - -# Close communication with the database. -conn.close() -~~~ - -The printed value would look like: - -~~~ -ID: -{"id"=>"1"} -~~~ - -
      - -
      -

      - -{% include copy-clipboard.html %} -~~~ go -package main - -import ( - "database/sql" - "fmt" - "log" - - _ "github.com/lib/pq" -) - -func main() { - //Connect to the "bank" database. - db, err := sql.Open( - "postgres", - "postgresql://root@localhost:26257/bank?sslmode=disable" - ) - if err != nil { - log.Fatal("error connecting to the database: ", err) - } - - // Update a row in the "accounts" table - // and return the "id" value. - rows, err := db.Query( - "UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id", - ) - if err != nil { - log.Fatal(err) - } - - // Print out the returned value. - defer rows.Close() - fmt.Println("ID:") - for rows.Next() { - var id int - if err := rows.Scan(&id); err != nil { - log.Fatal(err) - } - fmt.Printf("%d\n", id) - } -} -~~~ - -The printed value would look like: - -~~~ -ID: -1 -~~~ - -
      - -
      -

      - -{% include copy-clipboard.html %} -~~~ js -var async = require('async'); - -// Require the driver. -var pg = require('pg'); - -// Connect to the "bank" database. -var config = { - user: 'root', - host: 'localhost', - database: 'bank', - port: 26257 -}; - -pg.connect(config, function (err, client, done) { - // Closes communication with the database and exits. - var finish = function () { - done(); - process.exit(); - }; - - if (err) { - console.error('could not connect to cockroachdb', err); - finish(); - } - async.waterfall([ - function (next) { - // Update a row in the "accounts" table - // and return the "id" value. - client.query( - `UPDATE accounts SET balance = DEFAULT WHERE id = 1 RETURNING id`, - next - ); - } - ], - function (err, results) { - if (err) { - console.error('error updating and selecting from accounts', err); - finish(); - } - // Print out the returned value. - console.log('ID:'); - results.rows.forEach(function (row) { - console.log(row); - }); - - finish(); - }); -}); -~~~ - -The printed value would like: - -~~~ -ID: -{ id: '1' } -~~~ - -
      - -## See also - -- [`DELETE`](delete.html) -- [`INSERT`](insert.html) -- [`UPSERT`](upsert.html) -- [`TRUNCATE`](truncate.html) -- [`ALTER TABLE`](alter-table.html) -- [`DROP TABLE`](drop-table.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) -- [Limiting Query Results](limit-offset.html) diff --git a/src/current/v19.1/upgrade-cockroach-version.md b/src/current/v19.1/upgrade-cockroach-version.md deleted file mode 100644 index 24758e3f5f1..00000000000 --- a/src/current/v19.1/upgrade-cockroach-version.md +++ /dev/null @@ -1,248 +0,0 @@ ---- -title: Upgrade to CockroachDB v19.1 -summary: Learn how to upgrade your CockroachDB cluster to a new version. -toc: true -toc_not_nested: true ---- - -Because of CockroachDB's [multi-active availability](multi-active-availability.html) design, you can perform a "rolling upgrade" of your CockroachDB cluster. This means that you can upgrade nodes one at a time without interrupting the cluster's overall health and operations. - -## Step 1. Verify that you can upgrade - -To upgrade to a new version, you must first be on a [production release](../releases/#production-releases) of the previous version. The release does not need to be the **latest** production release of the previous version, but it must be a production release rather than a testing release (alpha/beta). - -Therefore, if you are upgrading from v2.0 to v19.1, or from a testing release (alpha/beta) of v2.1 to v19.1: - -1. First [upgrade to a production release of v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html). Be sure to complete all the steps. - -2. Then return to this page and perform a second rolling upgrade to v19.1. - -If you are upgrading from any production release of v2.1, or from any earlier v19.1 release, you do not have to go through intermediate releases; continue to step 2. - -## Step 2. Prepare to upgrade - -Before starting the upgrade, complete the following steps. - -1. Make sure your cluster is behind a [load balancer](recommended-production-settings.html#load-balancing), or your clients are configured to talk to multiple nodes. If your application communicates with a single node, stopping that node to upgrade its CockroachDB binary will cause your application to fail. - -2. Make sure there are no [schema changes](online-schema-changes.html) in progress. Schema changes are complex operations that involve coordination across nodes and can increase the potential for unexpected behavior during an upgrade. - - To check for ongoing schema changes, use [`SHOW JOBS`](show-jobs.html#show-schema-changes) or check the [**Jobs** page](admin-ui-jobs-page.html) in the Admin UI. - -3. Verify the overall health of your cluster using the [Admin UI](admin-ui-access-and-navigate.html). On the **Cluster Overview**: - - Under **Node Status**, make sure all nodes that should be live are listed as such. If any nodes are unexpectedly listed as suspect or dead, identify why the nodes are offline and either restart them or [decommission](remove-nodes.html) them before beginning your upgrade. If there are dead and non-decommissioned nodes in your cluster, it will not be possible to finalize the upgrade (either automatically or manually). - - Under **Replication Status**, make sure there are 0 under-replicated and unavailable ranges. Otherwise, performing a rolling upgrade increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. 
Therefore, it's important to identify and resolve the cause of range under-replication and/or unavailability before beginning your upgrade. - - In the **Node List**: - - Make sure all nodes are on the same version. If any nodes are behind, upgrade them to the cluster's current version first, and then start this process over. - - Make sure capacity and memory usage are reasonable for each node. Nodes must be able to tolerate some increase in case the new version uses more resources for your workload. Also go to **Metrics > Dashboard: Hardware** and make sure CPU percent is reasonable across the cluster. If there's not enough headroom on any of these metrics, consider [adding nodes](start-a-node.html) to your cluster before beginning your upgrade. - -4. Capture the cluster's current state by running the [`cockroach debug zip`](debug-zip.html) command against any node in the cluster. If the upgrade does not go according to plan, the captured details will help you and Cockroach Labs to troubleshoot the issues. - -5. [Back up the cluster](backup-and-restore.html). If the upgrade does not go according to plan, you can use the data to restore your cluster to its previous state. - -## Step 3. Decide how the upgrade will be finalized - -{{site.data.alerts.callout_info}} -This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step. -{{site.data.alerts.end}} - -By default, after all nodes are running the new version, the upgrade process will be **auto-finalized**. This will enable certain [features and performance improvements introduced in v19.1](#features-that-require-upgrade-finalization). However, it will no longer be possible to perform a downgrade to v2.1. In the event of a catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade. For this reason, **we recommend disabling auto-finalization** so you can monitor the stability and performance of the upgraded cluster before finalizing the upgrade, but note that you will need to follow all of the subsequent directions, including the manual finalization in [step 5](#step-5-finish-the-upgrade): - -1. [Upgrade to v2.1](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version.html), if you haven't already. - -2. Start the [`cockroach sql`](use-the-built-in-sql-client.html) shell against any node in the cluster. - -3. Set the `cluster.preserve_downgrade_option` [cluster setting](cluster-settings.html): - - {% include copy-clipboard.html %} - ~~~ sql - > SET CLUSTER SETTING cluster.preserve_downgrade_option = '2.1'; - ~~~ - - It is only possible to set this setting to the current cluster version. - -### Features that require upgrade finalization - -When upgrading from v2.1 to v19.1, certain features and performance improvements will be enabled only after finalizing the upgrade, including but not limited to: - -- **Cascading replication zones:** After finalization, [replication zones](configure-replication-zones.html) will inherit empty values from their parent. For example, if the replication zone for a table is not explicitly set with `num_replicas`, it will inherit that value from its direct parent, whether that's the `.default` replication zone from the entire cluster or the replication zone for the database containing the table. 
-- **Table statistics generation:** After finalization, CockroachDB will generate [table statistics](cost-based-optimizer.html#table-statistics) automatically as tables are updated, and you will be able to manually generate table statistics using the [`CREATE STATISTICS`](create-statistics.html) statement. -- **Load-based splitting:** After finalization, CockroachDB will [automatically split frequently accessed keys](load-based-splitting.html) into smaller ranges to optimize your cluster’s performance. - -## Step 4. Perform the rolling upgrade - -For each node in your cluster, complete the following steps. - -{{site.data.alerts.callout_info}} -These steps apply to manual deployments. If you are running CockroachDB on Kubernetes, read our documentation on [single-cluster](orchestrate-cockroachdb-with-kubernetes.html#upgrade-the-cluster) and/or [multi-cluster](orchestrate-cockroachdb-with-kubernetes-multi-cluster.html#upgrade-the-cluster) orchestrated deployments of CockroachDB. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_success}} -We recommend creating scripts to perform these steps instead of performing them manually. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_danger}} -Upgrade only one node at a time, and wait at least one minute after a node rejoins the cluster to upgrade the next node. Simultaneously upgrading more than one node increases the risk that ranges will lose a majority of their replicas and cause cluster unavailability. - -Also, refrain from starting [schema changes](online-schema-changes.html) during the upgrade process. Schema changes are complex operations that involve coordination across nodes and can increase the potential for unexpected behavior during an upgrade. -{{site.data.alerts.end}} - -1. Connect to the node. - -2. Terminate the `cockroach` process. - - Without a process manager like `systemd`, use this command: - - {% include copy-clipboard.html %} - ~~~ shell - $ pkill cockroach - ~~~ - - If you are using `systemd` as the process manager, use this command to stop a node without `systemd` restarting it: - - {% include copy-clipboard.html %} - ~~~ shell - $ systemctl stop - ~~~ - - Then verify that the process has stopped: - - {% include copy-clipboard.html %} - ~~~ shell - $ ps aux | grep cockroach - ~~~ - - Alternately, you can check the node's logs for the message `server drained and shutdown completed`. - -3. Download and install the CockroachDB binary you want to use: - -
      - - -
      -

      - -
      - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ tar -xzf cockroach-{{page.release_info.version}}.darwin-10.9-amd64.tgz - ~~~ -
      - -
      - {% include copy-clipboard.html %} - ~~~ shell - $ curl https://binaries.cockroachdb.com/cockroach-{{page.release_info.version}}.linux-amd64.tgz - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ tar -xzf cockroach-{{page.release_info.version}}.linux-amd64.tgz - ~~~ -
      - -4. If you use `cockroach` in your `$PATH`, rename the outdated `cockroach` binary, and then move the new one into its place: - -
      - - -
      -

      - -
      - {% include copy-clipboard.html %} - ~~~ shell - i="$(which cockroach)"; mv "$i" "$i"_old - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{page.release_info.version}}.darwin-10.9-amd64/cockroach /usr/local/bin/cockroach - ~~~ -
      - -
      - {% include copy-clipboard.html %} - ~~~ shell - i="$(which cockroach)"; mv "$i" "$i"_old - ~~~ - - {% include copy-clipboard.html %} - ~~~ shell - $ cp -i cockroach-{{page.release_info.version}}.linux-amd64/cockroach /usr/local/bin/cockroach - ~~~ -
5. Start the node to have it rejoin the cluster.

    Without a process manager like `systemd`, re-run the [`cockroach start`](start-a-node.html) command that you used to start the node initially, for example:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ cockroach start \
        --certs-dir=certs \
        --advertise-addr=<node address> \
        --join=<node1 address>,<node2 address>,<node3 address>
    ~~~

    If you are using `systemd` as the process manager, run this command to start the node:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ systemctl start <systemd config filename>
    ~~~

6. Verify the node has rejoined the cluster through its output to `stdout` or through the [Admin UI](admin-ui-access-and-navigate.html).

    {{site.data.alerts.callout_info}}
    To access the Admin UI for a secure cluster, [create a user with a password](create-user.html#create-a-user-with-a-password). Then open a browser and go to `https://<address of any node>:8080`. On accessing the Admin UI, you will see a Login screen, where you will need to enter your username and password.
    {{site.data.alerts.end}}

7. If you use `cockroach` in your `$PATH`, you can remove the old binary:

    {% include copy-clipboard.html %}
    ~~~ shell
    $ rm /usr/local/bin/cockroach_old
    ~~~

    If you leave versioned binaries on your servers, you do not need to do anything.

8. Wait at least one minute after the node has rejoined the cluster, and then repeat these steps for the next node.

## Step 5. Finish the upgrade

{{site.data.alerts.callout_info}}
This step is relevant only when upgrading from v2.1.x to v19.1. For upgrades within the v19.1.x series, skip this step.
{{site.data.alerts.end}}

If you disabled auto-finalization in [step 3](#step-3-decide-how-the-upgrade-will-be-finalized), monitor the stability and performance of your cluster for as long as you require to feel comfortable with the upgrade (generally at least a day). If during this time you decide to roll back the upgrade, repeat the rolling restart procedure with the old binary.

Once you are satisfied with the new version, re-enable auto-finalization:

1. Start the [`cockroach sql`](use-the-built-in-sql-client.html) shell against any node in the cluster.
2. Re-enable auto-finalization:

    {% include copy-clipboard.html %}
    ~~~ sql
    > RESET CLUSTER SETTING cluster.preserve_downgrade_option;
    ~~~

## Step 6. Troubleshooting

After the upgrade has finalized (whether manually or automatically), it is no longer possible to downgrade to the previous release. If you are experiencing problems, we therefore recommend that you:

1. Run the [`cockroach debug zip`](debug-zip.html) command against any node in the cluster to capture your cluster's state.
2. [Reach out for support](support-resources.html) from Cockroach Labs, sharing your debug zip.

In the event of catastrophic failure or corruption, the only option will be to start a new cluster using the old binary and then restore from one of the backups created prior to performing the upgrade.
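For example, if an enterprise [`BACKUP`](backup.html) was taken to cloud storage before the upgrade, the restore into the new cluster might look like the following sketch; the database name and storage URL are hypothetical:

{% include copy-clipboard.html %}
~~~ sql
> RESTORE DATABASE bank FROM 'gs://acme-backups/pre-v19.1-upgrade';
~~~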
- -## See also - -- [View Node Details](view-node-details.html) -- [Collect Debug Information](debug-zip.html) -- [View Version Details](view-version-details.html) -- [Release notes for our latest version](../releases/{{page.version.version}}.html) diff --git a/src/current/v19.1/upsert.md b/src/current/v19.1/upsert.md deleted file mode 100644 index 75d4edbddbc..00000000000 --- a/src/current/v19.1/upsert.md +++ /dev/null @@ -1,286 +0,0 @@ ---- -title: UPSERT -summary: The UPSERT statement inserts rows when values do not violate uniqueness constraints, and it updates rows when values do violate uniqueness constraints. -toc: true ---- - -The `UPSERT` [statement](sql-statements.html) is semantically equivalent to [`INSERT ON CONFLICT`](insert.html#on-conflict-clause), but the two may have slightly different [performance characteristics](#considerations). It inserts rows in cases where specified values do not violate uniqueness constraints, and it updates rows in cases where values do violate uniqueness constraints. - -## Considerations - -- `UPSERT` considers uniqueness only for [Primary Key](primary-key.html) columns. `INSERT ON CONFLICT` is more flexible and can be used to consider uniqueness for other columns. For more details, see [How `UPSERT` transforms into `INSERT ON CONFLICT`](#how-upsert-transforms-into-insert-on-conflict) below. - -- When inserting/updating all columns of a table, and the table has no secondary indexes, `UPSERT` will be faster than the equivalent `INSERT ON CONFLICT` statement, as it will write without first reading. This may be particularly useful if you are using a simple SQL table of two columns to [simulate direct KV access](sql-faqs.html#can-i-use-cockroachdb-as-a-key-value-store). - -- A single [multi-row `UPSERT`](#upsert-multiple-rows) statement is faster than multiple single-row `UPSERT` statements. Whenever possible, use multi-row `UPSERT` instead of multiple single-row `UPSERT` statements. - -- If the input data contains duplicates, see [Import data containing duplicate rows using `DISTINCT ON`](#import-data-containing-duplicate-rows-using-distinct-on) below. - -## Required privileges - -The user must have the `INSERT`, `SELECT` and `UPDATE` [privileges](authorization.html#assign-privileges) on the table. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/upsert.html %} -
      - -## Parameters - -Parameter | Description -----------|------------ -`common_table_expr` | See [Common Table Expressions](common-table-expressions.html). -`table_name` | The name of the table. -`AS table_alias_name` | An alias for the table name. When an alias is provided, it completely hides the actual table name. -`column_name` | The name of a column to populate during the insert. -`select_stmt` | A [selection query](selection-queries.html). Each value must match the [data type](data-types.html) of its column. Also, if column names are listed after `INTO`, values must be in corresponding order; otherwise, they must follow the declared order of the columns in the table. -`DEFAULT VALUES` | To fill all columns with their [default values](default-value.html), use `DEFAULT VALUES` in place of `select_stmt`. To fill a specific column with its default value, leave the value out of the `select_stmt` or use `DEFAULT` at the appropriate position. -`RETURNING target_list` | Return values based on rows inserted, where `target_list` can be specific column names from the table, `*` for all columns, or computations using [scalar expressions](scalar-expressions.html).

      Within a [transaction](transactions.html), use `RETURNING NOTHING` to return nothing in the response, not even the number of rows affected. - -## How `UPSERT` transforms into `INSERT ON CONFLICT` - -`UPSERT` considers uniqueness only for [primary key](primary-key.html) columns. For example, assuming that columns `a` and `b` are the primary key, the following `UPSERT` and `INSERT ON CONFLICT` statements are equivalent: - -{% include copy-clipboard.html %} -~~~ sql -> UPSERT INTO t (a, b, c) VALUES (1, 2, 3); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO t (a, b, c) - VALUES (1, 2, 3) - ON CONFLICT (a, b) - DO UPDATE SET c = excluded.c; -~~~ - -`INSERT ON CONFLICT` is more flexible and can be used to consider uniqueness for columns not in the primary key. For more details, see the [Upsert that Fails (Conflict on Non-Primary Key)](#upsert-that-fails-conflict-on-non-primary-key) example below. - -## Examples - -### Upsert a row (no conflict) - -In this example, the `id` column is the primary key. Because the inserted `id` value does not conflict with the `id` value of any existing row, the `UPSERT` statement inserts a new row into the table. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -+----+----------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (3, 6325.20); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -+----+----------+ -~~~ - -### Upsert multiple rows - -In this example, the `UPSERT` statement inserts multiple rows into the table. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -+----+----------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (4, 1970.4), (5, 2532.9), (6, 4473.0); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ - -### Upsert that updates a row (conflict on primary key) - -In this example, the `id` column is the primary key. Because the inserted `id` value is not unique, the `UPSERT` statement updates the row with the new `balance`. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 6325.2 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPSERT INTO accounts (id, balance) VALUES (3, 7500.83); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM accounts; -~~~ - -~~~ -+----+----------+ -| id | balance | -+----+----------+ -| 1 | 10000.5 | -| 2 | 20000.75 | -| 3 | 7500.83 | -| 4 | 1970.4 | -| 5 | 2532.9 | -| 6 | 4473.0 | -+----+----------+ -~~~ - -### Upsert that fails (conflict on non-primary key) - -`UPSERT` will not update rows when the uniquness conflict is on columns not in the primary key. 
In this example, the `a` column is the primary key, but the `b` column also has the [`UNIQUE` constraint](unique.html). Because the inserted `b` value is not unique, the `UPSERT` fails. - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM unique_test; -~~~ - -~~~ -+---+---+ -| a | b | -+---+---+ -| 1 | 1 | -| 2 | 2 | -| 3 | 3 | -+---+---+ -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> UPSERT INTO unique_test VALUES (4, 1); -~~~ - -~~~ -pq: duplicate key value (b)=(1) violates unique constraint "unique_test_b_key" -~~~ - -In such a case, you would need to use the [`INSERT ON CONFLICT`](insert.html) statement to specify the `b` column as the column with the `UNIQUE` constraint. - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO unique_test VALUES (4, 1) ON CONFLICT (b) DO UPDATE SET a = excluded.a; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM unique_test; -~~~ - -~~~ -+---+---+ -| a | b | -+---+---+ -| 2 | 2 | -| 3 | 3 | -| 4 | 1 | -+---+---+ -~~~ - -### Import data containing duplicate rows using `DISTINCT ON` - -If the input data to insert/update contains duplicate rows, you must -use [`DISTINCT ON`](select-clause.html#eliminate-duplicate-rows) to -ensure there is only one row for each value of the primary key. - -For example: - -{% include copy-clipboard.html %} -~~~ sql -> WITH - -- the following data contains duplicates on the conflict column "id": - inputrows AS (VALUES (8, 130), (8, 140)) - - UPSERT INTO accounts (id, balance) - (SELECT DISTINCT ON(id) id, balance FROM inputrows); -- de-duplicate the input rows -~~~ - -The `DISTINCT ON` clause does not guarantee which of the duplicates is -considered. To force the selection of a particular duplicate, use an -`ORDER BY` clause: - -{% include copy-clipboard.html %} -~~~ sql -> WITH - -- the following data contains duplicates on the conflict column "id": - inputrows AS (VALUES (8, 130), (8, 140)) - - UPSERT INTO accounts (id, balance) - (SELECT DISTINCT ON(id) id, balance - FROM inputrows - ORDER BY balance); -- pick the lowest balance as value to update in each account -~~~ - -{{site.data.alerts.callout_info}} -Using `DISTINCT ON` incurs a performance cost to search and eliminate duplicates. -For best performance, avoid using it when the input is known to not contain duplicates. -{{site.data.alerts.end}} - -## See also - -- [Selection Queries](selection-queries.html) -- [`DELETE`](delete.html) -- [`INSERT`](insert.html) -- [`UPDATE`](update.html) -- [`TRUNCATE`](truncate.html) -- [`ALTER TABLE`](alter-table.html) -- [`DROP TABLE`](drop-table.html) -- [`DROP DATABASE`](drop-database.html) -- [Other SQL Statements](sql-statements.html) diff --git a/src/current/v19.1/use-the-built-in-sql-client.md b/src/current/v19.1/use-the-built-in-sql-client.md deleted file mode 100644 index 0119b272d71..00000000000 --- a/src/current/v19.1/use-the-built-in-sql-client.md +++ /dev/null @@ -1,743 +0,0 @@ ---- -title: Use the Built-in SQL Client -summary: CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. -toc: true ---- - -CockroachDB comes with a built-in client for executing SQL statements from an interactive shell or directly from the command line. To use this client, run the `cockroach sql` [command](cockroach-commands.html) as described below. - -To exit the interactive shell, use `\q`, `quit`, `exit`, or `ctrl-d`. 
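For example, against a locally running insecure cluster, a single statement can be run non-interactively and formatted as CSV, along the lines of this sketch (the statement itself is illustrative):

{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure --format=csv --execute="SELECT 1 AS ping"
~~~

~~~
ping
1
~~~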
- -{{site.data.alerts.callout_success}} -If you want to experiment with CockroachDB SQL but do not have a cluster already running, you can use the [`cockroach demo`](cockroach-demo.html) command to open a shell to a temporary, in-memory cluster. -{{site.data.alerts.end}} - -## Synopsis - -Start the interactive SQL shell: - -~~~ shell -$ cockroach sql -~~~ - -Execute SQL from the command line: - -~~~ shell -$ cockroach sql --execute=";" --execute="" -~~~ -~~~ shell -$ echo ";" | cockroach sql -~~~ -~~~ shell -$ cockroach sql < file-containing-statements.sql -~~~ - -Exit the interactive SQL shell: - -~~~ shell -$ \q -~~~ -~~~ shell -$ quit -~~~ -~~~ shell -$ exit -~~~ -~~~ shell -ctrl-d -~~~ - -View help: - -~~~ shell -$ cockroach sql --help -~~~ - -## Flags - -The `sql` command supports the following types of flags: - -- [General Use](#general) -- [Client Connection](#client-connection) -- [Logging](#logging) - -### General - -- To start an interactive SQL shell, run `cockroach sql` with all appropriate connection flags or use just the [`--url` flag](#sql-flag-url), which includes [connection details](connection-parameters.html#connect-using-a-url). -- To execute SQL statements from the command line, use the [`--execute` flag](#sql-flag-execute). - -Flag | Description ------|------------ -`--database`
      `-d` | A database name to use as the [current database](sql-name-resolution.html#current-database) in the newly created session. -`--echo-sql` | Reveal the SQL statements sent implicitly by the command-line utility. For a demonstration, see the [example](#reveal-the-sql-statements-sent-implicitly-by-the-command-line-utility) below.<br>

      This can also be enabled within the interactive SQL shell via the `\set echo` [shell command](#commands). - `--execute`
      `-e` | Execute SQL statements directly from the command line, without opening a shell. This flag can be set multiple times, and each instance can contain one or more statements separated by semicolons. If an error occurs in any statement, the command exits with a non-zero status code and further statements are not executed. The results of each statement are printed to the standard output (see `--format` for formatting options).<br>

      For a demonstration of this and other ways to execute SQL from the command line, see the [example](#execute-sql-statements-from-the-command-line) below. - `--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `raw`, `records`, `sql`, `html`.

      **Default:** `table` for sessions that [output on a terminal](#session-and-output-types); `tsv` otherwise

      This flag corresponds to the `display_format` [client-side option](#client-side-options). -`--safe-updates` | Disallow potentially unsafe SQL statements, including `DELETE` without a `WHERE` clause, `UPDATE` without a `WHERE` clause, and `ALTER TABLE ... DROP COLUMN`.

      **Default:** `true` for [interactive sessions](#session-and-output-types); `false` otherwise

      Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the `sql_safe_updates` [session variable](set-vars.html). -`--set` | Set a [client-side option](#client-side-options) before starting the SQL shell or executing SQL statements from the command line via `--execute`. This flag may be specified multiple times, once per option.

      After starting the SQL shell, the `\set` and `\unset` commands can be used to enable and disable client-side options as well. - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `sql` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Session and output types - -`cockroach sql` behaves differently depending on whether the session is interactive and whether the session outputs on a terminal. - -- A session is **interactive** when `cockroach sql` is invoked without the `--execute` flag and input is not redirected from a file. In such cases: - - The [`errexit` option](#sql-option-errexit) defaults to `false`. - - The [`check_syntax` option](#sql-option-check-syntax) defaults to `true` if supported by the CockroachDB server (this is checked when the shell starts up). - - **Ctrl+C** at the prompt will only terminate the shell if no other input was entered on the same line already. - - The shell will attempt to set the `safe_updates` [session variable](set-vars.html) to `true` on the server. -- A session **outputs on a terminal** when output is not redirected to a file. In such cases: - - The [`--format` flag](#sql-flag-format) and its corresponding [`display_format` option](#sql-option-display-format) default to `table`. These default to `tsv` otherwise. - - The `show_times` option defaults to `true`. - -When a session is both interactive and outputs on a terminal, `cockroach sql` also activates the interactive prompt with a line editor that can be used to modify the current line of input. Also, command history becomes active. - -## SQL shell - -### Welcome message - -When the SQL shell connects (or reconnects) to a CockroachDB node, it prints a welcome message with some tips and CockroachDB version and cluster details: - -~~~ shell -# Welcome to the cockroach SQL interface. -# All statements must be terminated by a semicolon. -# To exit: CTRL + D. -# -# Server version: CCL {{page.release_info.version}} (darwin amd64, built 2017/07/13 11:43:06, go1.10.1) (same version as client) -# Cluster ID: 7fb9f5b4-a801-4851-92e9-c0db292d03f1 -# -# Enter \? for a brief introduction. -# -> -~~~ - -The **Version** and **Cluster ID** details are particularly noteworthy: - -- When the client and server versions of CockroachDB are the same, the shell prints the `Server version` followed by `(same version as client)`. -- When the client and server versions are different, the shell prints both the `Client version` and `Server version`. In this case, you may want to [plan an upgrade](upgrade-cockroach-version.html) of older client or server versions. -- Since every CockroachDB cluster has a unique ID, you can use the `Cluster ID` field to verify that your client is always connecting to the correct cluster. - -### Commands - -The following commands can be used within the interactive SQL shell: - -Command | Usage ---------|------------ -`\q`<br>
      `quit`
      `exit`
      `ctrl-d` | Exit the shell.

      When no text follows the prompt, `ctrl-c` exits the shell as well; otherwise, `ctrl-c` clears the line. -`\!` | Run an external command and print its results to `stdout`. See the [example](#run-external-commands-from-the-sql-shell) below. -\| | Run the output of an external command as SQL statements. See the [example](#run-external-commands-from-the-sql-shell) below. -`\set
-+-------+--------+
-| chick | turtle |
-+-------+--------+
-| 🐥    | 🐢     |
-+-------+--------+
-(1 row)
      -~~~ - -When piping output to another command or a file, `--format` defaults to `tsv`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ -1 row -chick turtle -🐥 🐢 -~~~ - -However, you can explicitly set `--format` to another format, for example, `table`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---format=table \ ---execute="SELECT '🐥' AS chick, '🐢' AS turtle" > out.txt \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cat out.txt -~~~ - -~~~ -+-------+--------+ -| chick | turtle | -+-------+--------+ -| 🐥 | 🐢 | -+-------+--------+ -(1 row) -~~~ - -### Make the output of `SHOW` statements selectable - -To make it possible to select from the output of `SHOW` statements, set `--format` to `raw`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---format=raw \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE customers; -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -When `--format` is not set to `raw`, you can use the `display_format` [SQL shell option](#client-side-options) to change the output format within the interactive session: - -{% include copy-clipboard.html %} -~~~ sql -> \set display_format raw -~~~ - -~~~ -# 2 columns -# row 1 -## 14 -test.customers -## 185 -CREATE TABLE customers ( - id INT NOT NULL, - email STRING NULL, - CONSTRAINT "primary" PRIMARY KEY (id ASC), - UNIQUE INDEX customers_email_key (email ASC), - FAMILY "primary" (id, email) -) -# 1 row -~~~ - -### Execute SQL statements from a file - -In this example, we show and then execute the contents of a file containing SQL statements. - -{% include copy-clipboard.html %} -~~~ shell -$ cat statements.sql -~~~ - -~~~ -CREATE TABLE roaches (name STRING, country STRING); -INSERT INTO roaches VALUES ('American Cockroach', 'United States'), ('Brownbanded Cockroach', 'United States'); -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=critterdb \ -< statements.sql -~~~ - -~~~ -CREATE TABLE -INSERT 2 -~~~ - -### Run external commands from the SQL shell - -In this example, we use `\!` to look at the rows in a CSV file before creating a table and then using `\|` to insert those rows into the table. - -{{site.data.alerts.callout_info}}This example works only if the values in the CSV file are numbers. For values in other formats, use an online CSV-to-SQL converter or make your own import program.{{site.data.alerts.end}} - -{% include copy-clipboard.html %} -~~~ sql -> \! 
cat test.csv -~~~ - -~~~ -12, 13, 14 -10, 20, 30 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE csv (x INT, y INT, z INT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> \| IFS=","; while read a b c; do echo "insert into csv values ($a, $b, $c);"; done < test.csv; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM csv; -~~~ - -~~~ -+----+----+----+ -| x | y | z | -+----+----+----+ -| 12 | 13 | 14 | -| 10 | 20 | 30 | -+----+----+----+ -~~~ - -In this example, we create a table and then use `\|` to programmatically insert values. - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE for_loop (x INT); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> \| for ((i=0;i<10;++i)); do echo "INSERT INTO for_loop VALUES ($i);"; done -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM for_loop; -~~~ - -~~~ -+---+ -| x | -+---+ -| 0 | -| 1 | -| 2 | -| 3 | -| 4 | -| 5 | -| 6 | -| 7 | -| 8 | -| 9 | -+---+ -~~~ - -### Edit SQL statements in an external editor - -In applications that use [GNU Readline](https://tiswww.case.edu/php/chet/readline/rltop.html) (such as [bash](https://www.gnu.org/software/bash/)), you can edit a long line in your preferred editor by typing `Ctrl-x Ctrl-e`. However, CockroachDB uses the BSD-licensed [libedit](https://thrysoee.dk/editline/), which does not include this functionality. - -If you would like to be able to edit the current line in an external editor by typing `C-x C-e` as in `bash`, do the following: - -1. Install the `vipe` program (from the [moreutils](https://joeyh.name/code/moreutils/) suite of tools). -2. Edit your `~/.editrc` to add the following line, which takes advantage of the SQL client's ability to [run external commands](#run-external-commands-from-the-sql-shell): - - {% include copy-clipboard.html %} - ~~~ - cockroach:bind -s ^X^E '^A^K\\\| echo \"^Y\" | vipe\r' - ~~~ - -This tells libedit to translate `C-x C-e` into the following commands: - -1. Move to the beginning of the current line. -2. Cut the whole line. -3. Paste the line into your editor via `vipe`. -4. Pass the edited file back to the SQL client when `vipe` exits. - -{{site.data.alerts.callout_info}} -Future versions of the SQL client may opt to use a different back-end for reading input, in which case please refer to this page for additional updates. -{{site.data.alerts.end}} - -### Allow potentially unsafe SQL statements - -The `--safe-updates` flag defaults to `true`. This prevents SQL statements that may have broad, undesired side-effects. 
For example, by default, we cannot use `DELETE` without a `WHERE` clause to delete all rows from a table: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="SELECT * FROM db1.t1" -~~~ - -~~~ -+----+------+ -| id | name | -+----+------+ -| 1 | a | -| 2 | b | -| 3 | c | -| 4 | d | -| 5 | e | -| 6 | f | -| 7 | g | -| 8 | h | -| 9 | i | -| 10 | j | -+----+------+ -(10 rows) -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --execute="DELETE FROM db1.t1" -~~~ - -~~~ -Error: pq: rejected: DELETE without WHERE clause (sql_safe_updates = true) -Failed running "sql" -~~~ - -However, to allow an "unsafe" statement, you can set `--safe-updates=false`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure --safe-updates=false --execute="DELETE FROM db1.t1" -~~~ - -~~~ -DELETE 10 -~~~ - -{{site.data.alerts.callout_info}}Potentially unsafe SQL statements can also be allowed/disallowed for an entire session via the sql_safe_updates session variable.{{site.data.alerts.end}} - -### Reveal the SQL statements sent implicitly by the command-line utility - -In this example, we use the `--execute` flag to execute statements from the command line and the `--echo-sql` flag to reveal SQL statements sent implicitly: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---execute="CREATE TABLE t1 (id INT PRIMARY KEY, name STRING)" \ ---execute="INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c')" \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=db1 ---echo-sql -~~~ - -~~~ -# Server version: CockroachDB CCL f8f3c9317 (darwin amd64, built 2017/09/13 15:05:35, go1.8) (same version as client) -# Cluster ID: 847a4ba5-c78a-465a-b1a0-59fae3aab520 -> SET sql_safe_updates = TRUE -> CREATE TABLE t1 (id INT PRIMARY KEY, name STRING) -CREATE TABLE -> INSERT INTO t1 VALUES (1, 'a'), (2, 'b'), (3, 'c') -INSERT 3 -~~~ - -In this example, we start the interactive SQL shell and enable the `echo` shell option to reveal SQL statements sent implicitly: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sql --insecure \ ---user=maxroach \ ---host=12.345.67.89 \ ---database=db1 -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> \set echo -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -~~~ - -~~~ -> INSERT INTO db1.t1 VALUES (4, 'd'), (5, 'e'), (6, 'f'); -INSERT 3 - -Time: 2.426534ms - -> SHOW TRANSACTION STATUS -> SHOW DATABASE -~~~ - -## See also - -- [Client Connection Parameters](connection-parameters.html) -- [`cockroach demo`](cockroach-demo.html) -- [Other Cockroach Commands](cockroach-commands.html) -- [SQL Statements](sql-statements.html) -- [Learn CockroachDB SQL](learn-cockroachdb-sql.html) diff --git a/src/current/v19.1/use-the-query-formatter.md b/src/current/v19.1/use-the-query-formatter.md deleted file mode 100644 index 6d6823ecc11..00000000000 --- a/src/current/v19.1/use-the-query-formatter.md +++ /dev/null @@ -1,157 +0,0 @@ ---- -title: Reformat SQL Queries for Enhanced Clarity -summary: Use cockroach sqlfmt to enhance the text layout of a SQL query. -toc: true ---- - -The `cockroach sqlfmt` -[command](cockroach-commands.html) changes the textual formatting of -one or more SQL queries. It recognizes all SQL extensions supported by -CockroachDB. - -A [web interface to this feature](https://sqlfum.pt/) is also available. 
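-
-For a quick sense of what the formatter does, here is a minimal sketch (the input query is illustrative, and the exact whitespace of the output may differ):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach sqlfmt -e "select 1+2 from t where x>0"
-~~~
-
-~~~ sql
-SELECT 1 + 2 FROM t WHERE x > 0
-~~~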
- -{% include {{ page.version.version }}/misc/experimental-warning.md %} - -## Synopsis - -Use the query formatter interactively: - -~~~ shell -$ cockroach sqlfmt - -CTRL+D -~~~ - -Reformat a SQL query given on the command line: - -~~~ shell -$ cockroach sqlfmt -e "" -~~~ - -Reformat a SQL query already stored in a file: - -~~~ shell -$ cat query.sql | cockroach sqlfmt -~~~ - -## Flags - -The `sqlfmt` command supports the following flags. - -Flag | Description | Default value ------|------|---- -`--execute`
      `-e` | Reformat the given SQL query, without reading from standard input. | N/A -`--print-width` | Desired column width of the output. | 80 -`--tab-width` | Number of spaces occupied by a tab character on the final display device. | 4 -`--use-spaces` | Always use space characters for formatting; avoid tab characters. | Use tabs. -`--align` | Use vertical alignment during formatting. | Do not align vertically. -`--no-simplify` | Avoid removing optional grouping parentheses during formatting. | Remove unnecessary grouping parentheses. - -## Examples - -### Reformat a query with constrained column width - -Using the interactive query formatter, output with the default column width (80 columns): - -1. Start the interactive query formatter: - - {% include copy-clipboard.html %} - ~~~ shell - $ cockroach sqlfmt - ~~~ - -2. Press **Enter**. - -3. Run the query: - - {% include copy-clipboard.html %} - ~~~ sql - > CREATE TABLE animals (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING); - ~~~ -4. Press **CTRL+D**. - - ~~~ sql - CREATE TABLE animals ( - id INT PRIMARY KEY DEFAULT unique_rowid(), - name STRING - ) - ~~~ - -Using the command line, output with the column width set to `40`: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --print-width 40 -e "CREATE TABLE animals (id INT PRIMARY KEY DEFAULT unique_rowid(), name STRING);" -~~~ - -~~~ sql -CREATE TABLE animals ( - id - INT - PRIMARY KEY - DEFAULT unique_rowid(), - name STRING -) -~~~ - -### Reformat a query with vertical alignment - -Output with the default vertical alignment: - -~~~ shell -$ cockroach sqlfmt -e "SELECT winner, round(length / (60 * 5)) AS counter FROM players WHERE build = $1 AND (hero = $2 OR region = $3);" -~~~ - -~~~ sql -SELECT -winner, round(length / (60 * 5)) AS counter -FROM -players -WHERE -build = $1 AND (hero = $2 OR region = $3) -~~~ - -Output with vertical alignment: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --align -e "SELECT winner, round(length / (60 * 5)) AS counter FROM players WHERE build = $1 AND (hero = $2 OR region = $3);" -~~~ - -~~~ sql -SELECT winner, round(length / (60 * 5)) AS counter - FROM players - WHERE build = $1 AND (hero = $2 OR region = $3); -~~~ - -### Reformat a query with simplification of parentheses - -Output with the default simplification of parentheses: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt -e "SELECT (1 * 2) + 3, (1 + 2) * 3;" -~~~ - -~~~ sql -SELECT 1 * 2 + 3, (1 + 2) * 3 -~~~ - -Output with no simplification of parentheses: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach sqlfmt --no-simplify -e "SELECT (1 * 2) + 3, (1 + 2) * 3;" -~~~ - -~~~ sql -SELECT (1 * 2) + 3, (1 + 2) * 3 -~~~ - -## See also - -- [Sequel Fumpt](https://sqlfum.pt/) -- [`cockroach demo`](cockroach-demo.html) -- [`cockroach sql`](use-the-built-in-sql-client.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/uuid.md b/src/current/v19.1/uuid.md deleted file mode 100644 index d66c8b48313..00000000000 --- a/src/current/v19.1/uuid.md +++ /dev/null @@ -1,140 +0,0 @@ ---- -title: UUID -summary: The UUID data type stores 128-bit Universal Unique Identifiers. -toc: true ---- - -The `UUID` (Universally Unique Identifier) [data type](data-types.html) stores a 128-bit value that is [unique across both space and time](https://www.ietf.org/rfc/rfc4122.txt). 
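-
-For example, a minimal sketch of generating and storing `UUID` values (the `sessions` table and its columns are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE TABLE sessions (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), user_name STRING);
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> INSERT INTO sessions (user_name) VALUES ('max') RETURNING id;
-~~~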
- -{{site.data.alerts.callout_success}} -To auto-generate unique row IDs, we recommend using [`UUID`](uuid.html) with the `gen_random_uuid()` function as the default value. See the [example](#create-a-table-with-auto-generated-unique-row-ids) below for more details. -{{site.data.alerts.end}} - - -## Syntax -A `UUID` value can be expressed using the following formats: - -Format | Description --------|------------- -Standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format | Hyphen-separated groups of 8, 4, 4, 4, 12 hexadecimal digits.

      Example: `acde070d-8c4c-4f0d-9d8a-162843c10333` -With braces | The standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format with braces.

      Example: `{acde070d-8c4c-4f0d-9d8a-162843c10333}` -As `BYTES` | `UUID` value specified as bytes.

      Example: `b'kafef00ddeadbeed'` -`UUID` used as a URN | `UUID` can be used as a Uniform Resource Name (URN). In that case, the format is [specified](https://www.ietf.org/rfc/rfc2141.txt) as "urn:uuid:" followed by standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format.

      Example: `urn:uuid:63616665-6630-3064-6465-616462656564` - -## Size -A `UUID` value is 128 bits in width, but the total storage size is likely to be larger due to CockroachDB metadata. - -## Examples - -### Create a table with manually-entered `UUID` values - -#### Create a table with `UUID` in standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format - -{% include copy-clipboard.html %} -~~~ sql -> CREATE TABLE v (token uuid); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO v VALUES ('63616665-6630-3064-6465-616462656562'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -+--------------------------------------+ -(1 row) -~~~ - -#### Create a table with `UUID` in standard [RFC4122](http://www.ietf.org/rfc/rfc4122.txt)-specified format with braces - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO v VALUES ('{63616665-6630-3064-6465-616462656563}'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -+--------------------------------------+ -(2 rows) -~~~ - -#### Create a table with `UUID` in byte format - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO v VALUES (b'kafef00ddeadbeed'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -| 6b616665-6630-3064-6465-616462656564 | -+--------------------------------------+ -(3 rows) -~~~ - -#### Create a table with `UUID` used as URN - -{% include copy-clipboard.html %} -~~~ sql -> INSERT INTO v VALUES ('urn:uuid:63616665-6630-3064-6465-616462656564'); -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM v; -~~~ - -~~~ -+--------------------------------------+ -| token | -+--------------------------------------+ -| 63616665-6630-3064-6465-616462656562 | -| 63616665-6630-3064-6465-616462656563 | -| 6b616665-6630-3064-6465-616462656564 | -| 63616665-6630-3064-6465-616462656564 | -+--------------------------------------+ -(4 rows) -~~~ - -### Create a table with auto-generated unique row IDs - -{% include {{ page.version.version }}/faq/auto-generate-unique-ids.html %} - -## Supported casting and conversion - -`UUID` values can be [cast](data-types.html#data-type-conversions-and-casts) to the following data type: - -Type | Details ------|-------- -`BYTES` | Requires supported [`BYTES`](bytes.html) string format, e.g., `b'\141\061\142\062\143\063'`. - -## See also - -[Data Types](data-types.html) diff --git a/src/current/v19.1/validate-constraint.md b/src/current/v19.1/validate-constraint.md deleted file mode 100644 index e6b229beaf7..00000000000 --- a/src/current/v19.1/validate-constraint.md +++ /dev/null @@ -1,60 +0,0 @@ ---- -title: VALIDATE CONSTRAINT -summary: Use the ADD COLUMN statement to add columns to tables. -toc: true ---- - -The `VALIDATE CONSTRAINT` [statement](sql-statements.html) is part of `ALTER TABLE` and checks whether values in a column match a [constraint](constraints.html) on the column. 
This statement is especially useful after applying a constraint to an existing column via [`ADD CONSTRAINT`](add-constraint.html). In this case, `VALIDATE CONSTRAINT` can be used to find values already in the column that do not match the constraint. - -{% include {{ page.version.version }}/sql/combine-alter-table-commands.md %} - -## Required privileges - -The user must have the `CREATE` [privilege](authorization.html#assign-privileges) on the table. - -## Synopsis - -
      - {% include {{ page.version.version }}/sql/diagrams/validate_constraint.html %} -
      - -## Parameters - - Parameter | Description --------------------+----------------------------------------------------------------------------- - `table_name` | The name of the table in which the constraint you'd like to validate lives. - `constraint_name` | The name of the constraint on `table_name` you'd like to validate. - -## Viewing schema changes - -{% include {{ page.version.version }}/misc/schema-change-view-job.md %} - -## Examples - -In [`ADD CONSTRAINT`](add-constraint.html), we [added a foreign key constraint](add-constraint.html#add-the-foreign-key-constraint-with-cascade) like so: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders ADD CONSTRAINT customer_fk FOREIGN KEY (customer_id) REFERENCES customers (id) ON DELETE CASCADE; -~~~ - -In order to ensure that the data added to the `orders` table prior to the creation of the `customer_fk` constraint conforms to that constraint, run the following: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE orders VALIDATE CONSTRAINT customer_fk; -~~~ - -{{site.data.alerts.callout_info}} -If present in a [`CREATE TABLE`](create-table.html) statement, the table is considered validated because an empty table trivially meets its constraints. -{{site.data.alerts.end}} - -## See also - -- [Constraints](constraints.html) -- [`ADD CONSTRAINT`](add-constraint.html) -- [`SHOW CONSTRAINTS`](show-constraints.html) -- [`RENAME CONSTRAINT`](rename-constraint.html) -- [`DROP CONSTRAINT`](drop-constraint.html) -- [`CREATE TABLE`](create-table.html) -- [`SHOW JOBS`](show-jobs.html) diff --git a/src/current/v19.1/view-node-details.md b/src/current/v19.1/view-node-details.md deleted file mode 100644 index d7083b9b1cf..00000000000 --- a/src/current/v19.1/view-node-details.md +++ /dev/null @@ -1,309 +0,0 @@ ---- -title: View Node Details -summary: To view details for each node in the cluster, use the cockroach node command with the appropriate subcommands and flags. -toc: true ---- - -To view details for each node in the cluster, use the `cockroach node` [command](cockroach-commands.html) with the appropriate subcommands and flags. - -The `cockroach node` command is also used in the process of decommissioning nodes for removal from the cluster. See [Decommission Nodes](remove-nodes.html) for more details. - -## Subcommands - -Subcommand | Usage ------------|------ -`ls` | List the ID of each node in the cluster, excluding those that have been decommissioned and are offline. -`status` | View the status of one or all nodes, excluding nodes that have been decommissioned and taken offline. Depending on flags used, this can include details about range/replicas, disk usage, and decommissioning progress. -`decommission` | Decommission nodes for removal from the cluster. See [Decommission Nodes](remove-nodes.html) for more details. -`recommission` | Recommission nodes that have been decommissioned. See [Recommission Nodes](remove-nodes.html#recommission-nodes) for more details. -`drain` | Drain nodes of SQL clients, [distributed SQL](architecture/sql-layer.html#distsql) queries, and range leases, and prevent ranges from rebalancing onto the node. This is normally done during [node shutdown](stop-a-node.html), but the `drain` subcommand provides operators an option to interactively monitor, and if necessary intervene in, the draining process. 
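-
-For example, a minimal sketch of draining a node ahead of a planned restart (the connection flags are illustrative):
-
-{% include copy-clipboard.html %}
-~~~ shell
-$ cockroach node drain --host=<address of node to drain> --certs-dir=certs
-~~~
-
-While the drain is in progress, `cockroach node status --decommission` run from another terminal shows the node's `is_draining` field set to `true`.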
- -## Synopsis - -List the IDs of active and inactive nodes: - -~~~ shell -$ cockroach node ls -~~~ - -Show status details for active and inactive nodes: - -~~~ shell -$ cockroach node status -~~~ - -Show status and range/replica details for active and inactive nodes: - -~~~ shell -$ cockroach node status --ranges -~~~ - -Show status and disk usage details for active and inactive nodes: - -~~~ shell -$ cockroach node status --stats -~~~ - -Show status and decommissioning details for active and inactive nodes: - -~~~ shell -$ cockroach node status --decommission -~~~ - -Show complete status details for active and inactive nodes: - -~~~ shell -$ cockroach node status --all -~~~ - -Show status details for a specific node: - -~~~ shell -$ cockroach node status -~~~ - -Decommission nodes: - -~~~ shell -$ cockroach node decommission -~~~ - -Recommission nodes: - -~~~ shell -$ cockroach node recommission -~~~ - -Drain nodes: - -~~~ shell -$ cockroach node drain -~~~ - -View help: - -~~~ shell -$ cockroach node --help -~~~ -~~~ shell -$ cockroach node --help -~~~ - -## Flags - -All `node` subcommands support the following [general-use](#general) and [logging](#logging) flags. - -### General - -Flag | Description ------|------------ -`--format` | How to display table rows printed to the standard output. Possible values: `tsv`, `csv`, `table`, `records`, `sql`, `html`.

      **Default:** `tsv` - -The `node ls` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--timeout` | Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node status` subcommand also supports the following general flags: - -Flag | Description ------|------------ -`--all` | Show all node details. -`--decommission` | Show node decommissioning details. -`--ranges` | Show node details for ranges and replicas. -`--stats` | Show node disk usage details. -`--timeout` | Set the duration of time that the subcommand is allowed to run before it returns an error and prints partial information. The timeout is specified with a suffix of `s` for seconds, `m` for minutes, and `h` for hours. If this flag is not set, the subcommand may hang. - -The `node decommission` subcommand also supports the following general flag: - -Flag | Description ------|------------ -`--wait` | When to return to the client. Possible values: `all`, `none`.

      If `all`, the command returns to the client only after all replicas on all specified nodes have been transferred to other nodes. If any specified nodes are offline, the command will not return to the client until those nodes are back online.

      If `none`, the command does not wait for the decommissioning process to complete; it returns to the client after starting the decommissioning process on all specified nodes that are online. Any specified nodes that are offline will automatically be marked as decommissioning; if they come back online, the cluster will recognize this status and will not rebalance data to the nodes.

      **Default:** `all` - -The `node drain` subcommand also supports the following general flag: - -Flag | Description ------|------------ -`--drain-wait` | Amount of time to wait for the node to drain before returning to the client.

      **Default:** `10m` - -### Client connection - -{% include {{ page.version.version }}/sql/connection-parameters.md %} - -See [Client Connection Parameters](connection-parameters.html) for more details. - -### Logging - -By default, the `node` command logs errors to `stderr`. - -If you need to troubleshoot this command's behavior, you can change its [logging behavior](debug-and-error-logs.html). - -## Response - -The `cockroach node` subcommands return the following fields for each node. - -### `node ls` - -Field | Description -------|------------ -`id` | The ID of the node. - -### `node status` - -Field | Description -------|------------ -`id` | The ID of the node.

      **Required flag:** None -`address` | The address of the node.

      **Required flag:** None -`build` | The version of CockroachDB running on the node. If the binary was built from source, this will be the SHA hash of the commit used.

      **Required flag:** None -`updated_at` | The date and time when the node last recorded the information displayed in this command's output. When healthy, a new status should be recorded every 10 seconds or so, but when unhealthy this command's stats may be much older.

      **Required flag:** None -`started_at` | The date and time when the node was started.

      **Required flag:** None -`replicas_leaders` | The number of range replicas on the node that are the Raft leader for their range. See `replicas_leaseholders` below for more details.

      **Required flag:** `--ranges` or `--all` -`replicas_leaseholders` | The number of range replicas on the node that are the leaseholder for their range. A "leaseholder" replica handles all read requests for a range and directs write requests to the range's Raft leader (usually the same replica as the leaseholder).

      **Required flag:** `--ranges` or `--all` -`ranges` | The number of ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`ranges_unavailable` | The number of unavailable ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`ranges_underreplicated` | The number of underreplicated ranges that have replicas on the node.

      **Required flag:** `--ranges` or `--all` -`live_bytes` | The amount of live data used by both applications and the CockroachDB system. This excludes historical and deleted data.

      **Required flag:** `--stats` or `--all` -`key_bytes` | The amount of live and non-live data from keys in the key-value storage layer. This does not include data used by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`value_bytes` | The amount of live and non-live data from values in the key-value storage layer. This does not include data used by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`intent_bytes` | The amount of non-live data associated with uncommitted (or recently-committed) transactions.

      **Required flag:** `--stats` or `--all` -`system_bytes` | The amount of data used just by the CockroachDB system.

      **Required flag:** `--stats` or `--all` -`is_available` | If `true`, the node is currently available.

      **Required flag:** None -`is_live` | If `true`, the node is currently live.

      For unavailable clusters (with an unresponsive Admin UI), running the `node status` command and monitoring the `is_live` field is the only way to identify the live nodes in the cluster. However, the command itself must be run on a live node, so finding one is a trial-and-error process: run the command against each node until you get one that responds.<br>

      See [Identify live nodes in an unavailable cluster](#identify-live-nodes-in-an-unavailable-cluster) for more details.

      **Required flag:** None -`gossiped_replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0.

      **Required flag:** `--decommission` or `--all` -`is_decommissioning` | If `true`, the node's range replicas are being transferred to other nodes. This happens when a live node is marked for [decommissioning](remove-nodes.html).

      **Required flag:** `--decommission` or `--all` -`is_draining` | If `true`, the node is being drained of in-flight SQL connections and new SQL connections are rejected. This happens when a live node is being [stopped](stop-a-node.html).

      **Required flag:** `--decommission` or `--all` - -### `node decommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0. -`is_decommissioning` | If `true`, the node's range replicas are being transferred to other nodes. This happens when a live node is marked for [decommissioning](remove-nodes.html). -`is_draining` | If `true`, the node is being drained of in-flight SQL connections and new SQL connections are rejected. This happens when a live node is being [stopped](stop-a-node.html). - -### `node recommission` - -Field | Description -------|------------ -`id` | The ID of the node. -`is_live` | If `true`, the node is live. -`replicas` | The number of replicas on the node that are active members of a range. After the decommissioning process completes, this should be 0. -`is_decommissioning` | If `true`, the node's range replicas are being transferred to other nodes. This happens when a live node is marked for [decommissioning](remove-nodes.html). -`is_draining` | If `true`, the node is being drained of in-flight SQL connections and new SQL connections are rejected. This happens when a live node is being [stopped](stop-a-node.html). - -## Examples - -### List node IDs - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach node ls --host=165.227.60.76 --certs-dir=certs -~~~ - -~~~ -+----+ -| id | -+----+ -| 1 | -| 2 | -| 3 | -| 4 | -| 5 | -+----+ -~~~ - -### Show the status of a single node - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach node status 1 --host=165.227.60.76 --certs-dir=certs -~~~ - -~~~ -+----+-----------------------+---------+---------------------+---------------------+---------+ -| id | address | build | updated_at | started_at | is_live | -+----+-----------------------+---------+---------------------+---------------------+---------+ -| 1 | 165.227.60.76:26257 | 91a299d | 2017-09-07 18:16:03 | 2017-09-07 16:30:13 | true | -+----+-----------------------+---------+---------------------+---------------------+---------+ -(1 row) -~~~ - -### Show the status of all nodes - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=165.227.60.76 --certs-dir=certs -~~~ - -~~~ - id | address | build | started_at | updated_at | is_available | is_live -+----+-----------------------+--------------------------------------+----------------------------------+----------------------------------+--------------+---------+ - 1 | 165.227.60.76:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:30.797131+00:00 | 2018-09-18 17:25:20.351483+00:00 | true | true - 2 | 192.241.239.201:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:38.914482+00:00 | 2018-09-18 17:25:23.984197+00:00 | true | true - 3 | 67.207.91.36:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:57.957116+00:00 | 2018-09-18 17:25:20.535474+00:00 | true | true -(3 rows) -~~~ - -### Identify live nodes in an unavailable cluster - -The `is_live` and `is_available` fields are marked as `true` as long as a majority of the nodes are up, and a quorum can be reached: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach quit --host=192.241.239.201 --certs-dir=certs -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=165.227.60.76 --certs-dir=certs -~~~ - -~~~ - id | address | build | started_at | 
updated_at | is_available | is_live -+-----+-----------------------+--------------------------------------+----------------------------------+----------------------------------+--------------+---------+ - 1 | 165.227.60.76:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:30.797131+00:00 | 2018-09-18 17:54:21.894586+00:00 | true | true - 2 | 192.241.239.201:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:50:17.839323+00:00 | 2018-09-18 17:52:06.172624+00:00 | false | false - 3 | 67.207.91.36:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:50:10.961166+00:00 | 2018-09-18 17:54:24.925007+00:00 | true | true -(3 rows) -~~~ - -If a majority of nodes are down and a quorum cannot be reached, the `is_live` field is marked as `true` for the nodes that are up, but the `is_available` field is marked as `false` for all nodes: - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach quit --host=67.207.91.36 --certs-dir=certs -~~~ - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach node status --host=165.227.60.76 --certs-dir=certs -~~~ - -~~~ - id | address | build | started_at | updated_at | is_available | is_live -+----+-----------------------+--------------------------------------+----------------------------------+----------------------------------+--------------+---------+ - 1 | 165.227.60.76:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:30.797131+00:00 | 2018-09-18 17:30:48.860329+00:00 | false | true - 2 | 192.241.239.201:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:38.914482+00:00 | 2018-09-18 17:25:31.137222+00:00 | false | false - 3 | 67.207.91.36:26257 | v2.1.0-beta.20180917-146-g19ca36c89a | 2018-09-18 17:24:57.957116+00:00 | 2018-09-18 17:30:49.943822+00:00 | false | false -(3 rows) -~~~ - -{{site.data.alerts.callout_info}} -You need to run the `node status` command on a live node to identify the other live nodes in an unavailable cluster. Figuring out a live node to run the command is a trial-and-error process, so run the command against each node until you get one that responds. -{{site.data.alerts.end}} - -### Decommission nodes - -See [Remove Nodes](remove-nodes.html) - -### Recommission nodes - -See [Recommission Nodes](remove-nodes.html#recommission-nodes) - -## See also - -- [Other Cockroach Commands](cockroach-commands.html) -- [Remove Nodes](remove-nodes.html) diff --git a/src/current/v19.1/view-version-details.md b/src/current/v19.1/view-version-details.md deleted file mode 100644 index 5469aa710f6..00000000000 --- a/src/current/v19.1/view-version-details.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -title: View Version Details -summary: To view version details for a specific cockroach binary, run the cockroach version command. -toc: false ---- - -To view version details for a specific `cockroach` binary, run the `cockroach version` [command](cockroach-commands.html): - -{% include copy-clipboard.html %} -~~~ shell -$ cockroach version -~~~ - -~~~ -Build Tag: {{page.release_info.version}} -Build Time: {{page.release_info.build_time}} -Distribution: CCL -Platform: darwin amd64 -Go Version: go1.8.3 -C Compiler: 4.2.1 Compatible Clang 3.8.0 (tags/RELEASE_380/final) -Build SHA-1: 5b757262d33d814bda1deb2af20161a1f7749df3 -Build Type: release -~~~ - -The `cockroach version` command outputs the following fields: - -Field | Description -------|------------ -`Build Tag` | The CockroachDB version. -`Build Time` | The date and time when the binary was built. -`Distribution` | The scope of the binary. 
If `CCL`, the binary contains open-source and enterprise functionality covered by the CockroachDB Community License. If `OSS`, the binary contains only open-source functionality.

      To obtain a pure open-source binary, you must [build from source](install-cockroachdb.html) using the `make buildoss` command. -`Platform` | The platform that the binary can run on. -`Go Version` | The version of Go in which the source code is written. -`C Compiler` | The C compiler used to build the binary. -`Build SHA-1` | The SHA-1 hash of the commit used to build the binary. -`Build Type` | The type of release. If `release`, `release-gnu`, or `release-musl`, the binary is for a [production release](../releases/#production-releases). If `development`, the binary is for a [testing release](../releases/#testing-releases). - -## See also - -- [Install CockroachDB](install-cockroachdb.html) -- [Other Cockroach Commands](cockroach-commands.html) diff --git a/src/current/v19.1/views.md b/src/current/v19.1/views.md deleted file mode 100644 index 4bcc213565f..00000000000 --- a/src/current/v19.1/views.md +++ /dev/null @@ -1,376 +0,0 @@ ---- -title: Views -summary: -toc: true ---- - -A view is a stored [selection query](selection-queries.html) and provides a shorthand name for it. CockroachDB's views are **dematerialized**: they do not store the results of the underlying queries. Instead, the underlying query is executed anew every time the view is used. - - -## Why use views? - -There are various reasons to use views, including: - -- [Hide query complexity](#hide-query-complexity) -- [Limit access to underlying data](#limit-access-to-underlying-data) - -### Hide query complexity - -When you have a complex query that, for example, joins several tables, or performs complex calculations, you can store the query as a view and then select from the view as you would from a standard table. - -#### Example - -Let's say you're using our [sample `startrek` database](generate-cockroachdb-resources.html#generate-example-data), which contains two tables, `episodes` and `quotes`. There's a foreign key constraint between the `episodes.id` column and the `quotes.episode` column. To count the number of famous quotes per season, you could run the following join: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT startrek.episodes.season, count(*) - FROM startrek.quotes - JOIN startrek.episodes - ON startrek.quotes.episode = startrek.episodes.id - GROUP BY startrek.episodes.season; -~~~ - -~~~ -+--------+----------+ -| season | count(*) | -+--------+----------+ -| 2 | 76 | -| 3 | 46 | -| 1 | 78 | -+--------+----------+ -(3 rows) -~~~ - -Alternatively, to make it much easier to run this complex query, you could create a view: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE VIEW startrek.quotes_per_season (season, quotes) - AS SELECT startrek.episodes.season, count(*) - FROM startrek.quotes - JOIN startrek.episodes - ON startrek.quotes.episode = startrek.episodes.id - GROUP BY startrek.episodes.season; -~~~ - -~~~ -CREATE VIEW -~~~ - -Then, executing the query is as easy as `SELECT`ing from the view: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM startrek.quotes_per_season; -~~~ - -~~~ -+--------+--------+ -| season | quotes | -+--------+--------+ -| 2 | 76 | -| 3 | 46 | -| 1 | 78 | -+--------+--------+ -(3 rows) -~~~ - -### Limit access to underlying data - -When you do not want to grant a user access to all the data in one or more standard tables, you can create a view that contains only the columns and/or rows that the user should have access to and then grant the user permissions on the view. 
- -#### Example - -Let's say you have a `bank` database containing an `accounts` table: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -+----+----------+---------+-----------------+ -| id | type | balance | email | -+----+----------+---------+-----------------+ -| 1 | checking | 1000 | max@roach.com | -| 2 | savings | 10000 | max@roach.com | -| 3 | checking | 15000 | betsy@roach.com | -| 4 | checking | 5000 | lilly@roach.com | -| 5 | savings | 50000 | ben@roach.com | -+----+----------+---------+-----------------+ -(5 rows) -~~~ - -You want a particular user, `bob`, to be able to see the types of accounts each user has without seeing the balance in each account, so you create a view to expose just the `type` and `email` columns: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE VIEW bank.user_accounts - AS SELECT type, email - FROM bank.accounts; -~~~ - -~~~ -CREATE VIEW -~~~ - -You then make sure `bob` does not have privileges on the underlying `bank.accounts` table: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW GRANTS ON bank.accounts; -~~~ - -~~~ -+----------+------+------------+ -| Table | User | Privileges | -+----------+------+------------+ -| accounts | root | ALL | -| accounts | toti | SELECT | -+----------+------+------------+ -(2 rows) -~~~ - -Finally, you grant `bob` privileges on the `bank.user_accounts` view: - -{% include copy-clipboard.html %} -~~~ sql -> GRANT SELECT ON bank.user_accounts TO bob; -~~~ - -Now, `bob` will get a permissions error when trying to access the underlying `bank.accounts` table but will be allowed to query the `bank.user_accounts` view: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.accounts; -~~~ - -~~~ -pq: user bob does not have SELECT privilege on table accounts -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.user_accounts; -~~~ - -~~~ -+----------+-----------------+ -| type | email | -+----------+-----------------+ -| checking | max@roach.com | -| savings | max@roach.com | -| checking | betsy@roach.com | -| checking | lilly@roach.com | -| savings | ben@roach.com | -+----------+-----------------+ -(5 rows) -~~~ - -## How views work - -### Creating views - -To create a view, use the [`CREATE VIEW`](create-view.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> CREATE VIEW bank.user_accounts - AS SELECT type, email - FROM bank.accounts; -~~~ - -~~~ -CREATE VIEW -~~~ - -{{site.data.alerts.callout_info}} -Any [selection query](selection-queries.html) is valid as operand to `CREATE VIEW`, not just [simple `SELECT` clauses](select-clause.html). 
-{{site.data.alerts.end}} - -### Listing views - -Once created, views are listed alongside regular tables in the database: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW TABLES FROM bank; -~~~ - -~~~ -+---------------+ -| Table | -+---------------+ -| accounts | -| user_accounts | -+---------------+ -(2 rows) -~~~ - -To list just views, you can query the `views` table in the [Information Schema](information-schema.html): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.information_schema.views; -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM startrek.information_schema.views; -~~~ - -~~~ -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| table_catalog | table_schema | table_name | view_definition | check_option | is_updatable | is_insertable_into | is_trigger_updatable | is_trigger_deletable | is_trigger_insertable_into | -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| bank | public | user_accounts | SELECT type, email FROM bank.accounts | NULL | NULL | NULL | NULL | NULL | NULL | -+---------------+-------------------+----------------------+---------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -(1 row) -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| table_catalog | table_schema | table_name | view_definition | check_option | is_updatable | is_insertable_into | is_trigger_updatable | is_trigger_deletable | is_trigger_insertable_into | -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -| startrek | public | quotes_per_season | SELECT startrek.episodes.season, count(*) FROM startrek.quotes JOIN startrek.episodes ON startrek.quotes.episode = startrek.episodes.id GROUP BY startrek.episodes.season | NULL | NULL | NULL | NULL | NULL | NULL | -+---------------+-------------------+----------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------+--------------+--------------------+----------------------+----------------------+----------------------------+ -(1 row) -~~~ - -### Querying views - -To query a view, target it with a [table expression](table-expressions.html#table-or-view-names), for example using a [`SELECT` clause](select-clause.html), just as you would with a stored table: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT * FROM bank.user_accounts; -~~~ - -~~~ -+----------+-----------------+ -| 
type | email | -+----------+-----------------+ -| checking | max@roach.com | -| savings | max@roach.com | -| checking | betsy@roach.com | -| checking | lilly@roach.com | -| savings | ben@roach.com | -+----------+-----------------+ -(5 rows) -~~~ - -`SELECT`ing a view executes the view's stored `SELECT` statement, which returns the relevant data from the underlying table(s). To inspect the `SELECT` statement executed by the view, use the [`SHOW CREATE`](show-create.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> SHOW CREATE bank.user_accounts; -~~~ - -~~~ -+--------------------+---------------------------------------------------------------------------+ -| table_name | create_statement | -+--------------------+---------------------------------------------------------------------------+ -| bank.user_accounts | CREATE VIEW "bank.user_accounts" AS SELECT type, email FROM bank.accounts | -+--------------------+---------------------------------------------------------------------------+ -(1 row) -~~~ - -You can also inspect the `SELECT` statement executed by a view by querying the `views` table in the [Information Schema](information-schema.html): - -{% include copy-clipboard.html %} -~~~ sql -> SELECT view_definition FROM bank.information_schema.views WHERE table_name = 'user_accounts'; -~~~ - -~~~ -+----------------------------------------+ -| view_definition | -+----------------------------------------+ -| SELECT type, email FROM bank.accounts | -+----------------------------------------+ -(1 row) -~~~ - -### View dependencies - -A view depends on the objects targeted by its underlying query. Attempting to rename an object referenced in a view's stored query therefore results in an error: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank.accounts RENAME TO bank.accts; -~~~ - -~~~ -pq: cannot rename table "bank.accounts" because view "user_accounts" depends on it -~~~ - -Likewise, attempting to drop an object referenced in a view's stored query results in an error: - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE bank.accounts; -~~~ - -~~~ -pq: cannot drop table "accounts" because view "user_accounts" depends on it -~~~ - -{% include copy-clipboard.html %} -~~~ sql -> ALTER TABLE bank.accounts DROP COLUMN email; -~~~ - -~~~ -pq: cannot drop column email because view "bank.user_accounts" depends on it -~~~ - -There is an exception to the rule above, however: When [dropping a table](drop-table.html) or [dropping a view](drop-view.html), you can use the `CASCADE` keyword to drop all dependent objects as well: - -{% include copy-clipboard.html %} -~~~ sql -> DROP TABLE bank.accounts CASCADE; -~~~ - -~~~ -DROP TABLE -~~~ - -{{site.data.alerts.callout_danger}} -`CASCADE` drops **all** dependent objects without listing them, which can lead to inadvertent and difficult-to-recover losses. To avoid potential harm, we recommend dropping objects individually in most cases. -{{site.data.alerts.end}} - -### Renaming views - -To rename a view, use the [`ALTER VIEW`](alter-view.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> ALTER VIEW bank.user_accounts RENAME TO bank.user_accts; -~~~ - -~~~ -RENAME VIEW -~~~ - -It is not possible to change the stored query executed by the view. Instead, you must drop the existing view and create a new view. 
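-
-For example, a minimal sketch of changing the query behind the `bank.user_accts` view by dropping and recreating it (exposing the additional `id` column here is illustrative):
-
-{% include copy-clipboard.html %}
-~~~ sql
-> DROP VIEW bank.user_accts;
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> CREATE VIEW bank.user_accts AS SELECT id, type, email FROM bank.accounts;
-~~~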
- -### Removing views - -To remove a view, use the [`DROP VIEW`](drop-view.html) statement: - -{% include copy-clipboard.html %} -~~~ sql -> DROP VIEW bank.user_accounts -~~~ - -~~~ -DROP VIEW -~~~ - -## See also - -- [Selection Queries](selection-queries.html) -- [Simple `SELECT` Clauses](select-clause.html) -- [`CREATE VIEW`](create-view.html) -- [`SHOW CREATE`](show-create.html) -- [`GRANT`](grant.html) -- [`ALTER VIEW`](alter-view.html) -- [`DROP VIEW`](drop-view.html) diff --git a/src/current/v19.1/window-functions.md b/src/current/v19.1/window-functions.md deleted file mode 100644 index 9addcf271cb..00000000000 --- a/src/current/v19.1/window-functions.md +++ /dev/null @@ -1,434 +0,0 @@ ---- -title: Window Functions -summary: A window function performs a calculation across a set of table rows that are somehow related to the current row. -toc: true ---- - -CockroachDB supports the application of a function over a subset of the rows returned by a [selection query][selection-query]. Such a function is known as a _window function_, and it allows you to compute values by operating on more than one row at a time. The subset of rows a window function operates on is known as a _window frame_. - -For a complete list of supported window functions, see [Functions and Operators](functions-and-operators.html#window-functions). - -{{site.data.alerts.callout_success}} -All [aggregate functions][aggregate-functions] can also be used as [window functions][window-functions]. For more information, see the [Examples](#examples) below. -{{site.data.alerts.end}} - -{{site.data.alerts.callout_info}} -The examples on this page use the `users`, `rides`, and `vehicles` tables from our open-source fictional peer-to-peer ride-sharing application,[MovR](https://github.com/cockroachdb/movr). -{{site.data.alerts.end}} - -## How window functions work - -At a high level, window functions work by: - -1. Creating a "virtual table" using a [selection query][selection-query]. -2. Splitting that table into window frames using an `OVER (PARTITION BY ...)` clause. -3. Applying the window function to each of the window frames created in step 2 - -For example, consider this query: - -{% include copy-clipboard.html %} -~~~ sql -> SELECT DISTINCT(city), - SUM(revenue) OVER (PARTITION BY city) AS city_revenue - FROM rides - ORDER BY city_revenue DESC; -~~~ - -Its operation can be described as follows (numbered steps listed here correspond to the numbers in the diagram below): - -1. The outer `SELECT DISTINCT(city) ... FROM rides` creates a "virtual table" on which the window functions will operate. -2. The window function `SUM(revenue) OVER ()` operates on a window frame containing all rows of the query output. -3. The window function `SUM(revenue) OVER (PARTITION BY city)` operates on several window frames in turn; each frame contains the `revenue` columns for a different city (Amsterdam, Boston, L.A., etc.). - -Window function diagram - -## Caveats - -The most important part of working with window functions is understanding what data will be in the frame that the window function will be operating on. By default, the window frame includes all of the rows of the partition. If you order the partition, the default frame includes all rows from the first row in the partition to the current row. In other words, adding an `ORDER BY` clause when you create the window frame (e.g., `PARTITION BY x ORDER by y`) has the following effects: - -- It makes the rows inside the window frame ordered. 
-
-## Examples
-
-### Schema
-
-The tables used in the examples are shown below.
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE users;
-~~~
-
-~~~
-+-------+-------------------------------------------------------------+
-| Table | CreateTable |
-+-------+-------------------------------------------------------------+
-| users | CREATE TABLE users ( |
-| | id UUID NOT NULL, |
-| | city STRING NOT NULL, |
-| | name STRING NULL, |
-| | address STRING NULL, |
-| | credit_card STRING NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), |
-| | FAMILY "primary" (id, city, name, address, credit_card) |
-| | ) |
-+-------+-------------------------------------------------------------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE rides;
-~~~
-
-~~~
-+-------+--------------------------------------------------------------------------+
-| Table | CreateTable |
-+-------+--------------------------------------------------------------------------+
-| rides | CREATE TABLE rides ( |
-| | id UUID NOT NULL, |
-| | city STRING NOT NULL, |
-| | vehicle_city STRING NULL, |
-| | rider_id UUID NULL, |
-| | vehicle_id UUID NULL, |
-| | start_address STRING NULL, |
-| | end_address STRING NULL, |
-| | start_time TIMESTAMP NULL, |
-| | end_time TIMESTAMP NULL, |
-| | revenue FLOAT NULL, |
-| | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), |
-| | CONSTRAINT fk_city_ref_users FOREIGN KEY (city, rider_id) REFERENCES |
-| | users (city, id), |
-| | INDEX rides_auto_index_fk_city_ref_users (city ASC, rider_id ASC), |
-| | CONSTRAINT fk_vehicle_city_ref_vehicles FOREIGN KEY (vehicle_city, |
-| | vehicle_id) REFERENCES vehicles (city, id), |
-| | INDEX rides_auto_index_fk_vehicle_city_ref_vehicles (vehicle_city |
-| | ASC, vehicle_id ASC), |
-| | FAMILY "primary" (id, city, vehicle_city, rider_id, vehicle_id, |
-| | start_address, end_address, start_time, end_time, revenue), |
-| | CONSTRAINT check_vehicle_city_city CHECK (vehicle_city = city) |
-| | ) |
-+-------+--------------------------------------------------------------------------+
-~~~
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SHOW CREATE TABLE vehicles;
-~~~
-
-~~~
-+----------+------------------------------------------------------------------------------------------------+
-| Table | CreateTable |
-+----------+------------------------------------------------------------------------------------------------+
-| vehicles | CREATE TABLE vehicles ( +|
-| | id UUID NOT NULL, +|
-| | city STRING NOT NULL, +|
-| | type STRING NULL, +|
-| | owner_id UUID NULL, +|
-| | creation_time TIMESTAMP NULL, +|
-| | status STRING NULL, +|
-| | mycol STRING NULL, +|
-| | ext JSON NULL, +|
-| | CONSTRAINT "primary" PRIMARY KEY (city ASC, id ASC), +|
-| | CONSTRAINT fk_city_ref_users FOREIGN KEY (city, owner_id) REFERENCES users (city, id),+|
-| | INDEX vehicles_auto_index_fk_city_ref_users (city ASC, owner_id ASC), +|
-| | FAMILY "primary" (id, city, type, owner_id, creation_time, status, mycol, ext) +|
-| | ) |
-+----------+------------------------------------------------------------------------------------------------+
-(1 row)
-~~~
-
-### Customers taking the most rides
-
-To see which customers have taken the most rides, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM
-    (SELECT DISTINCT(name) AS "name",
-            COUNT(*) OVER (PARTITION BY name) AS "number of rides"
-     FROM users JOIN rides ON users.id = rides.rider_id)
-  ORDER BY "number of rides" DESC LIMIT 10;
-~~~
-
-~~~
-+-------------------+-----------------+
-| name | number of rides |
-+-------------------+-----------------+
-| Michael Smith | 53 |
-| Michael Williams | 37 |
-| John Smith | 36 |
-| Jennifer Smith | 32 |
-| Michael Brown | 31 |
-| Michael Miller | 30 |
-| Christopher Smith | 29 |
-| James Johnson | 28 |
-| Jennifer Johnson | 27 |
-| Amanda Smith | 26 |
-+-------------------+-----------------+
-(10 rows)
-~~~
-
-### Customers generating the most revenue
-
-To see which customers have generated the most revenue, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DISTINCT name,
-         SUM(revenue) OVER (PARTITION BY name) AS "total rider revenue"
-    FROM users JOIN rides ON users.id = rides.rider_id
-   ORDER BY "total rider revenue" DESC
-   LIMIT 10;
-~~~
-
-~~~
-+------------------+---------------------+
-| name | total rider revenue |
-+------------------+---------------------+
-| Michael Smith | 2251.04 |
-| Jennifer Smith | 2114.55 |
-| Michael Williams | 2011.85 |
-| John Smith | 1826.43 |
-| Robert Johnson | 1652.99 |
-| Michael Miller | 1619.25 |
-| Robert Smith | 1534.11 |
-| Jennifer Johnson | 1506.50 |
-| Michael Brown | 1478.90 |
-| Michael Johnson | 1405.68 |
-+------------------+---------------------+
-(10 rows)
-~~~
-
-### Add row numbers to query output
-
-To add row numbers to the output, move the previous query into a subquery and apply the `row_number()` window function:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT row_number() OVER (), *
-  FROM (
-        SELECT DISTINCT
-               name,
-               SUM(revenue) OVER (
-                  PARTITION BY name
-               ) AS "total rider revenue"
-          FROM users JOIN rides ON users.id = rides.rider_id
-         ORDER BY "total rider revenue" DESC
-         LIMIT 10
-       );
-~~~
-
-~~~
-+------------+------------------+---------------------+
-| row_number | name | total rider revenue |
-+------------+------------------+---------------------+
-| 1 | Michael Smith | 2251.04 |
-| 2 | Jennifer Smith | 2114.55 |
-| 3 | Michael Williams | 2011.85 |
-| 4 | John Smith | 1826.43 |
-| 5 | Robert Johnson | 1652.99 |
-| 6 | Michael Miller | 1619.25 |
-| 7 | Robert Smith | 1534.11 |
-| 8 | Jennifer Johnson | 1506.50 |
-| 9 | Michael Brown | 1478.90 |
-| 10 | Michael Johnson | 1405.68 |
-+------------+------------------+---------------------+
-(10 rows)
-~~~
-
-### Customers taking the most rides and generating the most revenue
-
-To see which customers have taken the most rides while generating the most revenue, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (
-        SELECT DISTINCT name,
-               COUNT(*) OVER w AS "number of rides",
-               (SUM(revenue) OVER w)::DECIMAL(100,2) AS "total rider revenue"
-          FROM users JOIN rides ON users.id = rides.rider_id
-        WINDOW w AS (PARTITION BY name)
-       )
-   ORDER BY "number of rides" DESC,
-            "total rider revenue" DESC
-   LIMIT 10;
-~~~
-
-~~~
-+-------------------+-----------------+---------------------+
-| name | number of rides | total rider revenue |
-+-------------------+-----------------+---------------------+
-| Michael Smith | 53 | 2251.04 |
-| Michael Williams | 37 | 2011.85 |
-| John Smith | 36 | 1826.43 |
-| Jennifer Smith | 32 | 2114.55 |
-| Michael Brown | 31 | 1478.90 |
-| Michael Miller | 30 | 1619.25 |
-| Christopher Smith | 29 | 1380.18 |
-| James Johnson | 28 | 1378.78 |
-| Jennifer Johnson | 27 | 1506.50 |
-| Robert Johnson | 26 | 1652.99 |
-+-------------------+-----------------+---------------------+
-(10 rows)
-~~~
-
-### Customers with the highest average revenue per ride
-
-To see which customers have the highest average revenue per ride, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT name,
-         COUNT(*) OVER w AS "number of rides",
-         AVG(revenue) OVER w AS "average revenue per ride"
-    FROM users JOIN rides ON users.id = rides.rider_id
-  WINDOW w AS (PARTITION BY name)
-   ORDER BY "average revenue per ride" DESC, "number of rides" ASC
-   LIMIT 10;
-~~~
-
-~~~
-+---------------------+-----------------+--------------------------+
-| name | number of rides | average revenue per ride |
-+---------------------+-----------------+--------------------------+
-| Madison Jimenez | 1 | 100.00 |
-| David Webster | 1 | 100.00 |
-| Samantha Holmes | 1 | 100.00 |
-| Charles Marquez | 1 | 100.00 |
-| Briana Howell | 1 | 99.99 |
-| Michelle Williamson | 1 | 99.99 |
-| Shannon Weiss | 1 | 99.98 |
-| Justin Barry | 1 | 99.98 |
-| Paul Key | 1 | 99.97 |
-| Holly Gregory | 1 | 99.97 |
-+---------------------+-----------------+--------------------------+
-(10 rows)
-~~~
-
-### Customers with the highest average revenue per ride, given at least five rides
-
-To see which customers have the highest average revenue per ride, given that they have taken at least five rides, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT * FROM (
-        SELECT DISTINCT name,
-               COUNT(*) OVER w AS "number of rides",
-               (AVG(revenue) OVER w)::DECIMAL(100,2) AS "average revenue per ride"
-          FROM users JOIN rides ON users.id = rides.rider_id
-        WINDOW w AS (PARTITION BY name)
-       )
-  WHERE "number of rides" >= 5
-  ORDER BY "average revenue per ride" DESC
-  LIMIT 10;
-~~~
-
-~~~
-+------------------+-----------------+--------------------------+
-| name | number of rides | average revenue per ride |
-+------------------+-----------------+--------------------------+
-| Richard Wilson | 5 | 88.22 |
-| Rachel Johnson | 6 | 86.42 |
-| Kenneth Wilson | 5 | 85.26 |
-| Benjamin Avila | 5 | 85.23 |
-| Katie Evans | 5 | 85.10 |
-| Steven Griffith | 5 | 84.64 |
-| Phillip Moore | 5 | 84.22 |
-| Cheryl Adams | 5 | 83.85 |
-| Patrick Baker | 5 | 83.63 |
-| Stephen Gonzalez | 6 | 83.59 |
-+------------------+-----------------+--------------------------+
-(10 rows)
-~~~
-
-### Total number of riders and total revenue
-
-To find out the total number of riders and the total revenue generated thus far by the app, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT
-    COUNT("name") AS "total # of riders",
-    SUM("total rider revenue") AS "total revenue" FROM (
-        SELECT name,
-               SUM(revenue) OVER (PARTITION BY name) AS "total rider revenue"
-          FROM users JOIN rides ON users.id = rides.rider_id
-         ORDER BY "total rider revenue" DESC
-         LIMIT (SELECT COUNT(DISTINCT rider_id) FROM rides)
-       );
-~~~
-
-~~~
-+-------------------+---------------+
-| total # of riders | total revenue |
-+-------------------+---------------+
-| 63117 | 15772911.41 |
-+-------------------+---------------+
-(1 row)
-~~~
-
-### How many vehicles of each type
-
-To count the vehicles of each type, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DISTINCT type, COUNT(*) OVER (PARTITION BY type) AS cnt FROM vehicles ORDER BY cnt DESC;
-~~~
-
-~~~
-+------------+-------+
-| type | cnt |
-+------------+-------+
-| bike | 33377 |
-| scooter | 33315 |
-| skateboard | 33307 |
-+------------+-------+
-(3 rows)
-~~~
-
-### How much revenue per city
-
-To see how much revenue each city has generated, run:
-
-{% include copy-clipboard.html %}
-~~~ sql
-> SELECT DISTINCT(city), SUM(revenue) OVER (PARTITION BY city) AS city_revenue FROM rides ORDER BY city_revenue DESC;
-~~~
-
-~~~
-+---------------+--------------+
-| (city) | city_revenue |
-+---------------+--------------+
-| paris | 567144.48 |
-| washington dc | 567011.74 |
-| amsterdam | 564211.74 |
-| new york | 561420.67 |
-| rome | 560464.52 |
-| boston | 559465.75 |
-| san francisco | 558807.13 |
-| los angeles | 558805.45 |
-| seattle | 555452.08 |
-+---------------+--------------+
-(9 rows)
-~~~
-
-## See also
-
-- [Simple `SELECT` clause][simple-select]
-- [Selection Queries][selection-query]
-- [Aggregate functions][aggregate-functions]
-- [Window Functions][window-functions]
-- [CockroachDB 2.0 Demo][demo]
-
-
-
-[aggregate-functions]: functions-and-operators.html#aggregate-functions
-[demo]: https://www.youtube.com/watch?v=v2QK5VgLx6E
-[simple-select]: select-clause.html
-[selection-query]: selection-queries.html
-[window-functions]: functions-and-operators.html#window-functions