---
label: Databases
synopsis: >
  Class <code>cds.DatabaseService</code> and subclasses thereof are technical services representing persistent storage.
status: released
---

# Database Services

[[toc]]

## cds.DatabaseService class { #cds-db-service}

```js
class cds.DatabaseService extends cds.Service
```
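
For orientation, a minimal sketch of how to get hold of a database service instance; it assumes a standard CAP Node.js project with a `db` service configured under `cds.requires.db`:

```js
const cds = require('@sap/cds')

async function demo () {
  // Connect to the primary database service configured under cds.requires.db
  const db = await cds.connect.to('db')

  // It is an instance of cds.DatabaseService (or a subclass thereof) ...
  console.log(db instanceof cds.DatabaseService) //> true

  // ... and, once bootstrapped, is also available via the cds.db shortcut
  console.log(db === cds.db) //> true
}
```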

### srv.begin () → this {#db-begin }

In case of database services, this actually starts a transaction by acquiring a physical connection from the connection pool, and optionally sends a command to the database such as `BEGIN TRANSACTION`.

This method is called automatically by the framework on the first query, so you never have to call it in application code. There are only very rare cases where you'd want to do so, for example to reuse a `tx` object to start subsequent physical transactions after a former commit or rollback. However, this is not considered good practice.
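
For illustration, a minimal sketch of a manually managed transaction, assuming a `db.Books` entity with a `stock` element; note that there's no explicit `begin()` call, as the framework starts the physical transaction with the first query:

```js
const cds = require('@sap/cds')
const { UPDATE } = cds.ql

async function reduceStock (ID, quantity) {
  const tx = cds.db.tx() // new root transaction on the primary database
  try {
    // The first query acquires a physical connection from the pool
    // and implicitly starts the database transaction.
    await tx.run(UPDATE('db.Books', ID).with({ stock: { '-=': quantity } }))
    await tx.commit()
  } catch (e) {
    await tx.rollback(e)
    throw e
  }
}
```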

## cds.DatabaseService — Consumption {#databaseservice-consumption }

### InsertResult (Beta)

- On INSERT, DatabaseServices return an instance of `InsertResult` defined as follows (see the sketch after this list):
  - Iterator that returns the keys of the created entries, for example:
    - Example: `[...result]` -> `[{ ID: 1 }, { ID: 2 }, ...]`
    - In case of `INSERT...as(SELECT...)`, the iterator returns `{}` for each row
  - `affectedRows`: the number of inserted (root) entries, or the number of affected rows in case of `INSERT...as(SELECT...)`
  - `valueOf()`: returns `affectedRows` such that comparisons like `result > 0` can be used

::: tip
`===` can't be used as it also compares the type.
:::
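
A minimal sketch of consuming an `InsertResult`, assuming a `db.Books` entity with a single key `ID`:

```js
const cds = require('@sap/cds')
const { INSERT } = cds.ql

async function insertBooks () {
  const result = await cds.db.run(INSERT.into('db.Books').entries([
    { ID: 1, title: 'Wuthering Heights' },
    { ID: 2, title: 'Catweazle' }
  ]))

  console.log(result.affectedRows) //> 2
  console.log([...result])         //> [{ ID: 1 }, { ID: 2 }]
  if (result > 0) {                //  valueOf() returns affectedRows, so don't use ===
    console.log('insert succeeded')
  }
}
```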

## cds.DatabaseService — Configuration {#databaseservice-configuration }

### Pool

Instead of opening and closing a database connection for every request, we use a pool to reuse connections. By default, the following pool configuration is used:

```jsonc
{
  "acquireTimeoutMillis": <if (NODE_ENV='production') 1000 else 10000>,
  "evictionRunIntervalMillis": <2 * (idleTimeoutMillis || softIdleTimeoutMillis || 30000)>,
  "min": 0,
  "max": 100,
  "numTestsPerEvictionRun": <(max - min) / 3>,
  "softIdleTimeoutMillis": 30000,
  "idleTimeoutMillis": 30000,
  "testOnBorrow": true,
  "fifo": false
}
```

::: warning
This default pool configuration does not apply to `@cap-js` database implementations.
:::

The underlying `generic-pool` library has a built-in pool evictor, which inspects idle database connections in the pool and destroys them if they are too old.

The following parameters are provided in the pool configuration:

- `acquireTimeoutMillis`: Specifies the maximum amount of time to wait until an existing connection is fetched from the pool or a new connection is established.

- `evictionRunIntervalMillis`: Specifies how often to run eviction checks. If set to 0, the check is not run.

- `min`: Minimum number of database connections to keep in the pool at any given time.

::: warning
This should be kept at the default 0. Otherwise, every eviction run destroys all unused connections older than `idleTimeoutMillis` and afterwards creates new connections until `min` is reached.
:::

- `max`: Maximum number of database connections to keep in the pool at any given time.

- `numTestsPerEvictionRun`: Number of database connections to be checked with one eviction run.

- `softIdleTimeoutMillis`: Amount of time a database connection may sit idle in the pool before it is eligible for eviction, with the constraint that at least `min` connections remain in the pool. If set to -1, no connection is evicted.

- `idleTimeoutMillis`: The minimum amount of time that a database connection may sit idle in the pool before it is eligible for eviction due to idle time. This parameter supersedes `softIdleTimeoutMillis`.

- `testOnBorrow`: Specifies whether the pool validates database connections before handing them to clients.

- `fifo`: If false, the most recently released connections are the first to be allocated (stack); if true, the oldest connections are the first to be allocated (queue). Default value: false.

Pool configuration can be adjusted by setting the pool option as shown in the following example:

```json
{
  "cds": {
    "requires": {
      "db": {
        "kind": "hana",
        "pool": {
          "acquireTimeoutMillis": 5000,
          "min": 0,
          "max": 100,
          "fifo": true
        }
      }
    }
  }
}
```

::: warning ❗ Warning
The parameters are very specific to the current technical setup, such as the application environment and database location. Even though we provide a default pool configuration, we expect that each application provides its own configuration based on its specific needs.
:::

## cds.DatabaseService — UPSERT {#databaseservice-upsert }

The main use case of upsert is data replication. Upsert updates existing entity records from the given data or inserts new ones if they don't exist in the database.

::: warning
Even if an entity doesn't exist in the database:
→ Upsert is not equivalent to Insert.
:::

UPSERT statements can be created with the UPSERT query API:

```js
UPSERT.into('db.Books')
  .entries({ ID: 4711, title: 'Wuthering Heights', stock: 100 })
```

UPSERT queries are translated into database-native upsert statements. More specifically, they unfold to an `UPSERT` SQL statement on SAP HANA and to an `INSERT ... ON CONFLICT` SQL statement on SQLite.

- The rows to be upserted need to have the same structure, that is, all rows need to specify the same named values.
- The upsert data must contain all key elements of the entity.
- If upsert data is incomplete, only the given values are updated or inserted, which means the UPSERT statement has "PATCH semantics" (see the sketch after this list).
- UPSERT statements don't have a where clause. The key values of the entity that is upserted are extracted from the data.
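
A minimal sketch of these semantics, assuming the `db.Books` entity from above with key `ID` and elements `title` and `stock`:

```js
const cds = require('@sap/cds')
const { UPSERT } = cds.ql

async function replicateStock () {
  // ID identifies the record; no where clause is needed.
  // If a book with ID 4711 exists, only its stock is updated ("PATCH semantics");
  // otherwise a new record containing just ID and stock is inserted.
  await cds.db.run(UPSERT.into('db.Books').entries({ ID: 4711, stock: 77 }))
}
```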

The following actions are not performed on upsert (a possible workaround is sketched after this list):

- UUID key values are not generated.
- The `@cds.on.insert` annotation is not handled.
- Elements are not initialized with default values if the element's value is not given.
- Generic CAP handlers, such as audit logging, are not invoked.
::: warning
In contrast to the Java runtime, deep upserts and delta payloads are not yet supported.
:::

## More to Come

This documentation is not complete yet, or the APIs are not released for general availability. Stay tuned to upcoming releases for further updates.