---
status: released
impl-variants: true
---
# Using SAP HANA Cloud for Production

::: info This guide is available for Node.js and Java.
Press <kbd>v</kbd> to switch, or use the toggle.
:::

[[toc]]

SAP HANA Cloud is supported as the CAP standard database and recommended for productive use with full support for schema evolution and multitenancy.

::: warning

CAP isn't validated with other variants of SAP HANA, like "SAP HANA Database as a Service" or "SAP HANA (on premise)".

:::

## Setup & Configuration

Run this to use SAP HANA Cloud for production:

```sh
npm add @sap/cds-hana
```

::: details Using other SAP HANA drivers...

Package `@sap/cds-hana` uses the `hdb` driver by default. You can override that by running `npm add @sap/hana-client`, thereby adding it to your package dependencies, which then takes precedence over the default driver.
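For illustration, the resulting package.json dependencies might then look like this (version ranges are placeholders, not pinned recommendations):

```json
{
  "dependencies": {
    "@sap/cds-hana": "^2",
    "@sap/hana-client": "^2"
  }
}
```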

:::

::: tip Prefer `cds add`

... as documented in the deployment guide, which not only does the equivalent of `npm add @sap/cds-hana` but also takes care of updating mta.yaml and other deployment resources.

:::

To use SAP HANA Cloud, configure a module that includes the feature `cds-feature-hana`. For example, add a Maven runtime dependency to the `cds-feature-hana` feature:

```xml
<dependency>
  <groupId>com.sap.cds</groupId>
  <artifactId>cds-feature-hana</artifactId>
  <scope>runtime</scope>
</dependency>
```

::: tip

The modules `cds-starter-cloudfoundry` and `cds-starter-k8s` include `cds-feature-hana`.

:::

The datasource for HANA is then auto-configured based on available service bindings of type `service-manager` and `hana`.

Learn more about the configuration of an SAP HANA Cloud Database.{.learn-more}

## Running `cds build`

Deployment to SAP HANA is done via the SAP HANA Deployment Infrastructure (HDI), which in turn requires running `cds build` to generate all the deployable HDI artifacts. For example, run this in cap/samples/bookshop:

```sh
cds build --for hana
```

This should display the following log output:

```log
[cds] - done > wrote output to:
   gen/db/init.js
   gen/db/package.json
   gen/db/src/gen/.hdiconfig
   gen/db/src/gen/.hdinamespace
   gen/db/src/gen/AdminService.Authors.hdbview
   gen/db/src/gen/AdminService.Books.hdbview
   gen/db/src/gen/AdminService.Books_texts.hdbview
   gen/db/src/gen/AdminService.Currencies.hdbview
   gen/db/src/gen/AdminService.Currencies_texts.hdbview
   gen/db/src/gen/AdminService.Genres.hdbview
   gen/db/src/gen/AdminService.Genres_texts.hdbview
   gen/db/src/gen/CatalogService.Books.hdbview
   gen/db/src/gen/CatalogService.Books_texts.hdbview
   gen/db/src/gen/CatalogService.Currencies.hdbview
   gen/db/src/gen/CatalogService.Currencies_texts.hdbview
   gen/db/src/gen/CatalogService.Genres.hdbview
   gen/db/src/gen/CatalogService.Genres_texts.hdbview
   gen/db/src/gen/CatalogService.ListOfBooks.hdbview
   gen/db/src/gen/data/sap.capire.bookshop-Authors.csv
   gen/db/src/gen/data/sap.capire.bookshop-Authors.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Books.csv
   gen/db/src/gen/data/sap.capire.bookshop-Books.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Books.texts.csv
   gen/db/src/gen/data/sap.capire.bookshop-Books.texts.hdbtabledata
   gen/db/src/gen/data/sap.capire.bookshop-Genres.csv
   gen/db/src/gen/data/sap.capire.bookshop-Genres.hdbtabledata
   gen/db/src/gen/localized.AdminService.Authors.hdbview
   gen/db/src/gen/localized.AdminService.Books.hdbview
   gen/db/src/gen/localized.AdminService.Currencies.hdbview
   gen/db/src/gen/localized.AdminService.Genres.hdbview
   gen/db/src/gen/localized.CatalogService.Books.hdbview
   gen/db/src/gen/localized.CatalogService.Currencies.hdbview
   gen/db/src/gen/localized.CatalogService.Genres.hdbview
   gen/db/src/gen/localized.CatalogService.ListOfBooks.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Authors.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Books.hdbview
   gen/db/src/gen/localized.sap.capire.bookshop.Genres.hdbview
   gen/db/src/gen/localized.sap.common.Currencies.hdbview
   gen/db/src/gen/sap.capire.bookshop.Authors.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Books.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Books_author.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_currency.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_foo.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_genre.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Books_texts.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Genres.hdbtable
   gen/db/src/gen/sap.capire.bookshop.Genres_parent.hdbconstraint
   gen/db/src/gen/sap.capire.bookshop.Genres_texts.hdbtable
   gen/db/src/gen/sap.common.Currencies.hdbtable
   gen/db/src/gen/sap.common.Currencies_texts.hdbtable
```

### Generated HDI Artifacts

As we see from the log output, `cds build` generates these deployment artifacts as expected by HDI, based on the CDS models and .csv files provided in your project:

- `.hdbtable` files for entities (see the sketch after this list)
- `.hdbview` files for views / projections
- `.hdbconstraint` files for database constraints
- `.hdbtabledata` files for CSV content
- a few technical files required by HDI, such as `.hdinamespace` and `.hdiconfig`
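For illustration, a generated `.hdbtable` file contains plain SAP HANA DDL. A simplified sketch for the Authors entity might look like this (column names and lengths are illustrative; the actual output may differ):

```sql
COLUMN TABLE sap_capire_bookshop_Authors (
  ID INTEGER NOT NULL,
  name NVARCHAR(111),
  PRIMARY KEY(ID)
)
```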

### Custom HDI Artifacts

In addition to the generated HDI artifacts, you can add custom ones by placing the corresponding files in the db/src folder. For example, let's add an index for Books titles...

1. Add a file `db/src/sap.capire.bookshop.Books.hdbindex` and fill it with this content:

   ::: code-group

   ```sql [db/src/sap.capire.bookshop.Books.hdbindex]
   INDEX sap_capire_bookshop_Books_title_index
   ON sap_capire_bookshop_Books (title)
   ```

   :::

2. Run `cds build` again → this time you should see this additional line in the log output:

   ```log
   [cds] - done > wrote output to:
      ...
      gen/db/src/sap.capire.bookshop.Books.hdbindex // [!code focus]
   ```

## Deploying to SAP HANA

There are two ways to include SAP HANA in your setup: use SAP HANA in hybrid mode, that is, run your services locally and connect to your database in the cloud, or run your whole application on SAP Business Technology Platform. Both are possible in trial as well as in productive accounts.

To make the following configuration steps work, we assume that you've provisioned, set up, and started your SAP HANA Cloud instance, for example, in the trial environment. If you need to prepare your SAP HANA instance first, see How to Get an SAP HANA Cloud Instance for SAP Business Technology Platform, Cloud Foundry environment to learn about your options.

### Prepare for Production {#configure-hana .impl .node}

To prepare the project, execute:

```sh
cds add hana --for hybrid
```

This configures deployment for SAP HANA to use the `hdbtable` and `hdbview` formats. The configuration is added to a `[hybrid]` profile in your package.json (see the sketch below).
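For illustration, the added configuration might look roughly like this (a simplified sketch; the exact entries written by `cds add` can differ):

```json
{
  "cds": {
    "requires": {
      "[hybrid]": {
        "db": "hana"
      }
    }
  }
}
```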

::: tip The profile `hybrid` relates to the hybrid testing scenario
If you want to prepare your project for production and use the profile `production`, read the Deploy to Cloud Foundry guide.
:::

No further configuration is necessary for Node.js. For Java, see the Use SAP HANA as the Database for a CAP Java Application tutorial for the rest of the configuration.

### Using `cds deploy` for Ad-Hoc Deployments {#cds-deploy-hana .impl .node}

`cds deploy` lets you deploy just the database parts of the project to an SAP HANA instance. The server application (the Node.js or Java part) still runs locally and connects to the remote database instance, allowing for fast development roundtrips.

Make sure that you're logged in to Cloud Foundry with the correct target, that is, org and space. Then, in the project root folder, just execute:

```sh
cds deploy --to hana --profile hybrid
```

To connect to your SAP HANA Cloud instance, use `cds watch --profile hybrid`.

Behind the scenes, `cds deploy` does the following:

- Compiles the CDS model to SAP HANA files (usually in gen/db, or db/src/gen)
- Generates `.hdbtabledata` files for the CSV files in the project. If a `.hdbtabledata` file is already present next to the CSV files, no new file is generated.
- Creates a Cloud Foundry service of type `hdi-shared`, which creates an HDI container. You can also explicitly specify the name like so: `cds deploy --to hana:<myService>`.
- Starts `@sap/hdi-deploy` locally. If you need a tunnel to access the database, you can specify its address with `--tunnel-address <host:port>`.
- Stores the binding information in the .cdsrc-private.json file of your project (see the sketch after this list). With this information, `cds watch`/`cds run` can fetch the SAP HANA credentials at runtime, so that the server can connect to it.
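For illustration, the stored binding information in .cdsrc-private.json has roughly this shape (a sketch; field names and values are illustrative and depend on your Cloud Foundry target):

```json
{
  "requires": {
    "[hybrid]": {
      "db": {
        "binding": {
          "type": "cf",
          "org": "my-org",
          "space": "dev",
          "instance": "bookshop-db",
          "key": "bookshop-db-key"
        }
      }
    }
  }
}
```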

Learn more about the deployment using HDI.{.learn-more}

Learn more about hybrid testing using service bindings to Cloud services.{.learn-more}

If you run into issues, see the Troubleshooting guide.

### Using `cf deploy` or `cf push` {.impl .node}

See the Deploying to Cloud Foundry guide for information about how to deploy the complete application to SAP Business Technology Platform, including a dedicated deployer application for the SAP HANA database.

## Native SAP HANA Features

The HANA Service provides dedicated support for native SAP HANA features as follows.

### Geospatial Functions

CDS supports the special syntax for SAP HANA geospatial functions:

```cds
entity Geo as select from Foo {
  geoColumn.ST_Area() as area : Decimal,
  new ST_Point(2.25, 3.41).ST_X() as x : Decimal
};
```

Learn more in the SAP HANA Spatial Reference.{.learn-more}

### Spatial Grid Generators

SAP HANA Spatial has some built-in grid generator table functions. To use them in a CDS model, first define corresponding facade entities in CDS.

Example for function `ST_SquareGrid`:

```cds
@cds.persistence.exists
entity ST_SquareGrid(size: Double, geometry: hana.ST_GEOMETRY) {
  geom: hana.ST_GEOMETRY;
  i: Integer;
  j: Integer;
}
```

The function can then be called; parameters have to be passed by name:

```cds
entity V as select
  from ST_SquareGrid(size: 1.0, geometry: ST_GeomFromWkt('Point(1.5 -2.5)'))
{ geom, i, j };
```

### Functions Without Arguments

SAP HANA allows you to omit the parentheses for functions that don't expect arguments. For example:

```cds
entity Foo { key ID : UUID; }
entity Bar as select from Foo {
  ID, current_timestamp
};
```

Some of these, like `current_timestamp` in the previous example, are well-known standard functions that can be written without parentheses in CDS models. Many others, however, aren't known to the compiler, for example:

- `current_connection`
- `current_schema`
- `current_transaction_isolation_level`
- `current_utcdate`
- `current_utctime`
- `current_utctimestamp`
- `sysuuid`

To use these in CDS models, you have to add the parentheses so that the CDS generic support for using native features can kick in:

```cds
entity Foo { key ID : UUID; }
entity Bar as select from Foo {
  ID, current_timestamp,
  sysuuid() as sysid // [!code focus]
};
```

### Regex Functions

CDS supports SAP HANA Regex functions (`locate_regexpr`, `occurrences_regexpr`, `replace_regexpr`, and `substring_regexpr`), and SAP HANA aggregate functions with an additional order by clause in the argument list. Example:

```cds
locate_regexpr(pattern in name from 5)
first_value(name order by price desc)
```
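A complete usage sketch, assuming a Books entity with title and stock elements (entity and alias names are illustrative):

```cds
entity BookSearch as select from Books {
  ID,
  title,
  occurrences_regexpr('[0-9]' in title) as digitCount : Integer,
  first_value(title order by stock desc) as bestStocked : String
};
```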

Restriction: `COLLATE` isn't supported.

For other functions, where the syntax isn't supported by the compiler (for example, `xmltable(...)`), a native `.hdbview` can be used. See Using Native SAP HANA Artifacts for more details.

## HDI Schema Evolution

CAP supports database schema updates by detecting changes to the CDS model when executing the CDS build. If the underlying database offers built-in schema migration techniques, compatible changes can be applied to the database without any data loss or the need for additional migration logic. Incompatible changes like deletions are also detected, but require manual resolution, as they would lead to data loss.

| Change | Detected Automatically | Applied Automatically |
| --- | --- | --- |
| Adding fields | Yes | Yes |
| Deleting fields | Yes | No |
| Renaming fields | n/a <sup>1</sup> | No |
| Changing datatype of fields | Yes | No |
| Changing type parameters | Yes | Yes |
| Changing associations/compositions | Yes | No <sup>2</sup> |
| Renaming associations/compositions | n/a <sup>1</sup> | No |
| Renaming entities | n/a | No |

<sup>1</sup> Rename operations on fields or associations aren't detected as such. Instead, corresponding ADD and DROP statements are rendered, requiring manual resolution activities.

<sup>2</sup> Changing targets may lead to renamed foreign keys. This can cause hard-to-detect data integrity issues due to non-matching foreign-key values if target key names remain the same (for example, "ID").

::: warning No support for incompatible schema changes
Currently there's no framework support for incompatible schema changes that require scripted data migration steps (like changing a field constraint from `NULL` to `NOT NULL`). However, the CDS build does detect those changes and renders them as non-executable statements, requesting the user to take manual resolution steps. We recommend avoiding those changes in productive environments.
:::

### Schema Evolution and Multitenancy/Extensibility

There's full support for schema evolution when the `cds-mtxs` library is used for multitenancy handling. It ensures that all schema changes during base-model upgrades are rolled out to the tenant databases.

::: warning Tenant-specific extensibility using the cds-mtxs library isn't supported yet
Right now, you can't activate extensions on entities annotated with `@cds.persistence.journal`.
:::

### Schema Updates with SAP HANA {#schema-updates-with-sap-hana}

All schema updates in SAP HANA are applied using SAP HANA Deployment Infrastructure (HDI) design-time artifacts, which are auto-generated during CDS build execution.

Schema updates using `.hdbtable` deployments are a challenge for tables with large data volume. Schema changes with `.hdbtable` are applied using temporary table generation to preserve the data. As this could lead to long deployment times, support for the `.hdbmigrationtable` artifact type has been added. The Migration Table artifact type uses explicit versioning and migration tasks: modifications of the database table are explicitly specified in the design-time file and carried out on the database table exactly as specified. This saves the cost of an internal table-copy operation. When a new version of an already existing table is deployed, HDI performs only the migration steps that haven't been applied yet.

#### Deploy Artifact Transitions as Supported by HDI {#deploy-artifact-transitions}

| Current format | hdbcds | hdbtable | hdbmigrationtable |
| --- | --- | --- | --- |
| hdbcds | – | yes | n/a |
| hdbtable | n/a | – | yes |
| hdbmigrationtable | n/a | yes | – |

::: warning
Direct migration from `.hdbcds` to `.hdbmigrationtable` isn't supported by HDI. A deployment using `.hdbtable` is required up front.

Learn more in the Enhance Project Configuration for SAP HANA Cloud section.{.learn-more}

During the transition from `.hdbtable` to `.hdbmigrationtable`, you have to deploy version=1 of the `.hdbmigrationtable` artifact, which must not include any migration steps.
:::

HDI supports the hdbcds → hdbtable → hdbmigrationtable migration flow without data loss. Even going back from `.hdbmigrationtable` to `.hdbtable` is possible; keep in mind that you lose the migration history in this case. For all transitions you want to execute in HDI, you need to specify an undeploy allowlist, as described in HDI Delta Deployment and Undeploy Allow List in the SAP HANA documentation and sketched below.
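For example, when transitioning from `.hdbtable` to `.hdbmigrationtable`, an undeploy.json entry along these lines allows HDI to drop the old `.hdbtable` design-time artifacts (a sketch following the undeploy.json format shown in Undeploying Artifacts below):

```json
[
  "src/gen/**/*.hdbtable"
]
```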

#### Enabling `hdbmigrationtable` Generation for Selected Entities During CDS Build {#enabling-hdbmigrationtable-generation}

If you're migrating an already deployed scenario to `.hdbmigrationtable` deployment, you have to consider the remarks in Deploy Artifact Transitions as Supported by HDI.

By default, all entities are still compiled to `.hdbtable`, and you selectively choose the entities for which you want to build `.hdbmigrationtable` by annotating them with `@cds.persistence.journal`.

Example:

```cds
namespace data.model;

@cds.persistence.journal
entity LargeBook {
  key id : Integer;
  title : String(100);
  content : LargeString;
}
```

The CDS build generates `.hdbmigrationtable` source files for annotated entities, as well as a last-dev/csn.json source file representing the CDS model state of the last build.

These source files have to be checked into the version control system.
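For illustration, the generated `.hdbmigrationtable` file for the entity above might initially look roughly like this (column types follow the usual CDS-to-HANA mapping; the exact output may differ):

```txt
== version=1
COLUMN TABLE data_model_LargeBook (
  id INTEGER NOT NULL,
  title NVARCHAR(100),
  content NCLOB,
  PRIMARY KEY(id)
)
```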

Subsequent model changes are applied automatically as respective migration versions, including the required schema update statements to accomplish the new target state. There are cases where you have to resolve or refactor the generated statements, for example when reducing field lengths. As such changes can't be executed without data loss (for example, `String(100)` → `String(50)`), the required migration steps are only added as comments for you to process explicitly.

Example:

```sql
>>>>> Manual resolution required - DROP statements causing data loss are disabled
>>>>> by default.
>>>>> You may either:
>>>>>   uncomment statements to allow incompatible changes, or
>>>>>   refactor statements, e.g. replace DROP/ADD by single RENAME statement
>>>>> After manual resolution delete all lines starting with >>>>>
-- ALTER TABLE my_bookshop_Books DROP (title);
-- ALTER TABLE my_bookshop_Books ADD (title NVARCHAR(50));
```

Changing the type of a field causes the CDS build to create a corresponding `ALTER TABLE` statement. Data type conversion rules are applied by the SAP HANA database as part of the deployment step. This may cause the deployment to fail if the column contents can't be converted to the new format.

Examples:

1. Changing the type of a field from `String` to `Integer` may cause tenant updates to fail if existing content can't be converted.
2. Changing the type of a field from `Decimal` to `Integer` can succeed, but decimal places are truncated. Conversion fails if the content exceeds the maximum `Integer` length.

We recommend keeping `.hdbtable` deployment for entities where you expect low data volume. Every `.hdbmigrationtable` artifact becomes part of your versioned source code, creating a new migration version on every model change/build cycle. In turn, each such migration can require manual resolution. You can switch large-volume tables to `.hdbmigrationtable` at any time, keeping in mind that the existing `.hdbtable` design-time artifact needs to be undeployed.

::: tip
Sticking to `.hdbtable` for the actual application development phase avoids lots of initial migration versions that would need to be applied to the database schema.
:::

The CDS build performs rudimentary checks on generated `.hdbmigrationtable` files:

- The build fails if inconsistencies are encountered between the generated `.hdbmigrationtable` files and the last-dev/csn.json model state. For example, the last migration version not matching the table version is such an inconsistency.
- The build fails if manual resolution comments starting with `>>>>>` exist in one of the generated `.hdbmigrationtable` files. This ensures that manual resolution is performed before deployment.

### Native Database Clauses {#schema-evolution-native-db-clauses}

Not all clauses supported by SQL can be written directly in CDL syntax. To use native database clauses in a CAP CDS model, you can provide arbitrary SQL snippets with the annotations `@sql.prepend` and `@sql.append`. In this section, we're focusing on schema evolution specific details.

Schema evolution requires that any changes are applied by corresponding `ALTER` statements. See the ALTER TABLE statement reference for more information. A new migration version is generated whenever an `@sql.append` or `@sql.prepend` annotation is added, changed, or removed. `ALTER` statements define the individual changes that create the final database schema. This schema has to match the schema defined by the `TABLE` statement in the `.hdbmigrationtable` artifact. Note that the compiler doesn't evaluate or process these SQL snippets: any snippet is taken as is and inserted into the `TABLE` statement and the corresponding `ALTER` statement. The deployment fails in case of syntax errors.

CDS model:

```cds
@cds.persistence.journal
@sql.append: 'PERSISTENT MEMORY ON'
entity E {
  ...,
  @sql.append: 'FUZZY SEARCH INDEX ON'
  text: String(100);
}
```

Result in the `.hdbmigrationtable` file:

```txt
== version=2
COLUMN TABLE E (
  ...,
  text NVARCHAR(100) FUZZY SEARCH INDEX ON
) PERSISTENT MEMORY ON

== migration=2
ALTER TABLE E PERSISTENT MEMORY ON;
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON);
```

It's important to understand that, during deployment, new migration versions are applied to the existing database schema. If the resulting schema doesn't match the schema as defined by the `TABLE` statement, deployment fails and any changes are rolled back. In consequence, when removing or replacing an existing `@sql.append` annotation, the original `ALTER` statements need to be undone. As the required statements can't be determined automatically, manual resolution is required. The CDS build generates comments starting with `>>>>>` in order to provide some guidance and enforce manual resolution.

Generated file with comments:

```txt
== migration=3
>>>>> Manual resolution required - insert ALTER statement(s) as described below.
>>>>> After manual resolution delete all lines starting with >>>>>
>>>>> Insert ALTER statement for: annotation @sql.append of artifact E has been removed (previous value: "PERSISTENT MEMORY ON")
>>>>> Insert ALTER statement for: annotation @sql.append of element E:text has been removed (previous value: "FUZZY SEARCH INDEX ON")
```

Manually resolved file:

```txt
== migration=3
ALTER TABLE E PERSISTENT MEMORY DEFAULT;
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX OFF);
```

Appending text to an existing annotation is possible without manual resolution; a valid `ALTER` statement is generated in this case. For example, appending the `NOT NULL` column constraint to the existing `FUZZY SEARCH INDEX ON` annotation generates the following statement:

```sql
ALTER TABLE E ALTER (text NVARCHAR(100) FUZZY SEARCH INDEX ON NOT NULL);
```

::: warning
You can use `@sql.append` to partition your table initially, but you can't subsequently change the partitions using schema evolution techniques, as altering partitions isn't supported yet.
:::
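For illustration, initial partitioning could be specified like this (a sketch; the PARTITION BY clause is native SAP HANA SQL, and the entity and its elements are made up):

```cds
@cds.persistence.journal
@sql.append: 'PARTITION BY HASH (ID) PARTITIONS 4'
entity Orders {
  key ID : UUID;
  total  : Decimal(10,2);
}
```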

### Advanced Options

The following CDS configuration options are supported to manage `.hdbmigrationtable` generation.

::: warning This hasn't been finalized yet. :::

```jsonc
{
  "hana" : {
    "journal": {
      "enable-drop": false,
      "change-mode": "alter" // "drop"
    },
    // ...
  }
}
```

The "enable-drop" option determines whether incompatible model changes are rendered as is (true) or manual resolution is required (false). The default value is false.

The `change-mode` option determines whether `ALTER TABLE ... ALTER` (`"alter"`) or `ALTER TABLE ... DROP` (`"drop"`) statements are rendered for data type related changes. To ensure that any kind of model change can be successfully deployed to the database, you can switch the `change-mode` to `"drop"`, keeping in mind that any existing data will be deleted for the corresponding column. See hdbmigrationtable Generation for more details. The default value is `"alter"`.
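For illustration, with `"change-mode": "drop"` a field-length reduction like the earlier `String(100)` → `String(50)` example would be rendered as executable DROP/ADD statements instead of commented ones, deleting the column's data (a sketch based on the example above):

```sql
ALTER TABLE my_bookshop_Books DROP (title);
ALTER TABLE my_bookshop_Books ADD (title NVARCHAR(50));
```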

## Caveats

### CSV Data Gets Overridden

HDI deploys CSV data as `.hdbtabledata`, which assumes exclusive ownership of the data. It's overridden with the next application deployment; hence:

::: tip

Only use CSV files for configuration data that can't be changed by application users.

:::

Yet, if you need to support initial data with user changes, you can use the `include_filter` option that `.hdbtabledata` offers, as sketched below.
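A sketch of a manually maintained `.hdbtabledata` file using `include_filter`, assuming the bookshop Books table and its CSV file (consult the SAP HANA documentation for the exact format):

```json
{
  "format_version": 1,
  "imports": [
    {
      "target_table": "SAP_CAPIRE_BOOKSHOP_BOOKS",
      "source_data": {
        "data_type": "CSV",
        "file_name": "sap.capire.bookshop-Books.csv",
        "has_header": true
      },
      "import_settings": {
        "include_filter": [ { "ID": "201" } ]
      }
    }
  ]
}
```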

### Undeploying Artifacts

As documented in the HDI Deployer docs, an HDI deployment by default never deletes artifacts. So, if you remove an entity or CSV files, the respective tables and content remain in the database.

By default, `cds add hana` creates an undeploy.json like this:

::: code-group

```json [db/undeploy.json]
[
  "src/gen/**/*.hdbview",
  "src/gen/**/*.hdbindex",
  "src/gen/**/*.hdbconstraint",
  "src/gen/**/*_drafts.hdbtable"
]
```

:::

If you need to remove deployed CSV files, also add this entry:

::: code-group

```json [db/undeploy.json]
[
  ...
  "src/gen/**/*.hdbtabledata"
]
```

:::

See this troubleshooting entry for more information.{.learn-more}