This project provides the Oozie workflows used to:
- Create the set of Hive tables used by occurrence downloads: `occurrence`, `occurrence_avro` and `occurrence_multimedia`.
- Run occurrence downloads through GBIF.org using the occurrence-ws.
An Oozie coordinator job periodically builds the Hive tables used for occurrence downloads.
Installing a new version kills the current coordinator job, then copies and installs the new job using a custom Maven build that generates the Hive scripts from the table definitions.
To install the Oozie coordinator, use the script `install-workflow.sh`, which supports the following parameters:
- Environment (required): an existing profile in the Maven settings file of the gbif-configuration repository.
- GitHub authentication token (required): authentication token used to access the gbif-configuration repository.
- Source directory (defaults to `hdfs://ha-nn/data/hdfsview/occurrence/occurrence/`): directory that stores the input Avro files.
- Occurrence base table name (defaults to `occurrence`): common table name used for the Avro, HDFS and multimedia tables.
The Oozie coordinator job will be installed in the HDFS directory `occurrence-download-workflows-environmentName`.
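As a minimal sketch of an invocation, assuming the parameters are passed positionally in the order listed above (the token value is a placeholder):

```
./install-workflow.sh dev <github-token> hdfs://ha-nn/data/hdfsview/occurrence/occurrence/ occurrence
```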
In general terms, the workflow follows these steps:
- Take an HDFS snapshot of the source directory; the snapshot is named after the Oozie workflow ID (see the sketch after this list).
- Run a set of Hive statements to build an external Avro table that points to the source directory, create the HDFS table that contains all the occurrence records, and build the table with the associated media.
- Delete the HDFS snapshot.
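The snapshot steps correspond to the standard HDFS snapshot commands. A minimal sketch, assuming the source directory has already been made snapshottable by an administrator, and using a made-up workflow ID:

```
# Take a snapshot of the source directory, named after the Oozie workflow ID
hdfs dfs -createSnapshot /data/hdfsview/occurrence/occurrence 0000123-230101000000000-oozie-oozi-W

# ... run the Hive statements while the snapshot protects the input files
#     from concurrent changes ...

# Delete the snapshot once the tables have been built
hdfs dfs -deleteSnapshot /data/hdfsview/occurrence/occurrence 0000123-230101000000000-oozie-oozi-W
```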
If the current Occurrence project version imposes a new schema for the Hive tables, run the script `run-workflow-schema-migration.sh`.
The Oozie workflow executed is the same one used by the coordinator job described above, the only difference being that the `schema_change` flag is enabled.
With the flag active, the produced tables carry the prefix `new_` in their names.
Existing tables are then renamed with the prefix `old_`, and the `new_` tables are renamed to the names expected by the download workflows.
The Oozie coordinator job will be installed in the HDFS directory `occurrence-download-workflows-new-schema-environmentName`.
After this script and workflow have run successfully, the download workflows must be updated using the `install-workflow.sh` script.
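A sketch of an invocation, assuming the script takes the same environment and token parameters as `install-workflow.sh`:

```
./run-workflow-schema-migration.sh dev <github-token>
```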
Alternatively to the process described above, the tables can be created and swapped manually by following these steps:
- Build the occurrence-download project; this creates a directory `occurrence-download-workflows-dev` in the Maven target directory.
- Copy the file `target/occurrence-download-workflows-dev/create-tables/avro-schemas/occurrence-hdfs-record.avsc` to a known location in HDFS.
- From a host with access to the Hive CLI:
  - Download the occurrence-hive and occurrence-download JARs from Nexus, for example:
    ```
    curl https://repository.gbif.org/repository/releases/org/gbif/occurrence/occurrence-download/0.150/occurrence-download-0.150.jar -o occurrence-download.jar
    curl https://repository.gbif.org/repository/releases/org/gbif/occurrence/occurrence-hive/0.150/occurrence-hive-0.150.jar -o occurrence-hive.jar
    ```
- Run the Hive CLI:
  ```
  sudo -u hdfs hive
  ```
- Add the downloaded JAR files:
  ```
  ADD JAR occurrence-hive.jar;
  ADD JAR occurrence-download.jar;
  ```
- In the file `target/occurrence-download-workflows-dev/create-tables/hive-scripts/create-occurrence-avro.q`:
  - Create an Avro table using the schema file copied in the step above: change the path `hdfsPath` to that location, and change `hdfsDataLocation` to the path where the data is located, usually `/data/hdfsview/occurrence/occurrence`:
    ```
    CREATE EXTERNAL TABLE occurrence_avro_new
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    LOCATION 'hdfsDataLocation'
    TBLPROPERTIES ('avro.schema.url'='hdfsPath/occurrence-hdfs-record.avsc');
    ```
  - Replace `${occurrenceTable}` with `occurrence_new`, `${occurrenceTable}_avro` with `occurrence_avro_new`, and `${occurrenceTable}_multimedia` with `occurrence_multimedia_new` (a sed sketch is shown after these steps).
  - Run every statement of the script `hive-scripts/create-occurrence-avro.q`.
- Test the new tables and, if needed, rename the Hive tables:
  ```
  ALTER TABLE occurrence RENAME TO occurrence_old;
  ALTER TABLE occurrence_multimedia RENAME TO occurrence_multimedia_old;
  ALTER TABLE occurrence_avro RENAME TO occurrence_avro_old;
  ALTER TABLE occurrence_new RENAME TO occurrence;
  ALTER TABLE occurrence_multimedia_new RENAME TO occurrence_multimedia;
  ALTER TABLE occurrence_avro_new RENAME TO occurrence_avro;
  ```
- Install the new workflow by running the Jenkins job designated for that purpose; the job runs the `install-workflow.sh` script and accepts a git branch or tag as a parameter.
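One way to script the placeholder replacement and run the statements non-interactively; a minimal sketch, assuming you work on a local copy of `create-occurrence-avro.q`. The longer placeholders are replaced first so that the bare `${occurrenceTable}` substitution does not partially rewrite them:

```
# Replace the longer placeholders first to avoid partial rewrites
sed -i \
  -e 's/\${occurrenceTable}_multimedia/occurrence_multimedia_new/g' \
  -e 's/\${occurrenceTable}_avro/occurrence_avro_new/g' \
  -e 's/\${occurrenceTable}/occurrence_new/g' \
  create-occurrence-avro.q

# Run the whole script; the ADD JAR statements from the step above still
# need to run first, e.g. by prepending them to the file
sudo -u hdfs hive -f create-occurrence-avro.q
```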