321 setup pypi publishing (#332)
Setup PyPI workflow and release version 0.4.2.
mgdenno authored Nov 18, 2024
1 parent 4a753dc commit e7ca21c
Showing 9 changed files with 116 additions and 124 deletions.
78 changes: 78 additions & 0 deletions .github/workflows/publish-package.yml
@@ -0,0 +1,78 @@
name: test-build-publish

# on: pull_request
on:
  push:
    tags: [ 'v*.*.*' ]
# on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      #----------------------------------------------
      # check out repo and set up python
      #----------------------------------------------
      - name: Check out repository
        uses: actions/checkout@v4
        with:
          lfs: true
      - name: Set up python
        id: setup-python
        uses: actions/setup-python@v5
        with:
          python-version: '3.10'
      #----------------------------------------------
      # install & configure poetry
      #----------------------------------------------
      - name: Install Poetry
        uses: snok/install-poetry@v1
        with:
          virtualenvs-create: true
          virtualenvs-in-project: true
          virtualenvs-path: .venv
          installer-parallel: true

      #----------------------------------------------
      # load cached venv if cache exists
      #----------------------------------------------
      - name: Load cached venv
        id: cached-poetry-dependencies
        uses: actions/cache@v4
        with:
          path: .venv
          key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
      #----------------------------------------------
      # install dependencies if cache does not exist
      #----------------------------------------------
      - name: Install dependencies
        if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true'
        run: poetry install --no-interaction --no-root
      #----------------------------------------------
      # install the root project, if required
      #----------------------------------------------
      - name: Install project
        run: poetry install --no-interaction
      #----------------------------------------------
      # install JARs
      #----------------------------------------------
      - name: Install JARs
        run: |
          source .venv/bin/activate
          python -m teehr.utils.install_spark_jars
      #----------------------------------------------
      # run test suite
      #----------------------------------------------
      - name: Run tests
        run: |
          source .venv/bin/activate
          pytest tests/
          # coverage report
      #----------------------------------------------
      # build and publish package
      #----------------------------------------------
      - name: Build and publish package
        run: |
          source .venv/bin/activate
          poetry build
          poetry publish --username __token__ --password ${{ secrets.POETRY_PYPI_TOKEN_TEEHR }}
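With this trigger, the workflow runs only when a pushed tag matches `v*.*.*`. A quick local sanity check of that pattern can be sketched with shell `case` globbing (`matches_release_tag` is a hypothetical helper, and shell globbing only approximates GitHub's tag-filter syntax):

```shell
# Sanity-check which refs would match the workflow's 'v*.*.*' tag trigger.
matches_release_tag() {
  case "$1" in
    v*.*.*) echo "yes" ;;  # a release tag: the workflow would run
    *)      echo "no"  ;;  # anything else: the workflow stays idle
  esac
}

matches_release_tag "v0.4.2"   # -> yes
matches_release_tag "main"     # -> no
```

In practice, pushing a matching tag (e.g. `git tag v0.4.2 && git push origin v0.4.2`) is what kicks off the test, build, and publish steps.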
9 changes: 5 additions & 4 deletions README.md
@@ -28,16 +28,17 @@ cd teehr_examples
python3 -m venv .venv
source .venv/bin/activate

# Install using pip. The version can be changed to install a different version.
pip install 'teehr @ git+https://github.com/RTIInternational/teehr@v0.4.1'
# Install using pip.
# Starting with version 0.4.1, TEEHR is available on PyPI.
pip install teehr

# Download the required JAR files for Spark to interact with AWS S3.
python -m teehr.utils.install_spark_jars
```
Use Docker
```bash
$ docker build -t teehr:v0.4.1 .
$ docker run -it --rm --volume $HOME:$HOME -p 8888:8888 teehr:v0.4.1 jupyter lab --ip 0.0.0.0 $HOME
$ docker build -t teehr:v0.4.2 .
$ docker run -it --rm --volume $HOME:$HOME -p 8888:8888 teehr:v0.4.2 jupyter lab --ip 0.0.0.0 $HOME
```

## Examples
11 changes: 11 additions & 0 deletions docs/sphinx/changelog/index.rst
@@ -1,6 +1,17 @@
Release Notes
=============

0.4.2 - 2024-10-18
--------------------

Added
^^^^^
* A test-build-publish workflow to push to PyPI

Changed
^^^^^^^
* None

0.4.1 - 2024-10-15
--------------------

8 changes: 5 additions & 3 deletions docs/sphinx/development/index.rst
@@ -89,9 +89,11 @@ This document describes the release process, which has some manual steps to complete.

Create branch with the following updated to the new version (find and replace version number):

- ``version.txt``
- ``README.md``
- ``pyproject.toml``
- `version.txt`
- `README.md`
- `pyproject.toml`
- `src/teehr/__init__.py`
- `docs/sphinx/getting_started/index.rst`
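The find-and-replace step above can be sketched as a small script (a sketch only; `bump_version` and `bump_files` are hypothetical helpers, and the file list is taken from the bullets above):

```python
from pathlib import Path

# Files the release process says must carry the new version string.
VERSION_FILES = [
    "version.txt",
    "README.md",
    "pyproject.toml",
    "src/teehr/__init__.py",
    "docs/sphinx/getting_started/index.rst",
]

def bump_version(text: str, old: str, new: str) -> str:
    """Replace every occurrence of the old version string with the new one."""
    return text.replace(old, new)

def bump_files(root: Path, old: str, new: str) -> None:
    """Apply the find-and-replace to each release file under the repo root."""
    for rel in VERSION_FILES:
        path = root / rel
        path.write_text(bump_version(path.read_text(), old, new))
```

A plain string replace like this will also touch unrelated occurrences of the old version (e.g. in prose), so reviewing the resulting diff before committing is still the safety net.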

Update the changelog at ``docs/sphinx/changelog/index.rst`` to reflect the changes included in the release.

13 changes: 7 additions & 6 deletions docs/sphinx/getting_started/index.rst
@@ -14,9 +14,9 @@ TEEHR requires the following dependencies:
* Java 8 or later for Spark (we use 17)


We do not currently push TEEHR to PyPI, so the easiest way to install it is directly from GitHub.
The easiest way to install TEEHR is from PyPI using `pip`.
If using `pip`, we recommend installing TEEHR in a virtual environment.
The code below should create a new virtual environment and install TEEHR in it.
The code below creates a new virtual environment and installs TEEHR in it.

.. code-block:: python
@@ -26,8 +26,9 @@ The code below should create a new virtual environment and install TEEHR in it.
python3 -m venv .venv
source .venv/bin/activate
# Using pip. The version can be changed to install a different version.
pip install 'teehr @ git+https://github.com/RTIInternational/teehr@v0.4.1'
# Install using pip.
# Starting with version 0.4.1, TEEHR is available on PyPI.
pip install teehr
# Download the required JAR files for Spark to interact with AWS S3.
python -m teehr.utils.install_spark_jars
@@ -36,8 +37,8 @@ Or, if you do not want to install TEEHR in your own virtual environment, you can

.. code-block:: bash
docker build -t teehr:v0.4.1 .
docker run -it --rm --volume $HOME:$HOME -p 8888:8888 teehr:v0.4.1 jupyter lab --ip 0.0.0.0 $HOME
docker build -t teehr:v0.4.2 .
docker run -it --rm --volume $HOME:$HOME -p 8888:8888 teehr:v0.4.2 jupyter lab --ip 0.0.0.0 $HOME
Project Objectives
------------------
115 changes: 6 additions & 109 deletions docs/sphinx/user_guide/notebooks/06_grouping_and_filtering.ipynb
@@ -103,26 +103,13 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {
"tags": [
"hide-output"
]
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"24/11/17 10:28:01 WARN Utils: Your hostname, ubuntu3 resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface enp0s3)\n",
"24/11/17 10:28:01 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address\n",
"Setting default log level to \"WARN\".\n",
"To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).\n",
"24/11/17 10:28:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable\n",
"24/11/17 10:28:02 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.\n"
]
}
],
"outputs": [],
"source": [
"from pathlib import Path\n",
"import shutil\n",
@@ -139,113 +126,23 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>name</th>\n",
" <th>description</th>\n",
" <th>url</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>p0_2_location_example</td>\n",
" <td>Example evaluation datsets with 2 USGS gages</td>\n",
" <td>s3a://ciroh-rti-public-data/teehr-data-warehou...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>p1_camels_daily_streamflow</td>\n",
" <td>Daily average streamflow at ther Camels basins</td>\n",
" <td>s3a://ciroh-rti-public-data/teehr-data-warehou...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>p2_camels_hourly_streamflow</td>\n",
" <td>Hourly instantaneous streamflow at ther Camels...</td>\n",
" <td>s3a://ciroh-rti-public-data/teehr-data-warehou...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>p3_usgs_hourly_streamflow</td>\n",
" <td>Hourly instantaneous streamflow at USGS CONUS ...</td>\n",
" <td>s3a://ciroh-rti-public-data/teehr-data-warehou...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" name \\\n",
"0 p0_2_location_example \n",
"1 p1_camels_daily_streamflow \n",
"2 p2_camels_hourly_streamflow \n",
"3 p3_usgs_hourly_streamflow \n",
"\n",
" description \\\n",
"0 Example evaluation datsets with 2 USGS gages \n",
"1 Daily average streamflow at ther Camels basins \n",
"2 Hourly instantaneous streamflow at ther Camels... \n",
"3 Hourly instantaneous streamflow at USGS CONUS ... \n",
"\n",
" url \n",
"0 s3a://ciroh-rti-public-data/teehr-data-warehou... \n",
"1 s3a://ciroh-rti-public-data/teehr-data-warehou... \n",
"2 s3a://ciroh-rti-public-data/teehr-data-warehou... \n",
"3 s3a://ciroh-rti-public-data/teehr-data-warehou... "
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"# List the evaluations in the S3 bucket\n",
"ev.list_s3_evaluations()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {
"tags": [
"hide-output"
]
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"24/11/17 10:28:09 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties\n",
"24/11/17 10:28:33 WARN SparkStringUtils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.\n",
" \r"
]
}
],
"outputs": [],
"source": [
"ev.clone_from_s3(\n",
" evaluation_name=\"p1_camels_daily_streamflow\",\n",
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "teehr"
version = "0.4.1"
version = "0.4.2"
description = "Tools for Exploratory Evaluation in Hydrologic Research"
authors = [
"RTI International",
2 changes: 2 additions & 0 deletions src/teehr/__init__.py
@@ -1,3 +1,5 @@
__version__ = "0.4.2"

from teehr.evaluation.evaluation import Evaluation # noqa
from teehr.models.metrics.metric_models import Metrics # noqa
from teehr.models.metrics.metric_enums import Operators # noqa
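With the version now duplicated in `version.txt` and `src/teehr/__init__.py`, a release can silently ship mismatched numbers. A minimal consistency check (a hypothetical helper, not part of this commit) might look like:

```python
def versions_in_sync(version_txt: str, init_source: str) -> bool:
    """Compare version.txt contents against the __version__ line in __init__.py."""
    file_version = version_txt.strip()
    for line in init_source.splitlines():
        if line.startswith("__version__"):
            # Parse the right-hand side of: __version__ = "0.4.2"
            init_version = line.split("=", 1)[1].strip().strip('"').strip("'")
            return init_version == file_version
    return False  # no __version__ line found
```

Run against the two files during CI (or as a pre-release check), this would catch a bump applied to one file but not the other.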
2 changes: 1 addition & 1 deletion version.txt
@@ -1 +1 @@
0.4.1
0.4.2
