
BioLookup


Get metadata and ontological information about biomedical entities.

πŸ’ͺ Getting Started

The Biolookup Service has an endpoint /api/lookup/<curie> for retrieving metadata and ontological information about a biomedical entity via its compact identifier (CURIE).

import requests

res = requests.get("http://localhost:5000/api/lookup/doid:14330").json()
assert res["name"] == "Parkinson's disease"
assert res["identifier"] == "14330"
assert res["prefix"] == "doid"
assert res["definition"] is not None  # not shown for brevity

The INDRA Lab hosts an instance of the Biolookup Service at http://biolookup.io, so you can alternatively use http://biolookup.io/api/lookup/doid:14330.

The same can be accomplished using the biolookup package:

import biolookup

res = biolookup.lookup("doid:14330")
assert res["name"] == "Parkinson's disease"
# ... same as before

If you've configured the BIOLOOKUP_SQLALCHEMY_URI environment variable (or any other configuration method supported by pystow) to point directly at the database for an instance of the Biolookup Service, the package will make a direct connection to the database instead of using the web-based API.
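For example, the connection can be configured either through the environment or through pystow's configuration file (the URI below is illustrative, not a real deployment):

```shell
# Option 1: environment variable (illustrative example URI)
export BIOLOOKUP_SQLALCHEMY_URI="postgresql+psycopg2://postgres:biolookup@localhost:5434/biolookup"

# Option 2: pystow configuration file at ~/.config/biolookup.ini
#   [biolookup]
#   sqlalchemy_uri = postgresql+psycopg2://postgres:biolookup@localhost:5434/biolookup
```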

πŸ•ΈοΈ Running the Lookup App

You can run the lookup app in local mode with:

$ biolookup web --lazy

This means that data are loaded lazily into memory using pyobo. If you have a large external database, you can run in remote mode with the --sql flag:

$ biolookup web --sql --uri postgresql+psycopg2://postgres:biolookup@localhost:5434/biolookup

If --uri is not given for the web subcommand, it uses pystow.get_config("biolookup", "sqlalchemy_uri") to look it up from the BIOLOOKUP_SQLALCHEMY_URI environment variable or from ~/.config/biolookup.ini. If none is given, it defaults to a SQLite database in ~/.data/biolookup/biolookup.db.
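The resolution order described above can be sketched as follows (a minimal illustration; `resolve_uri` is a hypothetical helper, not part of the biolookup CLI):

```python
import os


def resolve_uri(cli_uri=None):
    """Sketch of the database URI resolution order described above."""
    # 1. An explicit --uri flag wins
    if cli_uri:
        return cli_uri
    # 2. Fall back to the BIOLOOKUP_SQLALCHEMY_URI environment variable
    #    (pystow.get_config also consults ~/.config/biolookup.ini)
    env_uri = os.environ.get("BIOLOOKUP_SQLALCHEMY_URI")
    if env_uri:
        return env_uri
    # 3. Default to a local SQLite database
    return "sqlite:///" + os.path.expanduser("~/.data/biolookup/biolookup.db")
```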

πŸ—‚οΈ Load the Database

$ biolookup load --uri postgresql+psycopg2://postgres:biolookup@localhost:5434/biolookup

If --uri is not given for the load subcommand, it uses pystow.get_config("biolookup", "sqlalchemy_uri") to look it up from the BIOLOOKUP_SQLALCHEMY_URI environment variable or from ~/.config/biolookup.ini. If none is given, it defaults to a SQLite database at ~/.data/biolookup/biolookup.db.

πŸš€ Installation

The most recent release can be installed from PyPI with uv:

$ uv pip install biolookup

or with pip:

$ python3 -m pip install biolookup

The most recent code and data can be installed directly from GitHub with uv:

$ uv --preview pip install git+https://github.com/biopragmatics/biolookup.git

or with pip:

$ UV_PREVIEW=1 python3 -m pip install git+https://github.com/biopragmatics/biolookup.git

Note that this requires UV_PREVIEW mode to be enabled until the uv build backend becomes a stable feature.

πŸ‘ Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.

πŸ‘‹ Attribution

βš–οΈ License

The code in this package is licensed under the MIT License.

🎁 Support

This project has been supported by the following organizations (in alphabetical order):

πŸ’° Funding

This project has been supported by the following grants:

Funding Body Program Grant
DARPA Automating Scientific Knowledge Extraction (ASKE) HR00111990009

πŸͺ Cookiecutter

This package was created with @audreyfeldroy's cookiecutter package using @cthoyt's cookiecutter-snekpack template.

πŸ› οΈ For Developers

See developer instructions

The final section of the README is for those who want to get involved by making a code contribution.

Development Installation

To install in development mode, use the following:

$ git clone https://github.com/biopragmatics/biolookup.git
$ cd biolookup
$ uv --preview pip install -e .

Alternatively, install using pip:

$ UV_PREVIEW=1 python3 -m pip install -e .

Note that this requires UV_PREVIEW mode to be enabled until the uv build backend becomes a stable feature.

Updating Package Boilerplate

This project uses cruft to keep boilerplate (i.e., configuration, contribution guidelines, documentation configuration) up-to-date with the upstream cookiecutter package. Install cruft with either uv tool install cruft or python3 -m pip install cruft then run:

$ cruft update

More info on Cruft's update command is available here.

πŸ₯Ό Testing

After cloning the repository and installing tox with uv tool install tox --with tox-uv or python3 -m pip install tox tox-uv, the unit tests in the tests/ folder can be run reproducibly with:

$ tox -e py

Additionally, these tests are automatically re-run with each commit in a GitHub Action.

πŸ“– Building the Documentation

The documentation can be built locally using the following:

$ git clone https://github.com/biopragmatics/biolookup.git
$ cd biolookup
$ tox -e docs
$ open docs/build/html/index.html

Building the documentation automatically installs the package as well as the docs extra specified in pyproject.toml. Sphinx plugins like texext can be added there. Additionally, they need to be added to the extensions list in docs/source/conf.py.
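For example, an excerpt of docs/source/conf.py might look like the following (the extension list is illustrative, not the project's actual configuration):

```python
# docs/source/conf.py (illustrative excerpt)
extensions = [
    "sphinx.ext.autodoc",      # pull documentation from docstrings
    "sphinx.ext.intersphinx",  # link to other projects' documentation
    "texext",                  # example plugin from the text above;
                               # it must also be listed in the docs extra
]
```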

The documentation can be deployed to ReadTheDocs using this guide. The .readthedocs.yml YAML file contains all the configuration you'll need. You can also set up continuous integration on GitHub to check not only that Sphinx can build the documentation in an isolated environment (i.e., with tox -e docs-test) but also that ReadTheDocs can build it too.

Configuring ReadTheDocs

  1. Log in to ReadTheDocs with your GitHub account to install the integration at https://readthedocs.org/accounts/login/?next=/dashboard/
  2. Import your project by navigating to https://readthedocs.org/dashboard/import then clicking the plus icon next to your repository
  3. You can rename the repository on the next screen using a more stylized name (i.e., with spaces and capital letters)
  4. Click next, and you're good to go!

πŸ“¦ Making a Release

Configuring Zenodo

Zenodo is a long-term archival system that assigns a DOI to each release of your package.

  1. Log in to Zenodo via GitHub with this link: https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page that lists all of your organizations and asks you to approve installing the Zenodo app on GitHub. Click "grant" next to any organizations you want to enable the integration for, then click the big green "approve" button. This step only needs to be done once.
  2. Navigate to https://zenodo.org/account/settings/github/, which lists all of your GitHub repositories (both under your username and in any organizations you enabled). Click the on/off toggle for any relevant repositories. When you make a new repository, you'll have to come back to this page.

After these steps, you're ready to go! After you make a "release" on GitHub (steps for this are below), you can navigate to https://zenodo.org/account/settings/github/repository/biopragmatics/biolookup to see the DOI for the release and a link to the Zenodo record for it.

Registering with the Python Package Index (PyPI)

You only have to do the following steps once.

  1. Register for an account on the Python Package Index (PyPI)
  2. Navigate to https://pypi.org/manage/account and make sure you have verified your email address. A verification email might not have been sent by default, so you might have to click the "options" dropdown next to your address to get to the "re-send verification email" button
  3. 2-Factor authentication is required for PyPI since the end of 2023 (see this blog post from PyPI). This means you have to first issue account recovery codes, then set up 2-factor authentication
  4. Issue an API token from https://pypi.org/manage/account/token

Configuring your machine's connection to PyPI

You have to do the following steps once per machine.

$ uv tool install keyring
$ keyring set https://upload.pypi.org/legacy/ __token__
$ keyring set https://test.pypi.org/legacy/ __token__

Note that this deprecates previous workflows using .pypirc.

Uploading to PyPI

After installing the package in development mode and installing tox with uv tool install tox --with tox-uv or python3 -m pip install tox tox-uv, run the following from the console:

$ tox -e finish

This script does the following:

  1. Uses bump-my-version to switch the version number in the pyproject.toml, CITATION.cff, src/biolookup/version.py, and docs/source/conf.py to not have the -dev suffix
  2. Packages the code in both a tar archive and a wheel using uv build
  3. Uploads to PyPI using uv publish
  4. Pushes to GitHub. You'll need to make a release corresponding to the commit where the version was bumped
  5. Bumps the version to the next patch. If you made big changes and want to bump the version by minor, you can use tox -e bumpversion -- minor afterwards

Releasing on GitHub

  1. Navigate to https://github.com/biopragmatics/biolookup/releases/new to draft a new release
  2. Click the "Choose a Tag" dropdown and select the tag corresponding to the release you just made
  3. Click the "Generate Release Notes" button to get a quick outline of recent changes. Modify the title and description as you see fit
  4. Click the big green "Publish Release" button

This will trigger Zenodo to assign a DOI to your release as well.