influencetx

influencetx: An ATX Hack for Change project for accessing Texas campaign finance and voting records.

Built with Cookiecutter Django

Setup

To get started on this project for the first time, you can follow these simple steps.

  • Clone code:

    cd your/code/directory
    git clone https://github.com/open-austin/influence-texas.git
    cd influence-texas/src
    
  • Define environment variables (see below) and export those variables

You have two options for running InfluenceTX locally. You can run either:

  • a Vagrant VM environment; see the :ref:`Local Vagrant Setup` section below, or

  • Docker containers directly on your host computer; see the :ref:`Local Docker Setup` section below.

Define environment variables

Credentials are stored as environment variables that are not committed to source control. To make your environment reproducible, you'll add these environment variables to a script named env.sh with the following values:

export OPENSTATES_API_KEY=YOUR-API-KEY
export GOOGLE_API_KEY=YOUR-API-KEY
export GOOGLE_ANALYTICS=YOUR-ANALYTICS-ID

The TPJ variables require credentials from Texans for Public Justice. Currently, there's no established process for acquiring those credentials, but as a workaround you can load some fake data with:

sh scripts/manage.sh loaddata ./influencetx/tpj/donors_fixture.json

See the following section to acquire an OpenStates key. The Google API keys are for the "Find Rep" portion of the application, which has a cost associated with it.

When you start up a new shell, you should run the following to set up the environment variables:

source env.sh

Note that changes to env.sh should never be committed.
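A missing key usually surfaces as a confusing error deep inside the app, so it can help to verify the variables before starting anything. The helper below is a hypothetical convenience, not part of the repo:

```shell
#!/bin/sh
# Hypothetical helper (not part of the repo): fail fast when a required
# environment variable is unset, so a missing key is caught before Django runs.
require_env() {
  missing=""
  for var in "$@"; do
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing -- did you 'source env.sh'?" >&2
    return 1
  fi
  return 0
}

# Example: check the variables exported by env.sh.
OPENSTATES_API_KEY=demo GOOGLE_API_KEY=demo GOOGLE_ANALYTICS=demo
require_env OPENSTATES_API_KEY GOOGLE_API_KEY GOOGLE_ANALYTICS && echo "environment looks complete"
```

Running this after `source env.sh` confirms all three variables made it into the shell.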

Add Open States API Key

If you want to use portions of the site that rely on the Open States API, you'll need to add an API key to the secrets file.

  • Register for an Open States API key
    • Use your own name and email
    • Website: https://www.open-austin.org/
    • Organization: Open Austin
    • Intended Usage: Local development of influencetx app
  • You should receive an email with your new API key. Follow the activation link.
  • Copy key to env.sh.

Syncing data from Open States API

If running with Docker, you can use the scripts/load-data-local.sh script. Otherwise, if running on Ansible, follow these instructions.

Custom django-admin commands are used to sync data from the Open States API. To pull data for legislators and bills, run the following in order:

./djadmin.sh sync_legislators_from_openstate
./djadmin.sh sync_bills_from_openstate

Note that the order matters: bills include voting data, which requires legislators to already be in the database for correct attribution.

The number of bills in the database is quite large. For testing purposes, you can grab a subset of the data by using the "max" option:

./djadmin.sh sync_bills_from_openstate --max 100

Note: Open States currently only provides data for the most recent session.

Import crosswalk CSV

To match up the ids used by TPJ with the ids used by Open States, we must manually create a crosswalk, then import it using the following command:

./djadmin.sh import_legidmap_from_csv --file [path/to/file]

Note: The crosswalk for the 86th session can be found inside influencetx/legislators/data
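The exact column layout is defined by the import command, but a crosswalk of this kind is essentially a CSV mapping one ID space onto the other. As a sketch (the column names below are hypothetical, not taken from the repo), you can sanity-check a file before importing it:

```shell
#!/bin/sh
# Hypothetical crosswalk layout: one TPJ id and one Open States id per row.
cat > /tmp/crosswalk-sample.csv <<'EOF'
tpj_id,openstates_id
12345,ocd-person/abc-123
67890,ocd-person/def-456
EOF

# Reject the file if any row does not have exactly two comma-separated fields.
if awk -F',' 'NF != 2 { exit 1 }' /tmp/crosswalk-sample.csv; then
  echo "crosswalk looks well-formed"
fi
```

A check like this catches truncated rows or stray commas before the import command touches the database.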

Import financial disclosures

To import the financial disclosures into the database, run:

sh scripts/manage.sh import_financial

Make sure you have the legislators loaded first, since each disclosure has to link to a legislator and won't be imported if no match is found.

Local Docker Setup

Build your local docker containers by running:

docker-compose up

or use the scripts:

bash ./scripts/run-local.sh

You can then automate the data seeding steps described in "Syncing data from Open States API" by running:

bash ./scripts/load-data-local.sh

And optionally pass a MAX_BILLS param:

MAX_BILLS=100 sh scripts/load-data-local.sh
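The MAX_BILLS handling can be sketched like this (an assumption about how the wrapper behaves, not a copy of the actual script): when the variable is set, it is passed through as the --max option; when it is unset, all bills are synced.

```shell
#!/bin/sh
# Sketch (assumption -- not the repo's actual load-data-local.sh): build the
# django-admin invocation, adding --max only when MAX_BILLS is set.
bills_sync_args() {
  if [ -n "${MAX_BILLS:-}" ]; then
    echo "sync_bills_from_openstate --max $MAX_BILLS"
  else
    echo "sync_bills_from_openstate"
  fi
}

MAX_BILLS=100
bills_sync_args   # -> sync_bills_from_openstate --max 100
```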

Note: If you choose to run Docker in this manner without Vagrant, use these scripts to run the commands described in the "Basic Commands" and "Maintenance commands" sections below:

sh scripts/manage.sh # (replaces vagrant's djadmin.sh)
sh scripts/invoke.sh # (replaces vagrant's pyinvoke.sh)

They are wrappers to allow you to easily run manage.py and invoke scripts within the docker container.
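In essence, each wrapper just forwards its arguments into the container. A minimal sketch of the idea (the repo's real scripts may set paths or options differently; the sketch echoes the command instead of executing it so it runs anywhere):

```shell
#!/bin/sh
# Minimal sketch of a manage.py wrapper (assumption -- the repo's manage.sh
# may differ). It builds the command that forwards all arguments to manage.py
# inside the web container.
manage_cmd() {
  echo "docker-compose exec web python3 manage.py $*"
}

# Example: 'sh scripts/manage.sh migrate' would effectively run:
manage_cmd migrate   # -> docker-compose exec web python3 manage.py migrate
```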

If you want to go into the docker environment shell yourself, you can run:

docker-compose exec web /bin/bash

Basic Commands

During everyday development, there are a few commands that you'll need to execute to debug, update the database, etc. All of the basic commands are based off of the following commands for interacting with the docker container:

  • docker-compose: Run generic docker commands in docker containers.
    • Run docker-compose -h to see a full list of commands.
    • Run docker-compose help <COMMAND> to see help on a command.
  • ./pyinvoke.sh: A shortcut for running invoke commands in docker containers.
    • Run ./pyinvoke.sh -l to see a full list of commands.
    • Run ./pyinvoke.sh -h <COMMAND> to see help on a command.
  • ./djadmin.sh: A shortcut for running django admin commands in docker containers.
    • Run ./djadmin.sh help to see a full list of commands.
    • Run ./djadmin.sh help <COMMAND> to see help on a command.

These instructions assume you're executing the commands from the parent directory of this file. You can find details of any command using the commands above.

Maintenance commands

The commands commonly used for maintenance of this project are described below.

  • docker-compose up -d: Start up docker container in detached mode (background task). You can keep a docker container running continuously, so you may only need to run this after restarting your machine.
  • ./djadmin.sh makemigrations: Make schema migrations to reflect your changes to Django models. Any migrations that you make should be committed and pushed with your model changes.
  • ./djadmin.sh migrate: Migrate database to the current schema. You'll need to run this after running ./djadmin.sh makemigrations to actually apply migrations. If you pull code from github that includes migrations, you should run this to sync your database.
  • ./pyinvoke.sh test: Execute tests using pytest. At minimum, run this before committing code.
  • ./pyinvoke.sh check: Check project for problems. At minimum, run this before committing code.
  • ./pyinvoke.sh create-app: Create Django app. Django apps are small collections of functionality for your web application.

Debugging commands

  • docker-compose logs -f --tail=5 $CONTAINER_NAME: Watch output of containers. (Alias: -f = --follow.)
  • docker-compose logs: Display bash output for all containers.
  • docker-compose exec web /bin/bash: Run a bash shell within the web container.
  • ./djadmin.sh shell: Start IPython shell.
  • ./djadmin.sh dbshell: Start Postgres shell.

Debugging Python code

You can't use the output window from a docker-compose logs -f call to debug, since it actually interacts with multiple containers. Instead, run the following in a terminal:

docker attach `docker-compose ps -q web`

The docker-compose part of the command simply returns the id of the web container for the app. You can replace the above with:

docker attach influencetexas_web_1

This will attach the terminal to the web container and allow you to interact with the running process. Now you can add a break point somewhere in your python code:

import ipdb; ipdb.set_trace()

Settings

Moved to settings.

Local Vagrant Setup

A Vagrant-based deployment method is also available, which mirrors the configuration of the live integration/production server.

It provides a virtual machine that runs the PostgreSQL database and is configured as a Docker host. One benefit of using an isolated VM for development is that your work is encapsulated within the VM, allowing you to work on more than one project. Another benefit is that developing in an environment that matches the integration/production servers makes it possible to set up a CI/CD pipeline.

Pre-requisites

You must first install the following software to utilize the Vagrant development environment:

  • Vagrant

  • VirtualBox (the vb. prefix in the VM settings below refers to the VirtualBox provider)

Usage

To start the virtual machine (first time run will also provision):

vagrant up

To stop the virtual machine:

vagrant halt

To open a terminal on the virtual machine:

vagrant ssh

To reprovision the VM and start the application:

vagrant provision

Development Workflow

There are two uses of the Vagrant environment for testing production deployments, from inside the VM or
from outside the VM.

The Vagrant VM is run by default with the following settings:

vb.memory = "2048"
vb.cpus = "2"

Reduce these numbers when running on smaller hardware.

Internal

To perform development from inside the VM, run the vagrant ssh command, then change directory to /vagrant. The source code is mounted automatically inside the VM at the /vagrant directory. The docker-compose.build file is used for deployment of the application and allows for live updates to the source code. The pyinvoke and djadmin commands do not work inside the Vagrant environment; to perform migrations and other operations, use the following commands:

cd /vagrant
source env.sh
docker-compose -f docker-compose.build [command]

For example:

docker-compose -f docker-compose.build exec web python3 manage.py sync_legislators_from_openstate

Note: Use 'help' as the command to see all available commands.

External

You can also perform development outside the VM by making code updates, then issuing a vagrant provision command.
This method allows SSH based checkouts of the git repository.

Production Build and Deployment

This requires root privileges on the deployment server:

ssh [email protected]
cd influence-texas
git pull
docker-compose build
docker-compose up -d --force-recreate

The first docker-compose command builds the docker container with the influencetx codebase, and the second starts the web application and associated services.

You can access the logs on the production server using:

docker logs web