ECHO is a housing search tool designed to improve the housing search process for low-income families in Greater Boston. What makes ECHO unique is that, unlike other housing search websites that focus only on finding housing units, ECHO helps families find both housing units and neighborhoods that meet their needs. ECHO provides these recommendations through public transit data and databases of school and public safety information. Through this focus, ECHO is the only tool serving low-income families that helps locate both neighborhoods and affordable housing.

*From the marketing site. Contact the client if updates are needed.*
There are two hosted instances (staging and production) based on the code in this repo. The application stack consists of:

- a Django app backed by Postgres
- AWS Lambda, DynamoDB, and API Gateway for Realtor listings
- AWS Simple Email Service (SES)
- an external service for routing
- a frontend based on Taui

This project requires AWS access for most development tasks.
It was originally written in Amplify, but has been ported to Django.
Development requires:

- Docker Engine 20.10.17
- Docker Compose 1.29
To deploy or manage deployment resources, set up an `echo-locator` profile for the AWS account on your host machine:

```
$ aws configure --profile echo-locator
```
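If the profile is configured correctly, `aws sts get-caller-identity --profile echo-locator` should print the account identity without an error.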
To get set up, run the `setup` script (this runs `bootstrap`, which pulls environment variables from S3, and then runs `update`):

```
$ ./scripts/setup
```
Finally, use the `server` script to build container images, compile frontend assets, and run a development server:

```
$ ./scripts/server
```
This project uses scripts-to-rule-them-all to bootstrap, test, and maintain projects consistently across all teams. Below is a quick explanation of the specific usage of each script.
| Script | Use |
| --- | --- |
| `bootstrap` | Pull down secrets from S3 |
| `infra` | Execute Terraform subcommands with remote state management |
| `manage` | Issue Django management commands |
| `server` | Start the frontend and backend services |
| `setup` | Set up the project development environment |
| `test` | Run linters and tests |
| `update` | Update the project, assemble, and run migrations |
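For example, `./scripts/manage migrate` issues Django's built-in `migrate` command.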
From the project directory, run:

```
$ ./scripts/manage createsuperuser
```

Fill out the prompts for an email address, a username (which MUST be the same email address), and a password.
Run:

```
$ ./scripts/server
```

Navigate to http://localhost:9966 and log in using the staging credentials. From there, you can create a Client ID by entering and searching for a random 6-8 digit number, then creating a new profile.

Navigate to http://localhost:8085/admin and log in with the credentials created through the `createsuperuser` command above.
The `neighborhood_data` directory contains data sources and management scripts to transform them. The app uses two GeoJSON files generated by the scripts for neighborhood point and bounds data.
The two expected source files are:

- `neighborhoods.csv`
- `neighborhood_descriptions.csv`

Both contain a zip code field; the zip codes should be unique within each file and appear in both.
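Since the two files are maintained separately, their zip code columns can drift apart. A minimal consistency check, assuming the column is named `zipcode` in both files (the real header may differ), might look like:

```python
import csv

def zip_codes(path, field="zipcode"):
    """Read the zip code column from a CSV, checking for duplicates."""
    with open(path, newline="") as f:
        codes = [row[field] for row in csv.DictReader(f)]
    assert len(codes) == len(set(codes)), f"duplicate zip codes in {path}"
    return set(codes)

a = zip_codes("neighborhoods.csv")
b = zip_codes("neighborhood_descriptions.csv")
# The two files are expected to cover exactly the same zip codes.
print("only in neighborhoods.csv:", a - b)
print("only in neighborhood_descriptions.csv:", b - a)
```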
To run the data processing scripts and copy the output into the app directory:

```
$ cd neighborhood_data
$ ./update_data.sh
```
The downloaded thumbnail images need to be deployed separately from the other app data. To publish the neighborhood thumbnail images:

```
$ ./scripts/imagepublish ENVIRONMENT
```

where `ENVIRONMENT` is either `staging` or `production`.
The `ecc_neighborhoods.csv` file is the primary source file for data on the neighborhoods, organized by zip code. `non_ecc_max_subsidies.csv` also contains non-ECC zip codes, but does not contain the additional fields in `ecc_neighborhoods.csv`. The `add_non_ecc.py` script combines the two into `neighborhoods.csv`.
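This is not the actual implementation, but the combining step amounts to appending the non-ECC rows to the richer ECC rows, roughly as in this sketch (the `zipcode` column name and the blank fill for missing ECC-only fields are assumptions):

```python
import csv

# Read the richer ECC rows first; they take precedence.
with open("ecc_neighborhoods.csv", newline="") as f:
    ecc_rows = list(csv.DictReader(f))
fieldnames = list(ecc_rows[0].keys())
seen = {row["zipcode"] for row in ecc_rows}

# Append non-ECC zip codes, leaving the ECC-only fields blank.
with open("non_ecc_max_subsidies.csv", newline="") as f:
    non_ecc_rows = [
        {name: row.get(name, "") for name in fieldnames}
        for row in csv.DictReader(f)
        if row["zipcode"] not in seen
    ]

with open("neighborhoods.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(ecc_rows + non_ecc_rows)
```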
The `add_zcta_centroids.py` script downloads Census Zip Code Tabulation Area (ZCTA) data, looks up the zip codes from `neighborhoods.csv`, and writes two files. One is `neighborhood_centroids.csv`, which is the input file content with two new columns added for the coordinates of the matching ZCTA's centroid (approximate center). The other is `neighborhood_bounds.json`, a GeoJSON file of the bounds of the ZCTAs marked as ECC in `neighborhoods.csv`.
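As a rough sketch of how the second output could be produced (the `ecc` flag column, its `"True"` encoding, and the downloaded file name are assumptions about the real schema):

```python
import csv
import json

# Zip codes flagged as ECC in the combined file; "ecc" is an assumed column.
with open("neighborhoods.csv", newline="") as f:
    ecc_zips = {row["zipcode"] for row in csv.DictReader(f) if row["ecc"] == "True"}

# zctas.json is a placeholder name for the downloaded Census ZCTA GeoJSON.
with open("zctas.json") as f:
    zctas = json.load(f)

bounds = {
    "type": "FeatureCollection",
    "features": [
        feat for feat in zctas["features"]
        # ZCTA5CE10 is the usual Census property holding the five-digit code.
        if feat["properties"].get("ZCTA5CE10") in ecc_zips
    ],
}

with open("neighborhood_bounds.json", "w") as f:
    json.dump(bounds, f)
```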
The `fetch_images.py` script downloads metadata for, and thumbnail versions of, the image fields in `neighborhood_descriptions.csv`, and appends fields with the metadata (to be used for attribution) to `neighborhood_extended_descriptions.csv`. This script only needs to be run if the images or their metadata need updating.
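The general shape of that script, assuming an `image_url` column (a hypothetical name; see the script itself for the real fields), might be:

```python
import csv
import os

import requests

os.makedirs("images", exist_ok=True)

rows = []
with open("neighborhood_descriptions.csv", newline="") as f:
    for row in csv.DictReader(f):
        url = row.get("image_url", "")  # assumed column name
        if url:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            # Save the thumbnail locally, named after the zip code.
            with open(f"images/{row['zipcode']}.jpg", "wb") as img:
                img.write(resp.content)
        # The real script also collects attribution metadata from the image
        # host; a placeholder column stands in for that here.
        row["image_attribution"] = url
        rows.append(row)

with open("neighborhood_extended_descriptions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```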
The `generate_neighborhood_json.py` script expects the `add_non_ecc.py`, `add_zcta_centroids.py`, and `fetch_images.py` scripts to have already been run. It transforms the `neighborhood_centroids.csv` data into GeoJSON, appends the description and image-related fields from `neighborhood_extended_descriptions.csv`, and writes the results to `neighborhoods.json`.
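In outline, the transformation looks like the sketch below; the `lon`/`lat` centroid column names are guesses, not the script's actual headers:

```python
import csv
import json

# Description and image fields keyed by zip code, to merge into each feature.
with open("neighborhood_extended_descriptions.csv", newline="") as f:
    extras = {row["zipcode"]: row for row in csv.DictReader(f)}

features = []
with open("neighborhood_centroids.csv", newline="") as f:
    for row in csv.DictReader(f):
        # "lon"/"lat" are assumed names for the centroid columns.
        lon, lat = float(row.pop("lon")), float(row.pop("lat"))
        row.update(extras.get(row["zipcode"], {}))
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": row,
        })

with open("neighborhoods.json", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```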
Run linters and tests with the `test` script:

```
$ ./scripts/test
```
CI will deploy frontend assets to staging on commits to the `develop` branch, and will deploy to production on commits to the `master` branch.
For instructions on how to update core infrastructure, see the README in the deployment directory.
Note that the neighborhood thumbnail images are not deployed by CI; they need to be pushed manually after running the data processing script `fetch_images.py` that downloads them. See the data section for more information.