Disaster Response Pipeline Project

This project uses a data set of real messages sent during disaster events to build a model that categorizes them, so that each message can be routed to the appropriate disaster relief agency.

Dataset used: disaster data from Figure Eight

Instructions:

  1. Run the following commands in the project's root directory to set up the database and model.

    • To run the ETL pipeline, which cleans the data and stores it in a database (sketched after this list):
      python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db
    • To run the ML pipeline, which trains the classifier and saves it (also sketched below):
      python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl

  2. Run the following command in the app's directory to start the web app:
      python run.py

  3. Go to http://0.0.0.0:3001/
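
The ETL step is a standard pandas-to-SQLite flow. Below is a minimal sketch of what process_data.py does, assuming the usual Figure Eight column layout (an "id" column shared by both files and a semicolon-delimited "categories" column) and a table name of "DisasterResponse"; these names are assumptions, not verified against this repo.

```python
# Minimal sketch of the ETL steps behind process_data.py.
# Column and table names are assumptions based on the standard
# Figure Eight disaster data layout, not taken from this repo.
import sys

import pandas as pd
from sqlalchemy import create_engine


def run_etl(messages_csv, categories_csv, database_path):
    # Load and merge the two raw CSV files on their shared id column.
    messages = pd.read_csv(messages_csv)
    categories = pd.read_csv(categories_csv)
    df = messages.merge(categories, on="id")

    # Split the single semicolon-delimited "categories" column into
    # one binary column per category (values look like "related-1").
    cats = df["categories"].str.split(";", expand=True)
    cats.columns = [value.split("-")[0] for value in cats.iloc[0]]
    for col in cats.columns:
        cats[col] = cats[col].str[-1].astype(int)
    df = pd.concat([df.drop(columns="categories"), cats], axis=1)

    # Drop duplicates and write the cleaned table to SQLite.
    df = df.drop_duplicates()
    engine = create_engine(f"sqlite:///{database_path}")
    df.to_sql("DisasterResponse", engine, index=False, if_exists="replace")


if __name__ == "__main__":
    run_etl(sys.argv[1], sys.argv[2], sys.argv[3])
```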

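The ML step loads that table, trains a multi-output text classifier, and pickles it. A minimal sketch follows, assuming the same table name as above and that the category columns start at index 4; the TF-IDF + random forest pipeline is an illustrative choice, not necessarily the one train_classifier.py uses.

```python
# Minimal sketch of the training steps behind train_classifier.py.
# Table name, column layout, and estimator choice are assumptions.
import pickle
import sys

import pandas as pd
from sqlalchemy import create_engine
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import Pipeline


def train(database_path, model_path):
    # Load the cleaned data written by the ETL pipeline.
    engine = create_engine(f"sqlite:///{database_path}")
    df = pd.read_sql_table("DisasterResponse", engine)
    X = df["message"]
    y = df.iloc[:, 4:]  # assume the category columns start at index 4

    # TF-IDF text features feeding a one-classifier-per-label wrapper.
    model = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", MultiOutputClassifier(RandomForestClassifier())),
    ])

    # Hold out a test split, fit, and report subset accuracy.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))

    # Export the fitted pipeline as a pickle file.
    with open(model_path, "wb") as f:
        pickle.dump(model, f)


if __name__ == "__main__":
    train(sys.argv[1], sys.argv[2])
```
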
Important Files:

  1. data/process_data.py : the ETL pipeline that cleans the raw CSV files and loads them into data/DisasterResponse.db, the project's SQLite database.
  2. models/train_classifier.py : the machine learning pipeline that fits, tunes, evaluates, and exports the model to models/classifier.pkl, a Python pickle of the trained model.
  3. app/run.py : starts the Python server for the web app and prepares the visualizations (sketched below). app/templates/*.html : the HTML templates for the web app.
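
For orientation, here is a minimal sketch of what run.py does with those artifacts: load the database and the pickled model, then serve the classification app on port 3001 with Flask. The route names, template names (master.html, go.html), and relative paths are assumptions, not taken from this repo.

```python
# Minimal sketch of run.py: load the data and model, serve the web app.
# Routes, template names, and paths are assumptions for illustration.
import pickle

import pandas as pd
from flask import Flask, render_template, request
from sqlalchemy import create_engine

app = Flask(__name__)

# Load the cleaned data and the trained pipeline once at startup.
engine = create_engine("sqlite:///../data/DisasterResponse.db")
df = pd.read_sql_table("DisasterResponse", engine)
with open("../models/classifier.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/")
def index():
    # Render the landing page with summary visualizations of the data.
    return render_template("master.html")


@app.route("/go")
def go():
    # Classify the message typed into the web form and show the results.
    query = request.args.get("query", "")
    labels = model.predict([query])[0]
    results = dict(zip(df.columns[4:], labels))
    return render_template("go.html", query=query,
                           classification_result=results)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3001, debug=True)
```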

About

Udacity ETL project
