postgres_data_modeling

Introduction

Sparkify wants to analyze the data they've been collecting on songs and user activity on their new music streaming app. The analytics team is particularly interested in understanding which songs users are listening to. They currently don't have an easy way to query their data, which resides in a directory of JSON logs of user activity on the app, as well as a directory of JSON metadata on the songs in their app.

They want a data engineer to create a Postgres database with tables designed to optimize queries for song play analysis. This project creates the database schema and an ETL pipeline using Python and SQL. I've used a star schema to define four dimension tables and one fact table in the sparkify database.

Dataset

There are two datasets for this project: Song Dataset and Log Dataset.

Song Dataset

The first dataset is a subset of real data from the Million Song Dataset. Each file is in JSON format and contains metadata about a song and the artist of that song. The files are partitioned by the first three letters of each song's track ID. For example, here are file paths to two files in this dataset.

song_data/A/B/C/TRABCEI128F424C983.json
song_data/A/A/B/TRAABJL12903CDCF1A.json

And below is an example of what a single song file, TRAABJL12903CDCF1A.json, looks like.

{"num_songs": 1, "artist_id": "ARJIE2Y1187B994AB7", "artist_latitude": null, "artist_longitude": null, "artist_location": "", "artist_name": "Line Renaud", "song_id": "SOUPIRU12A6D4FA1E1", "title": "Der Kleine Dompfaff", "duration": 152.92036, "year": 0}

Log Dataset

The second dataset consists of log files in JSON format generated by an event simulator based on the songs in the dataset above. These simulate activity logs from a music streaming app based on specified configurations. The files are partitioned by year and month. For example, here are file paths to two files in this dataset.

log_data/2018/11/2018-11-12-events.json
log_data/2018/11/2018-11-13-events.json
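Again purely as a sketch (the function name and column handling are assumptions), the log files could be read the same way, keeping only the 'NextSong' events that represent actual song plays and expanding the millisecond timestamp into the units used by the time table described below:

```python
import pandas as pd

def process_log_file(filepath):
    """Hypothetical helper: derive time-table rows from one log file."""
    # Log files are newline-delimited JSON: one event object per line.
    df = pd.read_json(filepath, lines=True)

    # Only 'NextSong' events correspond to actual song plays.
    df = df[df["page"] == "NextSong"]

    # The 'ts' field is a Unix timestamp in milliseconds.
    t = pd.to_datetime(df["ts"], unit="ms")
    return pd.DataFrame({
        "start_time": t,
        "hour": t.dt.hour,
        "day": t.dt.day,
        "week": t.dt.isocalendar().week,
        "month": t.dt.month,
        "year": t.dt.year,
        "weekday": t.dt.weekday,
    })
```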

Schema for database tables

The database for Sparkify is named sparkify_db. The star schema for the sparkify database includes the following tables; a sketch of the corresponding DDL follows the table list.

Fact Table

  • songplays: Records in log data associated with song plays. The columns include: songplay_id, start_time, user_id, level, song_id, artist_id, session_id, location, user_agent

Dimension Tables

  • songs: Songs in the music database. The columns include: song_id, title, artist_id, year, duration
  • users: Users in the app. The columns include: user_id, first_name, last_name, gender, level
  • artists: Artists in the music database. The columns include: artist_id, name, location, latitude, longitude
  • time: Timestamps of records in songplays broken down into specific units. The columns include: start_time, hour, day, week, month, year, weekday
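To make the schema concrete, here is a sketch of the kind of CREATE TABLE statements create_tables.py might issue for the fact table and one dimension table. The column types are my assumptions; the authoritative definitions are in create_tables.py itself:

```python
# Hypothetical DDL strings in the style create_tables.py might use;
# the column types here are assumptions, not copied from the repository.
songplay_table_create = """
CREATE TABLE IF NOT EXISTS songplays (
    songplay_id SERIAL PRIMARY KEY,
    start_time  TIMESTAMP NOT NULL,
    user_id     INT NOT NULL,
    level       VARCHAR,
    song_id     VARCHAR,
    artist_id   VARCHAR,
    session_id  INT,
    location    VARCHAR,
    user_agent  VARCHAR
);
"""

user_table_create = """
CREATE TABLE IF NOT EXISTS users (
    user_id    INT PRIMARY KEY,
    first_name VARCHAR,
    last_name  VARCHAR,
    gender     VARCHAR,
    level      VARCHAR
);
"""
```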

Files in repository

The files in the repository include the following:

  1. log_data: Stores the log data files of the log dataset
  2. song_data: Stores the data files of the song dataset
  3. README.md: Explains the project and documents the process of running the code
  4. create_tables.py: Contains the SQL queries that create the fact and dimension tables
  5. etl.py: Contains Python code that reads and processes the files from the song_data and log_data directories and loads the data into the database
  6. etl.ipynb: Jupyter notebook that calls create_tables.py to create the tables in the sparkify database and etl.py to run the data pipeline that loads the data into the tables. It also queries and displays data from the different tables to verify that the pipeline works properly (a sketch of such a check follows this list).
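As an illustration of that last verification step, a check along these lines could be run against the database. The connection parameters here are placeholders, not values taken from this repository:

```python
import psycopg2

# Placeholder credentials; substitute your own host, user, and password.
conn = psycopg2.connect("host=127.0.0.1 dbname=sparkify_db user=student password=student")
cur = conn.cursor()

# Spot-check the fact table after the pipeline has run.
cur.execute("SELECT * FROM songplays LIMIT 5;")
for row in cur.fetchall():
    print(row)

conn.close()
```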

Usage

To run the pipeline, download the data files into your local copy of the repository. Then run the Jupyter notebook etl.ipynb.
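If you prefer to run the steps outside the notebook, a minimal driver might look like the sketch below. It assumes create_tables.py and etl.py each expose a main() entry point, which this README does not guarantee:

```python
# Hypothetical driver; assumes both scripts define a main() function.
import create_tables
import etl

create_tables.main()  # create sparkify_db and its fact/dimension tables
etl.main()            # walk song_data/ and log_data/, loading each file
```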
