Data Warehouse

Introduction

A music streaming startup, Sparkify, has grown its user base and song database and wants to move its processes and data onto the cloud. Its data resides in S3: a directory of JSON logs of user activity on the app, and a directory of JSON metadata on the songs in the app.

As their data engineer, you are tasked with building an ETL pipeline that extracts the data from S3, stages it in Redshift, and transforms it into a set of dimensional tables so the analytics team can keep finding insights into what songs their users are listening to. You can test the database and ETL pipeline by running queries provided by Sparkify's analytics team and comparing your results against their expected results.

Project Description

In this project, you'll apply what you've learned about data warehouses and AWS to build an ETL pipeline for a database hosted on Redshift. To complete the project, you will load data from S3 into staging tables on Redshift, then execute SQL statements that build the analytics tables from those staging tables.
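Loading from S3 into Redshift staging tables is usually done with the COPY command rather than row-by-row inserts. A minimal sketch in the style of sql_queries.py, where the S3 paths, staging table name, IAM role, and region are placeholders rather than this repository's actual values:

# Hypothetical COPY statement; the S3 paths, table name, IAM role,
# and region below are placeholders, not the project's real config.
staging_events_copy = """
    COPY staging_events
    FROM 's3://<bucket>/log_data'
    IAM_ROLE '<redshift-iam-role-arn>'
    FORMAT AS JSON 's3://<bucket>/log_json_path.json'
    REGION 'us-west-2';
"""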

Getting started

Run create_tables.py first to (re)create the schema, then run the ETL:

python create_tables.py
python etl.py
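Both scripts need the Redshift cluster's connection details. This page doesn't show how they are supplied; one common pattern, sketched here with a hypothetical dwh.cfg file and hypothetical section and key names, is to read them with configparser and connect via psycopg2:

# Hypothetical connection setup; the config file name (dwh.cfg) and
# its section/key names are assumptions, not taken from this repo.
import configparser
import psycopg2

config = configparser.ConfigParser()
config.read("dwh.cfg")

conn = psycopg2.connect(
    host=config["CLUSTER"]["HOST"],
    dbname=config["CLUSTER"]["DB_NAME"],
    user=config["CLUSTER"]["DB_USER"],
    password=config["CLUSTER"]["DB_PASSWORD"],
    port=config["CLUSTER"]["DB_PORT"],
)
cur = conn.cursor()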

Python scripts

  • create_tables.py: Drops any existing tables and creates the schema from scratch.
  • sql_queries.py: Holds all the SQL queries used by the pipeline (DDL, COPY, and INSERT statements).
  • etl.py: Extracts the data from S3, stages it in Redshift, and transforms it into the dimensional tables; a sketch of its main flow follows below.
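etl.py's main flow is most likely a pair of loops over query lists defined in sql_queries.py: first the S3-to-staging COPY statements, then the staging-to-analytics INSERTs. A minimal sketch, where the list names copy_table_queries and insert_table_queries are assumptions inferred from the description above:

# Sketch of etl.py's main flow; the query-list names imported from
# sql_queries.py are assumptions, not confirmed by this page.
from sql_queries import copy_table_queries, insert_table_queries

def load_staging_tables(cur, conn):
    # COPY the raw JSON from S3 into the Redshift staging tables.
    for query in copy_table_queries:
        cur.execute(query)
        conn.commit()

def insert_tables(cur, conn):
    # Transform staging rows into the fact and dimension tables.
    for query in insert_table_queries:
        cur.execute(query)
        conn.commit()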

Database Schema

  • songplay: Records in the log data associated with song plays (a hypothetical DDL sketch follows this list)
  • sparkify_user: Users in the app
  • song: Songs in the music database
  • artist: Artists in the music database
  • start_time: Timestamps of records in songplays, broken down into specific units
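For reference, the fact table's DDL plausibly looks something like the following; the column types and the identity key are inferred from the sample rows below and should be read as assumptions:

# Hypothetical DDL for the fact table; exact types, keys, and any
# distribution/sort settings are assumptions, not the real schema.
songplay_table_create = """
    CREATE TABLE IF NOT EXISTS songplay (
        songplay_id INT IDENTITY(0, 1) PRIMARY KEY,
        start_time  TIMESTAMP NOT NULL,
        user_id     INT,
        level       VARCHAR,
        song_id     VARCHAR,
        artist_id   VARCHAR,
        session_id  INT,
        location    VARCHAR,
        user_agent  VARCHAR
    );
"""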

Final fact/dimension tables

  • songplay: This table stores each song-play event (records where page = 'NextSong') from the Log dataset. The song_id and artist_id are looked up in the Song dataset by song title and artist name; see the INSERT sketch after this list.

songplay_id | start_time | user_id | level | song_id | artist_id | session_id | location | user_agent
1 | 2018-11-29 00:00:57.796000 | 73 | paid | - | - | 954 | Tampa-St. Petersburg-Clearwater, FL | "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.78.2 (KHTML, like Gecko) Version/7.0.6 Safari/537.78.2"
2 | 2018-11-29 00:01:30.796000 | 24 | paid | - | - | 984 | Lake Havasu City-Kingman, AZ | "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"
  • sparkify_user: This table stores the information for each user in the Log dataset. If a user appears in multiple event records, the most recent information (by ts) is kept; see the deduplication sketch after this list.

user_id | first_name | last_name | gender | level
79 | James | Martin | M | free
52 | Theodore | Smith | M | free
  • song: This table stores the information for each song in the Song dataset.

song_id | title | artist_id | year | duration
SOFNOQK12AB01840FC | Kutt Free (DJ Volume Remix) | ARNNKDK1187B98BBD5 | - | 407.37914
SOFFKZS12AB017F194 | A Higher Place (Album Version) | ARBEBBY1187B9B43DB | 1994 | 236.17261
  • artist: This table stores the information for each artist in the Song dataset. If an artist appears in multiple song records, the most recent information (by year) is kept; the deduplication sketch after this list applies here as well.

artist_id | name | location | latitude | longitude
ARNNKDK1187B98BBD5 | Jinx | Zagreb Croatia | 45.80726 | 15.9676
ARBEBBY1187B9B43DB | Tom Petty | Gainesville, FL | - | -
  • start_time: This table stores each distinct ts from the Log dataset. The hour, day, week, month, year, and weekday columns are derived from the ts; see the time-derivation sketch after this list.

start_time | hour | day | week | month | year | weekday
2018-11-29 00:00:57.796000 | 0 | 29 | 48 | 11 | 2018 | 3
2018-11-29 00:01:30.796000 | 0 | 29 | 48 | 11 | 2018 | 3
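As noted for the songplay table, song_id and artist_id are found by matching log events against the Song data on title and artist name. A sketch of that lookup as an INSERT ... SELECT, where the staging table and column names (staging_events, staging_songs, e.song, e.artist, ...) are assumptions; a LEFT JOIN keeps events with no match, which is consistent with the "-" values in the sample rows above:

# Hypothetical staging table/column names; the join mirrors the lookup
# described above: match events to songs on title and artist name.
songplay_table_insert = """
    INSERT INTO songplay (start_time, user_id, level, song_id,
                          artist_id, session_id, location, user_agent)
    SELECT TIMESTAMP 'epoch' + e.ts / 1000 * INTERVAL '1 second',
           e.user_id,
           e.level,
           s.song_id,
           s.artist_id,
           e.session_id,
           e.location,
           e.user_agent
    FROM staging_events e
    LEFT JOIN staging_songs s
           ON e.song = s.title AND e.artist = s.artist_name
    WHERE e.page = 'NextSong';
"""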
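The "latest record wins" rule for sparkify_user (by ts) and artist (by year) can be expressed with a window function over the staging rows. A sketch for the user table under the same naming assumptions; the artist table follows the same pattern with ORDER BY year DESC over staging_songs:

# Keep only the most recent event per user: ROW_NUMBER() ranks each
# user's events by ts descending, and rank 1 is the latest one.
user_table_insert = """
    INSERT INTO sparkify_user (user_id, first_name, last_name, gender, level)
    SELECT user_id, first_name, last_name, gender, level
    FROM (
        SELECT user_id, first_name, last_name, gender, level,
               ROW_NUMBER() OVER (PARTITION BY user_id
                                  ORDER BY ts DESC) AS rn
        FROM staging_events
        WHERE user_id IS NOT NULL
    ) ranked
    WHERE rn = 1;
"""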
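Finally, the start_time table's unit columns can be derived with EXTRACT once the epoch-millisecond ts is converted to a timestamp. A sketch under the same assumptions; note that Redshift's EXTRACT(dow ...) counts Sunday as 0, while the sample rows above (weekday 3 for a Thursday) suggest a Monday-as-0 convention, so the actual project may compute weekday differently:

# Derive calendar units from the converted timestamp; the epoch-ms
# conversion matches the idiom in the songplay sketch above.
time_table_insert = """
    INSERT INTO start_time (start_time, hour, day, week, month, year, weekday)
    SELECT DISTINCT t.start_time,
           EXTRACT(hour FROM t.start_time),
           EXTRACT(day FROM t.start_time),
           EXTRACT(week FROM t.start_time),
           EXTRACT(month FROM t.start_time),
           EXTRACT(year FROM t.start_time),
           EXTRACT(dow FROM t.start_time)
    FROM (
        SELECT TIMESTAMP 'epoch' + ts / 1000 * INTERVAL '1 second' AS start_time
        FROM staging_events
        WHERE page = 'NextSong'
    ) t;
"""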
