This repository has been archived by the owner on Mar 8, 2023. It is now read-only.

🦎 A Python 2 Twitter crawler using the Twitter search API.


greird/tweet-crawler


Tweet Crawler 🦎

This is a simple Twitter crawler that fetches every tweet from the past week on a given topic (or from further back, depending on your level of access to the Twitter API).

Tweets are stored in a CSV file. The id of the last retrieved tweet is stored in the config file, so if you launch the same query again the crawler resumes from where it left off and appends the new tweets to the same CSV file.
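The resume mechanism can be sketched as follows. This is a minimal illustration, not the repository's actual code: the section name ("crawler"), option name ("last_id"), and function names are assumptions.

```python
# Minimal sketch of persisting the last retrieved tweet id in a config
# file so a repeated query can resume. Section/option names are assumed.
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2

def save_last_id(path, last_id):
    """Persist the id of the most recent tweet fetched."""
    config = ConfigParser()
    config.read(path)
    if not config.has_section('crawler'):
        config.add_section('crawler')
    config.set('crawler', 'last_id', str(last_id))
    with open(path, 'w') as f:
        config.write(f)

def load_last_id(path):
    """Return the stored id, or None on the first run."""
    config = ConfigParser()
    config.read(path)
    if config.has_option('crawler', 'last_id'):
        return int(config.get('crawler', 'last_id'))
    return None
```

On the next run, the stored id would be passed as the search call's since_id parameter so only newer tweets are fetched.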

Configuration

The crawler depends on the twitter package.

pip install twitter

Create a config.cfg file next to tweet_crawler.py and fill it with the following template.

[twitter-api]
CONSUMER_KEY=
CONSUMER_SECRET=
ACCESS_TOKEN_KEY=
ACCESS_TOKEN_SECRET=

Complete the file with your own Twitter API keys and tokens.
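The credentials can then be read back and used to authenticate with the twitter package (Python Twitter Tools). This is a sketch of plausible usage, not code from this repository; the helper names are assumptions.

```python
# Sketch: read the keys from config.cfg and build an authenticated
# client with the `twitter` package. Helper names are assumptions.
try:
    from configparser import ConfigParser  # Python 3
except ImportError:
    from ConfigParser import ConfigParser  # Python 2

def load_credentials(path='config.cfg'):
    """Return the four credentials from the [twitter-api] section."""
    config = ConfigParser()
    config.read(path)
    return tuple(config.get('twitter-api', key) for key in (
        'CONSUMER_KEY', 'CONSUMER_SECRET',
        'ACCESS_TOKEN_KEY', 'ACCESS_TOKEN_SECRET'))

def connect(path='config.cfg'):
    """Build a Twitter API client from the stored credentials."""
    # Imported here so load_credentials works without the package.
    from twitter import Twitter, OAuth
    ck, cs, atk, ats = load_credentials(path)
    return Twitter(auth=OAuth(atk, ats, ck, cs))
```

A client built this way would expose the search endpoint as, e.g., api.search.tweets(q='...', count=100).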

Usage

Run python tweet_crawler.py in your terminal to launch the crawler, then follow the instructions.

If interrupted manually (e.g. with Ctrl+C), the crawler saves the tweets collected so far to the CSV file before shutting down.
