Twitter_weekly_download
### Workflow for the weekly download of Twitter data with specific hashtags (keywords) ###
### The code was developed as part of the IARH project, in order to download all the tweets containing specific hashtags that appear every week.
### The code downloads only the tweets published up to 7-10 days before the time it is run - hence it will be re-run every week.
### Project IARH
### Author Marta Krzyzanska
# Set working directory:
setwd("~path")
# Require rtweet. If you have not installed it already, run install.packages("rtweet") first.
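# A convenience guard (a sketch, not part of the original script): install rtweet
# automatically if it is missing before attaching it:
if (!requireNamespace("rtweet", quietly = TRUE)) install.packages("rtweet")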
library("rtweet")
# Load or create token (for the project the twitter_token is located on the google drive/R useful files/twitter):
load("twitter_token.R")
# Load or create list of hashtags to search for:
load("twitter.R")
# Create a list in which to store the tweets for the different hashtags
tweets <- list()
# Loop through all the hashtags/keywords and extract the relevant tweets. Wait 15 minutes between
# hashtags/keywords, because the Twitter API limits the number of calls within a 15-minute window,
# and a single search usually uses up the whole allowance.
# The loop will need to be edited if any of the hashtags returns 18000 tweets, which suggests that there are more:
# search_tweets() returns a maximum of 18000 tweets, which should be sufficient for our hashtags; otherwise
# we may need to use an alternative method, which is slower and sometimes returns errors.
# (See the optional check after the loop for flagging keywords that hit this cap.)
for (i in seq_along(twitter_keywords)) {
  tweets[[i]] <- search_tweets(q = twitter_keywords[i], n = 18000, token = twitter_token)
  print(paste("Search for tweets containing keyword: ", twitter_keywords[i], " completed", sep = ""))
  print("Waiting for the token reset")
  Sys.sleep(900)
}
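# Optional check (an addition to the original workflow, a sketch): flag any keyword whose
# search returned the 18000-tweet maximum, since that suggests more matching tweets exist
# than a single search_tweets() call could return:
capped <- sapply(tweets, nrow) >= 18000
if (any(capped)) print(paste("Possibly incomplete:", twitter_keywords[capped]))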
save(tweets, file = "tweets(#enter_download_date).R")
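# A possible alternative (an assumption, not the original convention): build the file name
# from the run date automatically instead of editing it by hand before each run:
# save(tweets, file = paste("tweets(", Sys.Date(), ").R", sep = ""))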