A Python script that uses image detection to extract the number of listeners in a Twitter Spaces based on a screen recording. It also creates a folder with graphs (one for each second of the recording) which can be assembled into a movie using a second script.
The code's inspiration comes from here.
Install tesseract, pytesseract, and cv2 (OpenCV). In order to create the movie, you also need to install ffmpeg.
Tip: If you are on a Mac, I recommend using Homebrew to install tesseract and ffmpeg.
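A quick way to verify that the dependencies are in place is a small check like the sketch below. It only confirms that the Python modules import and that the tesseract and ffmpeg binaries are on the PATH; it is not part of the project's scripts.

```python
import shutil

# Check the Python packages used for OCR and image handling.
import cv2          # provided by opencv-python
import pytesseract

# Check the external binaries the scripts rely on.
for tool in ("tesseract", "ffmpeg"):
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'NOT FOUND'}")
```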
Before starting the script, open up config.py to specify a few useful parameters. The parameters are explained in the file.
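For illustration, config.py could look roughly like the sketch below. The parameter names here are hypothetical; the actual names and their meaning are documented in the file itself.

```python
# config.py -- illustrative sketch only; parameter names are hypothetical.

DATA_FILE = "data.csv"    # where the extracted listener counts are written
GRAPHS_DIR = "graphs/"    # folder for the per-second graph images
FRAMES_PER_SECOND = 1     # one analysed frame per second of the recording
```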
Then extract the data using
./extract <file name of screen recording>
on the command line.
The results are stored in data.csv, while the created graphs are stored in graphs/ (unless you have changed the corresponding parameters in config.py).
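If you want to inspect the extracted data yourself, something like the following works. This is a sketch: it assumes data.csv has no header and contains a time column in seconds plus a listener-count column, which may not match the actual file layout.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Column names are assumptions; adjust them to the actual layout of data.csv.
df = pd.read_csv("data.csv", names=["second", "listeners"], header=None)

plt.plot(df["second"], df["listeners"])
plt.xlabel("Time (s)")
plt.ylabel("Listeners")
plt.title("Twitter Space listeners over time")
plt.show()
```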
In case some frames are not extracted correctly, the missing frames need to be created based on the last available frame. Use this script to do so:
./fillGraphGaps.py
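The gap-filling logic is essentially "copy the last graph that exists". A minimal sketch of that idea is shown below; the filename pattern graph_<second>.png and the recording length are assumptions and may differ from what fillGraphGaps.py actually uses.

```python
import shutil
from pathlib import Path

GRAPHS_DIR = Path("graphs")   # assumed output folder
TOTAL_SECONDS = 600           # assumed length of the recording in seconds

last_existing = None
for second in range(TOTAL_SECONDS):
    frame = GRAPHS_DIR / f"graph_{second:05d}.png"   # assumed filename pattern
    if frame.exists():
        last_existing = frame
    elif last_existing is not None:
        # Fill the gap by duplicating the last available graph image.
        shutil.copy(last_existing, frame)
```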
You can assemble the graph images into a movie by executing
./createVideo.sh
again on the command line (that's the Terminal program on the Mac).
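For reference, assembling per-second images into a video boils down to a single ffmpeg call, sketched here from Python for consistency with the rest of this README. The input pattern, frame rate, and output name are assumptions; createVideo.sh may use different options.

```python
import subprocess

# Assemble the per-second graphs into a video at 1 frame per second.
# The input pattern "graphs/graph_%05d.png" and output name are assumptions.
subprocess.run([
    "ffmpeg",
    "-framerate", "1",
    "-i", "graphs/graph_%05d.png",
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "listeners.mp4",
], check=True)
```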
The screen recording needs to be done in a desktop browser while the Twitter Space is active. Also, the recording needs to be cropped to the essential information; a screenshot is shown here:
The number between '+' and 'others' corresponds to the number of listeners.
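The extraction step therefore amounts to running OCR on each frame and picking out the number between '+' and 'others' with a regular expression. Below is a minimal sketch of that idea; the preprocessing, file name, and exact pattern used by the actual script may differ.

```python
import re
import cv2
import pytesseract

def listeners_from_frame(frame):
    """Return the listener count read from a cropped video frame, or None."""
    # Grayscale usually improves OCR accuracy on UI text.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray)
    # Look for the pattern "+<number> others", e.g. "+1,234 others".
    match = re.search(r"\+\s*([\d,.]+)\s*others", text)
    if match:
        return int(match.group(1).replace(",", "").replace(".", ""))
    return None

# Example: read one frame from the screen recording and print the count.
cap = cv2.VideoCapture("recording.mp4")   # file name is an assumption
ok, frame = cap.read()
if ok:
    print(listeners_from_frame(frame))
cap.release()
```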
The following graph shows the evolution of listener numbers over time during the daily newscast of the @tortoise Twitter channel, recorded on 2021-09-16 12:30 UTC.
The graph shows that there is a build-up over a minimum of 5 minutes before the curve starts to stabilize.