# IRIS websites scraper
Scraper for the IRIS project, meant to run on a [Raspberry Pi (RPi)](https://www.raspberrypi.org/).

## Reproducibility
To reproduce the results, please follow these steps [1]:

1. Install [Raspberry Pi OS](https://www.raspberrypi.org/downloads/)
2. Install [Git](https://git-scm.com/) and jq by running ``sudo apt install -y git jq``
3. Clone this repository
   * ``cd ~/Documents``
   * ``git clone https://gitlab.tue.nl/iris/iris-scraper.git``
   * ``cd iris-scraper``
4. Install [Node.js](https://nodejs.org/)
   * If you are using an RPi Zero or an RPi 1 (i.e., a device with an ARMv6 architecture), use the following script (be aware that the support for this architecture is experimental)
     * ``sudo ./install-nodejs-12-rpi_zero.sh``
   * Otherwise, run the following commands
     * ``curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -``
     * ``sudo apt-get install -y nodejs``
5. Install the needed Node.js packages with ``npm install``
6. Add a ``data.jsonl`` file with the information to scrape to the ``data/data_from_database/`` folder
7. Create a new daemon that manages the scraping script
   * ``sudo cp iris-scraper@.service /etc/systemd/system/``

[1] Note that, for this part of the IRIS project, the results cannot be perfectly reproduced, since they depend on many factors, some of which are random and/or evolve over time.
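
As a quick sanity check after the installation steps above (the paths and names are the ones used in the steps; adapt them if you changed anything), you can verify that the tools and the unit file are in place:

```sh
node --version          # should print a v12.x version
npm --version
ls /etc/systemd/system/iris-scraper@.service
```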

### Possible types of scraper
* Websites scrapers (first phase) [2]
  * choose ``website`` to use only Bloomberg and the Google front page;
  * choose ``website_sbir`` to also use SBIR as a scraping source;
* VPM pages scraper (second phase)
  * choose ``vpm`` to use Google to search for the patent numbers within the detected websites

For each recipient/assignee name listed in ``data/data_from_database/data.jsonl``, the websites scrapers search for its website on
* The SBIR website (optional)
* Bloomberg
* Google (first page; about 10 results)

The results are listed in the ``data/data_from_websites_scraper/results.jsonl`` file.

Instead, the VPM pages scraper uses the previously detected websites and asks Google to search for the related patent numbers within them. The number of results scraped is at most the number of websites times the number of patents.

[2] The idea is that searching the SBIR website for the recipients' websites only makes sense if the names you are looking for actually belong to recipients of the SBIR program.

## Scraper configuration
The scrapers can be fine-tuned through some parameters that you can modify in the ``scraper.conf`` file. Specifically:
* ``SCRAPING_RATE`` controls the target time (in seconds) of each scraping round (default ``120``). If a round takes less than the target, the scraper waits for the remaining time; if it takes more, the scraper tries to compensate in the following rounds (the target is an average, not a per-round limit).
* ``USE_HEADLESS`` controls whether the browser runs headless, i.e., without showing a window (default ``true``)
* ``USE_MOBILE`` controls whether the scraper simulates a mobile environment (default ``true``)
* ``CHROME_PATH`` contains the path to the Chromium/Chrome browser on your system (default ``null``). If it is ``null``, the script will try to guess the path in the following order of preference
  * if Python perceives your OS as MS Windows, the script will use the executable found at ``C:\Program Files (x86)\Google\Chrome\Application\chrome.exe``;
  * otherwise, the executable at ``/usr/bin/google-chrome-stable`` will be used, if present;
  * or ``/usr/bin/chromium-browser`` will be used (if this file is not present either, the scraper will raise an error).
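
For reference, a minimal sketch of what a filled-in ``scraper.conf`` could look like, assuming a simple ``KEY=value`` layout (the file shipped with the repository is the authoritative reference; the values below are only illustrative):

```
# Target average duration of a scraping round, in seconds
SCRAPING_RATE=120
# Run the browser without showing a window
USE_HEADLESS=true
# Emulate a mobile device
USE_MOBILE=true
# Path to the Chromium/Chrome executable; leave null to let the script guess it
CHROME_PATH=/usr/bin/chromium-browser
```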

## Scraping phases
1. The first thing to do is to search for the websites of the US Federal funds recipients and/or USPTO patent assignees, with the following commands (where ``<scraper-type>`` is the type of scraper you want to use; see the previous section)
   * ``sudo systemctl enable iris-scraper@<scraper-type>.service``
   * ``sudo systemctl start iris-scraper@<scraper-type>.service``
2. Once this phase is over, you can stop and disable the scraper used so far
   * ``sudo systemctl disable iris-scraper@<scraper-type>.service``
   * ``sudo systemctl stop iris-scraper@<scraper-type>.service``
3. Then, you must clean the detected websites, removing the too-common ones that are likely false positives [3]
   * ``python clean-scraped-websites.py -I data/data_from_websites_scraper/results.jsonl data/websites_to_exclude.txt -o data/data_from_websites_scraper/results_clean.jsonl``
4. Lastly, you must run the VPM pages scraper with the following commands
   * ``sudo systemctl enable [email protected]``
   * ``sudo systemctl start [email protected]``
5. Once it is done, again stop and disable the scraper used
   * ``sudo systemctl disable [email protected]``
   * ``sudo systemctl stop [email protected]``

If you work in a GNU/Linux environment, you can get some basic statistics about the ongoing scraping process by running ``./stats.sh website`` (or ``./stats.sh vpm``, according to the step you are currently running). A worked example of the full sequence is sketched at the end of this section.

[3] For now, this script is written in Python. I advise you to execute it within a Conda environment. It needs the ``json`` and ``datetime`` Python packages, and you must also install the ``iris-utils`` package from the [iris-utils](https://gitlab.tue.nl/iris/iris-utils) repository. The easiest option is to reuse the environment of the [iris-database](https://gitlab.tue.nl/iris/iris-database) repository.
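
For instance, on a device that uses the ``website_sbir`` scraper for the first phase, the full sequence sketched above could look as follows (adapt the scraper type to your case):

```sh
# Phase 1: search for the recipients'/assignees' websites
sudo systemctl enable iris-scraper@website_sbir.service
sudo systemctl start iris-scraper@website_sbir.service
./stats.sh website          # check the progress from time to time
# ...when the phase is over, stop and disable the scraper
sudo systemctl disable iris-scraper@website_sbir.service
sudo systemctl stop iris-scraper@website_sbir.service

# Clean the detected websites
python clean-scraped-websites.py -I data/data_from_websites_scraper/results.jsonl data/websites_to_exclude.txt -o data/data_from_websites_scraper/results_clean.jsonl

# Phase 2: search for the patent numbers within the detected websites
sudo systemctl enable [email protected]
sudo systemctl start [email protected]
./stats.sh vpm              # check the progress from time to time
# ...when the phase is over, stop and disable the scraper
sudo systemctl disable [email protected]
sudo systemctl stop [email protected]
```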

## How the systemd daemons work
When you start one of the daemons, the script will start (provided that you have a working Internet connection) and will restart every time you switch on your RPi and get connected to the Internet.<br>
At the end of the scraping process, the rows on which an error has been reported are deleted and the scraper tries them again.<br>
If errors occur, the systemd daemon that controls this process will restart it automatically up to 5 times. After that, manual intervention is required to continue, if needed (consider that at least one restart is usually necessary to deal with the errors that are likely to occur during the scraping process; by experience, you can expect at least 0.15% failures even in a "successful" run).

To stop the script, run<br>
``sudo systemctl stop iris-scraper@<scraper-type>.service``<br>
Be patient: it can take even more than 2 minutes to stop, because of the way the JavaScript code is written.<br>
Moreover, consider that this is a brutal operation that will end up as an ERROR in ``results.jsonl``

To see what the script is doing, you can run the following command (use ``website``, ``website_sbir``, or ``vpm`` as ``<scraper-type>``, according to the daemon currently running)<br>
``journalctl -u iris-scraper@<scraper-type>.service -f``<br>
Press CTRL+C to go back to the shell

Note: consider that, by default, on the RPi the logs (i.e., what the ``journalctl`` command reads) are erased when you shut down the machine. To preserve the logs of past sessions you need either to run
* ``sudo mkdir -p /var/log/journal``
* ``sudo systemd-tmpfiles --create --prefix /var/log/journal``
* ``sudo systemctl restart systemd-journald``

or to set ``Storage=persistent`` in ``/etc/systemd/journald.conf``.
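
Once persistent logging is enabled, a couple of standard ``journalctl`` invocations can be handy, for instance (the unit name below assumes the ``website`` scraper):

```sh
# Logs of the website scraper since yesterday, including past boots
journalctl -u iris-scraper@website.service --since yesterday
# Disk space currently used by the journal
journalctl --disk-usage
```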

## Data format
The data files follow the JSONL format (i.e., one JSON object per line).

Each line must contain the following structure<br>
``{"award_recipient": "corporation name with legal type", "patent_assignee": "corporation name with legal type", "patent_id": [193765482, 917253468]}``<br>
The award recipient's name is not mandatory.
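
For instance, a ``data.jsonl`` file could contain lines like the following (names and patent numbers are made up; the second line shows an entry without the optional ``award_recipient`` field):

```
{"award_recipient": "Acme Robotics Inc.", "patent_assignee": "Acme Robotics Inc.", "patent_id": [1234567, 7654321]}
{"patent_assignee": "Example Photonics LLC", "patent_id": [2468101]}
```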

To split the full database into random chunks, one for each machine (RPi) you have, you can use the following commands (a concrete example follows below)
* ``shuf f_in.jsonl | split -a1 -d -l $(( $(wc -l <f_in.jsonl) * 1 / N )) - f_out``
* ``find . -type f ! -name "*.*" -exec mv {} {}.jsonl \;``

where ``f_in.jsonl`` is the full database; ``f_out`` is the name you want to give to the chunks (it will be followed by a progressive number); ``N`` is the number of chunks you want to create; and ``.`` is the local folder (if the files are in another folder, substitute it with the correct path).<br>
Remember that the standard input file name for the scraping process is always ``data/data_from_database/data.jsonl``. The easiest way is simply to copy one of the numbered files to each device you have. Then, on each device, create a local copy of it named ``data/data_from_database/data.jsonl`` (preserving the original file helps you remember the progressive number, should it be needed for any reason).<br>
Note: if the number of lines of the original file (``f_in.jsonl``) is not divisible by the number of chunks desired, an additional (N+1)-th file will be created with the few extra lines still unassigned.
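
For example, to split a hypothetical ``data_full.jsonl`` into 3 chunks named ``data_chunk_0.jsonl``, ``data_chunk_1.jsonl``, and ``data_chunk_2.jsonl`` (file names are only illustrative):

```sh
cd data/data_from_database
shuf data_full.jsonl | split -a1 -d -l $(( $(wc -l <data_full.jsonl) * 1 / 3 )) - data_chunk_
find . -type f ! -name "*.*" -exec mv {} {}.jsonl \;
# Then, on the first device:
cp data_chunk_0.jsonl data.jsonl
```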

After the scraping, you can collect the results from each device in a common folder. Rename each ``data/data_from_websites_scraper/results.jsonl`` file with a progressive number (as you did for the chunk files). Then, use a command like this to concatenate the output files into a common one<br>
``cat dod_sbir_citations_to_scrape_with_potential_websites_?.jsonl > dod_sbir_citations_to_scrape_with_potential_websites.jsonl``<br>
where the question mark (``?``) is a shell wildcard that matches any single character (i.e., any of the progressive numbers).<br>
Note: it works only if you have less than 10 files.
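
A quick sanity check of the concatenation (assuming the file names above) is to compare the line counts:

```sh
# The total reported for the per-device files should match the concatenated file
wc -l dod_sbir_citations_to_scrape_with_potential_websites_?.jsonl
wc -l dod_sbir_citations_to_scrape_with_potential_websites.jsonl
```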

## Control the process remotely
From the RPi configuration tool (``sudo raspi-config`` or from the desktop menu), enable the CLI interface (not mandatory) and enable the SSH interface.<br>
You can now control the RPi remotely through SSH, either from another computer or through a smartphone app (there are even some explicitly dedicated to the RPi).

## Use a proxy server
It is possible to use a proxy server. To do so, you must modify the ``proxy.conf.example`` file and rename it to ``proxy.conf``.

Parameters:<br>
* ``PROXY_ADDRESS`` is the address of the proxy server
* ``PROXY_PORT`` is the port of the proxy server
* ``PROXY_USER`` is the proxy server's username
* ``PROXY_PASSWORD`` is the proxy server's password
* ``PROXY_ROTATE`` is the API address called to rotate your proxy server
* ``PROXY_STATUS`` is a function (passed as a string) that must return two values, ``[proxy_ok, proxy_msg]``: a boolean that says whether the proxy rotated correctly and a string that will be printed (e.g., with the IP address assigned by the proxy server)
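
Purely as an illustration, a filled-in ``proxy.conf`` might look roughly like the sketch below, assuming the same ``KEY=value`` layout as ``scraper.conf``; the authoritative syntax is the one in ``proxy.conf.example``, and all values here (addresses, credentials, rotation endpoint, status function) are invented:

```
PROXY_ADDRESS=proxy.example.com
PROXY_PORT=8080
PROXY_USER=myuser
PROXY_PASSWORD=mypassword
# Hypothetical endpoint exposed by the proxy provider to rotate the exit IP
PROXY_ROTATE=https://proxy.example.com/api/rotate
# Hypothetical check, passed as a string; the real signature depends on the scraper code
PROXY_STATUS="(res) => [res.status === 200, 'New IP: ' + res.ip]"
```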

## Run the scraper without systemd
You can also run the scraping process without using systemd. In this case, you must run<br>
* ``node scrape-for-websites.js -i <INPUT_FILE.jsonl> -o <OUTPUT_FILE.jsonl> --sbir <true/false> --proxy <true/false> --timestamp <true/false>``
* ``node scrape-for-vpm-pages.js -i <INPUT_FILE.jsonl> -o <OUTPUT_FILE.jsonl> --sbir <true/false> --proxy <true/false> --timestamp <true/false>``

Parameters:<br>
* ``i`` is the input file
* ``o`` is the output file
* ``sbir`` also uses the SBIR website as a source of information (default ``false``). Note: in any case, only the lines with an ``award_recipient`` also use the SBIR website
* ``proxy`` makes the scraper use the parameters in ``proxy.conf``
* ``timestamp`` also prints the date alongside the messages
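
For example, to run the first-phase scraper on the standard input file, writing to the standard output file, with the SBIR source enabled and timestamps in the log (a plausible invocation built from the options above):

```sh
node scrape-for-websites.js \
  -i data/data_from_database/data.jsonl \
  -o data/data_from_websites_scraper/results.jsonl \
  --sbir true --proxy false --timestamp true
```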

## Acknowledgements
The authors thank the EuroTech Universities Alliance for sponsoring this work. Carlo Bottai was supported by the European Union's Marie Skłodowska-Curie programme for the project Insights on the "Real Impact" of Science (H2020 MSCA-COFUND-2016 Action, Grant Agreement No 754462).