Web Scraping

Web scraping in Wikipedia Graph Mapper is handled by the script process_link.py, which is written in Python and powered by the Requests and Beautiful Soup libraries.

Table of Contents

- Receiving the Command
- Scraping Data off Wikipedia
- Processing All Those Data

Receiving the Command

Upon receiving a user request, the function generate_lists is called first to process the user's input.

import requests_toolbelt.adapters.appengine

def generate_lists(self_url, currentLevel, max_level, isMobileBrowser, search_mode):
    # patches Requests, which has compatibility issues with Google App Engine;
    # comment this out when testing on a development server
    requests_toolbelt.adapters.appengine.monkeypatch()

    # stores the user-supplied maximum search depth for the recursive scrape
    global MAX_LEVEL
    MAX_LEVEL = int(max_level)

    # clears the list of scraped entries for each new request
    del entryList[:]

    nodeList, linkList = proc_data(scrape(self_url, currentLevel, search_mode), isMobileBrowser)
    return nodeList, linkList

The call to monkeypatch() is only needed when the application is deployed on Google App Engine. Comment it out when testing on a development server, as it may otherwise raise errors at runtime.
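If you would rather not edit the script between environments, one alternative (a sketch, not part of the original code) is to apply the patch only when the App Engine runtime is detected:

import os
import requests_toolbelt.adapters.appengine

# On the first-generation App Engine standard runtime, SERVER_SOFTWARE starts
# with "Google App Engine" in production and "Development" on the local
# dev_appserver, so the patch is skipped during local testing.
if os.environ.get('SERVER_SOFTWARE', '').startswith('Google App Engine'):
    requests_toolbelt.adapters.appengine.monkeypatch()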

Besides calling the dependent functions scrape and proc_data, which carry out the actual web scraping and data processing respectively, generate_lists also stores the user-supplied maximum search depth max_level in the global variable MAX_LEVEL. This forces the recursive function scrape to return prematurely once it reaches the maximum search depth, as sketched below.
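For illustration, the top of scrape might look like the following; only the signature is taken from the call in generate_lists, and MAX_LEVEL and entryList are the module-level names used there, while the body is a sketch:

def scrape(self_url, currentLevel, search_mode):
    # base case: return as soon as the user-defined maximum depth is exceeded
    if currentLevel > MAX_LEVEL:
        return entryList
    # otherwise fetch the page and recurse with currentLevel + 1
    # (a fuller sketch follows in the next section)
    ...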

Scraping Data off Wikipedia

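At a high level, scrape fetches the article at self_url with Requests, parses the HTML with Beautiful Soup, records an entry for each article linked from the body, and then recurses one level deeper into each of those links. The sketch below illustrates that flow; the link-selection details (the div#bodyContent selector, the namespace filter) and the role of search_mode are assumptions rather than the script's exact logic, and a real implementation would also track visited pages to avoid cycles:

import requests
from bs4 import BeautifulSoup

WIKI_BASE = 'https://en.wikipedia.org'
entryList = []   # module-level list of scraped (parent, child) entries
MAX_LEVEL = 2    # set by generate_lists from the user's request

def scrape(self_url, currentLevel, search_mode):
    # depth guard, as in the previous section
    if currentLevel > MAX_LEVEL:
        return entryList

    # fetch and parse the article
    soup = BeautifulSoup(requests.get(self_url).text, 'html.parser')
    title = soup.find('h1').get_text()

    # walk the internal links in the article body
    for anchor in soup.select('div#bodyContent a[href^="/wiki/"]'):
        href = anchor['href']
        if ':' in href:  # skip non-article namespaces such as File: or Help:
            continue
        # record a parent -> child entry, then descend one level
        entryList.append((title, anchor.get_text()))
        scrape(WIKI_BASE + href, currentLevel + 1, search_mode)

    return entryList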

Processing All Those Data
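proc_data converts the flat list of (parent, child) entries accumulated by scrape into the node and link lists that generate_lists returns to the front end. The following is a minimal sketch of that conversion, assuming a D3-style output format; the exact shape of the dictionaries, and what isMobileBrowser changes about them, are assumptions:

def proc_data(entries, isMobileBrowser):
    # isMobileBrowser presumably reshapes the output for small screens;
    # this sketch ignores it

    # assign a numeric id to every distinct page title
    ids = {}
    for parent, child in entries:
        for title in (parent, child):
            ids.setdefault(title, len(ids))

    # one node per distinct page, one link per scraped (parent, child) pair
    nodeList = [{'id': i, 'name': t} for t, i in ids.items()]
    linkList = [{'source': ids[p], 'target': ids[c]} for p, c in entries]
    return nodeList, linkList

# e.g. proc_data([('Python', 'Guido van Rossum')], False)
# -> ([{'id': 0, 'name': 'Python'}, {'id': 1, 'name': 'Guido van Rossum'}],
#     [{'source': 0, 'target': 1}])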
