Spell Searcher

This is a school project carried out at UQAC as part of the course 8INF803.

In this project we build a web crawler that retrieves Pathfinder spell and creature data. The data are saved as JSON files so they can be processed with Apache Spark. We also provide a Scala GUI to search through spells more efficiently. This is a small project meant to get started with web crawling and distributed computing and, overall, to serve as an introduction to big data.

For more information please refer to the Instructions (in French).
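To give a concrete idea of the Spark part of that pipeline, here is a minimal sketch of how the crawled JSON could be queried from Scala. The file path (output/spells.json) and the name/description fields are assumptions made for illustration; the actual layout is defined by the crawler.

import org.apache.spark.sql.SparkSession

object SpellSearchSketch {
  def main(args: Array[String]): Unit = {
    // Local Spark session, matching the single-machine desktop use case
    val spark = SparkSession.builder()
      .appName("SpellSearcher")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical path and fields: the actual JSON layout is produced by the crawler
    val spells = spark.read.json("output/spells.json")

    // Example query: names of spells whose description mentions "fire"
    spells
      .filter($"description".contains("fire"))
      .select("name")
      .show(truncate = false)

    spark.stop()
  }
}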

Interface

Two interfaces are available for this project. The main one is the Desktop interface, written in Scala Swing and using Apache Spark™ as a backend to process requests.

The orange "Update Database" button launches the python crawler in order to retrieve spells and creatures information from Archives of Nethys. The first launch will take longer than the others as the crawler will automatically load the database from the website and generate the JSON files needed for Apache Spark™ processing.

You will need the Scrapy package installed on your machine for the crawler to work. Expect the crawler to take between 2 and 10 minutes to retrieve all the information, depending on your internet speed. Subsequent updates will be faster because HTTP requests are cached locally.
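As a purely illustrative sketch, the "Update Database" button could call the crawler through scala.sys.process along these lines; the crawler directory and the spells spider name below are assumptions, not the repository's actual layout.

import java.io.File
import scala.sys.process._

object CrawlerLauncher {
  // Hypothetical invocation: the "crawler" directory and the "spells" spider name are assumed
  def updateDatabase(): Int = {
    val crawlerDir = new File("crawler")
    val command    = Seq("scrapy", "crawl", "spells")
    Process(command, crawlerDir).!   // blocks until the crawl finishes and returns the exit code
  }
}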

Scala Swing Desktop Interface
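The desktop window is built with Scala Swing and styled with FlatLaf. A stripped-down sketch of such a window, purely illustrative and not the project's actual code, could look like this:

import javax.swing.UIManager
import scala.swing._
import com.formdev.flatlaf.FlatLightLaf

object SpellSearcherGui extends SimpleSwingApplication {
  // Install the FlatLaf look and feel before any Swing component is created
  UIManager.setLookAndFeel(new FlatLightLaf())

  def top: MainFrame = new MainFrame {
    title = "Spell Searcher"
    contents = new BoxPanel(Orientation.Vertical) {
      contents += new TextField(30)             // search field
      contents += new Button("Update Database") // would trigger the crawler
    }
  }
}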

The second interface is a web interface written in PHP with a MySQL database as a backend. It is a more straightforward approach, available here.

Installation

Scrapy

To install the latest version of Scrapy:

$ pip install scrapy

You can also refer to the documentation: Scrapy Installation.

Apache Spark

To install and configure Apache Spark on Linux, please refer to spark-setup.sh. You can also refer to the documentation: Apache Spark Installation.

Built With

  • Scala 2.12.12 - Scala is a general-purpose programming language providing support for both object-oriented programming and functional programming.
  • Oracle OpenJDK 11.0.9 - OpenJDK (Open Java Development Kit) is a free and open-source implementation of the Java Platform Standard Edition (Java SE).
  • sbt 1.4.5 - sbt is a build tool for Scala, Java, and more. It requires Java 1.8 or later.
  • Apache Spark™ 3.0.1 - Apache Spark is an open-source distributed general-purpose cluster-computing framework.
  • scala-swing 3.0.0 - scala-swing is a UI library that wraps most of Java Swing for Scala in a straightforward manner. The widget class hierarchy loosely resembles that of Java Swing.
  • FlatLaf 0.45 - FlatLaf is a modern open-source cross-platform Look and Feel for Java Swing desktop applications.
  • Python 3.9.1 - Python is an interpreted, high-level and general-purpose programming language.
  • Scrapy 2.4.1 - Scrapy is a free and open-source web-crawling framework written in Python.
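For reference, the versions listed above translate roughly into the following build.sbt sketch. The artifact coordinates are the usual published ones and are given here as an assumption; the repository's actual build file may differ.

// build.sbt (sketch)
scalaVersion := "2.12.12"

libraryDependencies ++= Seq(
  "org.apache.spark"       %% "spark-sql"   % "3.0.1",
  "org.scala-lang.modules" %% "scala-swing" % "3.0.0",
  "com.formdev"             % "flatlaf"     % "0.45"
)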

Authors

License

This project is licensed under the GNU AGPLv3 License - see the LICENSE.md file for details.

License chosen thanks to choosealicense.com
