storm-crawler


StormCrawler is an open source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.

Quickstart

NOTE: These instructions assume that you have Apache Maven installed. You will need to install Apache Storm to run the crawler.

StormCrawler requires Java 11 or above.

The version of Apache Storm to install must match the one defined in the pom.xml file of your topology. The major version of StormCrawler mirrors that of Apache Storm, i.e. whereas StormCrawler 1.x used Storm 1.2.3, the current version requires Storm 2.5.0. Our Ansible-Storm repository contains resources to install Apache Storm using Ansible. Alternatively, our stormcrawler-docker project should help you run Apache Storm on Docker.

Once Storm is installed, the easiest way to get started is to generate a brand new StormCrawler project using:

mvn archetype:generate -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-archetype -DarchetypeVersion=2.10

You'll be asked to enter a groupId (e.g. com.mycompany.crawler), an artifactId (e.g. stormcrawler), a version and a package name.
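If you'd rather skip the interactive prompts, Maven can take the same values on the command line; the groupId, artifactId and version below are placeholders to adapt to your project:

mvn archetype:generate -B -DarchetypeGroupId=com.digitalpebble.stormcrawler -DarchetypeArtifactId=storm-crawler-archetype -DarchetypeVersion=2.10 -DgroupId=com.mycompany.crawler -DartifactId=stormcrawler -Dversion=1.0-SNAPSHOT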

This will not only create a fully formed project containing a POM with the StormCrawler dependency, but also the default resource files, a default CrawlTopology class and a configuration file. Enter the directory you just created (it should have the same name as the artifactId you specified earlier) and follow the instructions in the README file.

Alternatively, if you can't or don't want to use the Maven archetype above, you can simply copy the files from archetype-resources.

Have a look at the code of the CrawlTopology class, the crawler-conf.yaml file as well as the files in src/main/resources/: they are all that is needed to run a crawl topology, as all the other components come from the core module.
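To give an idea of what that wiring looks like, here is a condensed sketch of a crawl topology in the spirit of the class generated by the archetype, connecting an in-memory spout to the fetching, parsing and indexing bolts from the core module. Treat it as an illustration rather than the exact generated file: the seed URL is a placeholder and some of the wiring (for instance, the sitemap bolt) has been left out for brevity.

import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import com.digitalpebble.stormcrawler.ConfigurableTopology;
import com.digitalpebble.stormcrawler.Constants;
import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
import com.digitalpebble.stormcrawler.persistence.StdOutStatusUpdater;
import com.digitalpebble.stormcrawler.spout.MemorySpout;

public class CrawlTopology extends ConfigurableTopology {

    public static void main(String[] args) throws Exception {
        ConfigurableTopology.start(new CrawlTopology(), args);
    }

    @Override
    protected int run(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();

        // Seed URL kept in memory; real crawls use a persistent status backend
        builder.setSpout("spout", new MemorySpout(new String[] { "https://example.com/" }));

        // Group URLs by host/domain so politeness settings apply per queue
        builder.setBolt("partitioner", new URLPartitionerBolt())
                .shuffleGrouping("spout");

        builder.setBolt("fetch", new FetcherBolt())
                .fieldsGrouping("partitioner", new Fields("key"));

        builder.setBolt("parse", new JSoupParserBolt())
                .localOrShuffleGrouping("fetch");

        // Prints documents to stdout instead of sending them to a real index
        builder.setBolt("index", new StdOutIndexer())
                .localOrShuffleGrouping("parse");

        // Collects status updates (discovered links, errors) from the status stream
        builder.setBolt("status", new StdOutStatusUpdater())
                .fieldsGrouping("fetch", Constants.StatusStreamName, new Fields("url"))
                .fieldsGrouping("parse", Constants.StatusStreamName, new Fields("url"))
                .fieldsGrouping("index", Constants.StatusStreamName, new Fields("url"));

        return submit("crawl", conf, builder);
    }
}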

Getting help

The WIKI is a good place to start your investigations, but if you are stuck please use the tag stormcrawler on StackOverflow or ask a question in the discussions section.

DigitalPebble Ltd provides commercial support and consulting for StormCrawler.

Note for developers

Please format your code before submitting a PR with:

mvn git-code-format:format-code -Dgcf.globPattern=**/*

Each commit must include a DCO sign-off, which looks like this:

Signed-off-by: Jane Smith <[email protected]>

You may type this line on your own when writing your commit messages. However, if your user.name and user.email are set in your git config, you can use -s or --signoff to add the Signed-off-by line to the end of the commit message.
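For example, assuming user.name and user.email are already set in your git config (the commit message below is just a placeholder):

git commit -s -m "Fix typo in crawler-conf.yaml"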

Thanks


YourKit supports open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of YourKit Java Profiler and YourKit .NET Profiler, innovative and intelligent tools for profiling Java and .NET applications.
