Search engine for the Interplanetary Filesystem. Sniffs the DHT gossip and indexes file and directory hashes.
Metadata and contents are extracted using ipfs-tika, searching is done using Elasticsearch 5, and queueing is done using RabbitMQ. The crawler is implemented in Go; the API and frontend are built using Node.js.
Preliminary, minimal documentation can be found in the docs folder.
Building a search engine like this takes a considerable amount of resources (money and TLC). If you are able to help out with either, mail us at [email protected] or find us at #ipfssearch on Freenode (or #ipfs-search:chat.weho.st on Matrix).
For discussing and suggesting features, look at the project planning.
- Go 1.11
- Elasticsearch 5.x
- RabbitMQ / AMQP server
- Node.js 9.x
Configuration can be done using a YAML configuration file; see example_config.yml.
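For reference, a minimal configuration might look like the sketch below. The key names here are assumptions derived from the corresponding environment variable names; consult example_config.yml for the authoritative format.

```yaml
# Hypothetical sketch of a minimal configuration. Key names are assumed
# from the environment variables and may not match example_config.yml
# exactly; the URLs are common local defaults, not mandated values.
ipfs-tika-url: http://localhost:8081
ipfs-api-url: http://localhost:5001
elasticsearch-url: http://localhost:9200
amqp-url: amqp://guest:guest@localhost:5672/
```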
The following configuration options can be overridden by environment variables:
- IPFS_TIKA_URL
- IPFS_API_URL
- ELASTICSEARCH_URL
- AMQP_URL
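For instance, the service endpoints can be overridden in the environment before starting the crawler. The URLs below are assumed local defaults for each service, not values prescribed by ipfs-search:

```shell
# Override service endpoints via environment variables.
# Assumed local defaults: IPFS API on 5001, Elasticsearch on 9200,
# RabbitMQ on 5672, ipfs-tika on 8081 -- adjust to your deployment.
export IPFS_TIKA_URL=http://localhost:8081
export IPFS_API_URL=http://localhost:5001
export ELASTICSEARCH_URL=http://localhost:9200
export AMQP_URL=amqp://guest:guest@localhost:5672/
```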
```
$ go get ./...
$ make
```
Local installation is done using Vagrant:

```
git clone https://github.com/ipfs-search/ipfs-search.git ipfs-search
cd ipfs-search
vagrant up
```
This starts up the API on port 9615, Elasticsearch on 9200, and RabbitMQ on 15672. The Vagrant setup does not currently start the frontend.
Automated deployment can be done on any (virtual) Ubuntu 16.04 machine. The full production stack is automated and can be found here.