Fetch URLs

Bash script to spider a site, follow links, and fetch URLs -- with some filtering. A list of URLs will be generated and saved to a text file.


How To Use

  1. Download the script and save it to the desired location on your machine.
  2. You'll need wget installed on your machine in order to continue. To check whether it's already installed (if you're on Linux or macOS, chances are you already have it), open Git Bash, Terminal, etc. and run the command: $ wget. If you receive an error message or command not found, you're probably on Windows (a quick check is also sketched after this list). Here are the Windows installation instructions:
    1. Download the latest wget binary for Windows from https://eternallybored.org/misc/wget/ (the binaries are available as a zip with documentation, or as a standalone exe; I'd recommend just the exe).
    2. If you downloaded the zip, extract all files (if the Windows built-in zip utility gives an error, use 7-Zip). If you downloaded the 64-bit version, rename the wget64.exe file to wget.exe.
    3. Move wget.exe to C:\Windows\System32\
  3. Open Git Bash, Terminal, etc. and run the fetchurls.sh script:
    $ bash /path/to/script/fetchurls.sh
  4. You will be prompted to enter the full URL (including the http:// or https:// protocol) of the site you would like to crawl:
    #
    #    Fetch a list of unique URLs for a domain.
    #
    #    Enter the full URL ( http://example.com )
    #    URL:
  5. You will be prompted to enter the directory where you would like the generated results to be saved (defaults to your Desktop):
    #
    #    Save file to location
    #    Directory: /c/Users/username/Desktop
  6. You will then be prompted to change or accept the name of the output file (simply press enter to accept the default filename):
    #
    #    Save file as
    #    Filename (no extension): example-com
  7. When complete, the script will show a message and the location of your output file:
    #
    #    Fetching URLs for example.com
    #
    #
    #    Finished with 1 result!
    #
    #    File Location:
    #    /c/Users/username/Desktop/example-com.txt
    #
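If you're unsure whether wget is available, the quick check below (a minimal sketch, not part of fetchurls.sh itself) can be run in Git Bash or Terminal before step 3; the messages it prints are only illustrative:

    # Minimal sketch: verify wget is on the PATH before running fetchurls.sh
    if command -v wget >/dev/null 2>&1; then
      echo "wget found: $(wget --version | head -n 1)"
    else
      echo "wget not found -- see step 2 for installation instructions"
    fi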

The script will crawl the site and compile a list of valid URLs into a text file saved to the directory you chose (your Desktop by default).
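For context, the heart of a script like this is typically a recursive wget spider whose log is then scraped for unique URLs. The snippet below is a rough sketch of that approach only; the flags, log file name, and output file name are illustrative and not the exact contents of fetchurls.sh:

    # Rough sketch of the spider-and-collect idea (illustrative, not the script's exact code)
    URL="https://example.com"
    wget --spider --recursive --level=inf --no-verbose --output-file=crawl.log "$URL"
    # Pull URLs out of the crawl log and de-duplicate them
    grep -oE 'https?://[^ ]+' crawl.log | sort -u > example-com.txt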

Extra Info

  • To change the default file output location, edit line #7 (or simply use the interactive prompt). Default: ~/Desktop

  • Ensure that you enter the correct protocol and subdomain for the URL, or the output file may be empty or incomplete. The script will, however, attempt to follow the first HTTP redirect, if found. For example, entering the incorrect http:// protocol for https://adamdehaven.com will automatically fetch the URLs for the HTTPS version.

  • The script will run successfully as long as the target URL returns a status of HTTP 200 OK.

  • The script, by default, filters out the following file extensions:

    • .css
    • .js
    • .map
    • .xml
    • .png
    • .gif
    • .jpg
    • .JPG
    • .bmp
    • .txt
    • .pdf
  • The script filters out several common WordPress files and directories such as:

    • /wp-content/uploads/
    • /feed/
    • /category/
    • /tag/
    • /page/
    • /widgets.php/
    • /wp-json/
    • xmlrpc
  • To change or edit the regular expressions that filter out certain pages, directories, and file types, you may edit lines #35 through #44 of the script (a rough illustration of this kind of filtering is sketched below). Caution: if you're not familiar with grep and regular expressions, you can easily break the script.
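As a rough illustration of the filtering described above (not the script's exact expressions; urls.txt and filtered-urls.txt are placeholder file names), the excluded extensions and WordPress paths could be dropped from a URL list with grep like so:

    # Illustrative only: remove unwanted extensions and common WordPress paths from a URL list
    grep -vE '\.(css|js|map|xml|png|gif|jpg|JPG|bmp|txt|pdf)(\?|$)' urls.txt \
      | grep -vE '/(wp-content/uploads|feed|category|tag|page|wp-json)/|/widgets\.php|xmlrpc' \
      > filtered-urls.txt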
