Sage - Fast scraping tool to quickly find all S3 Buckets in an organization's GitHub Repositories and detect S3 Bucket Takeovers.
Requirement: Python 3.7+
git clone https://github.com/notmarshmllow/sage.git
pip install -r requirements.txt
python3 sage.py -h
Sage takes the name of an organization and tries to find all S3 Buckets in the organization's GitHub Repositories.
The following usage examples show the simplest tasks you can accomplish with Sage.
Please update your GitHub login credentials in the cred.py file.
Make sure two-factor authentication (2FA) is turned OFF on the account, as 2FA can prevent the tool from running.
Credentials are required; without them the tool will not run.
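Sage reads these credentials at startup. A minimal sketch of what cred.py might look like; the variable names here are assumptions for illustration, not necessarily the exact ones sage.py expects:

# cred.py - GitHub login used for authenticated scraping (illustrative field names)
username = "your-github-username"
password = "your-github-password"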
The -org switch takes the organization's name and finds all the S3 Buckets referenced across the organization's GitHub Repositories.
python3 sage.py -org Google
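Conceptually, finding the buckets comes down to pulling S3 URLs out of scraped page text and keeping the bucket names. A minimal sketch of that extraction step, assuming regex matching; the patterns and function name are illustrative, not Sage's actual internals:

import re

# Two common S3 URL shapes: virtual-hosted style and path style.
S3_PATTERNS = [
    re.compile(r"([a-z0-9][a-z0-9.-]+)\.s3\.amazonaws\.com"),
    re.compile(r"s3\.amazonaws\.com/([a-z0-9][a-z0-9.-]+)"),
]

def extract_buckets(text):
    """Return the set of bucket names found in a blob of page text."""
    buckets = set()
    for pattern in S3_PATTERNS:
        buckets.update(pattern.findall(text))
    return buckets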
The -p switch takes an integer: the number of search result pages to scrape (default: 100).
python3 sage.py -org Google -p 8
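A hedged sketch of what this pagination could look like, assuming GitHub's code search is walked page by page with an authenticated requests session; the URL, query string, and the extract_buckets helper from the sketch above are assumptions for illustration:

import requests

def scrape_org(org, pages=100, session=None):
    """Walk up to `pages` of code-search results and collect bucket names."""
    session = session or requests.Session()  # a logged-in session in practice
    found = set()
    for page in range(1, pages + 1):
        resp = session.get(
            "https://github.com/search",
            params={"q": f"org:{org} s3.amazonaws.com", "type": "code", "p": page},
        )
        if resp.status_code != 200:
            break  # out of results, rate limited, or not authenticated
        found |= extract_buckets(resp.text)
    return found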
The -o switch writes the output to a file.
python3 sage.py -org Google -p 8 -o google_s3.txt
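Writing the results out is straightforward; a sketch of what -o presumably does, with illustrative function and variable names:

def write_output(buckets, path):
    """Write one bucket name per line to the output file."""
    with open(path, "w") as f:
        for bucket in sorted(buckets):
            f.write(bucket + "\n")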
Sage finds all the S3 Buckets of an organization and also flags any unclaimed Buckets.
If a bucket is vulnerable, Sage reports that a Takeover of the Bucket is possible.
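The takeover check itself usually comes down to probing the bucket URL: an unclaimed bucket name returns a "NoSuchBucket" error, which means anyone can register it. A minimal sketch of such a probe using the requests library; this is the general technique, not necessarily Sage's exact code:

import requests

def is_takeover_possible(bucket):
    """True if the bucket name is unclaimed (404 + NoSuchBucket), i.e. takeover is possible."""
    resp = requests.get(f"https://{bucket}.s3.amazonaws.com/", timeout=10)
    return resp.status_code == 404 and "NoSuchBucket" in resp.text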
If a repetitive or irrelevant bucket that isn't associated with your target keeps being scraped, you can add it to the exclude list in the sage.py file. This ensures that the bucket name no longer appears in the output.
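A sketch of how that exclude list could work inside sage.py; the list entries and names here are illustrative:

# Bucket names to suppress from the output (example entries).
EXCLUDE = {"github-cloud", "some-noisy-bucket"}

def filter_buckets(buckets):
    """Drop buckets that are on the exclude list before printing."""
    return {b for b in buckets if b not in EXCLUDE}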
All contributions to Sage are highly welcome and much appreciated!
Sage - Developed by @notmarshmllow ❤️