This project is an efficient SEO scraping tool that collects data from the first page of Google search results for a list of keywords. It is built with Python, Selenium, and PySimpleGUI, and lets you customize the proxy settings and user agent for a smoother scraping experience.
The scraped data includes each website's URL, its H1, H2, and H3 tags, and its meta content and meta description. The data is then saved to a CSV file for further analysis and processing.
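The core idea is a per-URL extraction of exactly those fields. The snippet below is a minimal sketch of that idea using Selenium element lookups and Python's standard csv module; it is not the project's actual code, and the helper name `scrape_page`, the output filename `results.csv`, and the example URL are placeholders.

```python
# Minimal sketch (not the project's actual code): extract the fields listed
# above from one URL and write them to a CSV row. scrape_page, results.csv,
# and the example URL are placeholders.
import csv

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By


def scrape_page(driver, url):
    """Return URL, H1/H2/H3 text, and meta description for a single page."""
    driver.get(url)

    def joined_text(tag):
        # Concatenate the visible text of every <tag> element on the page.
        return " | ".join(
            el.text for el in driver.find_elements(By.TAG_NAME, tag) if el.text
        )

    try:
        meta_description = driver.find_element(
            By.CSS_SELECTOR, 'meta[name="description"]'
        ).get_attribute("content") or ""
    except NoSuchElementException:
        meta_description = ""

    return {
        "url": url,
        "h1": joined_text("h1"),
        "h2": joined_text("h2"),
        "h3": joined_text("h3"),
        "meta_description": meta_description,
    }


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        rows = [scrape_page(driver, "https://example.com")]
    finally:
        driver.quit()

    with open("results.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
```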
- Scrape data from the first page of Google search results
- Customize proxy settings and user agent (see the sketch after this list)
- Save scraped data in a CSV file
- Efficient and user-friendly GUI
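The proxy and user-agent customization from the feature list can be pictured as passing Chrome command-line switches when the driver is created. The following is a hedged sketch under that assumption; the project's GUI may wire these values differently, and `build_driver` and the example values are illustrative, not the project's real API.

```python
# Hedged sketch: apply a proxy and a custom user agent via Chrome switches.
# build_driver and the example values below are illustrative only.
from typing import Optional

from selenium import webdriver
from selenium.webdriver.chrome.options import Options


def build_driver(proxy: Optional[str] = None, user_agent: Optional[str] = None):
    options = Options()
    if proxy:
        # e.g. "127.0.0.1:8080" -- placeholder address, use your own proxy
        options.add_argument(f"--proxy-server={proxy}")
    if user_agent:
        # any UA string you want Chrome to report to Google
        options.add_argument(f"--user-agent={user_agent}")
    return webdriver.Chrome(options=options)


# Example call with hypothetical values:
# driver = build_driver(proxy="127.0.0.1:8080", user_agent="Mozilla/5.0 ...")
```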
- Install Python 3.7+ if not already installed.
- Clone the repository: `git clone https://github.com/andreireporter13/SEO-1st-page-Google-data-scrape.git`
- Change to the project directory: `cd SEO-1st-page-Google-data-scrape`
- Install the required dependencies: `pip install -r requirements.txt`
- Run the script: `python main_SEO_automation.py`
- Enter the keywords (up to 3) separated by commas in the input field.
- Click on "Run Scraper" to begin scraping data from the first page of Google search results.
- The progress bar will show the progress of the scraping process.
- Once the scraping is complete, a CSV file with the scraped data will be saved in the project directory (a sketch for reading it back follows this list).
- You can customize the proxy settings and user agent from the menu options.
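Because the output is a plain CSV, it can be fed straight into further analysis. Below is a minimal sketch of reading it back with Python's standard csv module; the filename `results.csv` and the column names are assumptions, so check the header row the script actually produces.

```python
# The output filename and column names here are assumptions; adjust them to
# match the header row of the CSV the script actually writes.
import csv

with open("results.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["url"], "-", row.get("meta_description", ""))
```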
This project is licensed under the MIT License.
Feel free to reach out to the authors. For more information about our work, visit our websites: https://webautomation.ro and https://www.laurentiumarian.ro/
Special Thanks to: @tragdate