It broke again #602
Oh god, I thought I was the only one. Trying to get YouTube API data for research and then 429s are plastered all across my screen. I really hope this gets resolved ASAP.
I have the same bug. I changed the IP proxies and it worked, but today it failed again with a 429.
Same here.
Nothing works, even using proxies.
Any idea when it will be resolved?
It seems the folks at SerpApi have solved it.
The problem originates with Google: the ability to embed a trends graph is broken, so the pytrends API doesn't work. Since this is not an official Google API, and we don't know how much Google cares about Trends, we can't know when this will be fixed.
How did you know 🥹 How did they fix it?
I think we can raise a concern about the downtime in embeds... maybe they will take a look.
It's already happened recently, and it lasted more than a month. I think we should consider alternative scraping with Selenium.
Because their searches work even with periods of less than a week.
By using cookies? That's only temporary 🥹
It works for me if I use a timeframe of `today 1-m`, but it fails if I try `now 1-d`. I wonder if the arguments have changed for trends in the last 7 days?
I confirm it doesn't work from the weekly timeframe down, just like last time, when it lasted more than a month in July.
Do you have a Python function for utilizing results from SerpApi? I have this, but the format is kind of bad: `def query_google_trends(api_key, query_terms, timeframe):`
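For what it's worth, here is one way such a function could look. This is a sketch, not tested against a live key: the endpoint and parameter names follow SerpApi's `google_trends` engine, and the response shape assumed in `tidy_timeline` is taken from SerpApi's published examples, so it may differ from what you actually get back.

```python
import json
import urllib.parse
import urllib.request

SERPAPI_URL = "https://serpapi.com/search.json"

def query_google_trends(api_key, query_terms, timeframe):
    """Fetch interest-over-time data via SerpApi's google_trends engine."""
    params = urllib.parse.urlencode({
        "engine": "google_trends",
        "q": ",".join(query_terms),
        "date": timeframe,          # e.g. "today 3-m"
        "data_type": "TIMESERIES",
        "api_key": api_key,
    })
    with urllib.request.urlopen(f"{SERPAPI_URL}?{params}") as resp:
        return json.load(resp)

def tidy_timeline(payload):
    """Flatten the nested timeline into (date, query, value) rows.
    The payload shape here is an assumption based on SerpApi's examples."""
    rows = []
    for point in payload.get("interest_over_time", {}).get("timeline_data", []):
        for v in point.get("values", []):
            rows.append((point.get("date"), v.get("query"), v.get("extracted_value")))
    return rows
```

Splitting the fetch from the flattening at least keeps the "bad format" problem contained in one small function.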
A 3-month duration is working for me, and I checked SerpApi: it shows USER TYPE LEGIT. It should be USER TYPE SCRAPER for them. They are using something to achieve this.
`except pytrends.exceptions.TooManyRequestsError as e:`
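Catching that exception and retrying with a growing delay is about the best one can do client-side. A generic sketch (the pytrends usage in the docstring is an assumption and is not executed here; the helper itself is plain Python):

```python
import time

def with_retries(fetch, max_tries=5, base_delay=2.0, retry_on=Exception):
    """Call fetch() and retry with exponential backoff on the given exception.

    With pytrends you would pass something like:
        from pytrends.exceptions import TooManyRequestsError
        with_retries(pytrends.interest_over_time, retry_on=TooManyRequestsError)
    """
    for attempt in range(max_tries):
        try:
            return fetch()
        except retry_on:
            if attempt == max_tries - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```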
Do you think it would be useful to set the request with these categories to build this URL? https://trends.google.com/trends/api/explore?tz=420&req=%7B%22comparisonItem%22%3A%5B%7B%22keyword%22%3A%22Retail+sales%22%2C%22geo%22
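For reference, the `req` parameter in that URL is just URL-encoded JSON, so it can be assembled programmatically. A sketch; since the URL above is truncated, every field beyond `keyword` and `geo` is an illustrative guess, not a confirmed part of the request:

```python
import json
import urllib.parse

# Hypothetical payload: only "keyword" and "geo" appear in the truncated URL above.
req = {
    "comparisonItem": [{"keyword": "Retail sales", "geo": "US", "time": "today 12-m"}],
    "category": 0,
    "property": "",
}
params = urllib.parse.urlencode({"tz": 420, "req": json.dumps(req)})
url = "https://trends.google.com/trends/api/explore?" + params
print(url)
```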
I'm also trying to retrieve data with DataForSEO, but it's broken as well; I'm getting lots of errors.
I tried DFS yesterday and it worked well on my side; what errors did you get?
Lots and lots of blanks, and it took forever each time.
I want to know how you are using this in automation such that you end up with too-many-requests errors. Are you searching for keywords in individual cities, or a country as a whole, or just checking the latest search trends every minute? I just found this repo and wanted to know your use cases. A long time back I dealt with similar issues, but with the Google Maps API, hence my curiosity.
SerpApi is a good alternative, but the free tier has limited results per month, and after that it's $50/month, which is too much for me to pay as a student doing research. Really hope this gets fixed soon :/
Save the keywords and loop over them with multiple retries.
That sometimes works.
Try the latest version of pytrends from pypi.org; the latest is 4.9.2. See this thread for an explanation.
Everyone is already using the latest version.
Did you care to read my comment and check? GitHub has version 4.9.1, whereas pypi.org has the latest, 4.9.2. If you don't want to check, fine; let others here try version 4.9.2 and speak for themselves.
Bruh... everyone uses `pip install pytrends`, not the version from GitHub.
Has anyone found a workaround? I'm sick of trying to get 1-day data.
I haven't tried to run this myself (though I plan to in the short term), but one thing to try is keeping a big list of proxies in a text file and modifying the code to read from it. That is how I worked around this issue several years back when dealing with the Google Maps API. I know this code does support a list of proxies, but I'm unsure whether any of you are passing in a long list. The code changes to read from a file should be trivial.
Any update?
Hi everyone, I'm not a developer, but I fiddled around with AI a bit and composed this code, which works better than pytrends and receives few 429 errors over a very large volume. I'm sharing it freely so we can help each other. The code does still need to be reviewed and put into a "formal" shape.
Mmmmm... I think you're a developer now :)
Can you tell me how to get related queries?
It isn't working: `Error 429 while fetching trends data, retrying 1/5`
Your IP may be blocked, and proxies are expensive, so I would suggest installing the Termux app and installing Ubuntu on it. To change your IP once you get a 429, turn mobile data off and on, or toggle Airplane mode.
My script seems to be working half the time now, which is weird, since there has been no new pytrends update.
For Rising and Top Queries:
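For anyone asking how to get related queries, here is a sketch against the standard pytrends API. The keyword and timeframe are placeholders, and the commented calls are not executed here; only the small unpacking helper is:

```python
# from pytrends.request import TrendReq
# pytrends = TrendReq(hl="en-US", tz=360)
# pytrends.build_payload(["bitcoin"], timeframe="today 3-m", geo="US")
# related = pytrends.related_queries()  # {keyword: {"top": df, "rising": df}}

def split_related(related, keyword):
    """Pull the 'top' and 'rising' tables for one keyword out of the
    mapping shape that related_queries() returns."""
    entry = related.get(keyword) or {}
    return entry.get("top"), entry.get("rising")
```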
@CyberTamilan thanks for providing the code. I didn't know the curl_cffi repo, which I will definitely try out in the future. Your code is currently not working for me, even with impersonate. But all the other code I am running for Google Trends is not working either; Google Trends is down again. If the iframe code is not working, then all crawlers will have issues.
Change the category value, and also try to use data along with a timestamp.
I repeat, I am not a professional developer, but I developed this code, which grabs the CSV directly from Trends and then prints times and values. The problem is that it is very heavy to run (on PythonAnywhere it takes a lot of CPU). Help me develop it to make it more usable.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import http.cookies
import pandas as pd
import urllib.parse
import os
import json
import time
from curl_cffi import requests as cffi_requests

MAX_RETRIES = 5

def trend_selenium(keywords):
    browser_versions = ["chrome99", "chrome100", "chrome101", "chrome104", "chrome107", "chrome110"]
    chrome_options = Options()
    chrome_options.add_argument("--headless")
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument("--disable-dev-shm-usage")
    chrome_options.add_argument("--window-size=1920,1080")
    chrome_options.add_argument("--user-data-dir=./user_data")
    driver = webdriver.Chrome(options=chrome_options)
    encoded_keywords = urllib.parse.quote_plus(keywords)
    retries = 0
    file_downloaded = False
    trends_str = "{}"  # default, so we return valid JSON even if every attempt fails
    # Selenium only accepts cookies for the domain currently loaded,
    # so open the site once before injecting the curl_cffi cookies.
    driver.get("https://trends.google.com/")
    while retries < MAX_RETRIES and not file_downloaded:
        response = cffi_requests.get("https://www.google.com", impersonate=browser_versions[retries % len(browser_versions)])
        cookies = response.cookies
        for cookie in cookies:
            cookie_str = str(cookie)
            cookie_dict = http.cookies.SimpleCookie(cookie_str)
            for key, morsel in cookie_dict.items():
                selenium_cookie = {
                    'name': key,
                    'value': morsel.value,
                    'domain': cookie.domain
                }
                driver.add_cookie(selenium_cookie)
        trends_url = f'https://trends.google.com/trends/explore?date=now%207-d&geo=US&q={encoded_keywords}'
        print(trends_url)
        driver.get(trends_url)
        excel_button_selector = "body > div.trends-wrapper > div:nth-child(2) > div > md-content > div > div > div:nth-child(1) > trends-widget > ng-include > widget > div > div > div > widget-actions > div > button.widget-actions-item.export > i"
        try:
            WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, excel_button_selector)))
            driver.find_element(By.CSS_SELECTOR, excel_button_selector).click()
            time.sleep(5)  # pause to wait for the download
            if os.path.exists('multiTimeline.csv'):
                file_downloaded = True
            else:
                print(f"File not downloaded. Attempt {retries + 1} of {MAX_RETRIES}...")
                retries += 1
                time.sleep(retries)  # back off a little longer on each attempt
                driver.refresh()
        except Exception as e:
            print(f"Error during download attempt: {str(e)}")
            retries += 1
            time.sleep(retries)  # back off a little longer on each attempt
    trend_data = {}
    if file_downloaded:
        try:
            trend_df = pd.read_csv('multiTimeline.csv', skiprows=2)
            trend_df['Time'] = pd.to_datetime(trend_df['Time']).dt.strftime('%Y-%m-%d %H:%M:%S')
            data_column = [col for col in trend_df.columns if col not in ['Time']][0]
            trend_data = dict(zip(trend_df['Time'], trend_df[data_column]))
            os.remove('multiTimeline.csv')
            trends_str = json.dumps(trend_data)
        except Exception as e:
            print(f"Error in reading or deleting the file 'multiTimeline.csv': {str(e)}")
    else:
        print("File not downloaded after the maximum number of attempts.")
    driver.quit()
    return trends_str

keywords = "test"
trends_str = trend_selenium(keywords)
print(trends_str)
```
Thanks, it's working with some modifications; my use cases can be satisfied!! Upvoted.
GPT suggests refactoring into smaller functions to make it less CPU intensive:

```python
from selenium import webdriver

MAX_RETRIES = 5

def create_chrome_driver(): ...
def get_trends_url(keywords): ...
def download_trends_data(driver, trends_url): ...
def read_and_delete_csv(): ...
def trend_selenium(keywords): ...

keywords = "test"
```
One workaround I've had work the last few days is using the retry function built into the `TrendReq` call. This does require urllib3 < 2, as urllib3 is forced to update when the requests library is updated to its latest version.
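For reference, this is roughly what that looks like. The parameter values are illustrative, the `TrendReq` call is shown commented out rather than executed, and the sleep formula in the helper mirrors urllib3's documented exponential backoff:

```python
# Illustrative call; pytrends forwards `retries`/`backoff_factor` to urllib3's Retry:
# from pytrends.request import TrendReq
# pytrends = TrendReq(hl="en-US", tz=360, timeout=(10, 25),
#                     retries=3, backoff_factor=0.5)

def backoff_schedule(retries, backoff_factor):
    """Approximate sleep times urllib3 inserts between successive retries."""
    return [backoff_factor * (2 ** n) for n in range(retries)]

print(backoff_schedule(3, 0.5))  # delays roughly double on each retry
```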
Retry doesn't work for me.
It seems that after approximately two weeks of turbulence it's again possible to get responses back from GT at a decent success rate. I say two weeks because that was my starting point, with my target being ~1K search terms for interest over time.
Was anyone able to get a workaround?
You can use Selenium, and Tailscale on mobile as an IPv6 exit node. Turn Airplane mode on and off to change the IP address. Jio is cheap in India, so you can do that easily.
Thanks, @CyberTamilan 👍 How effective is Jio compared to the most popular residential proxies?
Or simply use your home Wi-Fi if you're concerned about data packs. Turn the router off and on, and the IP will change.
I have the same 429 issue with both Selenium and pytrends. Is there any fix as of January 2024?
Change your IP if you get a 429.
I've been using proxies, but Google gives me error code 400, and when I don't use proxies I get error 429. I tried running the code on a new computer, which also gave me 429. Is pytrends broken again?
Is it completely broken for anyone else as well?
I think Google Trends is having the same issue it had last July/August: I'm getting lots of 429s, and the embed code is broken on the website. Anyone having the same problems?