Hm, interesting. Normally we'd only see that rate limit error for an actual rate limit issue. A few thoughts:
Can you go into the Azure Portal and verify the capacity of both your embedding and chat deployments?
Can you determine whether the rate limit error is coming from the embedding API or the chat completion API? I can't tell from that traceback; it doesn't seem to show the original call that caused it. You could try looking at the "Failures" tab in App Insights, or try turning the logging level up to DEBUG. More ways to see errors are in the aka.ms/appservice-logs docs.
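To illustrate the DEBUG-logging suggestion: a minimal sketch, assuming the backend logs through Python's standard logging module (which the openai client library also uses). With DEBUG enabled, each outgoing request is logged, so you can see whether the embedding call or the chat completion call is the one returning the 429.

```python
import logging

# Assumption: the app uses Python's standard logging module.
# DEBUG on the root logger plus the "openai" logger surfaces every
# request the OpenAI client makes, including the one that gets the 429.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("openai").setLevel(logging.DEBUG)
```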
Hi @pamelafox, thank you for your response. Here you can see the quota:
Regarding the logs, I tried all the solutions you suggested, but got no different results. The only logs I can access are the same ones provided above.
I debugged the code manually in VS Code. The code breaks at this line:
chatapproach.py
async for event_chunk in await chat_coroutine:
However, the extra_info variable in chatapproach.py is correctly populated with the data received from the AI Search index. Let me know if I can help in any other way!
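For context, the failing line consumes a streamed chat completion, so a 429 raised by the service surfaces only once the stream is iterated, after retrieval has already succeeded. A self-contained sketch of that failure mode, where RateLimitHit stands in for openai.RateLimitError and fake_chat_stream for the real chat coroutine:

```python
import asyncio


class RateLimitHit(Exception):
    """Stand-in for openai.RateLimitError (HTTP 429)."""


async def fake_chat_stream():
    # Stand-in for the awaited chat coroutine: yields one chunk,
    # then fails the way a 429 from the service would mid-stream.
    yield {"delta": "Hello"}
    raise RateLimitHit("Rate limit is exceeded. Try again in 86400 seconds.")


async def consume():
    chunks, error = [], None
    try:
        # Mirrors `async for event_chunk in await chat_coroutine:`
        async for event_chunk in fake_chat_stream():
            chunks.append(event_chunk)
    except RateLimitHit as exc:
        error = str(exc)
    return chunks, error


chunks, error = asyncio.run(consume())
```

This matches the observed behavior: extra_info is already populated (retrieval worked), and the error only appears when the chat stream is consumed.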
I deployed the project by using the Free version steps suggested in the repository, no specific features except the "text-embedding-3-large" model for embedding. I set these env variables:
AZURE_OPENAI_CHATGPT_DEPLOYMENT_CAPACITY=1
AZURE_OPENAI_CHATGPT_DEPLOYMENT_VERSION="2024-07-18"
AZURE_OPENAI_CHATGPT_MODEL="gpt-4o-mini"
AZURE_OPENAI_EMB_DEPLOYMENT="embedding"
AZURE_OPENAI_EMB_DEPLOYMENT_VERSION=1
AZURE_OPENAI_EMB_DIMENSIONS=1536
AZURE_OPENAI_EMB_MODEL_NAME="text-embedding-3-large"
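One thing worth noting about the settings above: Azure OpenAI deployment capacity is specified in units of 1,000 tokens-per-minute (TPM), so AZURE_OPENAI_CHATGPT_DEPLOYMENT_CAPACITY=1 gives the chat deployment roughly 1K TPM. A quick back-of-the-envelope check (the 10K-TPM threshold below is an illustrative assumption, not an official limit):

```python
# Capacity units -> tokens-per-minute for an Azure OpenAI deployment.
def capacity_to_tpm(capacity_units: int) -> int:
    return capacity_units * 1000


chat_tpm = capacity_to_tpm(1)  # AZURE_OPENAI_CHATGPT_DEPLOYMENT_CAPACITY=1

# A single RAG turn sends the question plus several retrieved source
# chunks, which can easily exceed 1,000 tokens in one request, so a
# 1-unit deployment is very easy to throttle.
likely_throttled = chat_tpm < 10_000
```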
For the embedding model, I assigned its full available capacity:
When I try to chat with the system, I encounter this error:
By analyzing the logs, I identified these errors:
2025-01-27T22:13:40 Welcome, you are now connected to log-streaming service.
Starting Log Tail -n 10 of existing logs ----

/home/LogFiles/__lastCheckTime.txt
01/27/2025 10:21:52

/home/LogFiles/Application/diagnostics-20250127.txt
2025-01-27 22:11:16.934 +00:00 [Information] Microsoft.AspNetCore.Hosting.Diagnostics: Request starting HTTP/1.1 GET http://app-backend-yv7itbiyrtoks.azurewebsites.net/ - - -
2025-01-27 22:11:16.934 +00:00 [Trace] Middleware: Request came in on normal port, sending to normal target http://169.254.129.8:8000/
2025-01-27 22:11:16.934 +00:00 [Trace] Middleware: Forwarding request to http://169.254.129.8:8000/
2025-01-27 22:11:16.941 +00:00 [Trace] Middleware: Forwarded request finished in 6.582ms 200 OK
2025-01-27 22:11:16.941 +00:00 [Debug] Microsoft.AspNetCore.Server.Kestrel.Connections: Connection id "0HN9UOBO0QJ7Q" completed keep alive response.
2025-01-27 22:11:16.941 +00:00 [Information] Microsoft.AspNetCore.Hosting.Diagnostics: Request finished HTTP/1.1 GET https://app-backend-yv7itbiyrtoks.azurewebsites.net/ - 200 758 text/html;+charset=utf-8 7.2630ms
2025-01-27 22:12:17.144 +00:00 [Debug] Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets: Connection id "0HN9UOBO0QJ7Q" received FIN.
2025-01-27 22:12:17.144 +00:00 [Debug] Microsoft.AspNetCore.Server.Kestrel.Connections: Connection id "0HN9UOBO0QJ7Q" disconnecting.
2025-01-27 22:12:17.144 +00:00 [Debug] Microsoft.AspNetCore.Server.Kestrel.Connections: Connection id "0HN9UOBO0QJ7Q" stopped.
2025-01-27 22:12:17.144 +00:00 [Debug] Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets: Connection id "0HN9UOBO0QJ7Q" sending FIN because: "The Socket transport's send loop completed gracefully."

/home/LogFiles/kudu/trace/app-backen_kudu_01a60a5b3f-400ac7f0-f8a5-4c1f-9331-caa0a4e0a543.txt
2025-01-27T10:41:04 Startup Request, url: /api/deployments/?$orderby=ReceivedTime%20desc&$top=20&api-version=2022-03-01, method: GET, type: request, pid: 87,1,20, SCM_DO_BUILD_DURING_DEPLOYMENT: True, ScmType: None

/home/LogFiles/kudu/trace/app-backen_kudu_0cc8d438c8-8aececba-e038-4538-b039-0776fffe6ec6.txt
2025-01-27T08:55:07 Startup Request, url: /api/zipdeploy?isAsync=true, method: POST, type: request, pid: 87,1,5, ScmType: None, SCM_DO_BUILD_DURING_DEPLOYMENT: True

/home/LogFiles/kudu/trace/app-backen_kudu_1b9ad1c00f-35fac343-7921-4a64-9fff-f5f8e22f377f.txt
2025-01-27T08:57:49 Startup Request, url: /api/zipdeploy?isAsync=true, method: POST, type: request, pid: 87,1,7, ScmType: None, SCM_DO_BUILD_DURING_DEPLOYMENT: True

/home/LogFiles/kudu/trace/app-backen_kudu_1b9ad1c00f-b69444bb-c9d3-4fd1-a5fa-aed8303e627a.txt
2025-01-27T08:57:53 Outgoing response, type: response, statusCode: 404, statusText: NotFound

/home/LogFiles/kudu/trace/app-backen_kudu_889fb6c35c-8cccfcd9-aaab-45cf-be25-36de64f677b5.txt
2025-01-27T09:19:28 Startup Request, url: /api/zipdeploy?isAsync=true, method: POST, type: request, pid: 87,1,5, SCM_DO_BUILD_DURING_DEPLOYMENT: True, ScmType: None

/home/LogFiles/kudu/trace/app-backen_kudu_b2fa72335a-3e3e25d1-5782-4706-b601-bca5e10f9216.txt
2025-01-27T09:51:56 Startup Request, url: /api/deployments/?$orderby=ReceivedTime%20desc&$top=20&api-version=2022-03-01, method: GET, type: request, pid: 87,1,5, SCM_DO_BUILD_DURING_DEPLOYMENT: True, ScmType: None

/home/LogFiles/kudu/trace/app-backen_kudu_bdebaaa3fe-29118d87-d5b0-4fae-9de5-bf75093440a7.txt
2025-01-27T11:09:59 Startup Request, url: /api/logstream/, method: GET, type: request, pid: 87,1,24, ScmType: None, SCM_DO_BUILD_DURING_DEPLOYMENT: True

/home/LogFiles/kudu/trace/app-backen_kudu_fcbf7b4104-97bc3659-4a02-44ef-8779-ca4a3d396084.txt
2025-01-27T10:44:51 Startup Request, url: /api/deployments/?$orderby=ReceivedTime%20desc&$top=20&api-version=2022-03-01, method: GET, type: request, pid: 93,1,7, ScmType: None, SCM_DO_BUILD_DURING_DEPLOYMENT: True

/home/LogFiles/2025_01_27_lw0mdlwk00001A_default_docker.log
2025-01-27T22:09:47.6230372Z   File "/tmp/8dd3eb3b999bec5/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1623, in _request
2025-01-27T22:09:47.6230408Z     return await self._retry_request(
2025-01-27T22:09:47.6230449Z   File "/tmp/8dd3eb3b999bec5/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1670, in _retry_request
2025-01-27T22:09:47.6230467Z     return await self._request(
2025-01-27T22:09:47.6230507Z   File "/tmp/8dd3eb3b999bec5/antenv/lib/python3.11/site-packages/openai/_base_client.py", line 1638, in _request
2025-01-27T22:09:47.6230528Z     raise self._make_status_error_from_response(err.response) from None
2025-01-27T22:09:47.6230551Z openai.RateLimitError: Error code: 429 - {'error': {'code': '429', 'message': 'Rate limit is exceeded. Try again in 86400 seconds.'}}
2025-01-27T22:11:16.9404301Z 2025-01-27 22:11:16,940 - 169.254.129.9:48200 - "GET / HTTP/1.1" 200

/home/LogFiles/2025_01_27_lw0mdlwk00001A_default_scm_docker.log
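As a side note, the retry window in that error stands out: 86,400 seconds is exactly 24 hours, which looks more like a daily quota (e.g. a free-tier token cap) being exhausted than ordinary per-minute throttling. A small sketch pulling the wait time out of that message:

```python
import re

# The message from the traceback above.
error_message = "Rate limit is exceeded. Try again in 86400 seconds."

match = re.search(r"Try again in (\d+) seconds", error_message)
retry_after_s = int(match.group(1)) if match else None
retry_after_h = retry_after_s / 3600  # 86400 s = 24 h: a daily cap, not per-minute throttling
```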
Can you please help me fix this issue? I'm on an App Service with the B2 plan, as required (B1 wasn't working, so I changed it).
Thank you!