Request error #12
I have the same issue.
+1 for this issue. It always returns status 500.
Testing in Postman also returned HTTP error 500. They might have changed the API?
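For reference, this is what the failure looks like when reproduced through the library itself rather than Postman (the message content is illustrative; the call shape follows the snippets later in this thread):

import vercel_ai

client = vercel_ai.Client()
messages = [{"role": "user", "content": "Hello"}]
params = {"temperature": 0.8}

# currently dies with curl_cffi.requests.errors.RequestsError: HTTP Error 500
for chunk in client.chat("openai:gpt-3.5-turbo", messages, params=params):
    print(chunk, end="", flush=True)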
This is hacky, but you can try it (assumes client, messages, and params are already set up):

import logging

retry, max_retries = 0, 10
while retry < max_retries:
    try:
        # stream the response; any failure falls through to the except block
        for chunk in client.chat("openai:gpt-3.5-turbo", messages, params=params):
            print(chunk, end="", flush=True)
        print()
        break
    except Exception:
        retry += 1
        if retry == max_retries:
            raise
        logging.warning(f"Retrying {retry}/{max_retries}...")
        continue

Output:

INFO:root:Downloading homepage...
INFO:root:Downloading and parsing scripts...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server Error
WARNING:root:Retrying 1/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server Error
WARNING:root:Retrying 2/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
The 2020 World Series was played at the Globe Life Field in Arlington, Texas.
@ading2210 You added the help wanted label. Would you like me to make a pull request based on #12 (comment)?
@mak448a Sure, I wouldn't mind a PR for this.
It's harder than I expected to patch this. It won't stop retrying. I'm going to give up. Sorry.
This usually works:

from vercel_ai import Client
from curl_cffi.requests.errors import RequestsError

def chat_gen(client: Client, messages: list, model: str = "openai:gpt-3.5-turbo", params: dict = {"temperature": 0.8}) -> str:
    response: str = ""
    try:
        for chunk in client.chat(model, messages, params):
            # make sure we don't process the returned error string
            if chunk != 'Internal Server Error':
                response += chunk
        # append the AI's response to the message list
        messages.append({'role': 'assistant', 'content': response})
        return response
    # error-driven recursive call: retry on HTTP errors
    except RequestsError:
        return chat_gen(client, messages, params=params, model=model)
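A quick usage sketch for the helper above (the message format is taken from the snippet itself; the question is illustrative and matches the reply seen in the earlier log):

import vercel_ai

client = vercel_ai.Client()
messages = [{"role": "user", "content": "Who won the World Series in 2020?"}]
reply = chat_gen(client, messages)
print(reply)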
@Recentaly Can you make a pull request? |
I cannot get the error fixed in the library source, only worked around in my own scripts.
Vercel usually gives out the 500 error when making too many requests from the same IP; this can be worked around by rotating proxies.
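A minimal sketch of that idea, assuming the Client constructor accepts a proxy argument (check the library's README for the actual parameter name; the proxy URLs are placeholders):

import itertools
import vercel_ai
from curl_cffi.requests.errors import RequestsError

# placeholder proxy URLs; the proxy= keyword is an assumption, not a confirmed API
proxies = itertools.cycle([
    "socks5h://user:pass@proxy1.example:1080",
    "socks5h://user:pass@proxy2.example:1080",
])

def chat_with_rotation(messages, model="openai:gpt-3.5-turbo", attempts=5):
    params = {"temperature": 0.8}
    for _ in range(attempts):
        client = vercel_ai.Client(proxy=next(proxies))  # assumed keyword argument
        try:
            return "".join(client.chat(model, messages, params=params))
        except RequestsError:
            continue  # HTTP 500: rotate to the next proxy and retry
    raise RuntimeError("all proxies exhausted")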
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 141, in stream_request
chunk = chunks_queue.get(block=True, timeout=0.01)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/queue.py", line 179, in get
raise Empty
_queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/gpt.py", line 2, in
ans=nerdapi.ask("I'm thinking about building an civilization filled with AI bots interacting with each other and developing")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/nerdapi.py", line 16, in ask
for chunk in client.generate("openai:gpt-3.5-turbo", f"Your name is n.e.r.d., an AI language model trained by Neurum. {prompt}", params=params):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 185, in generate
for chunk in self.stream_request(self.session.post, self.generate_url, headers=headers, json=payload):
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 144, in stream_request
raise error
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 132, in request_thread
response.raise_for_status()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/curl_cffi/requests/cookies.py", line 51, in raise_for_status
raise RequestsError(f"HTTP Error {self.status_code}: {self.reason}")
curl_cffi.requests.errors.RequestsError: HTTP Error 500:
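For completeness, the same retry idea applied at the call site of the generate call in that traceback (the client.generate call shape mirrors the traceback above; the retry cap and prompt handling are illustrative):

import vercel_ai
from curl_cffi.requests.errors import RequestsError

client = vercel_ai.Client()
params = {"temperature": 0.8}

def ask(prompt: str, max_retries: int = 5) -> str:
    for attempt in range(max_retries):
        try:
            # generate() streams chunks, same call shape as in the traceback
            return "".join(client.generate("openai:gpt-3.5-turbo", prompt, params=params))
        except RequestsError:
            if attempt == max_retries - 1:
                raise
    return ""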