
Request error #12

Open
VanshShah1 opened this issue Aug 5, 2023 · 11 comments
Labels
bug Something isn't working help wanted Extra attention is needed

Comments

@VanshShah1

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 141, in stream_request
    chunk = chunks_queue.get(block=True, timeout=0.01)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/gpt.py", line 2, in <module>
    ans=nerdapi.ask("I'm thinking about building an civilization filled with AI bots interacting with each other and developing")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/vanshshah/Documents/GitHub/neurumtopsecret/experiments/nerdapi.py", line 16, in ask
    for chunk in client.generate("openai:gpt-3.5-turbo", f"Your name is n.e.r.d., an AI language model trained by Neurum. {prompt}", params=params):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 185, in generate
    for chunk in self.stream_request(self.session.post, self.generate_url, headers=headers, json=payload):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 144, in stream_request
    raise error
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/vercel_ai.py", line 132, in request_thread
    response.raise_for_status()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/curl_cffi/requests/cookies.py", line 51, in raise_for_status
    raise RequestsError(f"HTTP Error {self.status_code}: {self.reason}")
curl_cffi.requests.errors.RequestsError: HTTP Error 500:
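For context on why `queue.Empty` shows up before the real HTTP 500: per the traceback, `stream_request` runs the HTTP request in a worker thread that feeds chunks into a queue, while the main thread polls with a short timeout and re-raises any error the worker stored. A minimal sketch of that producer/consumer pattern (the function names follow the traceback, but the internals here are illustrative, not the library's actual code; the failing request is simulated):

```python
import queue
import threading

def stream_request(do_request):
    """Yield chunks produced by a worker thread; re-raise worker errors."""
    chunks_queue: queue.Queue = queue.Queue()
    errors: list = []          # exception raised inside the worker, if any
    done = threading.Event()

    def request_thread():
        try:
            for chunk in do_request():
                chunks_queue.put(chunk)
        except Exception as e:  # e.g. raise_for_status() on HTTP 500
            errors.append(e)
        finally:
            done.set()

    threading.Thread(target=request_thread, daemon=True).start()
    while True:
        try:
            # short timeout so we regularly check for worker failure
            yield chunks_queue.get(block=True, timeout=0.01)
        except queue.Empty:
            if errors:
                raise errors[0]  # surfaces e.g. "HTTP Error 500"
            if done.is_set() and chunks_queue.empty():
                return

def failing_request():
    # simulates the server returning an error mid-stream
    raise RuntimeError("HTTP Error 500: ")
    yield  # unreachable; makes this a generator
```

Consuming `stream_request(failing_request)` re-raises the worker's error in the main thread, which is why the `queue.Empty` from polling appears as the "during handling" exception above the real 500.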

@GidiGumDrop

I have the same issue.

@ading2210 ading2210 added the bug Something isn't working label Aug 5, 2023
@0zl

0zl commented Aug 7, 2023

+1 for this issue. It always returns status 500.

@ading2210 ading2210 added the help wanted Extra attention is needed label Aug 8, 2023
@GidiGumDrop

Testing in Postman also returned HTTP error 500. They might have changed the API?

@sachnun

sachnun commented Aug 9, 2023

This is a hardcoded workaround, but you can try it.

import logging

retry, max_retries = 0, 10
while retry < max_retries:
    try:
        for chunk in client.chat("openai:gpt-3.5-turbo", messages, params=params):
            print(chunk, end="", flush=True)
        print()
        break
    except Exception:
        retry += 1
        if retry == max_retries:
            raise  # give up after the last attempt
        logging.warning(f"Retrying {retry}/{max_retries}...")
INFO:root:Downloading homepage...
INFO:root:Downloading and parsing scripts...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server ErrorWARNING:root:Retrying 1/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
Internal Server ErrorWARNING:root:Retrying 2/10...
INFO:root:Sending to openai:gpt-3.5-turbo: 4 messages
INFO:root:Fetching token from ***
INFO:root:Waiting for response
The 2020 World Series was played at the Globe Life Field in Arlington, Texas.
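The retry loop above can be factored into a small reusable helper. A minimal sketch, assuming any callable that raises on failure; `with_retries` and its parameters are illustrative, not part of the library:

```python
import logging
import time

def with_retries(fn, max_retries=10, base_delay=0.0):
    """Call fn(), retrying up to max_retries times on any exception."""
    for attempt in range(1, max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # give up: re-raise the last error
            logging.warning("Retrying %d/%d...", attempt, max_retries)
            time.sleep(base_delay * attempt)  # linear backoff (0 disables)

# hypothetical usage with the client from this thread:
# with_retries(lambda: [print(c, end="", flush=True)
#                       for c in client.chat("openai:gpt-3.5-turbo",
#                                            messages, params=params)])
```

A small delay between attempts may also help if the 500s are rate-limit related.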

@mak448a

mak448a commented Sep 20, 2023

@ading2210 You added the help wanted label. Would you like me to make a pull request based on #12 (comment)?

@ading2210
Owner

@mak448a Sure, I wouldn't mind a PR for this.

@mak448a

mak448a commented Sep 21, 2023

It's harder than I expected to patch this; it won't stop retrying. I'm going to give up, sorry.

@5eroo

5eroo commented Oct 3, 2023

This usually works:

from vercel_ai import Client
from curl_cffi.requests.errors import RequestsError


def chat_gen(client: Client, messages: list, model: str = "openai:gpt-3.5-turbo", params: dict | None = None) -> str:
    # avoid a mutable default argument
    params = params or {"temperature": 0.8}
    response = ""

    try:
        for chunk in client.chat(model, messages, params=params):
            # make sure we don't process the returned error text
            if chunk != "Internal Server Error":
                response += chunk

        # append the AI's response to the message list
        messages.append({"role": "assistant", "content": response})
        return response

    # error-driven recursive retry (note: unbounded if the 500s never stop)
    except RequestsError:
        return chat_gen(client, messages, model=model, params=params)

@mak448a

mak448a commented Oct 3, 2023

@Recentaly Can you make a pull request?

@5eroo

5eroo commented Oct 3, 2023

I can't fix the error at the source, only in my own scripts.

@Ivang71

Ivang71 commented Oct 25, 2023

Vercel usually returns the 500 error when too many requests come from the same IP; this can be fixed by rotating proxies.
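As a sketch of that workaround: a minimal round-robin proxy pool. The proxy URLs are placeholders, and the commented-out call assumes curl_cffi's requests-style `proxies=` keyword:

```python
from itertools import cycle

class ProxyRotator:
    """Round-robin over a pool of proxy URLs (placeholders here)."""

    def __init__(self, proxy_urls):
        self._pool = cycle(proxy_urls)

    def next_proxies(self):
        """Return a requests-style proxies dict for the next proxy."""
        url = next(self._pool)
        return {"http": url, "https": url}

# hypothetical usage with curl_cffi's requests-style API:
# from curl_cffi import requests
# rotator = ProxyRotator(["http://proxy1:8080", "http://proxy2:8080"])
# resp = requests.post(generate_url, json=payload,
#                      proxies=rotator.next_proxies())
```

On a 500 response you would cycle to the next proxy and retry, rather than hammering the same IP.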
