
[Bug] Backend crashes with a KeyError while running normally #2812

Closed
seungduk-yanolja opened this issue Jan 9, 2025 · 5 comments
Labels
bug Something isn't working

Comments

@seungduk-yanolja
Contributor

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

At some point, the backend crashes with an error message like the one below. The key in the KeyError is different for every crash.

[2025-01-09 10:41:54 DP4 TP0] Prefill batch. #new-seq: 1, #new-token: 1364, #cached-token: 164, cache hit rate: 15.77%, token usage: 0.62, #running-req: 349, #queue-req: 0                                                            
[2025-01-09 10:41:54] INFO:     10.248.201.16:45314 - "POST /v1/chat/completions HTTP/1.1" 200 OK                                                                                                                                      
[2025-01-09 10:41:54] INFO:     10.248.201.16:44802 - "POST /v1/chat/completions HTTP/1.1" 200 OK                                                                                                                                      
[2025-01-09 10:41:54] INFO:     10.248.201.16:55014 - "POST /v1/chat/completions HTTP/1.1" 200 OK                                                                                                                                      
[2025-01-09 10:41:54] INFO:     10.248.201.16:52254 - "POST /v1/chat/completions HTTP/1.1" 200 OK                                                                                                                                      
[2025-01-09 10:41:54 DP7 TP0] Prefill batch. #new-seq: 1, #new-token: 338, #cached-token: 164, cache hit rate: 15.76%, token usage: 0.71, #running-req: 373, #queue-req: 0                                                             
[2025-01-09 10:41:54 DP6 TP0] Prefill batch. #new-seq: 1, #new-token: 300, #cached-token: 165, cache hit rate: 15.78%, token usage: 0.61, #running-req: 340, #queue-req: 0                                                             
[2025-01-09 10:41:54] INFO:     10.248.201.16:54718 - "POST /v1/chat/completions HTTP/1.1" 200 OK                                                                                                                                      
[2025-01-09 10:41:54] DetokenizerManager hit an exception: Traceback (most recent call last):                      
  File "/data/nas-2/seungduk/sglang/python/sglang/srt/managers/detokenizer_manager.py", line 224, in run_detokenizer_process                                                                                                           
    manager.event_loop()                                                                                                                                                                                                               
  File "/data/nas-2/seungduk/sglang/python/sglang/srt/managers/detokenizer_manager.py", line 158, in event_loop                                                                                                                        
    s = self.decode_status[recv_obj.rids[i]]                                                                                                                                                                                           
KeyError: '2fd383090e854bfb9fba6c5b0bc39939'                                                                       
                                                                                                                                                                                                                                       
[2025-01-09 10:41:54 DP1 TP0] Prefill batch. #new-seq: 1, #new-token: 295, #cached-token: 164, cache hit rate: 15.58%, token usage: 0.81, #running-req: 398, #queue-req: 3908                                                          
[2025-01-09 10:41:54 DP2 TP0] Prefill batch. #new-seq: 1, #new-token: 339, #cached-token: 164, cache hit rate: 15.75%, token usage: 0.68, #running-req: 399, #queue-req: 703                                                           
[2025-01-09 10:41:54 DP0 TP0] Prefill batch. #new-seq: 1, #new-token: 713, #cached-token: 165, cache hit rate: 15.76%, token usage: 0.81, #running-req: 396, #queue-req: 249                                                           
[2025-01-09 10:41:54 DP1 TP0] Prefill batch. #new-seq: 1, #new-token: 585, #cached-token: 164, cache hit rate: 15.58%, token usage: 0.81, #running-req: 398, #queue-req: 3907                                                          
[2025-01-09 10:41:54 DP2 TP0] Prefill batch. #new-seq: 2, #new-token: 1302, #cached-token: 330, cache hit rate: 15.75%, token usage: 0.68, #running-req: 398, #queue-req: 701                                                          
Killed

Reproduction

Random; no deterministic trigger was identified. The server was launched with:

python -m sglang.launch_server --model-path Qwen/Qwen2.5-7B-Instruct --dp 8 --trust-remote-code --host 0.0.0.0 --port 8001 --max-running-requests 400 --schedule-conservativeness 1.2
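
The crash only shows up under sustained concurrent load (hundreds of running requests plus a deep queue, as in the logs above). Below is a minimal load sketch of that kind of traffic, assuming the server started by the command above is reachable at localhost:8001 and the openai client from the environment list; the prompts are placeholders:

import asyncio
import openai  # openai==1.58.1 from the environment list below

# Points at the server from the launch command; "EMPTY" is a dummy API key.
client = openai.AsyncOpenAI(base_url="http://localhost:8001/v1", api_key="EMPTY")

async def one_request(i: int):
    resp = await client.chat.completions.create(
        model="Qwen/Qwen2.5-7B-Instruct",
        messages=[{"role": "user", "content": f"placeholder prompt {i}"}],
    )
    return resp.choices[0].message.content

async def main():
    # Keep far more requests in flight than --max-running-requests 400,
    # mirroring the #running-req / #queue-req numbers in the logs.
    await asyncio.gather(*(one_request(i) for i in range(2000)))

asyncio.run(main())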

Environment

NVIDIA-SMI 550.127.08 | Driver Version: 550.127.08 | CUDA Version: 12.4
aiohappyeyeballs==2.4.4          
aiohttp==3.11.11
aiosignal==1.3.2
annotated-types==0.7.0
anthropic==0.42.0
anyio==4.7.0
asttokens==3.0.0                  
async-timeout==5.0.1
attrs==24.3.0 
certifi==2024.12.14
charset-normalizer==3.4.0               
click==8.1.7             
cloudpickle==3.1.0    
compressed-tensors==0.6.0
cuda-python==12.6.2.post1
datasets==3.2.0
decorator==5.1.1 
decord==0.6.0   
dill==0.3.8      
diskcache==5.6.3 
distro==1.9.0  
einops==0.8.0   
exceptiongroup==1.2.2
executing==2.1.0
fastapi==0.115.6     
filelock==3.16.1
flashinfer==0.1.6+cu121torch2.4
frozenlist==1.5.0           
fsspec==2024.9.0    
gguf==0.10.0            
h11==0.14.0 
hf_transfer==0.1.8
httpcore==1.0.7
httptools==0.6.4
httpx==0.27.2      
huggingface-hub==0.27.0
idna==3.10      
importlib_metadata==8.5.0
iniconfig==2.0.0  
interegular==0.3.3  
ipython==8.31.0    
jedi==0.19.2            
Jinja2==3.1.4                                                                                                                                                                                                                          
jiter==0.8.2
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
lark==1.2.2      
litellm==1.55.7
llvmlite==0.43.0
lm-format-enforcer==0.10.6
MarkupSafe==3.0.2
matplotlib-inline==0.1.7
mistral_common==1.5.1
modelscope==1.21.0 
mpmath==1.3.0
msgpack==1.1.0   
msgspec==0.18.6     
multidict==6.1.0
multiprocess==0.70.16    
nest-asyncio==1.6.0
networkx==3.4.2
numba==0.60.0  
numpy==1.26.4 
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==12.560.30
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.6.85
nvidia-nvtx-cu12==12.1.105
openai==1.58.1
opencv-python-headless==4.10.0.84
orjson==3.10.12
outlines==0.0.46
packaging==24.2
pandas==2.2.3
parso==0.8.4
partial-json-parser==0.2.1.1.post4
pexpect==4.9.0
pillow==10.4.0
pluggy==1.5.0
prometheus-fastapi-instrumentator==7.0.0
prometheus_client==0.21.1
prompt_toolkit==3.0.48
propcache==0.2.1
protobuf==5.29.2
psutil==6.1.1
ptyprocess==0.7.0
pure_eval==0.2.3
py-cpuinfo==9.0.0
pyairports==2.1.1
pyarrow==18.1.0
pybind11==2.13.6
pycountry==24.6.1
pydantic==2.10.4
pydantic_core==2.27.2
Pygments==2.18.0
pytest==8.3.4
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-multipart==0.0.20
pytz==2024.2
PyYAML==6.0.2
pyzmq==26.2.0
ray==2.40.0
referencing==0.35.1
regex==2024.11.6
requests==2.32.3
rpds-py==0.22.3
safetensors==0.4.5
sentencepiece==0.2.0
setproctitle==1.3.4
sgl-kernel==0.0.2.post11
-e git+https://github.com/sgl-project/sglang@58f9060efe26d4377af06dcb2e33778fb012e4f3#egg=sglang&subdirectory=python
six==1.17.0
sniffio==1.3.1
stack-data==0.6.3
starlette==0.41.3
sympy==1.13.3
tiktoken==0.7.0
tokenizers==0.21.0
tomli==2.2.1
torch==2.4.0
torchao==0.7.0
torchvision==0.19.0
tqdm==4.67.1
traitlets==5.14.3
transformers==4.47.1
triton==3.0.0
typing_extensions==4.12.2
tzdata==2024.2
urllib3==2.2.3
uvicorn==0.34.0
uvloop==0.21.0
vllm==0.6.3.post1
watchfiles==1.0.3
wcwidth==0.2.13
websockets==14.1
xformers==0.0.27.post2
xgrammar==0.1.7
xxhash==3.5.0
yarl==1.18.3
zipp==3.21.0
@Xu-Chen
Contributor

Xu-Chen commented Jan 10, 2025

We also encountered this issue.

Sorry to bother you @merrymercy @zhyncs, could you please take a look at this issue?

@zhyncs added the bug label on Jan 10, 2025
@seungduk-yanolja changed the title from "[Bug]" to "[Bug] Backend crashes while running with a KeyError" on Jan 10, 2025
@seungduk-yanolja changed the title from "[Bug] Backend crashes while running with a KeyError" to "[Bug] Backend crashes with a KeyError while running normally" on Jan 10, 2025
@seungduk-yanolja
Contributor Author

# Incremental decoding
output_strs = []
for i in range(bs):
    try:
        s = self.decode_status[recv_obj.rids[i]]
        new_text = read_texts[i][len(surr_texts[i]) :]
        if recv_obj.finished_reasons[i] is None:
            # Streaming chunk: update the decode status
            if len(new_text) > 0 and not new_text.endswith("�"):
                s.decoded_text = s.decoded_text + new_text
                s.surr_offset = s.read_offset
                s.read_offset = len(s.decode_ids)
                new_text = ""
            else:
                new_text = find_printable_text(new_text)

        output_strs.append(
            self.trim_matched_stop(
                s.decoded_text + new_text,
                recv_obj.finished_reasons[i],
                recv_obj.no_stop_trim[i],
            )
        )
    except KeyError:
        # The rid was evicted from decode_status; abort this request
        # instead of crashing the whole DetokenizerManager.
        logger.error(
            f"Request ID {recv_obj.rids[i]} missing from decode_status dictionary - treating as error"
        )
        output_strs.append("")
        recv_obj.finished_reasons[i] = FINISH_ABORT("Request state lost during processing")
Is this the right way to fix the issue?

@seungduk-yanolja
Contributor Author

seungduk-yanolja commented Jan 11, 2025

Any reason to use LimitedCapacityDict?
It seems the item was evicted from it. I will temporarily work around the issue by increasing the dict size:

LimitedCapacityDict(capacity=1 << 24)

DetokenizerManager persists for the entire lifetime of the process, and the dict evicts its oldest entries once it reaches capacity. So there can be a problem if the number of concurrent requests exceeds 32768, or if a long-running request is followed by many short-running ones.
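
For illustration, here is a minimal sketch of how a capacity-bounded dict of this kind behaves. BoundedDict is a hypothetical stand-in, not sglang's actual LimitedCapacityDict; it just shows how an in-flight request's entry can be evicted and later trigger the KeyError:

from collections import OrderedDict

# Hypothetical stand-in for sglang's LimitedCapacityDict, illustration only.
class BoundedDict(OrderedDict):
    def __init__(self, capacity: int):
        super().__init__()
        self.capacity = capacity

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.capacity:
            # FIFO eviction: drops the oldest inserted entry even if that
            # request is still in flight.
            self.popitem(last=False)

decode_status = BoundedDict(capacity=2)
decode_status["rid-1"] = "state"  # long-running request
decode_status["rid-2"] = "state"
decode_status["rid-3"] = "state"  # evicts "rid-1" while it is still running
print("rid-1" in decode_status)   # False -> a later decode_status["rid-1"] raises KeyError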

@nxphi47

nxphi47 commented Jan 13, 2025

+1, ran into a similar issue; cannot pinpoint exactly which prompt causes it.

@seungduk-yanolja
Contributor Author

+1, ran into a similar issue; cannot pinpoint exactly which prompt causes it.

Please apply the patch from PR #2839; the issue will then be gone.
