Add factories for logits_processors #38

Closed · wants to merge 616 commits

Conversation

@maxdebayser (Contributor) commented

This is a proposed solution to the guided decoding crash problem.

In vLLM there are two requests that can be used to submit more than one sequence at the same time: the completion request in the legacy OpenAI API, and the generate batch request in the tgis gRPC API. In both cases the sampling params are validated only once and a single SamplingParams object is shared between all sequences, even when they belong to different sequence groups. SamplingParams is mostly a data object, but it has a list of logits processors that are executable. If the logits processors are stateless there is no problem. However, the CFGFSM used by the CFGLogitsProcessor has internal state that depends on the sequence generated so far and is updated at each iteration. Sharing it across sequences causes it to raise a KeyError and crash the asyncio event loop.
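
To make the failure mode concrete, here is a minimal, self-contained sketch (not vLLM code; the class and its transitions are purely illustrative) of a stateful processor shared by two sequences that were created from the same SamplingParams:

```python
from typing import Dict, List, Tuple


class ToyStatefulProcessor:
    """Toy stand-in for CFGLogitsProcessor: advances an internal FSM each call."""

    def __init__(self) -> None:
        self._state = "start"
        # Transitions only exist for token histories this FSM has actually seen.
        self._transitions: Dict[Tuple[str, int], str] = {
            ("start", 1): "a",
            ("a", 3): "b",
        }

    def __call__(self, token_ids: List[int], logits: List[float]) -> List[float]:
        if token_ids:
            # Advance using the last token of *whichever* sequence called us.
            self._state = self._transitions[(self._state, token_ids[-1])]
        return logits  # masking of disallowed tokens omitted for brevity


shared = ToyStatefulProcessor()   # one instance shared by the whole batch
shared([1], [0.0] * 8)            # sequence A: FSM advances to state "a"
shared([2], [0.0] * 8)            # sequence B: KeyError, because the FSM state
                                  # encodes A's history, not B's
```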

The first attempted solution added a seq_id parameter to the logits processor call so that it could manage state internally, but that changed the interface that is already used by some external libraries.

The solution proposed here is based on adding factories for stateful logits processors. The basic idea is:

  1. We add processors and factories to the same list so that they are applied in the correct order.
  2. We add a logits_processors list to the SequenceGroupState object.
  3. When the SequenceGroup is created, we iterate over sampling_params.logits_processors, copying the plain processors and calling the factories to populate SequenceGroupState.logits_processors.
  4. The LogitsProcessor(nn.Module) iterates over SequenceGroupState.logits_processors instead of sampling_params.logits_processors (see the sketch after this list).
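
A minimal sketch of steps 1-4, assuming a marker base class is used to tell factories apart from plain processors (the PR may use a different mechanism; all names below are illustrative, not the actual vLLM classes):

```python
from typing import Callable, List, Union

LogitsProcessor = Callable[[List[int], List[float]], List[float]]


class LogitsProcessorFactory:
    """Marker base class: build() returns a fresh stateful processor."""

    def build(self) -> LogitsProcessor:
        raise NotImplementedError


class SequenceGroupState:
    """Per-sequence-group state (step 2): holds the materialized processors."""

    def __init__(self) -> None:
        self.logits_processors: List[LogitsProcessor] = []


def init_group_logits_processors(
    shared: List[Union[LogitsProcessor, LogitsProcessorFactory]],
    state: SequenceGroupState,
) -> None:
    """Step 3: run when the SequenceGroup is created.

    Processors and factories live in the same list (step 1), so iterating
    in order preserves the order in which they must be applied.
    """
    for item in shared:
        if isinstance(item, LogitsProcessorFactory):
            state.logits_processors.append(item.build())  # fresh per group
        else:
            state.logits_processors.append(item)          # stateless: reuse


def apply_logits_processors(
    state: SequenceGroupState, token_ids: List[int], logits: List[float]
) -> List[float]:
    """Step 4: the sampler walks the per-group list instead of
    sampling_params.logits_processors."""
    for proc in state.logits_processors:
        logits = proc(token_ids, logits)
    return logits
```

Because factories and plain processors share one list, the relative order of structured-decoding and user-supplied processors is preserved without extra bookkeeping.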

Here are some diagrams of the current code structure to help visualize the proposed changes:

[Diagram: vllm_sampling]
[Diagram: vllm_seq_classes]

The idea is quite simple, but the execution is a bit tricky due to the nature of async code in Python, where a non-async function called from async code can't in turn call an async function. In the PR I tried to support both using LLMEngine directly and using the AsyncLLMEngine for serving.

@njhill, I was going to add support for returning the processors to the factory, but I realized that it is a little more complicated because only the scheduler knows when a sequence is done. Maybe we can add a callback somewhere in the scheduler where we can add the deallocation call. Actually that might be required, because I realized there is another hidden bug: when sequences are preempted with the recompute policy, the state of the logits processor becomes invalid.

itechbear and others added 30 commits May 6, 2024 11:35
… Dynamic/Static Activations) (#4527)

Follow on to #4332 to enable FP8 checkpoint loading for Mixtral and supersedes #4436.

This PR enables the following checkpoint loading features for Mixtral:

- Supports loading fp8 checkpoints for Mixtral, such as the "nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8" test model
- Supports static or dynamic activation quantization with static weight quantization (all per tensor)
- Supports different scales for each expert weight
- Supports Fp8 in the QKV layer

Notes:

- The expert gate/router always runs at half / full precision for now.
- If there are different weight scales between the QKV layers (for separate QKV weights), they are re-quantized using layer.weight_scale.max() so we can have a single gemm for performance.
Signed-off-by: Joe Runde <[email protected]>
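
For illustration, the re-quantization described in the last note could look roughly like this (a simplified sketch assuming per-tensor fp8 scales, not the actual loader or kernel code):

```python
import torch


def requantize_qkv(weights, scales):
    """Re-quantize separate Q/K/V fp8 weights to one shared scale so the three
    projections can be concatenated and run as a single fused GEMM.

    weights: list of fp8 weight tensors; scales: matching per-tensor scales.
    """
    max_scale = max(float(s) for s in scales)
    # Dequantize each weight with its own scale, concatenate, then re-quantize
    # everything with the shared max scale (magnitudes only shrink, so the
    # values stay within fp8 range).
    dequant = [w.to(torch.float32) * float(s) for w, s in zip(weights, scales)]
    fused = torch.cat(dequant, dim=0) / max_scale
    return fused.to(torch.float8_e4m3fn), max_scale
```
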
`format.sh` now has mypy checks after pulling in upstream changes. This
PR makes the mypy suggested modifications to our code.

---------

Signed-off-by: Travis Johnson <[email protected]>
…ubi (#23)

Changes:
- vLLM v0.4.2 was published today, update our build to use pre-built
libs from their wheel
- bump other dependencies in the image build (base UBI image, miniforge,
flash attention, grpcio-tools, accelerate)
- little cleanup to remove `PYTORCH_` args that are no longer used

---------

Signed-off-by: Travis Johnson <[email protected]>
Cherry-pick of fix commit 6100f4b from ODH:
opendatahub-io/vllm#17

---------

Signed-off-by: Travis Johnson <[email protected]>
Co-authored-by: Daniele Trifirò <[email protected]>
Install and configure use of the NCCL version recommended by vLLM via
the [vllm-nccl](https://github.com/vllm-project/vllm-nccl) package. The
install is a little wonky... but this set of changes should work.

Signed-off-by: Travis Johnson <[email protected]>
divakar-amd and others added 22 commits June 4, 2024 16:02
…4927)

This PR enables the fused topk_softmax kernel used in the MoE layer for HIP
…g the env var that indicates the vLLM backend (#5210)
@njhill (Contributor) left a comment

Thanks @maxdebayser, this looks much better!

The other thing we may need to look at is what to do when sequences are forked. I think this only applies to beam search. Is deep-copying the LPs the right thing to do? Could there be problems deep-copying arbitrary LPs?
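
For illustration only, the fork path might deep-copy the per-group processors along these lines (whether this is safe for arbitrary user-supplied processors, e.g. ones holding locks or other non-copyable handles, is exactly the open question):

```python
import copy
from typing import Callable, List

LogitsProcessor = Callable[[List[int], List[float]], List[float]]


def fork_logits_processors(parent: List[LogitsProcessor]) -> List[LogitsProcessor]:
    """Give a forked (beam-search child) sequence its own processor instances
    so its FSM state can diverge from the parent's from this point on."""
    return [copy.deepcopy(proc) for proc in parent]
```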

Review threads (resolved) on:
- vllm/sampling_params.py
- vllm/model_executor/guided_decoding/__init__.py
- vllm/model_executor/guided_decoding/outlines_decoding.py
@njhill (Contributor) commented Jun 6, 2024

@maxdebayser after addressing the simple comments above (not necessarily the pooling thing yet), maybe you could open an upstream PR? Then we can continue the discussions with others...

Signed-off-by: Max de Bayser <[email protected]>
@maxdebayser (Contributor, Author) commented

Here's the upstream PR: vllm-project/vllm#5329

@tdoublep (Member) commented

Closing as stale; this repo has been re-purposed for Spyre enablement.

@tdoublep closed this Jan 17, 2025
tdoublep pushed a commit that referenced this pull request Jan 20, 2025
This PR fixes the model loading, adapting to the new fms convention.

### Changes:

- passing data_type FP16 for AIU and FP32 on CPU. 

#### Code Tests
Model loading has been tested for all models on CPU and some on AIU (all
models up to 7b in size)

#### Note
Please report any issues regarding loading models or running on AIU immediately.