Add factories for logits_processors #38
Conversation
Co-authored-by: SangBin Cho <[email protected]>
Co-authored-by: simon-mo <[email protected]>
…default to None (#4586)
Co-authored-by: LiuXiaoxuanPKU <[email protected]>
… Dynamic/Static Activations) (#4527)

Follow on to #4332 to enable FP8 checkpoint loading for Mixtral; supersedes #4436. This PR enables the following checkpoint loading features for Mixtral:
- Supports loading FP8 checkpoints for Mixtral, such as the "nm-testing/Mixtral-8x7B-Instruct-v0.1-FP8" test model
- Supports static or dynamic activation quantization with static weight quantization (all per tensor)
- Supports different scales for each expert weight
- Supports FP8 in the QKV layer

Notes:
- The expert gate/router always runs at half / full precision for now.
- If there are different weight scales between the QKV layers (for separate QKV weights), they are re-quantized using layer.weight_scale.max() so we can have a single GEMM for performance.
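As a rough illustration of the re-quantization note above (this is not the actual vLLM kernel or layer code; the function name, tensor names, and dtypes here are assumptions), combining separate per-tensor scales into one shared scale might look like:

```python
import torch

# Hypothetical sketch: re-quantize separate q/k/v FP8 weights that were
# quantized with different per-tensor scales so they can share one scale
# and be fused into a single GEMM.
def requantize_to_shared_scale(weights_fp8, scales):
    # Use the largest of the per-tensor scales, mirroring the
    # layer.weight_scale.max() behaviour described above.
    shared_scale = torch.stack(list(scales)).max()
    requantized = []
    for weight, scale in zip(weights_fp8, scales):
        # Dequantize with the original scale, then re-quantize with the
        # shared one; tensors with smaller scales may lose some precision.
        dequantized = weight.to(torch.float32) * scale
        requantized.append((dequantized / shared_scale).to(torch.float8_e4m3fn))
    return requantized, shared_scale
```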
Signed-off-by: Joe Runde <[email protected]>
`format.sh` now has mypy checks after pulling in upstream changes. This PR makes the mypy suggested modifications to our code. --------- Signed-off-by: Travis Johnson <[email protected]>
…ubi (#23)

Changes:
- vLLM v0.4.2 was published today; update our build to use pre-built libs from their wheel
- bump other dependencies in the image build (base UBI image, miniforge, flash attention, grpcio-tools, accelerate)
- little cleanup to remove `PYTORCH_` args that are no longer used

--------- Signed-off-by: Travis Johnson <[email protected]>
Cherry-pick of fix commit 6100f4b from ODH: opendatahub-io/vllm#17 --------- Signed-off-by: Travis Johnson <[email protected]> Co-authored-by: Daniele Trifirò <[email protected]>
Install and configure use of the NCCL version recommended by vLLM via the [vllm-nccl](https://github.com/vllm-project/vllm-nccl) package. The install is a little wonky... but this set of changes should work. Signed-off-by: Travis Johnson <[email protected]>
…4927) This PR enables the fused topk_softmax kernel used in the MoE layer for HIP
Signed-off-by: kevin <[email protected]>
…g the env var that indicates the vLLM backend (#5210)
Thanks @maxdebayser this looks much better!
The other thing we may need to look at is what to do when sequences are forked. I think this only applies to beam search. Is deep-copying the LPs the right thing to do? Could there be problems deep-copying arbitrary LPs?
@maxdebayser after addressing the simple comments above (not necessarily the pooling thing yet), maybe you could open an upstream PR? Then we can continue the discussions with others...
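For reference, a minimal sketch of what deep-copying the logits processors on a fork could look like; the `fork_logits_processors` helper and the state object are hypothetical, not code from this PR:

```python
import copy

# Hypothetical sketch of forking: each child sequence (e.g. a beam search
# candidate) gets its own copies of any stateful logits processors so their
# internal state tracks only that child's tokens. Whether deepcopy is safe
# or cheap for arbitrary user-supplied processors is exactly the open
# question raised above.
def fork_logits_processors(parent_state):
    child_state = copy.copy(parent_state)  # shallow copy of the state object
    child_state.logits_processors = [
        copy.deepcopy(lp) for lp in parent_state.logits_processors
    ]
    return child_state
```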
Signed-off-by: Max de Bayser <[email protected]>
Here's the upstream PR: vllm-project/vllm#5329
Closing as stale; this repo has been re-purposed for Spyre enablement.
This PR fixes the model loading, adapting it to the new fms convention.

### Changes:
- passing data_type FP16 for AIU and FP32 on CPU.

#### Code Tests
Model loading has been tested for all models on CPU and for some on AIU (all models up to 7b in size).

#### Note
Please report any issues regarding loading models / running on AIU immediately.
This is a proposed solution to the guided decoding crash problem.
In vLLM there are two requests that can be used to submit more than one sequence at the same time: the completion request in the legacy OpenAI API, and the generate batch request in the tgis gRPC API. In both cases the sampling params are validated only once and a single SamplingParams object is shared between all sequences, even when they belong to different sequence groups. The SamplingParams is mostly a data object, but it has a list of logits processors that are executable. If the logits processors are stateless there is no problem. However, the CFGFSM used by the CFGLogitsProcessor has an internal state that depends on the sequence generated so far and is updated at each iteration. Sharing it across sequences causes it to raise a KeyError and crash the asyncio event loop.

The first attempted solution added a seq_id parameter to the logits processor call so that it could manage state internally, but that changed the interface that is already used by some external libraries.
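To make the failure mode concrete, here is a simplified sketch (not vLLM's actual classes) of why sharing a single stateful logits processor across sequences breaks:

```python
# Simplified illustration: a stateful logits processor keeps internal state
# tied to ONE sequence's history. If a single instance is shared by two
# interleaved sequences, the state no longer matches either sequence's
# prefix (in the real CFGLogitsProcessor this surfaces as a KeyError in
# the FSM).
class StatefulLogitsProcessor:
    def __init__(self):
        self.tokens_seen = 0  # state that tracks a single sequence

    def __call__(self, generated_token_ids, logits):
        # Assumes it is called once per decoding step of one sequence;
        # with a shared instance this assumption is violated.
        assert len(generated_token_ids) == self.tokens_seen, \
            "state diverged: processor was shared across sequences"
        self.tokens_seen += 1
        return logits
```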
The solution proposed here is based on adding factories for stateful logits processors. The basic idea is:
- allow factories to be passed alongside plain processors in the sampling_params.logits_processors
- when a sequence group is created, copy the logits_processors and call the factories to populate SequenceGroupState.logits_processors (a rough sketch follows below)
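A rough sketch of that idea, assuming a marker class for factories; the names `LogitsProcessorFactory` and `build_group_logits_processors` are illustrative and may not match the exact code in this PR:

```python
from typing import Callable, List

# Illustrative sketch only; the exact factory interface in this PR may differ.
LogitsProcessor = Callable  # (token_ids, logits) -> logits

class LogitsProcessorFactory:
    """Wraps a zero-argument callable that builds a fresh stateful processor."""
    def __init__(self, make: Callable[[], LogitsProcessor]):
        self.make = make

def build_group_logits_processors(sampling_params, seq_group_state) -> None:
    processors: List[LogitsProcessor] = []
    for item in sampling_params.logits_processors or []:
        if isinstance(item, LogitsProcessorFactory):
            # Stateful: build a fresh instance for this sequence group so
            # no mutable state is shared across groups.
            processors.append(item.make())
        else:
            # Stateless processors can simply be reused as-is.
            processors.append(item)
    # Populated per sequence group, as described above.
    seq_group_state.logits_processors = processors
```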
Here are some diagrams to illustrate the current code structure to better visualize the proposed changes:
The idea is quite simple, but the execution is a bit tricky due to the nature of async code in Python, where an async call can't call a non-async function that calls an async function. In the PR I tried to support both using LLMEngine directly as well as the AsyncLLMEngine used for serving.

@njhill, I was going to add support for returning the processors to the factory, but I realized that it was a little bit more complicated because only the scheduler knows when the sequence is done. Maybe we can add a callback somewhere in the scheduler where we can add the deallocation call. Actually that might be required, because I realized that there is another hidden bug: when sequences are preempted with the recompute policy, the state of the logits processor becomes invalid.
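For illustration, returning processors to their factory could look something like the sketch below; the pooling class and the scheduler hook that would call `release()` are hypothetical, since the scheduler currently exposes no such callback:

```python
# Purely illustrative sketch of "returning processors to the factory" via a
# scheduler hook; names and the hook itself are assumptions, not this PR's code.
class PooledLogitsProcessorFactory:
    def __init__(self, make):
        self._make = make
        self._pool = []

    def acquire(self):
        # Reuse a previously released processor when one is available.
        return self._pool.pop() if self._pool else self._make()

    def release(self, processor):
        # Would be invoked from a (hypothetical) scheduler callback once the
        # sequence group finishes or is preempted with the recompute policy,
        # so stale state is never carried over to the next request.
        if hasattr(processor, "reset"):
            processor.reset()
        self._pool.append(processor)
```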