Feature request
Do you plan on integrating dynamic serving of LoRA modules, so that new modules can be added or removed at runtime instead of having to restart the engine and register the new modules in the LORA_ADAPTERS environment variable?
Motivation
I am training multiple LoRA modules and want to serve them as soon as possible through my inference endpoint, without having to manually restart the engine and add the new modules there. For example, one could send a request to some load_lora endpoint with a URL/path to the new module to add, as in the sketch below.
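For illustration only, a minimal sketch of what such a request might look like; the `/load_lora` path and its payload are hypothetical, since no such endpoint exists in TGI today:

```python
import requests

# Hypothetical example of the proposed API -- neither the /load_lora
# endpoint nor this payload exist in TGI; this only illustrates the idea.
resp = requests.post(
    "http://localhost:8080/load_lora",          # hypothetical endpoint
    json={"adapter_id": "my-org/my-new-lora"},  # hypothetical payload: Hub id or local path
    timeout=30,
)
resp.raise_for_status()  # adapter would be available for inference from here on
```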
Your contribution
I could open up a PR.
Hi @rikardradovac, thank you for opening this issue. We are not currently planning to support dynamic LoRA loading in TGI, because we load all of the weights into memory at startup to ensure optimal performance.
It's possible to load many LoRAs at startup, but TGI does not provide a way to add or remove them afterwards. Might I recommend checking out Predibase's LoRAX inference server (https://github.com/predibase/lorax)? I believe it supports dynamic LoRA adapters and is built on top of TGI's foundations.
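For reference, a minimal sketch of the startup-time flow that does exist today, assuming a server launched with adapters preloaded via LORA_ADAPTERS and per-request adapter selection through the "adapter_id" parameter (the adapter ids, prompt, and port here are illustrative):

```python
import requests

# Assumes a TGI server started with several adapters preloaded, e.g.
#   LORA_ADAPTERS=predibase/customer_support,predibase/magicoder
# A request can then pick one of the preloaded adapters via the
# "adapter_id" generation parameter.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "How do I reset my password?",
        "parameters": {
            "max_new_tokens": 64,
            "adapter_id": "predibase/customer_support",  # must be one of the preloaded adapters
        },
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```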