lora's "catastrophic forgetting" problem #311
Comments
Hi! We faced this problem too, so there are several things you can do:
Combining these two, you can find a balance between performance on HumanEval and on your codebase. However, you can never completely eliminate the problem while you're using the fine-tune. There are a couple of methods to prepare the data that make the problem less visible, though: https://arxiv.org/abs/2312.05934. I guess we'll revisit this in some time, but you're welcome to contribute if you have some ideas.
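One common data-preparation approach in that line of work is "replay" mixing: interleave a small fraction of general-domain samples into the fine-tuning set so the model keeps seeing data close to its pretraining distribution. A minimal sketch of that idea (the function name, `replay_ratio` parameter, and sample lists are hypothetical, not part of this repo):

```python
import random

def mix_with_replay(domain_samples, general_samples, replay_ratio=0.1, seed=0):
    """Blend a fraction of general-domain ("replay") samples into the
    domain fine-tuning set to soften catastrophic forgetting.

    replay_ratio is the number of replay samples relative to the size
    of the domain set (0.1 -> one replay sample per ten domain samples).
    """
    rng = random.Random(seed)  # fixed seed for a reproducible mix
    n_replay = int(len(domain_samples) * replay_ratio)
    # Draw without replacement; cap at what is available.
    replay = rng.sample(general_samples, min(n_replay, len(general_samples)))
    mixed = list(domain_samples) + replay
    rng.shuffle(mixed)  # interleave so replay data appears throughout training
    return mixed
```

Tuning `replay_ratio` upward shifts the balance back toward general performance (e.g. HumanEval) at the cost of slower adaptation to your codebase.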
Thanks, I will try. I have two questions:
Hi,
Thanks for your open-source work!
How did you overcome the catastrophic forgetting problem in LoRA fine-tuning?
Performance dropped a lot on the HumanEval dataset after LoRA fine-tuning on my own dataset.