Releases: plasma-umass/cwhy
v0.5
What's Changed
The big change here is the removal of LiteLLM as an internal dependency. A LiteLLM server can still be used to get the same functionality as before, and this change makes working with local LLMs such as Llama easier. See the README for up-to-date documentation.
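For example, one way to keep using LiteLLM is to run its OpenAI-compatible server and point an OpenAI client at it. This is a minimal sketch, not CWhy's own code; the port, model name, and API-key handling are assumptions, so check the README and the LiteLLM docs for the real setup.

```python
# Minimal sketch (not CWhy's code): talk to a local Llama model through a
# LiteLLM server, which exposes an OpenAI-compatible endpoint.
# The port (4000) and model name are assumptions; adjust to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # where the LiteLLM server is listening
    api_key="unused",                  # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="ollama/llama3",  # whichever model the server is configured to serve
    messages=[{"role": "user", "content": "Explain this compiler error: ..."}],
)
print(response.choices[0].message.content)
```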
Full Changelog: v0.4.7...v0.5
v0.4.7
Many warnings (one from CWhy directly, the rest from LiteLLM) are now silenced and will no longer appear.
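For reference, the general technique looks like the sketch below; this is illustrative, not CWhy's exact filter.

```python
# Illustrative sketch of the technique (not CWhy's exact code): suppress
# warnings emitted by a noisy dependency using the standard library.
import warnings

# "module" is a regex matched against the module issuing the warning.
warnings.filterwarnings("ignore", module="litellm")
```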
Full Changelog: v0.4.6...v0.4.7
v0.4.6
What's Changed
Some bug fixes, notably for when the LLM tries to call multiple functions at once in conversation mode. Recent GPT-4/GPT-4 Turbo models seem more willing to do this.
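The fix concerns responses that carry several tool calls at once. As a rough sketch, assuming the OpenAI tool-calling response shape (this is not CWhy's actual code):

```python
import json

def handle_tool_calls(message):
    """Handle every function call in an assistant message, not just one.

    Sketch only: recent GPT-4 models can return several parallel calls in
    a single turn, so assuming a single call silently drops all but the
    first.
    """
    results = []
    for call in message.tool_calls or []:
        args = json.loads(call.function.arguments)  # arguments are JSON-encoded
        results.append((call.function.name, args))
    return results
```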
Full Changelog: v0.4.5...v0.4.6
Compatibility fix for Python 3.8
Full Changelog: v0.4.4...v0.4.5
v0.4.4
What's Changed
Bug fix release.
Also introducing a new experimental, work-in-progress command, diff-converse.
Full Changelog: v0.4.3...v0.4.4
Preliminary Bedrock support
What's Changed
- Moving to litellm; added Bedrock. by @emeryberger in #56
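As a rough illustration of what routing through LiteLLM buys, the same completion call can target Bedrock by switching the model string. This sketch is not CWhy's code; it assumes AWS credentials and a region are configured in the environment, and the model ID is only an example.

```python
# Sketch of LiteLLM's unified interface (illustrative, not CWhy's code).
# Assumes AWS credentials/region are set in the environment; the Bedrock
# model ID below is an example.
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-v2",
    messages=[{"role": "user", "content": "Explain this compiler error: ..."}],
)
print(response.choices[0].message.content)
```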
Full Changelog: v0.4.2...v0.4.3
v0.4.2
v0.4.1
What's Changed
- Performance improvements when CWHY_DISABLE is on (see the sketch below).
- Name the wrapper with a .py extension by default.
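The performance win presumably comes from bailing out before any expensive setup when the variable is set. A minimal sketch of that fast path, assumed rather than taken from CWhy's actual code:

```python
# Assumed sketch of the CWHY_DISABLE fast path: if the variable is set,
# run the wrapped compiler command immediately, before importing any of
# the heavyweight LLM machinery. The argv handling is an assumption.
import os
import subprocess
import sys

if os.environ.get("CWHY_DISABLE"):
    # sys.argv[1:] stands in for the real compiler invocation.
    sys.exit(subprocess.run(sys.argv[1:]).returncode)
```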
Full Changelog: v0.4...v0.4.1
v0.4
Prints cost estimate for each query
What's Changed
- Factor out some functions into llm-utils by @nicovank in #37
- Add cost estimation on text completion by @nicovank in #38
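Cost estimation of this kind is typically token counts multiplied by per-token prices. A toy sketch follows; the rates below are assumptions for illustration, not the actual prices used by llm-utils.

```python
# Toy sketch of per-query cost estimation (illustrative rates, not the
# actual prices used by llm-utils).
PRICES_PER_1K_TOKENS = {"prompt": 0.03, "completion": 0.06}  # USD, assumed

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in USD for one query."""
    return (
        prompt_tokens / 1000 * PRICES_PER_1K_TOKENS["prompt"]
        + completion_tokens / 1000 * PRICES_PER_1K_TOKENS["completion"]
    )

# Example: 1200 prompt tokens + 300 completion tokens comes to $0.054.
print(f"${estimate_cost(1200, 300):.3f}")
```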
Full Changelog: v0.3.1...v0.3.2