Releases: plasma-umass/cwhy

v0.5

14 Jan 12:33
27be551

What's Changed

The big change here is the removal of the internal LiteLLM dependency. A LiteLLM server can still be used to achieve the same functionality as before. This change makes working with local LLMs such as Llama easier; see the README for up-to-date documentation.
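
For context, here is a minimal sketch of how an OpenAI-compatible client can target a local server such as a LiteLLM proxy or a llama.cpp server. The URL, API key, and model name are placeholder assumptions, not CWhy defaults; CWhy's own configuration is documented in the README.

    # Sketch: pointing the OpenAI client at a local OpenAI-compatible endpoint.
    # The base_url, api_key, and model below are placeholders, not CWhy defaults.
    import openai

    client = openai.OpenAI(
        base_url="http://localhost:4000/v1",  # assumed address of a local proxy
        api_key="not-needed-locally",         # many local servers ignore the key
    )

    response = client.chat.completions.create(
        model="llama-3",  # whichever model the local server exposes
        messages=[{"role": "user", "content": "Explain this compiler error: ..."}],
    )
    print(response.choices[0].message.content)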

Full Changelog: v0.4.7...v0.5

v0.4.7

09 Apr 18:54
58c4c51

Several warnings (one from CWhy directly, more from LiteLLM) are now silenced and will no longer appear.

Full Changelog: v0.4.6...v0.4.7

v0.4.6

13 Mar 20:21
786cbc0

What's Changed

Some bug fixes, notably for the case where the LLM tries to call multiple functions at once in conversation mode; recent GPT-4/GPT-4 Turbo models seem more willing to do this.
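
As an illustration of that case (a sketch against the OpenAI chat-completions API, not CWhy's actual code): a single assistant message may carry several tool calls, and each one needs its own tool reply before the conversation can continue. The handle_tool_call dispatcher below is hypothetical.

    # Sketch: handling an assistant message that contains several tool calls.
    import json

    def run_tool_calls(messages, response, handle_tool_call):
        assistant_message = response.choices[0].message
        messages.append(assistant_message)
        # Recent GPT-4/GPT-4 Turbo models may return more than one call here.
        for call in assistant_message.tool_calls or []:
            arguments = json.loads(call.function.arguments)
            result = handle_tool_call(call.function.name, arguments)
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": str(result),
            })
        return messages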

Full Changelog: v0.4.5...v0.4.6

Compatibility fix for Python 3.8

05 Feb 23:58

v0.4.4

05 Feb 22:15

What's Changed

Bug fix release.

This release also introduces a new experimental, work-in-progress command: diff-converse.

Full Changelog: v0.4.3...v0.4.4

Preliminary Bedrock support

05 Feb 18:58
a29d555

Full Changelog: v0.4.2...v0.4.3

v0.4.2

30 Jan 20:37
6748ace

What's Changed

Windows support.

Full Changelog: v0.4.1...v0.4.2

v0.4.1

29 Jan 20:00
1b5eeca

What's Changed

  • Performance improvements when CWHY_DISABLE is set (as sketched below).
  • The wrapper script is now named with a .py extension by default.
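
A minimal sketch of the idea behind that fast path, assuming the wrapper simply re-runs the underlying command when the variable is set (not the actual implementation):

    # Sketch of the CWHY_DISABLE fast path; not CWhy's actual code.
    import os
    import subprocess
    import sys

    def main(compiler_command):
        if os.environ.get("CWHY_DISABLE"):
            # Bypass CWhy entirely: run the real compiler and mirror its exit code.
            sys.exit(subprocess.run(compiler_command).returncode)
        # ... otherwise run the compiler, capture diagnostics, and query the LLM.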

Full Changelog: v0.4...v0.4.1

v0.4

16 Jan 19:02
1a34e31

Warning
Wrapper mode is now the default. Watch out for the new interface: this is a breaking change for the CLI.

What's Changed

  • Add new conversation mode, enabling a back-and-forth exchange with GPT, by @nicovank in #48
  • Interface cleanup: wrapper mode default by @nicovank in #49

Full Changelog: v0.3.2...v0.4

Prints cost estimate for each query

05 Oct 13:21

What's Changed

  • Factor out some functions into llm-utils by @nicovank in #37
  • Add cost estimation on text completion by @nicovank in #38
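
For a rough idea of what such an estimate involves (illustrative only; the per-1K-token prices below are assumptions, and the actual rates come from llm-utils and change over time):

    # Illustrative per-query cost estimate from token counts.
    def estimate_cost(prompt_tokens, completion_tokens,
                      prompt_price_per_1k=0.01, completion_price_per_1k=0.03):
        return ((prompt_tokens / 1000) * prompt_price_per_1k
                + (completion_tokens / 1000) * completion_price_per_1k)

    # Example: a query with 1,500 prompt tokens and 500 completion tokens.
    print(f"Approximate cost: ${estimate_cost(1500, 500):.4f}")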

Full Changelog: v0.3.1...v0.3.2