Revise backend to use llama-cpp-python instead of transformers #45

Triggered via push: December 3, 2024 18:29
Status: Success
Total duration: 10m 32s
Artifacts: 2
Matrix: package-release

Artifacts

Produced during runtime
Name          Size
meshgen-cpu   4.55 GB
meshgen-cuda  4.94 GB