
LLM4PatchCorrectness

Prerequisites

  1. Python 3.10+
  2. CUDA 12.2
  3. Conda

Python Library Installation

$ conda create -n llm4correct python=3.10
$ conda activate llm4correct
$ bash install_library.sh
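
As a quick sanity check (assuming install_library.sh installs PyTorch, which the CUDA requirement suggests; verify against the script itself), you can confirm that the GPU is visible:

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"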

Download the pre-trained model

Please download the pre-trained model from this link: https://drive.google.com/drive/folders/1MryWp2iqXAVo4UHxnN-bTspQkysM7Fpy?usp=sharing

In-context learning Inference

Please run the following pipeline script:

$  bash run_pipeline.sh

Notes:

  1. '--task' selects the target APR tool; its format is Patch_{APR_Tool_Name}, e.g. Patch_ACS (see the example invocation after this list).
  2. '--option' selects the guiding information given to the LLM; "bug-trace-testcase-similar" is the default.
  3. '--out_dir' will contain the logits generated by the LLM.
  4. the default '--max_length' is 4000; if you run into OOM problems, reduce it accordingly.
  5. the default '--batch_size' is 1; if you have spare GPU memory, set it to 2 to speed up inference.
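
For illustration only, a direct invocation of the underlying inference script might look like the following. The script name run_inference.py is a hypothetical placeholder (check run_pipeline.sh for the actual entry point); the flags are the ones described in the notes above:

$ python run_inference.py \
    --task Patch_ACS \
    --option bug-trace-testcase-similar \
    --out_dir out/ \
    --max_length 4000 \
    --batch_size 1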

Read Experiment Results

After finishing all inferences, run the following Python script to read the results for each APR tool:

$ python read_results_enhanced.py

To train the CL model

$ cd cl_pretrain

This directory also includes implementations of many recent papers on in-context learning.
