[Bug] The commonsenseqa_datasets dataset from commonsenseqa_gen_c946f2 raises TypeError: 'ListWrapper' object is not iterable #1760
Comments
The overall dataset setup is as follows; both siqa and commonsenseqa report this error:
with read_base():

Same error here. Is there a solution?

I tested commonsenseqa and hit the same error. How can it be resolved?

But the others all show: TypeError: Value after * must be an iterable, not LazyObject

commonsense_qa: same error here. Is there a fix?

We can prioritize fixing the syntax errors first, while the specific evaluation logic hasn't been verified yet.
Prerequisites
Issue type
I am evaluating with an officially supported task / model / dataset.
Environment
{'CUDA available': True,
'CUDA_HOME': '/usr/local/cuda-12.1',
'GCC': 'gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0',
'GPU 0,1,2,3,4,5': 'NVIDIA RTX A6000',
'MMEngine': '0.10.5',
'MUSA available': False,
'NVCC': 'Cuda compilation tools, release 12.1, V12.1.66',
'OpenCV': '4.10.0',
'PyTorch': '2.5.1+cu124',
'PyTorch compiling details': 'PyTorch built with:\n'
' - GCC 9.3\n'
' - C++ Version: 201703\n'
' - Intel(R) oneAPI Math Kernel Library Version '
'2024.2-Product Build 20240605 for Intel(R) 64 '
'architecture applications\n'
' - Intel(R) MKL-DNN v3.5.3 (Git Hash '
'66f0cb9eb66affd2da3bf5f8d897376f04aae6af)\n'
' - OpenMP 201511 (a.k.a. OpenMP 4.5)\n'
' - LAPACK is enabled (usually provided by '
'MKL)\n'
' - NNPACK is enabled\n'
' - CPU capability usage: AVX512\n'
' - CUDA Runtime 12.4\n'
' - NVCC architecture flags: '
'-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90\n'
' - CuDNN 90.1\n'
' - Magma 2.6.1\n'
' - Build settings: BLAS_INFO=mkl, '
'BUILD_TYPE=Release, CUDA_VERSION=12.4, '
'CUDNN_VERSION=9.1.0, '
'CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, '
'CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 '
'-fabi-version=11 -fvisibility-inlines-hidden '
'-DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO '
'-DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON '
'-DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK '
'-DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE '
'-O2 -fPIC -Wall -Wextra -Werror=return-type '
'-Werror=non-virtual-dtor -Werror=bool-operation '
'-Wnarrowing -Wno-missing-field-initializers '
'-Wno-type-limits -Wno-array-bounds '
'-Wno-unknown-pragmas -Wno-unused-parameter '
'-Wno-strict-overflow -Wno-strict-aliasing '
'-Wno-stringop-overflow -Wsuggest-override '
'-Wno-psabi -Wno-error=old-style-cast '
'-Wno-missing-braces -fdiagnostics-color=always '
'-faligned-new -Wno-unused-but-set-variable '
'-Wno-maybe-uninitialized -fno-math-errno '
'-fno-trapping-math -Werror=format '
'-Wno-stringop-overflow, LAPACK_INFO=mkl, '
'PERF_WITH_AVX=1, PERF_WITH_AVX2=1, '
'TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, '
'USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, '
'USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, '
'USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, '
'USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, '
'USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, \n',
'Python': '3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) '
'[GCC 13.3.0]',
'TorchVision': '0.20.1+cu124',
'lmdeploy': "not installed:No module named 'lmdeploy'",
'numpy_random_seed': 2147483648,
'opencompass': '0.3.5+4653f69',
'sys.platform': 'linux',
'transformers': '4.47.0'}
Reproduces the problem - code/configuration sample
from mmengine.config import read_base
from opencompass.models import HuggingFacewithChatTemplate

with read_base():
    from opencompass.configs.datasets.commonsenseqa.commonsenseqa_gen_c946f2 import commonsenseqa_datasets

llama_instruct_ms_front_french_bias_model = [
    dict(
        type=HuggingFacewithChatTemplate,
        abbr='llama_instruct_ms_front_french_bias_model',
        path='/mnt/data1/LLMs/temp_lingual_llamaitlora_singleadvtrain/meaningful_sentence_french_bias_250_1e-05_front-500',
        max_out_len=1024,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
        stop_words=['<|end_of_text|>', '<|eot_id|>'],
    )
]
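For reference, a top-level OpenCompass config usually also collects the imported dataset configs and the model definitions into datasets and models variables. The reporter's full config is not shown in the issue, so the lines below are only a sketch that reuses the names defined above and follows the usual quick-start unpacking pattern:

# Sketch only: would be appended to the config snippet above; all names are taken from it.
datasets = [*commonsenseqa_datasets]                  # unpack the imported dataset list
models = llama_instruct_ms_front_french_bias_model    # the model list defined above

The task log below also lists siqa and BoolQ, so the actual config presumably imports and unpacks those dataset lists in the same way.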
Reproduces the problem - command or script
opencompass /home/wangzihan/opencompass/opencompass/llamait/llamait_meaningfulsentence.py
Reproduces the problem - error message
/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/__init__.py:19: UserWarning: Starting from v0.4.0, all AMOTIC configuration files currently located in ./configs/datasets, ./configs/models, and ./configs/summarizers will be migrated to the opencompass/configs/ package. Please update your configuration file paths accordingly.
  _warn_about_config_migration()
12/14 14:13:13 - OpenCompass - INFO - Task [llama_instruct_ms_front_french_bias_model/commonsense_qa,llama_instruct_ms_front_french_bias_model/siqa,llama_instruct_ms_front_french_bias_model/BoolQ]
Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards: 25%|██▌ | 1/4 [00:41<02:03, 41.33s/it]
Loading checkpoint shards: 50%|█████ | 2/4 [01:12<01:11, 35.50s/it]
Loading checkpoint shards: 75%|███████▌ | 3/4 [01:51<00:37, 37.04s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [01:58<00:00, 25.00s/it]
Loading checkpoint shards: 100%|██████████| 4/4 [01:58<00:00, 29.54s/it]
12/14 14:15:15 - OpenCompass - INFO - using stop words: ['<|eot_id|>', '<|eom_id|>', '<|end_of_text|>']
12/14 14:15:15 - OpenCompass - INFO - Start inferencing [llama_instruct_ms_front_french_bias_model/commonsense_qa]
[2024-12-14 14:15:20,804] [opencompass.openicl.icl_retriever.icl_topk_retriever] [INFO] Creating index for index set...
0%| | 0/812 [00:00<?, ?it/s]
You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
0%| | 0/812 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/tasks/openicl_infer.py", line 161, in <module>
    inferencer.run()
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/tasks/openicl_infer.py", line 89, in run
    self._inference()
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/tasks/openicl_infer.py", line 107, in _inference
    retriever = ICL_RETRIEVERS.build(retriever_cfg)
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/openicl/icl_retriever/icl_mdl_retriever.py", line 73, in __init__
    super().__init__(dataset, ice_separator, ice_eos_token, ice_num,
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/openicl/icl_retriever/icl_topk_retriever.py", line 83, in __init__
    self.index = self.create_index()
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/openicl/icl_retriever/icl_topk_retriever.py", line 99, in create_index
    res_list = self.forward(dataloader,
  File "/home/usr/condaenv/opencompass/lib/python3.10/site-packages/opencompass/openicl/icl_retriever/icl_topk_retriever.py", line 138, in forward
    } for r, m in zip(res, metadata)])
TypeError: 'ListWrapper' object is not iterable
The error reported on the command line is:
OpenCompass - ERROR - /home/wangzihan/condaenv/opencompass/lib/python3.10/site-packages/opencompass/runners/local.py - _launch - 236 - task OpenICLInfer[llama_instruct_ms_front_french_bias_model/commonsense_qa,llama_instruct_ms_front_french_bias_model/BoolQ] fail, see
/home/wangzihan/opencompass/metric/llamait/meaningful_sentence_front/20241214_144849/logs/infer/llama_instruct_ms_front_french_bias_model/commonsense_qa.out
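As a side note, both TypeError messages quoted in this issue are what Python raises when an object that merely wraps a list, without implementing the iteration protocol, is iterated or star-unpacked. The snippet below is a hypothetical, self-contained illustration of that failure mode only; ListWrapperLike is an invented stand-in, not the actual ListWrapper class from the traceback:

class ListWrapperLike:
    """Hypothetical stand-in: holds a list but implements neither __iter__ nor __getitem__."""
    def __init__(self, data):
        self._data = data

wrapped = ListWrapperLike([{'id': 0}, {'id': 1}])

try:
    for _ in wrapped:      # direct iteration over the wrapper
        pass
except TypeError as err:
    print(err)             # 'ListWrapperLike' object is not iterable

try:
    combined = [*wrapped]  # star-unpacking, as in datasets = [*xxx_datasets]
except TypeError as err:
    print(err)             # Value after * must be an iterable, not ListWrapperLike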
Other information
Could this be related to the dataset, or is there something wrong with the way I am running it?