Parler-TTS extension - RuntimeError: Cannot find a working triton installation. #444
Comments
Please try compile mode: none.

I agree, I will try my Linux machine ASAP; however, I hope that helps with the Windows installer.

Also noticed: "Flash attention 2 is not installed". Trying compile mode none and eager attention (the other attention implementations already did not work before).

Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.45it/s]

So: no-compile mode and eager are a working combination, HOWEVER the outputted audio files are just noise: https://audio.jukehost.co.uk/80L0cDE7pWc8KyqykR17mKeCLz1Rxnyz

The model folder structure remains in cache, is that intended also?
data\models\parler_tts\cache\models--ylacombe--parler-large-v1-og

Full console output below:

{'text': 'Hey, how are you doing today?', 'description': "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up.", 'seed': '2832348810', 'model_name': 'ylacombe/parler-large-v1-og', 'attn_implementation': 'eager', 'compile_mode': None}
Config of the audio_encoder: <class 'parler_tts.dac_wrapper.modeling_dac.DACModel'> is overwritten by shared audio_encoder config: DACConfig {
Config of the decoder: <class 'parler_tts.modeling_parler_tts.ParlerTTSForCausalLM'> is overwritten by shared decoder config: ParlerTTSDecoderConfig {
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.45it/s]
OS: Win 11, 4070 Nvidia.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 4.13it/s]
The 'max_batch_size' argument of StaticCache is deprecated and will be removed in v4.46. Use the more precisely named 'batch_size' argument instead.
prompt_attention_mask is specified but attention_mask is not. A full attention_mask will be created. Make sure this is the intended behaviour.
Traceback (most recent call last):
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\gradio\queueing.py", line 624, in process_events
response = await route_utils.call_process_api(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
output = await app.get_blocks().process_api(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\gradio\blocks.py", line 2015, in process_api
result = await self.call_function(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\gradio\blocks.py", line 1562, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 962, in run
result = context.run(func, *args)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\gradio\utils.py", line 865, in wrapper
response = f(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\gradio_dict_decorator.py", line 133, in wrapper
result_dict = fn(_get_mapped_args(inputs, list_args), outputs=outputs)
File "D:\tts-generation-webui-main\extensions\builtin\extension_decorator_save_ffmpeg\main.py", line 75, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\extensions\builtin\extension_decorator_save_ffmpeg\main.py", line 65, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\extensions\builtin\extension_decorator_save_waveform\main.py", line 29, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\extensions\builtin\extension_decorator_average_execution_time\main.py", line 27, in wrapper
result = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_apply_torch_seed.py", line 7, in wrapper
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_save_metadata.py", line 8, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_save_wav.py", line 15, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_add_model_type.py", line 4, in inner
return fn(*args, _type=model_type, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_add_base_filename.py", line 36, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_add_date.py", line 6, in wrapper
result_dict = fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\decorator_log_generation.py", line 7, in wrapper
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\tts_webui\decorators\log_function_time.py", line 7, in wrapper
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\extension_parler_tts\main.py", line 114, in generate_parler_tts
generation = model.generate(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\parler_tts\modeling_parler_tts.py", line 3564, in generate
outputs = self._sample(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\transformers\generation\utils.py", line 3206, in _sample
outputs = self(**model_inputs, return_dict=True)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\eval_frame.py", line 451, in _fn
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\parler_tts\modeling_parler_tts.py", line 2820, in forward
if (labels is not None) and (decoder_input_ids is None and decoder_inputs_embeds is None):
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\parler_tts\modeling_parler_tts.py", line 1896, in forward
outputs = self.model(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\parler_tts\modeling_parler_tts.py", line 1789, in forward
decoder_outputs = self.decoder(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 921, in catch_errors
return callback(frame, cache_entry, hooks, frame_state, skip=1)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 786, in _convert_frame
result = inner_convert(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 400, in _convert_frame_assert
return _compile(
File "D:\tts-generation-webui-main\installer_files\env\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 676, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 535, in compile_inner
out_code = transform_code_object(code, transform)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1036, in transform_code_object
transformations(instructions, code_options)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 165, in fn
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\convert_frame.py", line 500, in transform
tracer.run()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2149, in run
super().run()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 810, in run
and self.step()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 773, in step
getattr(self, inst.opname)(inst)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 484, in wrapper
return handle_graph_break(self, inst, speculation.reason)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 548, in handle_graph_break
self.output.compile_subgraph(self, reason=reason)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\output_graph.py", line 1001, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "D:\tts-generation-webui-main\installer_files\env\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\output_graph.py", line 1178, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\output_graph.py", line 1251, in call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\output_graph.py", line 1232, in call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 117, in debug_wrapper
compiled_gm = compiler_fn(gm, example_inputs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\__init__.py", line 1731, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "D:\tts-generation-webui-main\installer_files\env\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\compile_fx.py", line 1330, in compile_fx
return aot_autograd(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\backends\common.py", line 58, in compiler_fn
cg = aot_module_simplified(gm, example_inputs, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_functorch\aot_autograd.py", line 903, in aot_module_simplified
compiled_fn = create_aot_dispatcher_function(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_functorch\aot_autograd.py", line 628, in create_aot_dispatcher_function
compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 443, in aot_wrapper_dedupe
return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_functorch\_aot_autograd\runtime_wrappers.py", line 648, in aot_wrapper_synthetic_base
return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 119, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\compile_fx.py", line 1257, in fw_compiler_base
return inner_compile(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 83, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\debug.py", line 304, in inner
return fn(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\tts-generation-webui-main\installer_files\env\lib\contextlib.py", line 79, in inner
return func(*args, **kwds)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\compile_fx.py", line 438, in compile_fx_inner
compiled_graph = fx_codegen_and_compile(
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\compile_fx.py", line 714, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\graph.py", line 1307, in compile_to_fn
return self.compile_to_module().call
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\graph.py", line 1250, in compile_to_module
self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\graph.py", line 1205, in codegen
self.scheduler = Scheduler(self.buffers)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_dynamo\utils.py", line 262, in time_wrapper
r = func(*args, **kwargs)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 1267, in __init__
self.nodes = [self.create_scheduler_node(n) for n in nodes]
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 1267, in <listcomp>
self.nodes = [self.create_scheduler_node(n) for n in nodes]
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 1358, in create_scheduler_node
return SchedulerNode(self, node)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 687, in __init__
self._compute_attrs()
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 698, in _compute_attrs
group_fn = self.scheduler.get_backend(self.node.get_device()).group_fn
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 2276, in get_backend
self.backends[device] = self.create_backend(device)
File "D:\tts-generation-webui-main\installer_files\env\lib\site-packages\torch\_inductor\scheduler.py", line 2268, in create_backend
raise RuntimeError(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Cannot find a working triton installation. More information on installing Triton can be found at https://github.com/openai/triton
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
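The fallback suggested by the error message can be applied before generation. A minimal sketch (the flag comes straight from the log above; where you set it depends on how the webui exposes extension code):

```python
import torch._dynamo

# Tell dynamo to swallow backend (inductor/triton) compile failures and
# silently fall back to eager execution instead of raising.
torch._dynamo.config.suppress_errors = True
```

Note this only hides the compile failure; on Windows, where no official triton wheel exists, selecting compile mode "none" so that torch.compile is never invoked is the cleaner fix.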