Hey all, I'd like to be able to convert a torch model to MLIR/HLO, then compile it using XLA's internal compiler. Here's what I did:
First, I created a very simple torch model and generated MLIR/HLO according to this guide. This is my code:
import torch
import torch.nn as nn
from torch.export import export
from torch_xla.stablehlo import exported_program_to_stablehlo

class SimpleLinearModel(nn.Module):
    def __init__(self):
        super(SimpleLinearModel, self).__init__()
        self.a = nn.Parameter(torch.tensor(1.0))
        self.b = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.a * x + self.b

model = SimpleLinearModel()
sample_input = (torch.randn(4, 1),)

# Export the model and lower it to StableHLO
exported = export(model, sample_input)
stablehlo_program = exported_program_to_stablehlo(exported)

# Save both the human-readable text and the portable bytecode
with open("module.txt", "w") as f:
    f.write(stablehlo_program.get_stablehlo_text('forward'))
with open("module.mlir", "wb") as f:
    f.write(stablehlo_program.get_stablehlo_bytecode('forward'))
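For reference, the eager model behaves as expected before export (a quick sanity check on my side; with a = 1.0 and b = 0.0 the forward pass is just the identity):

x = torch.randn(4, 1)
print(torch.allclose(model(x), x))  # True, since forward computes 1.0 * x + 0.0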
Then, I built & ran the xla_compile binary and tried to compile the module.
xla_compile --platform=gpu --module_file=module.mlir --output_file=output --result_output_file=/tmp/result_output
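In case it's useful for debugging, I believe the bytecode can also be deserialized back to readable StableHLO text with the stablehlo-translate tool from the openxla/stablehlo repo (a sketch; building that tool and the exact invocation are assumptions on my part):

# deserialize the portable (VHLO) artifact back to StableHLO text
stablehlo-translate --deserialize module.mlir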
I got this failure message:
2024-12-18 19:01:41.055629: E xla/service/xla_compile_main.cc:115] Compilation failed: UNKNOWN: <unknown>:0: error: conversion requires module with `main` function
<unknown>:0: note: see current operation: "builtin.module"() <{sym_name = "IrToHlo.11"}> ({
  "vhlo.func_v1"() <{arg_attrs = #vhlo.array_v1<[]>, function_type = #vhlo.type_v1<!vhlo.func_v1<(!vhlo.tensor_v1<!vhlo.f32_v1>, !vhlo.tensor_v1<4x1x!vhlo.f32_v1>, !vhlo.tensor_v1<!vhlo.f32_v1>) -> !vhlo.tensor_v1<4x1x!vhlo.f32_v1>>>, res_attrs = #vhlo.array_v1<[]>, sym_name = #vhlo.string_v1<"main">, sym_visibility = #vhlo.string_v1<"">}> ({
  ^bb0(%arg0: !vhlo.tensor_v1<!vhlo.f32_v1>, %arg1: !vhlo.tensor_v1<4x1x!vhlo.f32_v1>, %arg2: !vhlo.tensor_v1<!vhlo.f32_v1>):
    %0 = "vhlo.broadcast_in_dim_v1"(%arg2) <{broadcast_dimensions = #vhlo.tensor_v1<dense<> : tensor<0xi64>>}> : (!vhlo.tensor_v1<!vhlo.f32_v1>) -> !vhlo.tensor_v1<4x1x!vhlo.f32_v1>
    %1 = "vhlo.multiply_v1"(%0, %arg1) : (!vhlo.tensor_v1<4x1x!vhlo.f32_v1>, !vhlo.tensor_v1<4x1x!vhlo.f32_v1>) -> !vhlo.tensor_v1<4x1x!vhlo.f32_v1>
    %2 = "vhlo.broadcast_in_dim_v1"(%arg0) <{broadcast_dimensions = #vhlo.tensor_v1<dense<> : tensor<0xi64>>}> : (!vhlo.tensor_v1<!vhlo.f32_v1>) -> !vhlo.tensor_v1<4x1x!vhlo.f32_v1>
    %3 = "vhlo.add_v1"(%1, %2) : (!vhlo.tensor_v1<4x1x!vhlo.f32_v1>, !vhlo.tensor_v1<4x1x!vhlo.f32_v1>) -> !vhlo.tensor_v1<4x1x!vhlo.f32_v1>
    "vhlo.return_v1"(%3) : (!vhlo.tensor_v1<4x1x!vhlo.f32_v1>) -> ()
  }) : () -> ()
}) {mhlo.cross_program_prefetches = [], mhlo.is_dynamic = false, mhlo.use_auto_spmd_partitioning = false} : () -> ()
Any ideas or suggestions on what went wrong? Thanks!