In a separate repository https://github.com/j2kun/heir_torch_coverage, I created a set of torch-mlir export scripts, one for each torch operator.
For Conv2d, for example, the operators/conv2d directory contains a single-op torch model with garbage weights, exported to MLIR and annotated with secret before being run through heir-opt (2026-05-01 release). If compilation succeeds, the resulting openfhe MLIR is saved; otherwise an error log is saved.
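To make the "single-op model with garbage weights" setup concrete, here is a hedged sketch of what such a coverage module might look like (the class name, channel sizes, and kernel size are illustrative, not taken from the repository; the actual export step would go through torch-mlir, e.g. its FX importer, which is omitted here):

```python
import torch

# Hypothetical single-op coverage module: one Conv2d with freshly
# initialized ("garbage") weights. The real repo would export this
# to MLIR via torch-mlir before handing it to heir-opt.
class Conv2dOnly(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Small input-friendly conv; 16x16 inputs keep the flattened
        # tensor within a ciphertext degree of 1024, per the table below.
        self.conv = torch.nn.Conv2d(in_channels=1, out_channels=2, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model = Conv2dOnly().eval()
example = torch.randn(1, 1, 16, 16)
out = model(example)
print(tuple(out.shape))  # → (1, 2, 14, 14)
```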
I let an AI choose a "representative" set of torch operators to include (there are over 2k torch operators), so if new ones are needed I can add them.
This issue exists to track progress on supporting these operators in HEIR. Specifically, if the relevant torch operator compiles to linalg that HEIR can handle at all (even if inefficiently), it is considered done for the purposes of this issue.
I also created a new label, torch-support, for tickets related to this.
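For reference, the "annotated with secret" step means marking the model's input as secret in the exported MLIR. A hedged sketch of what a single-op Conv2d function might look like after export (shapes, names, and attribute values are illustrative; HEIR marks secret function arguments with the `secret.secret` attribute):

```mlir
// Illustrative single-op function: the model input is secret,
// the weights are plaintext constants supplied by the caller.
func.func @conv2d(%input: tensor<1x1x16x16xf32> {secret.secret},
                  %weights: tensor<2x1x3x3xf32>,
                  %init: tensor<1x2x14x14xf32>) -> tensor<1x2x14x14xf32> {
  %0 = linalg.conv_2d_nchw_fchw
         {dilations = dense<1> : tensor<2xi64>, strides = dense<1> : tensor<2xi64>}
         ins(%input, %weights : tensor<1x1x16x16xf32>, tensor<2x1x3x3xf32>)
         outs(%init : tensor<1x2x14x14xf32>) -> tensor<1x2x14x14xf32>
  return %0 : tensor<1x2x14x14xf32>
}
```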
Copy of the table from that project (as of 2026-05-06):

**heir-opt Coverage Results**
| Operator | Status | Notes |
| --- | --- | --- |
| Add | ✅ | |
| AvgPool1d | ❌ | Layout assignment error: rank 1 vs domain size 3 |
| AvgPool2d | ❌ | Rank mismatch during lowering to conv_2d |
| AvgPool3d | ❌ | Layout assignment error: rank 3 vs domain size 5 |
| BatchNorm1d | ❌ | Layout assignment error: rank 1 vs domain size 2 |
| BatchNorm2d | ❌ | Layout assignment error during conversion: rank 1 vs domain size 3 |
| BatchNorm3d | ❌ | Layout assignment error: rank 1 vs domain size 4 |
| Cat | ❌ | Layout assignment error: rank 4 vs domain size 5 |
| Conv1d | ❌ | Rank mismatch in linalg.conv_1d_ncw_fcw: rank 2 vs indexing map rank 3 |
| Conv2d | ✅ | Reduced input size to 16x16 to fit ciphertext degree 1024 |
| Conv3d | ❌ | Rank mismatch in linalg.conv_3d_ncdhw_fcdhw: rank 2 vs indexing map rank 5 |
| Flatten | ❌ | Error: No mgmt attribute found in the module for B/FV |
| GELU | ❌ | Failed to legalize secret.generic containing arith.divf |
| LeakyReLU | ❌ | Layout assignment error: rank 1 vs domain size 0 |
| Linear | ✅ | |
| Matmul | ❌ | Rank mismatch in linalg.vecmat: rank 2 vs indexing map rank 1 |
| MaxPool1d | ❌ | Layout assignment error: rank 1 vs domain size 3 |
| MaxPool2d | ❌ | Layout assignment error: rank 2 vs domain size 4 |
| MaxPool3d | ❌ | Layout assignment error: rank 3 vs domain size 4 |
| Mean | ❌ | Layout assignment error: rank 1 vs domain size 0 |
| Mul | ✅ | |
| PReLU | ❌ | Layout assignment error: rank 0 vs domain size 1 |
| Permute | ❌ | Layout assignment error: rank 2 vs permutation size 4 |
| ReLU | ✅ | |
| SiLU | ✅ | |
| Sigmoid | ✅ | |
| Sum | ❌ | Error: 'tensor.extract' op incorrect number of indices for extract_element |
| Tanh | ✅ | |
| bmm | ❌ | Rank mismatch in linalg.batch_matmul: rank 2 vs indexing map rank 3 |
| chunk | ❌ | Segmentation fault during heir-opt |
| div | ❌ | Failed to legalize secret.generic containing arith.divf |
| eq | ❌ | Failed to legalize secret.generic containing arith.cmpf |
| exp | ✅ | |
| gt | ✅ | Legalized with high-degree polynomial approximation |
| log | ✅ | |
| lt | ✅ | Legalized with high-degree polynomial approximation |
| mm | ❌ | Type mismatch in arith.mulf during lowering: 1x1024 vs 2x1024 |
| neg | ✅ | |
| prod | ❌ | Error: 'tensor.extract' op incorrect number of indices for extract_element |
| select | ✅ | Implemented as multiplication by mask |
| softmax | ❌ | Error in linalg.reduce: expected equal number of inputs and outputs |
| sqrt | ✅ | |
| squeeze | ❌ | Error: No mgmt attribute found in the module for B/FV |
| sub | ✅ | |
| transpose | ❌ | Layout assignment error: rank 2 vs permutation size 4 |