Fix pytorch related pytest failures #1377
Conversation
@calad0i The remaining test failures come from the HGQ + DA tests, and I am having some trouble replicating them locally. Can you have a look?
Can you push an empty commit and rerun the tests?
I retried the HGQ + DA tests. Try to replicate locally with the same random seed. Otherwise, could there be version differences in some libraries between the test environment and your setup that would explain the difference?
It's certainly reproduced in the test environment here; see for example the oneAPI accelerator PR tests: https://gitlab.cern.ch/fastmachinelearning/hls4ml/-/jobs/60904849 So I guess it's just a matter of luck to reproduce it online. I just didn't want to run all 386 tests offline to reproduce the failure, and I haven't found a seed yet that makes a single test fail for me.
Guess it passed now. A little weird. |
* fix pytorch related pytest failures
* remove printouts
* [pre-commit.ci] auto fixes from pre-commit hooks

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
There are currently a couple of pytorch-related failures in the pytests.
The main one is a bug with recurrent layers, where the weights and hidden weights need to be transposed to match the Keras convention used in our implementation. Since this is not strictly a channels-last conversion, I have added it directly to the parser, so that the channels-last conversion feature can still be used as before. This issue was previously masked by a bug in the converter that ignored the "off" setting; when I fixed that recently, this issue was revealed but missed at the time.
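The convention mismatch can be sketched with numpy. PyTorch's `nn.GRU` stores `weight_ih_l0` with shape `(3 * hidden_size, input_size)`, while Keras expects the kernel with shape `(input_size, 3 * hidden_size)`, so the parser has to transpose both kernels. This is only an illustration of the shape convention, not the actual hls4ml parser code; the variable names are made up for the example.

```python
import numpy as np

input_size, hidden_size = 4, 8
rng = np.random.default_rng(0)

# PyTorch GRU layout: weight_ih_l0 is (3*hidden, input),
# weight_hh_l0 is (3*hidden, hidden).
torch_weight_ih = rng.standard_normal((3 * hidden_size, input_size))
torch_weight_hh = rng.standard_normal((3 * hidden_size, hidden_size))

# Keras layout: kernel is (input, 3*hidden),
# recurrent_kernel is (hidden, 3*hidden) -- so we transpose.
keras_kernel = torch_weight_ih.T
keras_recurrent_kernel = torch_weight_hh.T

print(keras_kernel.shape)            # (4, 24)
print(keras_recurrent_kernel.shape)  # (8, 24)
```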
I also increased the bit width in one of the einsum tests to avoid occasional failures caused by limited precision. There is nothing otherwise wrong with the test, so this seems like the best option.
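The precision issue can be illustrated in plain numpy: quantizing the inputs to a coarse fixed-point grid before an einsum contraction lets rounding errors accumulate across the summed products, while a finer grid (more fractional bits) keeps the result closer to the exact one. This is a generic sketch of the effect, not the actual test in the PR; `quantize` is a toy helper, not an hls4ml function.

```python
import numpy as np

def quantize(x, frac_bits):
    # Round to a fixed-point grid with `frac_bits` fractional bits.
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8))
b = rng.standard_normal((8, 8))

exact = np.einsum('ij,jk->ik', a, b)
coarse = np.einsum('ij,jk->ik', quantize(a, 4), quantize(b, 4))
fine = np.einsum('ij,jk->ik', quantize(a, 10), quantize(b, 10))

# Fewer fractional bits -> larger worst-case deviation from the exact result.
print(np.abs(exact - coarse).max())
print(np.abs(exact - fine).max())
```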
Type of change
Tests
Failing pytests pass now
Checklist
I have run pre-commit on the files I edited or added.