Hello
Thank you for trying this, and please feel free to submit a pull request for the holes I left there (if you are comfortable with that). Maybe the "CUDA support not available" issue with the ROCm build is caused by some configuration problem?
I managed to build again. Here is the merge request where I describe my issue: https://salsa.debian.org/deeplearning-team/pytorch/-/merge_requests/6

Should the export be PYTORCH_ROCM_ARCH=gfx1102 instead of gfx1100 for my board? Should we build for a list of architectures, or make a specific package per kind of board? Did I make an obvious mistake in my filling-in of the missing ROCm part?

Cheers,
Christian
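For what it's worth, PYTORCH_ROCM_ARCH accepts a semicolon-separated list of GPU targets, so building for a list rather than one package per board may be an option. A minimal sketch (the choice of gfx1100/gfx1102 here is only illustrative, not a claim about which one matches the board):

```shell
# Sketch: target several ROCm architectures in one build.
# PyTorch's build reads PYTORCH_ROCM_ARCH as a semicolon-separated list.
export PYTORCH_ROCM_ARCH="gfx1100;gfx1102"
echo "$PYTORCH_ROCM_ARCH"
```

The trade-off is longer build times and a larger binary, since kernels are compiled for every listed target.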