
MIOpen package LFS files



Hi folks,

I’m going to package MIOpen [1], ROCm’s counterpart to NVIDIA’s cuDNN, which provides several deep learning primitives (convolution, pooling, etc.) for PyTorch and other DL frameworks.
There are some large binary files (*.kdb.bz2) stored with Git LFS [2].
These files serve as the kernel performance database [3] for different GPU generations; MIOpen loads them at runtime [4] to obtain kernel code objects based on the queried problem size and the corresponding solution.

Note that these binary files are not required at either compile time [5] or runtime [6], since MIOpen will fall back to JIT-compiling the kernel [7] if the database is absent.
IIUC, they are just nice-to-have performance hints for the library.
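
To make the fallback concrete, here is a minimal, purely illustrative C++ sketch of the lookup-then-JIT pattern (none of these names are MIOpen’s actual API; the in-memory map stands in for the shipped *.kdb database and jit_compile() for the runtime compiler):

  // Illustrative sketch only, not MIOpen code: consult the shipped
  // performance database first, and JIT-compile on a miss.
  #include <iostream>
  #include <optional>
  #include <string>
  #include <unordered_map>

  using KernelBinary = std::string;  // stand-in for a compiled code object

  // Toy stand-in for the system database shipped as *.kdb.bz2.
  static const std::unordered_map<std::string, KernelBinary> system_db = {
      {"conv_nchw_256x3x224x224", "<precompiled code object>"},
  };

  // Stand-in for runtime JIT compilation (the fallback path).
  KernelBinary jit_compile(const std::string& problem_key) {
      std::cout << "cache miss: JIT-compiling kernel for " << problem_key << "\n";
      return "<freshly compiled code object for " + problem_key + ">";
  }

  std::optional<KernelBinary> lookup(const std::string& problem_key) {
      auto it = system_db.find(problem_key);
      if (it == system_db.end()) return std::nullopt;
      return it->second;
  }

  KernelBinary get_kernel(const std::string& problem_key) {
      if (auto hit = lookup(problem_key)) {
          std::cout << "cache hit: using precompiled kernel\n";
          return *hit;
      }
      // Database absent or no entry: fall back to JIT. A package shipped
      // without the *.kdb files always takes this slow path on first use.
      return jit_compile(problem_key);
  }

  int main() {
      get_kernel("conv_nchw_256x3x224x224");  // hit (database present)
      get_kernel("conv_nchw_1x3x32x32");      // miss -> JIT fallback
  }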

So my question is: should we track these large binary files in Salsa?
If so, the source package will be large (each of these binary files is up to hundreds of MB in size).
If not, the delivered binary package will suffer from poor performance whenever a kernel is invoked for the first time.
Once a kernel has been JIT-compiled, it is saved to the user database [8] for subsequent calls.
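
For completeness, a similarly illustrative sketch of that write-back behaviour (again, hypothetical names only; the mutable map stands in for the per-user database referenced in [8]), showing that only the first call for a given problem pays the compilation cost:

  // Illustrative sketch only, not MIOpen code: compile once on first use,
  // persist the result in a per-user cache, and serve subsequent calls
  // from that cache.
  #include <iostream>
  #include <string>
  #include <unordered_map>

  using KernelBinary = std::string;

  // Mutable per-user cache, standing in for the user database MIOpen
  // writes after a JIT compile (see [8]).
  static std::unordered_map<std::string, KernelBinary> user_db;

  KernelBinary jit_compile(const std::string& key) {
      std::cout << "first call: JIT-compiling " << key << "\n";
      return "<code object for " + key + ">";
  }

  const KernelBinary& get_kernel(const std::string& key) {
      auto it = user_db.find(key);
      if (it == user_db.end()) {
          // Slow path: compile once, then store for subsequent calls.
          it = user_db.emplace(key, jit_compile(key)).first;
      } else {
          std::cout << "subsequent call: served from user cache\n";
      }
      return it->second;
  }

  int main() {
      get_kernel("pool_max_2x2");  // slow: compiles and caches
      get_kernel("pool_max_2x2");  // fast: cache hit
  }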

Best,
Xuanteng

[1]: https://rocm.docs.amd.com/projects/MIOpen/en/latest/
[2]: https://salsa.debian.org/rocm-team/miopen/-/tree/master/src/kernels?ref_type=heads
[3]: https://rocm.docs.amd.com/projects/MIOpen/en/develop/conceptual/perfdb.html
[4]: https://salsa.debian.org/rocm-team/miopen/-/blob/master/src/binary_cache.cpp?ref_type=heads#L114-129
[5]: https://salsa.debian.org/rocm-team/miopen/-/blob/master/CMakeLists.txt?ref_type=heads#L501-521
[6]: https://salsa.debian.org/rocm-team/miopen/-/blob/master/src/binary_cache.cpp?ref_type=heads#L162-180
[7]: https://salsa.debian.org/rocm-team/miopen/-/blob/master/src/hip/handlehip.cpp?ref_type=heads#L403-413
[8]: https://salsa.debian.org/rocm-team/miopen/-/blob/master/src/hip/handlehip.cpp?ref_type=heads#L415-424

