
Re: Enabling ROCm on Everything



Hi Cordell,

I changed my mind: I now agree with the fine-grained ROCm architecture split.
While finalizing the pytorch-cuda packaging, I realized that building
python3-torch-rocm-{gfx900,gfx906,etc} won't add much burden for me. I have
already prepared some code (for the CUDA variant) that is reusable for the
ROCm variants as well.
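As a rough sketch of what the fine-grained split could look like (the package
names here are illustrative, not final, and the loop below only prints what
would happen; PYTORCH_ROCM_ARCH is the upstream PyTorch build variable for
selecting ROCm targets):

```shell
#!/bin/sh
# Hypothetical sketch: one pytorch-rocm build per GPU architecture,
# restricting the ROCm target list at configure time via PYTORCH_ROCM_ARCH.
# This does not build anything; it only shows the intended variant layout.
for arch in gfx900 gfx906 gfx908 gfx90a gfx1010 gfx1011 gfx1030; do
    pkg="python3-torch-rocm-${arch}"
    echo "would build ${pkg} with PYTORCH_ROCM_ARCH=${arch}"
done
```

Each variant would then carry device code for exactly one architecture, which
keeps individual package sizes down at the cost of more binary packages.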

For me, building these multiple ROCm pytorch variants may be simpler than
building pytorch-cuda. pytorch-rocm can be built on the official Debian
infrastructure because its dependencies are free, while the pytorch-cuda
build has to be offloaded to my own machine or another unofficial builder
due to its non-free dependencies.

I can start looking into pytorch-rocm and its ROCm dependencies after
pytorch-cuda has cleared the NEW queue.

On Mon, 2023-03-20 at 23:17 -0600, Cordell Bloor wrote:
> Hello everyone,
> 
> In the last round of updates to the ROCm packages on Unstable, I did a 
> bunch of testing with an RX 5700 XT (gfx1010) and Radeon Pro v520 
> (gfx1011). I found that all Debian packaged libraries passed their full 
> test suites (with the exception of an out-of-memory error in one 
> rocprim/hipcub test). So, now the rocRAND, hipRAND, rocPRIM, hipCUB, 
> rocSPARSE and hipSPARSE packages are enabled for gfx803, gfx900, gfx906, 
> gfx908, gfx90a, gfx1010, gfx1011 and gfx1030.
