
Re: Breaking down barriers to ROCm packaging



Hi Kari,

On 2024-01-30 13:33, Kari Pahula wrote:
> Replying to myself: I decided to go with a W6600.  If I need more
> upgrades I'll figure it out.

That's not necessarily a bad choice, but I wouldn't generally recommend it. The Radeon PRO W6600 is Navi 23 (gfx1032). It is not officially supported by the upstream ROCm project, and you will need a workaround to get key libraries like PyTorch working [1]. If you have to file upstream bugs against projects like PyTorch, it may be simpler to have a GPU that they officially support. That said, I don't think PyTorch officially supports any single-slot GPUs, so if the size and power draw of the W6600 were what appealed to you, it may still be your best option.

The set of GPUs that PyTorch supports is needlessly small. It would not be difficult to extend support to all modern AMD GPUs. I believe that skilled and passionate developers in the AI community can help to make that happen, but I just want to warn you that you are choosing an option that requires a workaround to get things running. It's a simple and effective workaround, but a point of friction nevertheless.

Sincerely,
Cory Bloor

[1]: Set the environment variable HSA_OVERRIDE_GFX_VERSION=10.3.0
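As a minimal sketch of applying the workaround from [1]: the variable tells the HSA runtime to report a gfx1030 ISA, which ROCm libraries ship kernels for, and it must be set in the environment before the ROCm runtime initializes. The python3 invocation shown in the comment is just one hypothetical way to check that a ROCm build of PyTorch then sees the card.

```shell
# Report the GPU as gfx1030 (officially supported by ROCm binaries)
# instead of gfx1032; the Navi 2x ISAs are compatible for this purpose.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then launch your PyTorch program from the same shell, e.g.:
#   python3 -c "import torch; print(torch.cuda.is_available())"
```

Setting it per-session like this (rather than system-wide) keeps the override from silently affecting other ROCm applications.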

