Are there incompatibilities or performance issues?
I’m planning on upgrading in the near future and starting to mess with Stable Diffusion and other AI projects, and I’m running Linux (I use Arch, btw), so I was leaning towards AMD instead of Nvidia.
Decent, for now, until ROCm & ZLUDA improve. I use NixOS and run my AI stuff in Docker containers, as it’s the easiest way imo given how fucked up the dependencies are, for ROCm especially.
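For reference, a ROCm container invocation generally looks something like this (a minimal sketch based on AMD’s official rocm/pytorch image; the volume mount and tag are just examples, adjust for your setup):

# Pass the GPU through to the container via the kernel fusion driver and DRI nodes
docker run -it --device=/dev/kfd --device=/dev/dri \
  --security-opt seccomp=unconfined --group-add video \
  -v $HOME/ai:/workspace rocm/pytorch:latest

Inside that container the ROCm and PyTorch versions are already matched for you, which is most of the appeal.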
Basically, getting AMD working for this stuff right now means making sure certain versions of ROCm, certain versions of the projects, and certain versions of PyTorch all like each other. The most dependency hell of all dependency hells.
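To give you an idea of the version-pinning involved: installing the ROCm build of PyTorch means pointing pip at a ROCm-specific wheel index, and that index has to match the ROCm release you actually have installed (the 6.0 below is illustrative):

# Install the ROCm build of PyTorch; the wheel index must match your ROCm release
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.0
# Sanity check: ROCm builds report the GPU through the torch.cuda API
python -c 'import torch; print(torch.cuda.is_available())'

Get one of those versions wrong and you get anything from import errors to it silently falling back to CPU.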
So most projects have a hell of a time supporting ROCm, which means you mostly have to use alternative forks, and even when a ROCm version exists, it’s used so rarely that half the time no one knows whether it works or not. I will say you will have the EASIEST time by far if you use a 7900 XT, because most things are built to support that card. Otherwise, good luck. Get used to using environment variables such as:
HSA_OVERRIDE_GFX_VERSION=11.0.0
(Or 10.3.0 if that one doesn’t work. I use 11.0.1. These are version codes for the GPU architectures ROCm officially supports; the override makes ROCm treat your card as one of them, in case yours isn’t supported.)
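If you’re not sure which value to use, check what ROCm reports for your card first (assuming the rocminfo tool is installed; the gfx1101 output below is what my 7800 XT reports, yours will differ):

# Find your card's gfx target
rocminfo | grep -m1 gfx
#   Name: gfx1101
# Spoof the nearest officially supported target (gfx1100 maps to 11.0.0)
export HSA_OVERRIDE_GFX_VERSION=11.0.0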
TL;DR - it’s all a big mess right now, but it does work if you fuck with it a bunch. I got my 7800 XT working nicely with Ollama + Open WebUI for text generation. For Stable Diffusion it’s definitely a shit show, at least for my preferred UI, InvokeAI: it doesn’t work at all and only uses my CPU (also AMD, so maybe some fuckery there). However, I don’t regret it, as AMD is truly the best, especially on Linux, but definitely not for AI as it currently stands.
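For what it’s worth, the text-generation setup that worked for me follows the pattern of the official images (the ports and volume names below are just the documented defaults):

# Ollama's ROCm image, with the GPU passed through
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
# Open WebUI pointed at the Ollama API on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui ghcr.io/open-webui/open-webui:main

Depending on your card you may still need to pass HSA_OVERRIDE_GFX_VERSION into the Ollama container with -e.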