
  • Pro users rely on specific accelerators, whether on-board like the ProRes accelerators in a Mac SoC or the RT cores in an Nvidia RTX GPU, or discrete like a chipset in an external audio interface.

    You could require a significant number of PCIe lanes for quad GPUs, multiple NVMe RAID arrays, or high-bandwidth connections like multiple Fibre Channel, iSCSI, or Thunderbolt 4 NICs.
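To put a rough number on that, here's a back-of-the-envelope lane budget for the setup described above. The device mix and lane widths are illustrative assumptions, not any specific platform's configuration:

```python
# Hedged sketch: rough PCIe lane budget for a quad-GPU, multi-NVMe build.
# (count, lanes_per_device) -- widths are assumed typical values.
devices = {
    "gpu_x16": (4, 16),          # quad GPUs at x16 each
    "nvme_x4": (8, 4),           # eight NVMe drives in RAID, x4 each
    "hba_nic_x8": (2, 8),        # two FC/iSCSI/high-bandwidth NICs at x8
}

total_lanes = sum(count * lanes for count, lanes in devices.values())
print(total_lanes)  # 112 lanes -- far beyond a desktop CPU's ~20-28
```

Even letting the GPUs drop to x8, that budget only fits a workstation or server platform with plenty of CPU lanes.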

    Your workloads could require an unusual amount of RAM, like LLMs that need 64 GB dedicated to the chatbot alone. So you'd be starting at 96-128 GB of RAM to run your OS and applications on top of the LLM overhead.
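The arithmetic behind that tier looks something like this. The per-component figures are assumptions for illustration (a large quantized model landing near 64 GB, plus OS and apps), not measured values:

```python
# Hedged sketch: back-of-the-envelope RAM sizing for a local-LLM workstation.
llm_gb = 64        # model weights + KV cache, per the example above (assumed)
os_gb = 8          # OS and background services (assumed)
apps_gb = 24       # IDE, browser, other applications (assumed)
headroom = 1.2     # ~20% headroom so the box doesn't swap

required_gb = (llm_gb + os_gb + apps_gb) * headroom
print(required_gb)  # ~115 GB, i.e. squarely in the 96-128 GB tier
```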

    High core counts could be your jam: think virtualization, high-concurrency transaction processing, or chuggy builds like compiling the Linux kernel. 32 cores may not cut it at all; your workload might start at 192 cores and 384 GB of RAM. I don't think any of the Apple Xeon configs ever got there, but that's a medium Linux host or a racked workstation.
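As a concrete example of where 192 cores and 384 GB comes from, here's one way a virtualization host gets sized. The per-VM allocations are assumptions picked to illustrate the math:

```python
# Hedged sketch: sizing a virtualization host from per-VM allocations.
vms = 48            # number of guest VMs to host (assumed)
vcpus_per_vm = 4    # vCPUs allocated per VM (assumed)
gb_per_vm = 8       # RAM allocated per VM (assumed)

print(vms * vcpus_per_vm)  # 192 vCPUs
print(vms * gb_per_vm)     # 384 GB
```

Oversubscribing vCPUs is common in practice, but the memory allocation is usually hard, which is why RAM ends up being the binding constraint on hosts like this.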