This is an abstract curiosity. Let’s say I want to use an old laptop to run an LLM. I assume I would still need PyTorch, transformers, etc. What is the absolute minimum system configuration required to avoid overhead such as schedulers, kernel threads, virtual memory, etc.? Are there options to expose the bare metal and use a networked machine to manage the overhead? Maybe a way to connect the extra machine as if it were an extra CPU socket or NUMA node? Basically, I want to turn an entire system into a dedicated AI compute module.
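
For concreteness, the kind of minimal software stack I have in mind is something like the sketch below: stock PyTorch and transformers on an ordinary Linux userspace, CPU only. The model name is just an illustrative assumption (any small causal LM that fits in the laptop’s RAM would do):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: a small model that fits in an old laptop's RAM.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

inputs = tokenizer("The minimum hardware to run an LLM is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```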

  • SeriousBug · 1 year ago
    "AI compute module"s exist, they are called GPUs. All the matrix calculations that go into neural networks are highly parallelizable, which means GPUs are optimal for them. A cheap used GPU will beat anything you can cook up yourself.