Hello all,

I’m going to be adding 50 micro PCs to my homelab, something similar to this: https://i.imgur.com/SvVeHLu.png. Each unit is roughly 4.5 in x 4.5 in x 2.5 in and weighs next to nothing.

I’d like to find a way to rackmount these in the most space-efficient manner, for example having each unit slotted in from the top, similar to this mock-up I made: https://i.imgur.com/zRc4b7G.png

My research so far has turned up this: https://i.imgur.com/AWznyB5.png (a simple metal rackmount shelf on sliding rails). I imagine I could build some sort of support framework on top of it to handle sliding the units in, though I’m not entirely sure how; maybe I could 3D print something.
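
For rough planning, here’s a quick back-of-the-envelope sketch (Python) of how many units a top-loading shelf like that could hold. The unit dimensions are the ones above; the usable rack width, 1U height, and per-slot clearance are assumptions on my part, so treat the result as a ballpark rather than a design.

    import math

    # Back-of-the-envelope math for slotting the units in vertically, book-style.
    # Unit dimensions are from the post; rack width, 1U height, and the per-slot
    # clearance are assumptions.
    UNIT_HEIGHT = 4.5     # in, the 4.5" side stands upright when slotted top-down
    UNIT_THICKNESS = 2.5  # in, the 2.5" side faces across the rack
    USABLE_WIDTH = 17.75  # in, assumed usable interior width of a 19" rack
    RACK_UNIT = 1.75      # in, height of 1U
    GAP = 0.25            # in, assumed clearance per slot for guides/airflow
    TOTAL_UNITS = 50

    slots_per_shelf = math.floor(USABLE_WIDTH / (UNIT_THICKNESS + GAP))  # 6
    shelf_height_u = math.ceil(UNIT_HEIGHT / RACK_UNIT)                  # 3U
    shelves_needed = math.ceil(TOTAL_UNITS / slots_per_shelf)            # 9

    print(f"{slots_per_shelf} units per shelf, ~{shelf_height_u}U per shelf")
    print(f"{shelves_needed} shelves -> roughly {shelves_needed * shelf_height_u}U total")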

Would anybody have any ideas on how I could build a server rack that would support something like this, ideally something that is on slide-out bearings?

Note: I have a pretty healthy budget to buy or modify whatever is needed, so that should open up some options.

Thanks in advance!

  • LAKnerd@alien.top · 10 months ago

    Why not VMs? Dell PowerEdge FX2s with 4 x FC630 nodes, or any other multi-node server, will not only give you density but also the ability to scale out, with the option of just turning off one or more of the nodes. There are also the Hyve Zeus 1U servers, which run pretty quiet and can also scale out depending on how many you have turned on. They’re absolutely no-frills, with room for only two 2.5" drives each, but they’re Supermicro-based so there’s plenty of documentation.

    • StartupTim@alien.top (OP) · 10 months ago

      Why not VMs? Dell PowerEdge FX2s with 4 x FC630 nodes

      It is a fair question! The price/performance of what I am building far exceeds that of the solution you’re describing.

      For example, with my setup for $23k, I’ll have 700 CPU cores and 1.6TB of memory, plus 25TB of NVMe storage (not that I need it) and a decent amount of clustered GPU compute, though that’s not the goal.

      700 fast processing cores for $23k is just not possible using server architecture at this time.
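
      To make that concrete, here’s a quick per-unit/per-core breakdown of those totals (a minimal Python sketch; splitting everything evenly across the 50 units is my assumption, since only the aggregate numbers are given above).

          # Per-unit and per-core breakdown of the $23k / 700-core / 1.6TB / 25TB totals.
          # Assumes an even split across the 50 units.
          units = 50
          total_cost_usd = 23_000
          total_cores = 700
          total_mem_gb = 1_600
          total_nvme_tb = 25

          print(f"~${total_cost_usd / units:.0f} per unit")               # ~$460
          print(f"~{total_cores / units:.0f} cores per unit")             # 14
          print(f"~{total_mem_gb / units:.0f} GB RAM per unit")           # 32
          print(f"~{total_nvme_tb * 1000 / units:.0f} GB NVMe per unit")  # 500
          print(f"~${total_cost_usd / total_cores:.0f} per core overall") # ~$33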

      • LAKnerd@alien.top · 10 months ago

        I just happen to have a spreadsheet that covers compute density for all models from Dell, HPE, and Supermicro…

        Your cost/density sweet spot is going to be a 2U/4-node platform from Dell or Supermicro that uses Xeon E5-2600 v4 or Scalable v2 CPUs. There’s a wide selection of this stuff on eBay, so it’s definitely available. At 16 cores/CPU (two CPUs per node, so 128 cores per 2U chassis), you’d need 6 chassis.

        For a Dell FX2s with 4 x FC630 nodes, 256GB of memory and 32 cores per node, I spec’d one out for $2400 before storage and GPU.
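
        To spell that math out (a small Python sketch; the per-chassis figures are the ones above, and the rollup is just that arithmetic carried to the 700-core target):

            import math

            # FX2s density/cost rollup: 4 nodes per 2U chassis, 32 cores and
            # 256GB per node, ~$2400 per chassis before storage and GPU.
            cores_per_node = 32
            nodes_per_chassis = 4
            mem_per_node_gb = 256
            chassis_cost_usd = 2400
            target_cores = 700

            cores_per_chassis = cores_per_node * nodes_per_chassis        # 128
            chassis_needed = math.ceil(target_cores / cores_per_chassis)  # 6

            print(f"{cores_per_chassis} cores / {mem_per_node_gb * nodes_per_chassis} GB RAM per 2U chassis")
            print(f"{chassis_needed} chassis -> {chassis_needed * cores_per_chassis} cores, "
                  f"{chassis_needed * mem_per_node_gb * nodes_per_chassis // 1024} TB RAM, "
                  f"{chassis_needed * 2}U, ~${chassis_needed * chassis_cost_usd:,}")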

    • StartupTim@alien.top (OP) · 10 months ago

      Dell PowerEdge FX2s with 4 x FC630 nodes

      From a cost-to-performance perspective, I don’t believe the solution you mentioned would be very attractive. What I’m looking to build would be 700 fairly fast physical CPU cores (4.4 GHz) for $23k, with more cores/speed available for an incremental price increase. I haven’t found any server solution, used or new, that compares to that in raw processing power.