So I’m doing a tower build, mostly to try to achieve something quieter than what a rack server would offer. I didn’t think about CPU coolers until everything arrived, so I went to my local Best Buy; kinda digging the dual water cooler aesthetic. Still waiting on RAM to arrive, but I also have a question.

Currently I have the CPU header split between the radiator fan and the pump. Is that OK, or should I split the system fans with the pump instead? I guess I could also get a Molex pump controller, but I don’t think I need one.

  • KanadaKid19@alien.top · 11 months ago

    I’m confused. You built this from the start with the intention of making it quiet, but didn’t think about the CPU cooler until you’d ordered everything else? Isn’t the CPU cooler typically the primary noisemaker? If the whole point is silence, the choice of water cooling was almost made for you already.

  • Computermaster@alien.top · 11 months ago

    If you intend to leave that radiator mounted at the back, you at least need to rotate it so the pipes aren’t at the top.

  • Status_Mechanic@alien.top · 11 months ago

    Where are the rest of the RAM slots? You should have 8 per CPU, not 8 total.

    Where are the rest of the x16 slots? Two Xeons of that generation have 80 PCIe lanes total, yet I only see two x16 slots, which you can drive just fine with one CPU.
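
    For a rough sense of the lane math, here’s a minimal Python sketch; 40 PCIe 3.0 lanes per CPU is the published E5-2600-series figure, but the slot accounting itself is just illustrative:

    ```python
    # Rough PCIe lane budget for a dual-socket E5-2600-era board.
    # 40 lanes per CPU is the platform spec; the slot layout is hypothetical.
    LANES_PER_CPU = 40

    def lane_budget(sockets_populated: int, x16_slots_used: int) -> int:
        """Return lanes left over after feeding the x16 slots."""
        total = LANES_PER_CPU * sockets_populated
        return total - 16 * x16_slots_used

    # One CPU can drive both x16 slots with 8 lanes to spare:
    print(lane_budget(sockets_populated=1, x16_slots_used=2))  # 8
    # Two CPUs leave 48 lanes this board apparently never routes to slots:
    print(lane_budget(sockets_populated=2, x16_slots_used=2))  # 48
    ```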

    My 2687W’s score the same in R15/R20/R23 multi-core as my single 5900X. Kinda sad really, but with 256 GB of RAM, either 4x 12 Gbps SAS drives or 8x 6 Gbps SATA drives, and up to 4 GPUs, it’s still a beast, albeit a bit power hungry.

    As for the AIOs… you want two dedicated headers for the pumps that you can set to 100%, then put the curves on the radiator-mounted fans. As the other poster suggested, I’d try to get both radiators on top rather than the current setup; that way you’ll still have intake/exhaust fans to keep the rest of the system cool.
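
    To make that pump/fan policy concrete, here’s a minimal Python sketch: pump pinned at 100%, radiator fans following a temperature curve. The breakpoints are made-up illustrative values, not tuned ones; in practice you’d set this up in the BIOS fan-curve editor:

    ```python
    # Sketch of the suggested policy: pump always at 100%,
    # radiator fans on a simple temperature curve.
    # Curve breakpoints below are illustrative, not tuned values.

    PUMP_DUTY = 100  # run AIO pumps flat out; throttling them is the noisy, risky option

    # (cpu_temp_celsius, fan_duty_percent) curve points
    FAN_CURVE = [(30, 20), (50, 40), (70, 80), (80, 100)]

    def fan_duty(cpu_temp: float) -> int:
        """Linearly interpolate fan duty between curve points."""
        if cpu_temp <= FAN_CURVE[0][0]:
            return FAN_CURVE[0][1]
        for (t0, d0), (t1, d1) in zip(FAN_CURVE, FAN_CURVE[1:]):
            if cpu_temp <= t1:
                frac = (cpu_temp - t0) / (t1 - t0)
                return round(d0 + frac * (d1 - d0))
        return FAN_CURVE[-1][1]

    print(PUMP_DUTY, fan_duty(62.0))  # pump pinned at 100, fans at 64%
    ```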

  • burninator34@alien.top · 11 months ago

    A cheap way to get 56 threads. Wouldn’t a 32-core EPYC 7601 beat both of these processors combined, though?

  • BrassBass@alien.top · 11 months ago

    I just lurk here for the tech pics, so please forgive me: why does that computer have two CPUs?

    • nic0nicon1@alien.top · 11 months ago

      Dual-socket systems are an easy way to get more cores, memory capacity, or memory channels without buying a more expensive CPU SKU, upgrading the platform to the next generation (which was often still years away back when these systems were run by their former corporate owners), or building a multi-node cluster. 4-socket and 8-socket systems also exist, but they’re uncommon on the second-hand market because most motherboards and CPUs don’t support that many sockets.
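
      As a back-of-the-envelope sketch of that scaling (the 4 memory channels and 40 PCIe lanes per socket are the E5-2600-era platform specs; the 14-core count is just an example SKU):

      ```python
      # Per-socket resources for an example E5-2600-era SKU; resources scale
      # roughly linearly with socket count (NUMA locality caveats aside).
      PER_SOCKET = {"cores": 14, "memory_channels": 4, "pcie_lanes": 40}

      def platform_totals(sockets: int) -> dict:
          """Total platform resources for a given number of populated sockets."""
          return {name: qty * sockets for name, qty in PER_SOCKET.items()}

      print(platform_totals(1))  # {'cores': 14, 'memory_channels': 4, 'pcie_lanes': 40}
      print(platform_totals(2))  # {'cores': 28, 'memory_channels': 8, 'pcie_lanes': 80}
      ```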

      • shadowtheimpure@alien.top · 11 months ago

        They’re also a good way to get more PCIe lanes without having to go to HEDT hardware. Standard desktop CPUs have nowhere near enough PCIe connectivity for modern needs, imho.

        • cas13f@alien.top · 11 months ago

          …Dual-socket systems are almost always either server or HEDT themselves.

          They also introduce different issues.

          Be sure of what you’re after before you move to dual-socket for PCIe lanes, and of how it might affect what you’re already doing.

    • guruglue@alien.top · 11 months ago

      Most likely for virtualization. Though in this configuration, it would be a challenge to provide the storage and RAM to support more than, say, 16 cores, or more than a standard single-CPU build.

      • chandleya@alien.top · 11 months ago

        Why? NVMe drives produce more throughput and IOPS than whole SANs did 7-8 years ago.

        • Gary_Glidewell@alien.top · 11 months ago

          The first SAN I ever worked with was about 20 years ago.

          It cost close to a million dollars and was the size of two refrigerators.

          I just bought an NVMe off of Amazon for $21 that’s faster :O

  • vasveritas@alien.top · 11 months ago

    Put the fans on both radiators as intake, blowing into the case, then exhaust out the top.

    That way one of your CPUs won’t be hotter than the other.