I know SSDs aren’t meant for data backup, but I do have an external SSD drive that I only plug in and use occasionally. From what I’ve researched, the data should still be fine for at least a year, so I should plug it in no less often than that. But… apart from plugging it in, do I need to do anything, or will the controller just magically refresh everything? If so: how long does it need to stay powered for this to complete? Some say you need to actually read through all the data, or even rewrite it all, though how that would even be possible on a system drive I don’t know.

What gives? It’s really hard to find solid advice on this by googling.

  • dr100@alien.topB · 1 year ago

    Give it a full read, which you should do anyway to check your backups?

      • tigersoul925@alien.topOPB · 1 year ago

        There is no GUI way of doing this AFAIK; I’d guess it involves some kind of dd > /dev/null.

    • tigersoul925@alien.topOPB · 1 year ago

      Yeah, I probably should. An idea I had was to run a manual check of the latest Time Machine backup against the data partition. This is on a Mac.
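
      A sketch of what that comparison could look like with the tooling macOS ships (assuming the backup volume is mounted; device paths and backup locations will vary):

      ```shell
      # Show the path of the most recent Time Machine backup.
      tmutil latestbackup

      # With no arguments, compare the latest backup against the current
      # state of the system and report what was added, removed, or changed.
      sudo tmutil compare
      ```

      Note this verifies the backup against the live data, not the physical health of the SSD itself.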

      • dr100@alien.topB · 1 year ago

        That would work. Actually, if it’s a constellation that supports TRIM (OS, filesystem, whatever it sees on the USB side; see this to get an idea of how complex things can get), reading the saved backup might be equivalent to reading the whole SSD. Even if you’ve used only 64 GB of 1 TB, if the rest is TRIMmed, nothing more would “really” be read even if you do a full badblocks pass (or dd to /dev/null, or any other full-read test). Sure, it’ll take a while to feed 900+ GB of zeroes (or whatever the TRIMmed sectors return) over USB, but not much will actually be read from the SSD.
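
        On Linux you can at least check whether the USB bridge passes discard/TRIM through to the drive; a quick sketch (`/dev/sdX` is a placeholder for your device, check `lsblk` first):

        ```shell
        # Non-zero DISC-GRAN / DISC-MAX columns mean the kernel can issue
        # discard (TRIM) commands to the device through this USB bridge.
        lsblk --discard /dev/sdX
        ```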

  • Chropera@alien.topB · 1 year ago

    It would be impossible to guess without knowledge of the internal workings of a particular SSD. For a NAND-specific file system I’ve implemented (not an SSD, but a device using raw SLC NAND), there was a block refresh immediately after an ECC error was detected on read, and also a background process slowly checking all pages in use (one week for a full cycle). The background scan started from a randomized point each time the device was powered on.

    • tigersoul925@alien.topOPB · 1 year ago

      Makes sense. I guess leaving it idle for some time should be part of the routine. Then again, there’s a limit to how far one can go: if the routine ended up being “power up the drive and use it actively for at least 4 weeks”, it would just become too much.

      I wish there were just a simple feature to click, with a progress bar showing that it did exactly this, without us having to figure things out ourselves.

  • tes_kitty@alien.topB · 1 year ago

    That depends on the controller firmware.

    But if you want to be sure, just run a read-only badblocks test (if you use Linux). That will force the controller to read all blocks and (hopefully) rewrite any it finds to be weak.
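
    A minimal sketch of that test (`/dev/sdX` is a placeholder, check `lsblk` for the real identifier of the external SSD before running it):

    ```shell
    # Read-only mode is badblocks' default; -s shows progress, -v is
    # verbose. This forces the controller to read every block once.
    sudo badblocks -sv /dev/sdX
    ```

    Since it only reads, it is safe to run on a drive holding backups; nothing is written.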

    • tigersoul925@alien.topOPB · 1 year ago

      It’s a Mac actually, maybe I should have mentioned that. Not sure what the best way is here to “read all blocks” of a drive. Maybe a dd command > /dev/null?
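
      A sketch of what that could look like on macOS, assuming the external drive shows up as `/dev/disk2` (check `diskutil list` first; `disk2` here is just a placeholder, and the raw device `/dev/rdisk2` is usually much faster):

      ```shell
      # Unmount the volumes first so nothing writes while we read.
      diskutil unmountDisk /dev/disk2

      # Read every block and discard the data; press Ctrl-T to get a
      # progress report (BSD dd on macOS responds to SIGINFO).
      sudo dd if=/dev/rdisk2 of=/dev/null bs=1m
      ```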