
For the sake of drive longevity (SSDs primarily), do there exist management algorithms (built-in, optional, or third-party), implemented in the drive's controller or in the operating system, that take care of avoiding writes to the same physical block of memory many times? Something like remapping a block that has already been overwritten many times.

Specifically, I want to know how much I underestimated the danger of rewriting the same config file about 100 times while I was trying to get Arch Linux working on an SSD. Thanks in advance!

EDIT: My concerns: If I have a 250 GB SSD rated at 150 TBW, then every piece of memory has an expected number of safe writes equal to 150 × 1000 / 250, which is 600. So did I just waste about 16% of the life of my config file's block?
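The arithmetic in the edit can be sanity-checked with a short script. This is only a worst-case sketch: it assumes decimal units (150 TB = 150,000 GB), and that all 100 writes land on the same physical block, which is exactly what wear leveling prevents.

```python
# Assumptions: 250 GB drive, 150 TBW rating, decimal units,
# and (worst case) no wear leveling at all.
drive_capacity_gb = 250
rated_tbw = 150

# Expected full-drive overwrites before the endurance rating is reached.
full_drive_writes = rated_tbw * 1000 / drive_capacity_gb
print(full_drive_writes)  # 600.0

# Fraction of a single block's rated endurance consumed if 100 writes
# all hit the same physical block (no wear leveling).
worst_case_fraction = 100 / full_drive_writes
print(f"{worst_case_fraction:.1%}")  # 16.7%
```

So the ~16% figure only follows under the no-wear-leveling assumption; with the controller remapping writes, the real cost is far smaller.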

donaastor
    Yes; they are implemented by the controller firmware of the SSD itself. As with all storage media, you should have proper backups. – Ramhound Mar 18 '22 at 17:19
  • You did not waste a significant percentage of your drive life, and your math is way off. The firmware in the drive will automatically use different areas of the drive (this is abstracted away inside the drive itself; the computer does not know or care). Assuming 150 TBW and a 10 KB config file, the wear is 10 KB out of 150,000,000,000 KB of rated writes – so pretty much nothing. – davidgo Mar 18 '22 at 19:39

0 Answers