Recently in the CW-Talk Skype chat Mark Goldberg wrote that he uses a RAM drive instead of the SSD for his OBJs. Why? Because compiling means disk writes, and SSDs really do wear out. As Geoff Gasior explains in The SSD Endurance Experiment: They're all dead:
This breed of non-volatile storage retains data by trapping electrons inside of nanoscale memory cells. A process called tunneling is used to move electrons in and out of the cells, but the back-and-forth traffic erodes the physical structure of the cell, leading to breaches that can render it useless.
Electrons also get stuck in the cell wall, where their associated negative charges complicate the process of reading and writing data. This accumulation of stray electrons eventually compromises the cell's ability to retain data reliably, and to access it quickly.
How many times can you write to flash memory? Depending on the technology, anywhere from 1,000 to 1,000,000 times, but from my limited research I'd say you can expect most SSDs to be somewhere around the 2,500-5,000 write cycle mark.
Does that mean you could be limited to as few as 2,500 compiles, assuming the compiler is writing to the same location each time? At two compiles per hour the drive would be toast in less than a year, right? Only the assumption that each write goes to the same location is a bad one.
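A quick sanity check of that worst case in Python (the 2,500-cycle and two-compiles-per-hour figures are from above; the eight-hour work day is my assumption):

# Naive worst case: every compile overwrites the same cells
write_cycles = 2500
compiles_per_hour = 2
hours_per_day = 8  # assumed work day length

work_days = write_cycles / compiles_per_hour / hours_per_day
print(work_days)  # 156.25 work days -- well under a year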
SSDs employ wear-leveling algorithms so that writes are distributed evenly across cells. Beyond that, SSDs are typically over-provisioned with memory; you're getting more than the stated capacity, and if cells wear out they are replaced with these spare cells.
What you need to think about is total write capacity. Gasior's experiment ran a half dozen SSDs in the range of 240GB to 256GB. The first drive to fail shut itself down at 700TB by design; the last drive standing gave out after 1.1 petabytes. That's 1,100,000 gigabytes of data written, or more than 4,000 times the actual drive capacity.
The largest system I've ever worked on consists of over 200 apps and writes about 1.1 GB of files during a full debug mode build. That's about 720 MB of OBJ files and 380 MB of DLLs and EXEs.
Assuming I'm using a 240GB SSD with 700TB of write capacity, I could compile that system from the ground up over 600,000 times. To put that into perspective, imagine that I'll have that drive for five years before I decide to replace it with something bigger and faster. On a five-day work week with no days off for good behavior, I can still compile that entire 1.1 GB project 488 times per day; if that's even achievable with currently available hardware then I'd like one of those machines.
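Here's that arithmetic as a quick Python check; the numbers are from above, and the work-day count is my reading of the five-year, five-day-week scenario:

# 700 TB of write capacity, 1.1 GB written per full build
write_capacity_gb = 700_000
build_size_gb = 1.1

total_builds = write_capacity_gb / build_size_gb
work_days = 5 * 52 * 5  # five years of five-day weeks, no days off

print(total_builds)              # about 636,364 full builds
print(total_builds / work_days)  # about 489.5 per day, within rounding of 488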
Here's how I suggest you calculate how many years your drive will last.
First get the total size of your application directory in MB. Call it TotalBytes.
Next, figure out the average size of the DLL or EXE you're creating on a compile, in MB. Call that ExecutableBytes.
Years of drive life = (Drive capacity in MB * 1000) / (((TotalBytes * full compiles per hour) + (ExecutableBytes * partial compiles per hour)) * 40 * 48)
TotalBytes overestimates the bytes written on a full compile; ExecutableBytes underestimates the bytes on a partial compile, but not by a lot, because when you make one change you're typically only recompiling a small portion of the OBJs.
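If you'd rather script it than plug numbers in by hand, here's the formula as a small Python function. The function and parameter names are mine; the multiplier of 1000 reflects the formula's conservative write-cycle assumption, and the 40-hour, 48-week defaults match the scenario below:

def ssd_life_years(drive_capacity_mb, total_mb, executable_mb,
                   full_compiles_per_hour, partial_compiles_per_hour,
                   write_cycles=1000, hours_per_week=40, weeks_per_year=48):
    """Estimate years of SSD life for a given compile workload,
    assuming total write capacity = drive capacity * write_cycles."""
    mb_per_hour = (total_mb * full_compiles_per_hour
                   + executable_mb * partial_compiles_per_hour)
    mb_per_year = mb_per_hour * hours_per_week * weeks_per_year
    return drive_capacity_mb * write_cycles / mb_per_year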
For example, say you have a 10MB app that takes up a total of 30MB of space, including a 4MB DLL. You're a maniac about global data so every hour you have to do a full compile. Assuming an eight-hour day, that's one per hour. You do an incremental compile once every five minutes, or 12 per hour. Rough estimates? Sure, but any error is well within an order of magnitude.
You're still cheaping out on the SSD so you only have a 240GB model. And while you may be cheap, you're also sensible enough to work reasonable hours - 40 per week with four weeks of holidays. If you're working 60 hours per week on a regular basis you're either a freak of nature or you were absent from class when the professor explained the law of diminishing returns. In any case, feel free to adjust these numbers as you see fit.
240,000 * 1000 / (((30 * 1) + (4 * 12)) * 40 * 48)
The answer: 1,602.56 years!
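And here's the same worked example run through the sketch function above:

years = ssd_life_years(drive_capacity_mb=240_000, total_mb=30, executable_mb=4,
                       full_compiles_per_hour=1, partial_compiles_per_hour=12)
print(f"{years:,.2f} years")  # 1,602.56 years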
I don't think compiling on an SSD is much to worry about.
But if you really are concerned about minimizing disk wear, consider that big apps result in more writes because the executables are bigger. Similarly, having more than one procedure per module (a practice I'd really like to see abolished) results in writing out procedures that haven't changed.
Is there still a good reason to use a RAM drive for OBJs? Sure - RAM is still quite a bit faster than flash memory for both reading and writing, so you will gain some performance. And since OBJs are expendable it won't matter that they go away the next time you reboot. So no harm, no foul. But you don't need to worry about wearing out your SSD.
Disclaimer: I've checked my figures with reasonable care, but I make no guarantees about the durability and/or reliability of any hardware you may purchase or the applicability of this formula to that hardware. You're still on your own. If you do find an error in my calculation please let me know. And if you have an SSD you probably have or can obtain software to monitor the lifespan of the drive.
For your convenience, here's a spreadsheet with the above formula: SSD Write Life.xlsx