One thing I’m leery about in recent Macs is the inability to replace failed SSDs. The storage cells wear out over time – you only get so many writes to an individual location. The controller does wear leveling to spread writes evenly across the cells, but the flash will eventually fail.
With the Apple Silicon Macs there have been statements that you don’t need as much RAM as in the Intel Macs. I think those statements are based on the very fast swapping this architecture allows. However, every swap-out means a disk write. So going with a minimum-RAM system and putting the money into SSD capacity instead might be the wrong approach to preserving the machine’s life.
The reason it is “not for RICH dudes” is that if you buy a new Mac each year it probably won’t be around long enough for SSD failure. The bottom line is that a 1TB drive is rated for 600 TB of writes before it will fail. Smaller drives are rated for proportionally fewer terabytes of writes.
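To make the proportionality concrete, here’s a rough back-of-the-envelope sketch. The 600 TBW rating is the figure above; the daily write volumes are invented purely for illustration:

```python
# Rough SSD lifespan estimate from a TBW (terabytes written) endurance rating.
# 600 TBW for a 1 TB drive is the figure discussed above; the daily write
# volumes below are made-up examples, not measurements.

def years_until_tbw(rating_tbw: float, gb_written_per_day: float) -> float:
    """Years until the endurance rating is reached at a constant write rate."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return rating_tbw / tb_per_year

# A 1 TB drive rated 600 TBW; a 256 GB drive scales proportionally to ~150 TBW.
print(years_until_tbw(600, 50))   # light use, ~50 GB/day, on the 1 TB drive
print(years_until_tbw(150, 50))   # the same use on the 256 GB drive
print(years_until_tbw(600, 500))  # heavy swap/video churn, ~500 GB/day
```

At ~50 GB/day the rating lasts decades even on the small drive; it’s the heavy-churn case where the quarter-size drive starts to matter.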
Massive swap use usually occurs when you’re working with lots of big files: photo and video work. If you’re not working with big files, you’ll have a lot less swap to deal with.
I have an Intel MBA with 8 GB of memory and I hardly use swap with my personal computer use habits. However, if I get an M1 MBA, I’ll get it with 16 GB of memory and a 256 GB SSD because I know I don’t need much storage, but I will need more memory because most actively developed apps are Electron and use more memory than native apps.
The fact that you can’t replace modern SSDs is a very big reason to use multiple forms of backup: both Time Machine and either SuperDuper! or Carbon Copy Cloner.
A related thing that I try to publicize is that external SSD drives, when unplugged, have a rated data retention of about 1 year.
After that, data storage is no longer guaranteed.
For use around the home or office, not a problem, but with 1, 2, and 4 TB SSDs becoming affordable, something to be aware of if you archive data and stick it in long-term physical storage, a bank vault, a relative’s home, etc.
Hard drives are rated about 10 years, last time I checked.
Hard drives have mechanical issues - bearings can seize up, lubricant can break down, etc. - so SSDs sound attractive, but only if you are rotating them and powering them back up before the year is up.
600TBW is just the manufacturer’s guarantee. Realistically, an SSD will last an order of magnitude beyond that. For most users that’s 20-30 years of heavier use, and lighter use will never wear the SSD significantly.
Maybe, but the video shows a bunch of MacBooks with failed SSDs. Of course I don’t know what percentage this represents and the video is from a repair shop so only sees failures.
I do know that in designing products that use EEPROMs (a different but similar technology) I always took steps to minimize writing. Do it wrong and the parts can actually be destroyed in seconds.
I agree with @sgtaylor5. I’ve never purchased a MacBook with more than 8 GB of RAM, even when my company was paying. Currently I’m running Safari, Edge, Chrome, Mail, Messages, Photos, Arq, and Terminal, and Activity Monitor says swap used is 256K.
I paid $1000 for my M1 MBA. If I had added 16 GB of RAM and 512 GB of storage it would have cost me $1400, and the chances of it being destroyed for some other reason would be the same.
You have a valid point but, for me, it’s not worth spending more on a computer for “insurance” against an SSD failure. IMO the odds of needing a “too expensive” repair on an out of warranty M series Mac is a much higher risk.
I purchase only what I need and am happy if my laptop lasts 3 years.
Manufacturers have a vested (as in profit) incentive to have the maximum specs possible for a product they are producing.
The rated life is a trade-off between better bragging rights and the higher costs of handling more warranty failures.
Rest assured, hard drive manufacturers “do the math” to the umpteenth decimal point. I would not shrug off their ratings and definitely not by orders of magnitude.
Of course, a one-off sample of any product can last longer than the calculated or measured MTBF, but it really depends upon operating environment, heat, power, humidity, etc. too.
In an earlier job, I worked with some computer systems, and we had engineers who spent hours and weeks testing “identical drives” and came up with very different MTBF rates and quality results.
They may all look the same, but they are very different depending on manufacturer.
I understand. I guess I would just suggest running smartctl yourself to get an idea of realistic wear for your use case. For example, the computer I’m writing on has written 60TB in about 22 months and is projected to last another sixty years (and it would not be unusual for wear to exceed 100%.) Others have worn faster but usually on track to last at least 20 years. The worst I saw in the last few years was an 8GB M1 on track to last 15 years.
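For anyone who wants to try this themselves: `smartctl -j -a /dev/disk0` emits JSON, and on NVMe drives the health log includes a `percentage_used` field, the fraction of rated endurance consumed. A minimal sketch of the projection described above, with made-up sample numbers (the field names follow smartmontools’ JSON output; check them against your smartctl version):

```python
# Sketch: project total SSD life from smartctl's NVMe health log.
# Field names are from smartmontools' JSON output (smartctl -j -a);
# the sample values below are invented for illustration.
import json

def projected_years(smart_json: str, age_years: float) -> float:
    """Linear projection of total drive life from percentage_used and drive age."""
    log = json.loads(smart_json)["nvme_smart_health_information_log"]
    used = log["percentage_used"]  # % of rated endurance consumed so far
    if used == 0:
        return float("inf")
    return age_years * 100 / used

# Made-up example: 3% of rated endurance used after ~1.8 years of ownership.
sample = ('{"nvme_smart_health_information_log": '
          '{"percentage_used": 3, "data_units_written": 120000000}}')
print(projected_years(sample, 1.8))  # ~60 years of total life at this rate
```

It’s a straight-line projection, so it only holds if your write habits stay roughly constant, but it gives a quick sanity check on whether wear is worth worrying about.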
More typical drive failure in a personal computer comes from electrical faults or heat. The environmental factors you mention are more likely to damage the computer in other ways before they prematurely wear the flash.
In a constant use situation (server/datacenter) you do adhere more closely to your formula of projected writes and age, but even there you’re looking at roughly 1500x and 15000x for write-optimized flash if you’re using one of the primary manufacturers Apple uses for their SSDs.
I’m definitely not opposed to having a bigger drive on a Mac but I wouldn’t want someone to spend that money just because they were afraid of wearing out a smaller SSD early.
Another anecdata (love the term): I put an SSD in my 2010 MBP in 2013. The computer has been in storage since 2018, and I booted it up a couple of days ago in search of some files I forgot to upload to iCloud back in the day. The old beast woke up and, to my surprise, it felt more responsive than I remembered. The only tell-tale signs of its age were the non-Retina display and a surprising smell of old electronics (capacitors? battery? who knows). But the thing was still serviceable.
Edit: For the curious reader, no, I did not find the files. Bummer! Which means that the most common source of data loss is not hardware failure, it’s user failure.
It’s always a probability of failure - the “expected life” is really a “mean time between failures” (MTBF). For relatively new (or reliable) devices there won’t be enough data to compute this accurately, and it is only a MEAN - individual units can fail much earlier or much later. (Electronics failure times typically follow an exponential or Weibull distribution rather than a bell curve, so it isn’t even a clean 50/50 split around the mean.)
Given that any Mac is dozens of subsystems and chips, and they all have different MTBFs, it really is chance how long the Mac will last and what failure will kill it off. On average, Macs will last quite a few years, but some won’t and some will last much longer.
Worrying about SSDs instead of screens, motherboards, power supplies, batteries etc. is probably a step too far.
The problem is that the more components there are, the greater the overall failure rate (and the lower the system MTBF). And a failure of the SSD no longer means replacing a $100 part but an entire $1000 logic board. Right to Repair will never help here because that Mx processor module which contains the SSD can’t be repaired by mere mortals.
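The component-count point can be put in numbers: for parts that all have to work (series reliability), failure rates add, so the system MTBF is the reciprocal of the sum of the reciprocals. The per-component MTBF figures below are invented for illustration:

```python
# Combined MTBF for components that must all work (series reliability):
# failure rates (1/MTBF) add, so the system MTBF is always below that of
# the worst single part. All MTBF values below are hypothetical.

def system_mtbf(mtbfs_hours) -> float:
    """MTBF of a system that fails as soon as any one component fails."""
    return 1 / sum(1 / m for m in mtbfs_hours)

# Invented per-component MTBFs, in hours.
parts = {"SSD": 1_500_000, "logic board": 1_000_000,
         "display": 800_000, "power supply": 600_000}
print(system_mtbf(parts.values()))  # well below the 600,000-hour worst part
```

Two identical 1,000,000-hour parts in series already halve the system MTBF to 500,000 hours, which is why adding subsystems always pulls the whole machine’s expected life down.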
The problem with an SSD as compared to, say, RAM or the processor, is that the SSD has a known wear mechanism. Every write slightly destroys the part. It’s not as bad as a mechanical part, but probably the worst MTBF of any electronic component in the computer.
So in applications where the drive gets heavy use, such as video editing, the wise move is to do the heavy writing on an external drive, and have that be the expendable device instead of the whole computer.
My Mac is backed up so if it happens, it happens. How many years it takes to happen is another story for Apple and me as an Apple consumer. If my 2020 MacBook Pro M1 lasts 7+ years before it happens, that’s what I’d call a reasonable lifespan for a £1500 computer. If it makes it to 3-4, I’d find that unacceptable, and next time around I’d buy a cheaper model, base and that’s all. Maybe even give the iPad another shot for a value option.
Anybody doing a tonne of reads/writes in a professional capacity, whose income depends on their Mac, already knows who they are and knows the risks - and in a commercial/professional setting can also replace their machines more often than average me or you.
Until we’re seeing <2-year failure rates become the norm, or a vocal population akin to the butterfly keyboard one, it’s arguably a non-issue.
If a company or professional gets more than 2.5 years out of a device, that’s exceptional and benefits the bottom line - enjoy it while you can, but don’t get used to it. Most things are replaced far more often than that when used professionally. Not every product is going to be the iPad 2, with its remarkable support, durability, and ability to last forever. Apple produces one of those every so often, and it’s great, but we can’t expect them to deliver it every release.
Software like Backblaze copies larger files before backing them up - and by default it copies them to the primary drive. This is configurable, but backing up an external Plex library (particularly one with a significant amount of churn) or something similar could very well result in all of that data being written to your primary drive as part of that backup process. And of course if the files change, same thing all over again.
I’ve also seen my 32 GB/1 TB 2018 Mac Mini with over half its RAM free and 3 GB of swap being used. So there are apparently use cases where Apple, for some reason, prefers to use swap instead of RAM, even if the RAM is sitting there and completely available.
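If you want to watch this yourself, macOS reports swap via `sysctl vm.swapusage`, which prints a single line of totals. Here’s a small parser for that line; the sample string is invented, and the exact format may vary across macOS versions:

```python
# Quick check of macOS swap use: `sysctl vm.swapusage` prints a line like
#   vm.swapusage: total = 4096.00M  used = 3072.00M  free = 1024.00M  (encrypted)
# This helper pulls the numbers out of that line. The regex assumes the
# format shown above, which may differ on other macOS versions.
import re

def parse_swapusage(line: str) -> dict:
    """Return total/used/free swap in MB from a vm.swapusage line."""
    return {k: float(v) for k, v in re.findall(r"(\w+) = ([\d.]+)M", line)}

# Invented sample matching the ~3 GB of swap mentioned above.
sample = "vm.swapusage: total = 4096.00M  used = 3072.00M  free = 1024.00M  (encrypted)"
print(parse_swapusage(sample)["used"])  # 3072.0 (MB), i.e. 3 GB of swap in use
```

Logging that value periodically alongside free RAM (from Activity Monitor or `vm_stat`) is an easy way to see whether your own workload hits this swap-with-free-RAM behavior.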
Only one of those two is Apple’s fault - but there are non-obvious edge cases that can cause your drive to be used extensively.
I hope I don’t sound flippant, but as someone who relies heavily on my computer/technology for my work, I haven’t had only a single computer (i.e. a single Mac) in many years.
Any computer failure would be an inconvenience, but backup procedures and at least a second computer mean I have something working in minutes, not days.
I see your point, but again third party backup solutions of this kind are not the norm in the consumer field. They are for folks like us who like getting techy but I’d wager nobody who doesn’t identify as a techie would be maintaining backups via Backblaze.
And say they did widely adopt this - well, this issue seems not unique to Macs but inherent to the technology itself. Maybe Backblaze should give a warning, or find another approach, before (or if) it goes mainstream, if it really has a material effect on the lifespan of the computer.
But even then, I’d say at worst (if that) it shaves the seventh or eighth year off the life of your machine, not something that will ever lead to failure within the first five, surely? There would need to be data to support calling it a likely, real issue for most consumers.
It strikes me as the twice-as-frequent oil change thing. Sure, it’ll help your car last a full 15 years and more, which it might not if you stick to the schedule… but most annual-oil-change people will buy the car new, keep it until it’s 7 or so years old, sell it on, and not care how far it goes after that. People using cars in tough environments or commercially know about the more frequent maintenance required for the reliability they expect. Unless things are failing so fast and so often that it happens within the first half of the vehicle’s potential life, don’t expect governments or lawsuits to change anything. I’d say it’ll be the same for failing SSDs in computers too.
Are there any real big numbers behind this other than it theoretically being possible?
Another car analogy, people say washing your car too much can damage the paintwork… I mean what’s too often? Every week I do it. Some people say too much, and theoretically, I’m sure it’s scientifically resulting in some kind of more wear, but practically the benefits of doing that will outweigh the cons of doing that across a 20 year period. Is this the same? Micro-applicable?
Personally, I recommend it to tons of end-users. The thing isn’t whether it’s end-user or techy though, but rather that it’s a software process that potentially does unexpected things to the lifespan of your computer.
Can’t speak to Backblaze as I was aware of the potential problem from Crashplan before I switched - but with my dataset I burned 300+ TBW in just a couple of years as a web dev (i.e. not doing things to intentionally grind the disk) using CrashPlan. That was back when SSDs were replaceable, of course.
But what if a failure on one device gets synced to all of your devices, so that now you need to decide how far back to go with your snapshot backups and then restore the interim incremental backups?