How can you back up your files when Optimize Mac Storage is turned on?

The undated rant illustrates what was, at the time, a behavior that - as designed - was likely to cause data loss. That’s the sort of thing @SpivR was talking about, and it would require a real backup to recover from.

If you need something more current, here’s a thread where people indicate that the problem in the “rant” is still an issue:

https://support.google.com/drive/thread/6112837?hl=en

Sure. The same is true with smartphones, though. Data loss is relatively rare. By that logic, iCloud backup seems pretty useless. I’ll just go disable it right now. :wink:

I don’t think the idea of backups is “falling away” - I think it pretty much never existed to begin with. In 30-ish years of working with computers, I’ve known hardly anybody who actually made backups. It’s not like everybody did it 30, 20, or 10 years ago and now they don’t. They never did it.

The distinction, of course, is that now manufacturers are starting to develop methodologies whereby consumer data gets backed up without the consumers’ intervention (and sometimes, without their knowledge). It’s good progress.

But calling a cloud-based sync a backup is bad terminology at a minimum. The fact that Apple (hypothetically) has a backup drive in a data center somewhere, from which they could recover the 30 important photos or the critical business documentation that got torched by a sync glitch or user error, doesn’t mean you’ll ever get your data back.

And that probably goes double or triple for Google, as their end-user support is (in my experience) much sketchier than Apple’s.

In the case we’re talking about, Google, Apple, et al. manage and back up the files under their control that people sync to. The fact that you aren’t aware of the redundancy and backups taking place doesn’t change the fact that, of course, the files are backed up.

Okay - so what’s the mechanism by which I contact them to get a file back?

I’ve read about a number of people losing data due to sync glitches, UI/UX glitches, etc., and I don’t know that I’ve ever seen a story of Apple / Google doing a data restore.

Not sure what exactly you mean by “okay,” but now you seem to be pivoting from arguing about what a backup is to some other issue. Not really interested in traipsing around with this; it’s clear that petabyte upon petabyte of data is used, backed up, and maintained in the cloud for consumers and businesses with very little trouble or downtime, that this will proceed apace, and that people are generally satisfied with it. No backup of any kind is 100% effective, but the FUD and hairsplitting here does not convince.

I’m not pivoting. A backup that can’t be restored isn’t a backup by any definition I’m aware of.

I agree that the data is used and maintained, generally with very little trouble or downtime. But when the trouble and downtime actually occur, I find myself really, really fuzzy on what you mean by “backed up” if there’s no actual mechanism to get the data out of the backup.

What does “backup” mean in your parlance if it can’t be restored when problems occur?

Enough. Every single web service that maintains data backs it up. Just because you aren’t aware of it doesn’t mean it doesn’t happen. I reject your unsupported assertion that restores do not take place. They just don’t take place individually at user discretion. And just because you can’t direct retrieval of backed up data under a service’s control doesn’t make it less of a backup. It’s absurd and uninformed for anyone to actually believe that data in iCloud or Google Drive or OneDrive isn’t backed up, or that it isn’t or cannot be restored by the service according to its own needs.

They don’t - not for the only people who really matter in this equation, the end users.

I would submit that the average user, when they lose data, cares very little about whether or not Google / Apple could, hypothetically, put their data back, if they wanted to.

How many people have ever said, “I lost all of my photos due to a sync glitch. It’s great to know that my data is backed up, safe, and that I’ll never see it again”?

Your personal definitions for backup and restore don’t fly, sorry.

They do, if you bother to look. For example, from a 15-second search.

Maybe cloud services have a better understanding of what and how they need to market than some individuals would prefer. (shrug)

It’s not ‘blind faith’ to recognize the massive successes in cloud storage, where users sync and clouds back up and restore as needed, nor to recognize that this trend is not abating, and that people like you or me, who do our own local backups and pay for online backup, are a small minority and getting smaller. (And not without reason.)

Not to be too picky here, but you’re the one defining “the user can’t get their data back” as a backup. :slight_smile:

No, you’re the one redefining backup/restore as something that must be user-directed. I do not accept your contention that users can’t get their data back and you shouldn’t use quotes for me if you’re not quoting me. Your redefinitions and fake quoting and hairsplitting are really something else, Robert.

I’m just saying that if the user can’t get their data back, the user’s data isn’t backed up in a way that’s likely meaningful to the user.

This isn’t hairsplitting. If you lose your data in Google Drive, you have zero way to get it back. From where you sit, there’s no backup whatsoever.

If you’re less likely to lose your data, that’s super-cool. Kudos to Google for having a reliable data center.

But when you have a problem, the data is gone, and your only option is to restore from an actual backup.

I’m sorry Robert, but I’d continue this discussion only if I thought you were discussing or arguing in good faith. Stats show you already reply to me more than anyone else on this forum; that ought to change.

Belt or suspenders, or both? It depends. There are backups, and there are backups. What I mean is that backups don’t exist for their own sake; backups are for recovery from failure, and there are different kinds of failure. There is media failure: the inability to read a specific file or storage device because the physical medium exhibits an anomaly (e.g., a hard disk crash, immersion in liquid). There is network failure: the inability to contact a remote storage device because of network problems. And there is site failure (e.g., your house burns down, taking your computer gear and any backup media stored in the house with it, or the iCloud hosting farm endures an earthquake).

As a sysadmin in a development shop years ago, I made two rotating sets of weekly backups: one on-site and one off-site. So the most we could lose in the event of media or site failure was two weeks of coding labor. Replacing hardware and network configuration might take longer.
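A minimal sketch of that kind of rotation, assuming hypothetical paths and an rsync binary on the PATH (this illustrates the idea, not the actual tooling that shop used):

```python
# Rough sketch of a two-set weekly backup rotation: alternate between
# an on-site set and an off-site set based on the ISO week number.
# All paths here are hypothetical placeholders.
import datetime
import subprocess

SOURCE = "/Users/me/Projects/"          # data to protect (trailing slash: copy contents)
DESTINATIONS = {
    0: "/Volumes/BackupSetA/weekly/",   # kept on-site
    1: "/Volumes/BackupSetB/weekly/",   # rotated off-site
}

def weekly_backup() -> None:
    # Even ISO weeks go to set A, odd weeks to set B, so a media or
    # site failure costs at most about two weeks of work.
    week = datetime.date.today().isocalendar()[1]
    target = DESTINATIONS[week % 2]
    # rsync mirrors SOURCE into the chosen set; --delete keeps it exact.
    subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)

if __name__ == "__main__":
    weekly_backup()
```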

Having a local backup disk, à la Time Machine, helps you recover from media failure. But what if your computer dies and you don’t have immediate access to another Mac? Are you trying to recover only developer code that runs on a Mac? Or are you concerned only about image files that can be recovered on a Windows device, assuming removable media with a file system readable by Windows?

If your app/services or geographically dispersed development team rely on network services and storage, you need to account for network failure or remote site failure as well as media failure. How long are you willing/able to go without access to the networked repository? Can each team member work disconnected and check in any changes in a few weeks? How’s your source-code control system gonna handle that? Have you stress-tested your backup plan?

Apple’s iCloud server farms themselves have, I suspect, both automatically backed-up local media (though I’m not sure of the service levels) and zoned, geographically dispersed backup sites. If they don’t, then 1) Apple needs to rectify that, and 2) you, the users, need to decide what level of outage, recovery time, and permanent data loss you can tolerate and plan accordingly. Losing some family photos is tragic, but probably not the end of the world; losing data that you need to put food on the table probably requires greater levels/types of redundancy and recovery, commensurate with the acceptable risks and losses.

Don’t just trust… verify, as Ronald Reagan used to say about arms control with the USSR. Have a well-designed plan for backup and recovery based on acceptable trade-offs of cost, convenience, and service level, but actually test the recovery plan as well. Modern risk-management best practices for governance and accountability in the business world require at least annual recovery-plan testing.
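Since actually testing recovery is the step most people skip, here’s a minimal sketch of one piece of such a test: verifying that a restored tree matches the original, file for file. The directory names are hypothetical placeholders.

```python
# Verify that a restored directory tree matches the original by
# comparing SHA-256 digests of every file.
import hashlib
from pathlib import Path

def digests(root: Path) -> dict:
    # Map each file's path (relative to root) to its SHA-256 hex digest.
    return {
        p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def verify_restore(original: str, restored: str) -> bool:
    a, b = digests(Path(original)), digests(Path(restored))
    if a == b:
        return True
    for missing in sorted(set(a) - set(b)):
        print(f"missing from restore: {missing}")
    for path in sorted(set(a) & set(b)):
        if a[path] != b[path]:
            print(f"contents differ: {path}")
    return False

if __name__ == "__main__":
    ok = verify_restore("/Volumes/Original", "/Volumes/RestoreTest")
    print("restore verified" if ok else "restore FAILED verification")
```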


Since this discussion of iCloud evolved to include all cloud services, I did a quick review of Google. Here’s what I found last night.

Today Google states the following:

“Google’s highly redundant infrastructure also helps protect our customers from data loss. For G Suite, our recovery point objective (RPO) target is zero, and our recovery time objective (RTO) design target is also zero. We aim to achieve these targets through live or synchronous replication: actions you take in G Suite Products are simultaneously replicated in two data centers at once, so that if one data center fails, we transfer your data over to the other one that’s also been reflecting your actions. Customer data is divided into digital pieces with random file names. Neither their content nor their file names are stored in readily human-readable format, and stored customer data cannot be traced to a particular customer or application just by inspecting it in storage. Each piece is then replicated in near-real time over multiple disks, multiple servers, and multiple data centers to avoid a single point of failure. To further prepare for the worst, we conduct disaster recovery drills in which we assume that individual data centers—including our corporate headquarters—won’t be available for 30 days. We regularly test our readiness for plausible scenarios as well as more imaginative crises like alien and zombie invasions.”

Source: https://static.googleusercontent.com/media/gsuite.google.com/en//intl/en/files/google-apps-security-and-compliance-whitepaper.pdf


If memory serves, this is the same language they used two years ago.

Spanning, a cloud-to-cloud backup service, explained it this way:
“Of course, Google has disaster recovery systems in place, so if there is a problem with one of their servers that could affect your information, they are able to recover any lost data through their own internal backups. However, these backups are not accessible by end-customers, and they don’t cover several common ways of losing data in the Google Apps environment. Essentially, Google can protect you from their own mishaps, but not your non-hardware related issues.”

It appears all cloud services are much the same. Googling “Microsoft 365 Disaster Recovery Plan” led to more information than I was willing to read. Searching for “iCloud Disaster Recovery” yielded zero results.

So, IMO, if you have data in the cloud, make your own backups. Unless the zombies win, what’s there today will be there tomorrow. What was there yesterday might not be.
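One low-tech way to act on that, sketched below with hypothetical paths (the iCloud Drive location assumes macOS): copy the locally synced cloud folder to a dated snapshot on an external drive. Keep in mind that with Optimize Mac Storage or a streaming/online-only mode, some files may exist locally only as placeholders, so make sure the folder is fully downloaded before trusting a copy like this.

```python
# Copy the locally synced cloud folder to a dated snapshot on an
# external disk. Paths are hypothetical placeholders.
import datetime
import shutil
from pathlib import Path

CLOUD_FOLDER = Path.home() / "Library/Mobile Documents/com~apple~CloudDocs"  # iCloud Drive on macOS
BACKUP_ROOT = Path("/Volumes/MyBackupDisk/cloud-snapshots")

def snapshot() -> Path:
    stamp = datetime.date.today().isoformat()   # e.g. 2023-11-05
    dest = BACKUP_ROOT / stamp
    # copytree creates the dated folder and raises an error if today's
    # snapshot already exists, so one can't be silently overwritten.
    shutil.copytree(CLOUD_FOLDER, dest)
    return dest

if __name__ == "__main__":
    print(f"snapshot written to {snapshot()}")
```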


All based on a single page on one site (and a failure to find supporting evidence elsewhere) which must be complete and therefore extend to all cloud services.

“Basically” :thinking:

Nope. Not even close to being complete. But enough for me to strongly recommend, to the person who asked me the question, that they make their own backups.

If someone wants to keep pulling on this string they may find I was “basically” wrong. :blush:

There is a lot I don’t know, or have forgotten. For example:


… that they look into it further, rather than give credence to painting an entire industry with a one-webpage-result brush?

Oh.

iCloud backups are supposed to take place nightly. So someone who doesn’t have their phone plugged in overnight (or doesn’t allow backups) for six months is the edgiest of edge cases.