Parents who use Google Photos, be extremely careful what you upload

It would depend on whether they’re blocking an account or a person. I suspect that they’d be inclined to apply measures like this against a person.

I’m wondering about this concept. This is just an assumption.

A user has Accounts A and B.
Account A gets locked
Account B also gets locked

“What if” scenario: Account B was locked because Account A was listed as the recovery email address.

If that’s how it works, then you might need even a third account to avoid that.

Account A - locked
Account B - 2nd account
Account C - recovery

But if A has been forwarding to B in anticipation of such a disaster, Big Brother Tech might lock both A and B, which would defeat the whole purpose. At that point A needs to back up to another provider.

If that happened, I would move my domains to another provider and upload all my data from a backup.

Most email servers, when unable to deliver an email, will normally retry for up to three days. It normally takes about 24 to 36 hours for an MX (email) record change to be noticed by the smallest mail providers; I’ve seen inbound mail from the largest ones start arriving in less than six hours. It’s rare for inbound mail to be lost during a change.
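If you want to check how far an MX change has propagated, asking a couple of public resolvers directly is usually enough. Here’s a rough sketch in Python, assuming the third-party dnspython package is installed and using example.com as a stand-in for your own domain:

```python
# Rough sketch: query a domain's MX records via two public resolvers to see
# whether a recent MX change has propagated. Needs the third-party
# "dnspython" package (pip install dnspython); example.com is a placeholder.
import dns.resolver

def mx_records(domain: str, nameserver: str) -> list[str]:
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [nameserver]      # ask this specific resolver
    answer = resolver.resolve(domain, "MX")
    return sorted(f"{rr.preference} {rr.exchange}" for rr in answer)

for ns in ("8.8.8.8", "1.1.1.1"):            # Google and Cloudflare public DNS
    print(ns, mx_records("example.com", ns))
```

Once both resolvers return the new provider’s mail hosts, most new inbound mail should already be landing in the right place; anything sent in the meantime sits in the sender’s retry queue rather than disappearing.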

We can “what if” this thing forever. It will be interesting to see what, if anything, changes now that this story has been reported around the world.


Or, possibly even more likely, if all access to both A and B consistently correlated to the exact same IP addresses and device profiles, i.e. “they know it’s the same person.” Google normally doesn’t care, but if they’re angry with you, they have access to that type of information.


Several flags have been raised on this thread. Let’s keep in mind that we want to keep things polite.


I thought Apple ditched its plans for implementing CSAM scanning…

That seems like an extreme scenario, no?

It seems pretty polite to me, but I have no clue how it’s been able to remain that way. Everybody’s extremely volatile nowadays.

A .com domain at Hover is around $15/year, and I currently pay $6/month for each user account. In addition to having a unique email address (firstname@fullname.com), as long as I keep renewing my domains I will never need to change my email address.

I currently renew my domains every nine years.
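Back-of-the-envelope, assuming those prices and a single user: $15/year for the domain plus 12 × $6 = $72/year for the mailbox works out to roughly $87/year.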

I do too, but does the average individual want to actually do that?

I’m by no means an expert on this, so I could be completely wrong, but my understanding of the difference between Apple and Google is twofold (and maybe someone already addressed this above):

  • Apple has not rolled out its program
  • Apple’s program would only match the hash of a photo against a database of hashes of known bad photos, maintained by an independent organization. Google does this too, but also uses AI to flag other content that isn’t in that database. In other words, there are two kinds of material: known bad content and “new” bad content. Apple’s program was only going to look for the first. I’m not sure which of the two this case fell under (there’s a rough sketch of the difference below).

EDIT: Another difference is that Apple’s process was going to happen on-device, rather than in the cloud like Google’s.
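Purely to illustrate the difference between the two approaches (this is a sketch, not either company’s actual pipeline; real systems use perceptual hashes such as PhotoDNA or NeuralHash rather than a plain cryptographic hash, and every name, threshold, and database below is made up):

```python
# Illustrative sketch only, not Apple's or Google's actual pipeline.
# Real systems use perceptual hashes (PhotoDNA, NeuralHash) that survive
# resizing/re-encoding; plain SHA-256 is used here just to show the shape of
# "match against known hashes" versus "classify never-before-seen content".
import hashlib

KNOWN_BAD_HASHES: set[str] = set()   # hypothetical database from a clearinghouse

def run_model(image_bytes: bytes) -> float:
    """Stand-in for an ML classifier; returns a made-up score for illustration."""
    return 0.0

def hash_match(image_bytes: bytes) -> bool:
    """Hash-matching step: flag only if the photo matches a known-bad hash."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

def classifier_flag(image_bytes: bytes, threshold: float = 0.9) -> bool:
    """Classifier step: a model scores content that isn't in any database."""
    return run_model(image_bytes) > threshold

def should_escalate_to_human_review(image_bytes: bytes) -> bool:
    # Hash matching can only catch previously catalogued material; the
    # classifier path is the one that can flag brand-new photos, and is
    # therefore also the path where false positives can arise.
    return hash_match(image_bytes) or classifier_flag(image_bytes)
```

The only point of the sketch is that the second path is the one that can flag a never-before-seen family photo at all.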


Most of the average individuals I’ve met don’t take care of their tech, have hundreds of unread messages in their inbox, keep most of their files on their desktop, and don’t back up their devices. No, they don’t want to set up their own domain.


Yeah, that doesn’t entirely surprise me. Do they deserve to be educated on how to manage their technology correctly?

They all did, and I worked with a lot of them. Some never changed, some did, and some began helping others. A lot of people aren’t interested in technology and are content knowing the bare minimum needed to use it.

On-device: yes (for now).
They’ve been scanning email for years, and I would not be surprised if your iCloud folders are included in scans for malicious content.

You’re right; after the backlash last year, they have not rolled out their on-device scanning process.

Apple’s program will match a hash of whatever [insert government name here] wishes them to search for on your device.

Google’s approach makes more sense as a comprehensive means of detecting illegal material on its network, not just known material. The bit that is missing is an appropriate (human) safeguard. When the AI wrongly flags material as illegal, what rapid remedy is there?

The problem here is that this actually was flagged to the human team, and they sent it on to the police.

It’s not “rapid remedy”, it’s “any remedy” in this situation. The facts are presumably known to everybody, including Google. Nothing illegal happened. And Google has said that the decision is final and can’t be appealed.

My understanding is that the humans were mainly verifying that the photo was, in fact, a photo of a naked child (as opposed, perhaps, to a photo of something else the AI mistook for one). At that point they forwarded it.

To me, that part makes sense. You have a photo of a naked kid; you hand it over to law enforcement so the people whose job it is to determine whether something illegal is going on can do exactly that.

Where it all goes pear-shaped is when the police conclude there’s no evidence of any wrongdoing, but the account remains banned with no ability to appeal.

Google’s stated reason for the ban is CSAM. For a photo to be CSAM, though, there logically has to be a legal finding of fact, and that doesn’t exist because law enforcement came to the opposite conclusion.

Effectively, Google has asserted their unilateral authority to define something as CSAM, whether or not the legal system agrees.


They just decide with whom they do business, that’s all!
And I fully understand that: the customer knew the rules (or should have known them), had even participated in developing software to find pictures like that, and did not follow them.
Think about it: suppose this customer were allowed back. Google has to hope he won’t do anything like that again. But what happens if he does?
Do they run through the same process again? That consumes time, manpower, and a lot of money for Google AND the involved authorities!
And after that? Do they let him back in? And if not, why not?
And if they do let him back in, how many shots should he get?

And if, after the 3rd, 5th, or 10th time, they just flag the account as belonging to a person who doesn’t care about the rules, and stop monitoring it to avoid the unnecessary extra work and cost, what happens if that person then really does use the account to share criminal material?
What if something like that were done on purpose (for example, by someone who knows the software and the system behind it very well) to provoke exactly this kind of reaction?

Where should the red line be?
And how does this compare with other ways you can violate their rules?
Do they need to put up a large chart showing which rules you can violate, and how often, before you are “punished” with particular consequences?
Maybe they need to implement a points system like the ones commonly used to punish traffic violations?!

How much “hate” can you upload before it equals a picture of a nude child?
How much spam can you send before it equals that?
How often can you bully someone via Google before it equals that?
And so on…!

Google is a private company that can decide to do business with whomever it wants.
And if they don’t want to do business with someone who violated their rules and cost them money and manpower, that is their decision alone!

You’ve made your position pretty clear above, but you seem to keep assuming facts not in evidence. Google’s TOS bans CSAM explicitly:

CSAM stands for child sexual abuse material. It consists of any visual depiction, including but not limited to photos, videos, and computer-generated imagery, involving the use of a minor engaging in sexually explicit conduct. Our interpretation of CSAM follows the US federal definition of “child pornography.”

But the material in question isn’t CSAM, as determined by the appropriate law enforcement agencies. And if it’s not CSAM, the user didn’t violate Google’s rules. And yet Google has banned them for violating their rules regarding CSAM.

The question has to do with how Google acts when they make a mistake - no matter how well-intentioned.

And the answer is that they don’t seem to care.

Regarding costs to Google: they have taken on these extra scanning efforts voluntarily, not because they’re required to. I would think the cost of dealing with errors made in the process was implicitly assumed to be part of taking on that non-required responsibility. But that’s the thing: Google won’t admit they made a mistake, despite law enforcement concluding otherwise.

Regarding “involved authorities”, as a general legal principle, I’m not aware of any first-world country that considers the cost of investigating false reports as a strike against the person who’s being accused of a crime. So I’m not quite sure where you’re going with the “and the involved authorities” part…?

I would assume he won’t do this inadvertently again, but honestly? Google took the scanning activity upon themselves. Google’s AI identified the material. Google’s human reviewer decided to pass it on to law enforcement. And in the end, they made an objectively incorrect determination, which they see no reason to correct.

If the guy had posted ACTUAL CSAM, I’d be fine with his account being suspended. But as long as Google keeps getting it wrong, I think the person should get another “shot” every time Google screws up.

There’s an interesting write-up of the legal side of this, in light of US law (the applicable governing law for these particular cases), from Ben Thompson at Stratechery:

Even if you grant the arguments that this awesome exercise of surveillance is warranted, given the trade-offs in question, that makes it all the more essential that the utmost care be taken in case the process gets it wrong. Google ought to be terrified it has this power, and be on the highest alert for false positives; instead the company has gone in the opposite direction, setting itself as judge, jury, and executioner, even when the people we have collectively entrusted to lock up criminals ascertain there was no crime.
