You think it’s safe, but then Google sinks to a whole new indefensible low

So it seems Google, faced with AI that labelled black people as animals, were complicit in a contractor exploiting vulnerable black folks to improve the system.

But it’s cool, right? They make good services.

Don’t be evil.


Thanks for posting this article. I’m not surprised by Google’s tactics at all really. This section did give me pause though:

“The spokesperson added that the “collection of face samples for machine learning training” were intended to “build fairness” into the “face unlock feature” for the company’s new phone, the Pixel 4.

“It’s critical we have a diverse sample, which is an important part of building an inclusive product,” the spokesperson said, adding that the face unlock feature will provide users with “a powerful new security measure”.

If they are building a FaceID-like feature, what does “build fairness” mean? Why do you need a diverse sample for YOUR phone?

My guess is that the system needs to be able to very, very accurately recognise a real, live face vs. a photo, mannequin, cactus,… my point being that they don’t want it to be possible to train your phone to unlock when you point it at a random object of your choosing; it should only work with faces (whether this is a good requirement is a separate issue).

To do so, it needs to be trained on real, live faces. I haven’t read the article (I’m responding to the comment), but it sounds like (a) Google is trying to do a good thing and make its system universal, and (b) Google is doing a bad thing in not being honest with the participants.
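To make the “build fairness” point concrete: one standard way to check whether a face-unlock model is biased is to measure its error rate separately per demographic group rather than in aggregate. This is an illustrative sketch only, not Google’s actual pipeline, and the group names and numbers are made up for demonstration:

```python
# Illustrative sketch (NOT Google's actual pipeline): why a diverse sample
# matters. A model can look accurate overall while failing one group badly,
# so fairness checks compute error rates per group.
from collections import defaultdict

def false_rejection_rates(attempts):
    """attempts: list of (group, accepted) for genuine unlock attempts.
    Returns each group's false-rejection rate (a real user being rejected)."""
    totals = defaultdict(int)
    rejections = defaultdict(int)
    for group, accepted in attempts:
        totals[group] += 1
        if not accepted:
            rejections[group] += 1
    return {g: rejections[g] / totals[g] for g in totals}

# Hypothetical evaluation data: a model trained on a non-diverse sample
# often shows a higher false-rejection rate for under-represented groups.
results = (
    [("group_a", True)] * 98 + [("group_a", False)] * 2
    + [("group_b", True)] * 85 + [("group_b", False)] * 15
)

rates = false_rejection_rates(results)
print(rates)  # group_a: 0.02, group_b: 0.15 -> the model is unfair to group_b
```

Collecting a diverse training set is the usual remedy when this kind of per-group gap shows up; whether the data was collected ethically is, as above, a separate question.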

Apple similarly had to collect data from a diverse group of test subjects when developing FaceID, though it seems like they did much better on the informed-consent front:


Don’t be evil.


Actually, I’m wondering now whether they do verification in the cloud versus on-device, like we get with the iPhone?

I think it’s entirely fair to point out that they’re trying to fix their previous racism with new and even more exploitative racism.

They’re not “trying to do a good thing”, and don’t get a cookie for fixing something that should never have happened in the first place.

None of this was a surprise to them. None of it was new. They had all the information they needed, and existing precedent that would have allowed them to avoid it. Privileged engineers build privileged solutions. Their hubris is their responsibility, and framing it as “trying” to “do the right thing” is, respectfully, letting them off the hook for being awful when they didn’t need to be.
