Google has refused to reinstate a man’s account after its automated systems falsely flagged a medical photo of his son’s groin as child sexual abuse material (CSAM), The New York Times first reported. Experts say this is an inevitable pitfall of trying to apply technological solutions to social problems.

Experts have long warned about the limitations of automated CSAM detection systems, especially as companies face regulatory and public pressure to address the spread of abuse material.

“These companies have access to a tremendously invasive amount of data about people’s lives,” said Daniel Kahn Gillmor, a technologist at the ACLU. “There are all sorts of ways in which the facts of your life are simply not legible to these giant information companies.” He added that relying on these systems puts people at risk of being “swept up” by state power.

The man, identified only as Mark by The New York Times, noticed that his son’s groin was inflamed, took a picture, and sent it to a doctor. The doctor used the image to diagnose Mark’s son and prescribe antibiotics. When the photo was automatically uploaded to the cloud, Google’s system flagged it as CSAM. Two days later, Mark’s Gmail and his other Google accounts, including Google Fi, which provides his phone service, were disabled over “harmful content” that was “a severe violation of the company’s policies and might be illegal,” according to a message on his phone, the Times reported. He later learned that Google had flagged another video on his phone, and that the San Francisco police had opened an investigation into him.

Mark was cleared of any criminal wrongdoing, but Google says it stands by its decision.

Google spokesperson Christa Muldoon said that the Google staff who review CSAM are trained by medical professionals to look for rashes and other issues. The reviewers are not medical experts themselves, however, and medical experts were not consulted when each case was reviewed.

According to Gillmor, this is just one way these systems can cause harm. Companies often put a human in the loop, for example, to compensate for an algorithm’s limited ability to distinguish harmful sexual abuse imagery from medical images. But those reviewers have inherently limited expertise of their own, and getting the proper context for each case requires even more access to user data. Gillmor said it is a far more intrusive process that can still be an ineffective way of detecting CSAM.

“These systems can cause real problems for people,” he said. “And it’s not only that I don’t believe these systems can catch every case of child abuse, it’s that they have really terrible consequences for people in terms of false positives. People’s lives can be upended because the humans in the loop simply make a bad decision and there is no reason for them to try to fix it.”

Gillmor argued that technology is not the solution to this problem. In fact, he said, it could introduce many new problems.

“There’s a sort of techno-solutionist dream [where people say], ‘Oh, well, there’s an app for finding a cheap lunch, so why can’t there be an app for finding a solution to a thorny social problem like child sexual abuse?’” he said. “Well, those problems might not be solvable with the same kinds of technology or skill sets.”
