Microsoft’s PhotoDNA


Have you ever stopped to think how many photos circulate online? A 2014 study estimated that internet users share 1.8 billion images every day. Snapchat users alone share around 9,000 photos each second. Of these, only a tiny fraction document crimes. But that tiny fraction can be crucial for saving people’s lives.

Child pornography exploded with the birth of the internet, and this new medium for photo sharing has created a new economic incentive to abuse and, in some cases, even to enslave children. PhotoDNA is a new project that attempts to fight child pornography by using tools developed to deal with the enormous volume of photos shared online. The idea is fairly simple, although the technology behind it is not. It computes hash values of images drawn from Facebook, Google, Twitter, and other online sources in order to identify missing children and other victims of child pornography.

PhotoDNA explained in 5 Steps

This image processing technology can trace the images of children who appear in multiple online sources in order to establish their identities and try to get them help.
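The matching described above rests on "robust" (perceptual) hashing: unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized, recompressed, or brightened, so near-duplicates can be found by comparing hashes. PhotoDNA's actual algorithm is proprietary; the sketch below uses a simple difference hash (dHash) on a small grayscale grid purely to illustrate the general idea. All names here (`dhash`, `hamming_distance`, the sample pixel grids) are illustrative assumptions, not part of PhotoDNA.

```python
# Illustrative sketch of perceptual ("robust") hashing, the general family
# of techniques PhotoDNA belongs to. This is a toy difference hash, not
# Microsoft's proprietary algorithm.

def dhash(pixels):
    """Difference hash: for each row, record whether each pixel is
    brighter than its right-hand neighbor. Returns a bit string."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a near-duplicate image."""
    return sum(a != b for a, b in zip(h1, h2))

# A tiny 4x5 "grayscale image" and a uniformly brightened copy of it.
original = [
    [10, 20, 30, 40, 50],
    [50, 40, 30, 20, 10],
    [15, 25, 35, 45, 55],
    [90, 80, 70, 60, 50],
]
brightened = [[min(p + 5, 255) for p in row] for row in original]

# Brightening every pixel equally preserves the left/right orderings,
# so the hash is unchanged and the Hamming distance is 0.
print(hamming_distance(dhash(original), dhash(brightened)))  # → 0
```

A cryptographic hash like SHA-256 would change completely after the same edit, which is why perceptual hashes are the right tool for tracing copies of an image across many sites.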

I found this project engaging because it uses new technologies to combat a longstanding problem. But more than this, I’m interested by the fact that it represents a productive partnership between the private and public sectors, particularly tech companies and law enforcement. Establishing a way for these two groups to work toward productive and ethical goals will be essential in the decades to come, especially in light of the revelations of widespread government privacy violations and digital snooping, which were in many cases aided by large tech companies like Microsoft.

Indeed, some people reading this, like privacy and free speech advocates, might not like the idea of a company like Microsoft working with law enforcement agencies on a topic like facial recognition. They’d have a point. After all, the exact same methods could be used by the NSA to trace entirely law-abiding citizens. These types of public-private partnerships might create an opening for more surveillance and more violations of privacy. I agree, but I also believe that it’s up to us, as members of civil society and more specifically academia, to advocate for using these technologies in an accountable and ethical manner, and to harness them for humanitarian ends.

Technologies like PhotoDNA are still in their infancy, and like all technologies, they can be used for good, for ill, and everything in between. For instance, a Microsoft research paper dealing with the underlying technology used by PhotoDNA (robust hash processing of a very large corpus of images) describes protecting intellectual property as one potential use. One major motive for this technology could therefore be construed as a negative by some: many people might not like the idea of a giant corporation suing individual internet users for copyright infringement involving images shared online, along the lines of the record industry after Napster. But that’s not what’s being done here. Just because a technology has a potentially negative application doesn’t mean that it can’t be reharnessed to serve positive goals.

PhotoDNA press materials are now on the Microsoft Digital Crimes Unit Newsroom

I think there’s a lesson here that both tech workers in the private sector and human rights advocates in NGOs can learn from: technology is a neutral tool, one that our collective ethics can shape. It’s on us to work together, crossing the lines between the public and private sectors and the tech and non-profit worlds, to find ways to use these new technologies that achieve a clear public good, like fighting child pornography. One thing we know for sure, though, is that making information about these types of projects freely available and well documented is crucial for us to be able to decide, as a society, whether they are positive or negative.

One thought on “Microsoft’s PhotoDNA”

  1. Nikita Singareddy

    The underlying technology that Microsoft (and MANY other companies) use for child pornography identification is very impressive. Even though it’s automated, there’s definitely a need for human vetting. I’d be interested to know if there’s something in place for Snapchat? Its functionality is much more fleeting and harder to interpret than Facebook or Twitter, which are much more personal and narrative-based, in my opinion. Snapchat definitely has a child pornography problem, but it’s mostly associated with sexting. I think a next step for them would be to tackle the issue head on – either developing their own proprietary detection software or using the same base as Google. http://www.digitaltrends.com/social-media/snapchat-child-porn-problem/

Comments are closed.