Two teenage twins walk into the DMV in Georgia. Two teenage twins leave without their driver’s permits. What happened? An algorithm, that’s what. DMVs nationwide are adopting automated fraud detection systems that use computer vision to flag people who appear to be applying for a new license under an assumed name. The algorithm couldn’t tell the twins apart and flagged one of them as a fraud. So, no permits. Ridiculous.
Stories like this are popping up everywhere now. Algorithms are embedded in nearly every facet of government and industry. On the U.S. government’s secret no-fly list? Could be an algorithm landed you there. Didn’t get picked after the interview for that awesome job? Sorry. Maybe a computer algorithm didn’t like your voice. Are they fair? Is there discrimination? Was someone denied a service they were due? Was someone censored?
Journalists have a role to play here. We need to develop techniques for providing accountability around algorithmic systems.
The Computational Journalism Lab at the University of Maryland Philip Merrill College of Journalism is pleased to be working with the Tow Center for the next two years on projects to research and explore the myriad ways in which journalists need to cope with algorithms: methods for investigating them, sure, but also developing more sophisticated ways for thinking about using them for producing the news itself.
For instance, how do we negotiate the idea of journalistic transparency when news organizations are using recommender and personalization systems, or social listening tools that embed sophisticated editorial criteria? What about when the news homepage is a mix of the things that editors deliberately put there, and of things that algorithms thought would tickle the audience’s fancy? Newsrooms need to evolve their practices. It’s not only about adopting new ethical frameworks but also about developing entirely new standards for transparency that take into account the various dimensions of human involvement, data, modeling, inference, and user interfaces in these complex systems. In the future, look for us to develop these ideas in much more detail, providing applied examples of how news organizations can move forward on these issues when implementing algorithms in automated content, algorithmic curation, and simulation systems.
Another aspect of the project will be to advance methods for journalistic investigation of algorithms: algorithmic accountability. Currently, we’re examining dynamic pricing systems on Uber and developing techniques for examining search engine bias, among other projects. Do you think there’s an important story behind some other algorithm? Drop me a note at firstname.lastname@example.org. We’ll be collecting lots of examples and teaching these methods at workshops in 2016 to encourage more algorithmic accountability reporting in the U.S. and abroad.
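To give a flavor of what auditing a dynamic pricing system can look like, here’s a minimal sketch of the basic method: sample quotes across locations and compare. Everything here is simulated — the zones, fares, and surge multipliers are hypothetical stand-ins, not Uber’s actual pricing; a real audit would query the live service repeatedly and account for time of day, demand, and sampling noise.

```python
import statistics

# Hypothetical pricing model: base fare times a surge multiplier that
# varies by pickup zone. These numbers are invented for illustration.
SURGE_BY_ZONE = {"downtown": 2.1, "airport": 1.5, "suburb": 1.0}
BASE_FARE = 10.0

def quote(zone):
    """Return the simulated fare quote for a pickup zone."""
    return BASE_FARE * SURGE_BY_ZONE[zone]

def audit(zones, samples=5):
    """Collect repeated quotes per zone and summarize the disparity."""
    results = {z: statistics.mean(quote(z) for _ in range(samples))
               for z in zones}
    spread = max(results.values()) / min(results.values())
    return results, spread

results, spread = audit(["downtown", "airport", "suburb"])
print(results)           # mean quoted fare per zone
print(round(spread, 2))  # ratio between priciest and cheapest zone
```

The point of the exercise is the comparison step: once you can collect quotes systematically, a large spread between zones becomes a reportable fact you can then ask the company to explain.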
Finally, we’re looking forward to developing new computational tools and technology that can augment journalists’ capabilities. We’re hot on bots, with some recently published research looking at the diversity of the news bot ecosystem. It turns out that bots can be pretty useful, not just as alerting mechanisms in the newsroom or on a Slack channel, but for creating social services that respond, aggregate content around niche issues or geographies, and add a unique form of critique to the public sphere. We’re building them, too, and in the future we’ll be blogging more about what we learn from studying bots.
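The aggregation idea above is simple enough to sketch. This is a toy illustration, not one of our actual bots: the headlines are made up, and the `post` function just prints where a real bot would pull from feeds and post to a platform like Slack or Twitter.

```python
# Invented headlines standing in for an incoming feed.
HEADLINES = [
    "City council approves new transit line",
    "Quarterly earnings beat expectations",
    "Transit ridership hits record high",
]

def niche_filter(headlines, keyword):
    """Keep only headlines matching the bot's beat (case-insensitive)."""
    return [h for h in headlines if keyword.lower() in h.lower()]

def post(update):
    """Stand-in for posting to a channel; a real bot would call an API."""
    print(f"[transit-bot] {update}")

for item in niche_filter(HEADLINES, "transit"):
    post(item)
```

Trivial as it is, this filter-and-post loop is the skeleton of most niche aggregation bots; the editorial judgment lives in how the filter is defined.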
Algorithms are everywhere and this is a golden age to be studying how they’re affecting our media systems. Stay tuned for lots more!