Exposing Algorithms @ NICAR2016

Algorithms are everywhere we look (and even places we don't look), controlling what we see, what we do, and where we go. They are great for solving our problems and helping us make better, quicker decisions, or for taking the decision-making out of our hands entirely, and their guidance is supposedly perfect in its objective, unbiased calculation. Except it is not. Like everything else, algorithms are created by people, and people have biases that get encoded into what they build. Algorithms also learn from data, which is likewise created by people, so they pick up biases from that data too. This becomes a problem when those biases are folded into an algorithm's calculations and the algorithm goes on to perpetuate them.

The NICAR2016 panel “Algorithmic accountability: Case studies from the field” explored these ideas and more, detailing findings from two key projects by Nick Diakopoulos and Jennifer A. Stark of the University of Maryland and two from Jeff Larson of ProPublica. Nick introduced the session by posing questions such as: how do we investigate algorithms used in media, government, or industry; how can we characterize an algorithm's bias, power, or influence; and what role might journalists play in holding algorithmic power to account? He went on to describe an example from his lab that scrutinized censorship in Google's and Bing's autocomplete functions, comparing the search terms each company says it censors against what is actually censored in a browser showing non-personalized results.
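To give a flavor of what that kind of comparison involves, here is a minimal sketch, not the method used in Nick's study: take a list of terms a search engine claims to filter from autocomplete and check whether each term currently returns any suggestions. The suggest URL, its parameters, and the response shape below are unofficial and are assumptions on my part.

```python
# Hedged sketch: check which "claimed censored" terms actually return autocomplete
# suggestions. The endpoint and its JSON shape are unofficial assumptions, not the
# instrumentation used in the study described above.
import requests

CLAIMED_CENSORED = ["example term one", "example term two"]  # placeholder terms
SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

def has_suggestions(term):
    # With client=firefox this endpoint has historically returned JSON like
    # [query, [suggestion, suggestion, ...]]
    resp = requests.get(SUGGEST_URL, params={"client": "firefox", "q": term}, timeout=10)
    resp.raise_for_status()
    return len(resp.json()[1]) > 0

for term in CLAIMED_CENSORED:
    if has_suggestions(term):
        print(f"{term}: suggestions returned")
    else:
        print(f"{term}: no suggestions (possibly filtered)")
```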

Jeff offered a political story on personalized email algorithms: Obama's Message Machine. Using crowdsourcing, recipients forwarded election emails to Larson's team along with personal data such as gender and age, under the hypothesis that the personalization of email content was targeting something related to demographics or personal history. His second project was the Tiger Mom Tax, in which the team, collecting data via proxy servers dotted about the region, found that The Princeton Review's online courses were quoted at higher prices for certain ZIP codes. During question time Larson was asked whether it even matters that people who earn more pay more; many people would think that sounds fine. That may be true in some cases, but the fact that this pricing behavior was hidden is important: information on how these algorithms are implemented should be made available.
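To make the general approach concrete, here is a hedged sketch of how one might compare collected price quotes across ZIP codes once they have been gathered (for example, through proxies). The file and column names are illustrative assumptions, not ProPublica's actual data or code.

```python
# Hedged sketch: given one observed price quote per ZIP code for the same course,
# ask whether higher-than-base prices cluster in particular kinds of ZIP codes.
# File and column names are hypothetical.
import pandas as pd

quotes = pd.read_csv("observed_price_quotes.csv")  # columns: zip_code, price, median_income

# Treat the most common quoted price as the "base" price
base_price = quotes["price"].mode().iloc[0]
quotes["quoted_above_base"] = quotes["price"] > base_price

# Compare how often higher prices appear in ZIPs above vs. below the median income
quotes["above_median_income"] = quotes["median_income"] > quotes["median_income"].median()
print(quotes.groupby("above_median_income")["quoted_above_base"].mean())
```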

Speaking third, I wanted to provide a somewhat more accessible account of working through an algorithm project. My article exploring Uber's wait times and surge-pricing behavior across D.C. was published online just an hour before our panel, and includes a link to the GitHub repository. Using the Uber API, I gathered surge-price and wait-time data from 276 locations across D.C. every 3 minutes for 28 days. The data were then aggregated across time, and all locations within a census tract were averaged, giving one mean wait-time value and one mean surge-price value per tract. That allowed me to analyze the data against census measures including median income, race, and poverty. I found that race plays some role in how long people wait for a car and in how likely it is that the tract they are in is surging. Challenges included how to dichotomize race, how to collect and validate locations as having addresses, and how to run additional analyses in response to Uber's comments while Uber was still on the phone.
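For readers who want a rough picture of that aggregation step, here is a minimal sketch, not the published analysis; the file and column names are assumptions, and the real code lives in the GitHub repository linked from the article.

```python
# Hedged sketch of the aggregation described above: collapse repeated API
# observations to one mean ETA and one mean surge multiplier per census tract,
# then join census-tract measures for comparison. File/column names are hypothetical.
import pandas as pd

# Each row: one API query (timestamp, location id, census tract, ETA in seconds, surge multiplier)
obs = pd.read_csv("uber_observations.csv")

per_tract = (
    obs.groupby("tract_geoid")
       .agg(mean_eta_sec=("eta_seconds", "mean"),
            mean_surge=("surge_multiplier", "mean"))
       .reset_index()
)

# Census-tract measures (e.g., median income, poverty rate, racial composition)
census = pd.read_csv("census_tract_measures.csv")
merged = per_tract.merge(census, on="tract_geoid", how="inner")

# Quick look: how do wait times and surge relate to tract demographics?
print(merged[["mean_eta_sec", "mean_surge", "median_income", "pct_poverty"]].corr())
```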

Map of the mean estimated wait times – or ETA – of an uberX during February 2016 for each census tract in the District of Columbia.

Asked how we collected and reported on data acquired using Uber's API, seemingly against its Terms of Use, Nick explained that although Uber's press officers initially pushed back at the use of the API to provide data for a previous story, they did ultimately provide a comment on that story.

Another great question was whether any of us seek information from the engineers who design these algorithms, given that their insights and goals could greatly affect the interpretation of what we find in our investigations. This is hard to do, since much of the time releasing such details would be disadvantageous to the company. However, if the algorithm is designed and used by a government agency, asking how it works or how humans use it can greatly benefit your research, sometimes to the extent that the research focus shifts to a heretofore unforeseen aspect of the algorithm and its context.

Seeing how the pros do it is one thing, but how do we start from scratch? Our next session was a hands-on workshop in which journalists gathered into seven groups of three. We asked each group to fill in a prompt of the form ‘What if *insert algorithm* led to *insert outcome*?’ and to ask whether that would be newsworthy. For example, “What if an automated teacher-rating algorithm led to a teacher being fired?” Teams gathered a bunch of ideas scribbled down on cue cards, picked one, and then tried to outline how they would tackle it: what data they would gather, where they could collect it from, what the headline would be, what supporting data they might need, and how they might analyze it. At the end, a spokesperson from each group gave a two-minute summary of their idea. The most encouraging part of the whole exercise was that two or so groups, made up of people who had mostly never met before, were pumped about their ideas and determined to take them further. Perhaps we'll see more algorithm stories at NICAR2017!

Team brainstorming algorithm accountability / transparency stories.

Learn more from the slide decks that were presented at these sessions:

Nick Diakopoulos Intro and research slides
Jeff Larson slides
Jennifer Stark slides
Workshop: slides