Value-free technology? I don’t think so

The idea that technology is, or can be, value-free has always struck me as absurd. Whoever invents a particular application, piece of equipment or platform has certain objectives in mind, and these, in turn, must have been shaped by their personal attitudes or beliefs, their business aims, or, often, both.

A classic and very un-Olympian example of the latter variety was presented to me several years ago when I attended an IETF workshop. The participants were concerned with developing the protocols that would allow browsers to collect and transmit geolocation data from connected devices. I pointed out that there would be a number of social consequences attached to such a development, both good and potentially bad, but almost to a man (repeat, man) the assembled technicians declared they had been sent to the workshop by their employers (mainly big technology companies) to reach an agreement, not to debate social policy. They didn’t quite say “we’re only following orders”, but it was close.

A recent article in New Scientist (“Digital Discrimination”, 30th July – behind a paywall) shows what can happen when supposedly neutral technologies are allowed to do their thing.

Take the case of Gregory Selden, a 25-year-old black man living in the USA. He wanted to make a trip to Philadelphia. Using Airbnb he spotted a particular place and tried to book it, but was informed it had already gone. Selden carried on looking and saw that the same place was, in fact, still being advertised for the same dates. Suspicious, he created several new profiles indicating that the applicant was a white man. Each one was told the apartment was available.

Selden tweeted about his experience under the hashtag #AirbnbWhileBlack. The floodgates opened, with more or less identical accounts streaming in from all across the country. It then emerged that three academics at Harvard (Edelman, Luca and Svirsky) had found that people with names primarily associated with African Americans (e.g. Tanisha and Tyrone) were 16% less likely to be accepted as guests on Airbnb than people with names like Brad and Kirsten.

The good news is that Airbnb accepts it has a problem and is actively seeking a solution, but there seems little doubt this goes a lot wider and deeper than social media platforms.

Anupam Chander, a Professor of Law at UC Davis, believes discrimination can be “baked into” the data that form the basis of algorithms, so that technology could become a “mechanism for discrimination to amplify and spread like a virus”.

It stands to reason, I suppose. Typically, algorithms are trained on observed patterns of pre-existing behaviour. If that behaviour has a racist (or other) bias then, absent any countervailing measures, the algorithm will simply replicate it and thereby, at the very least, sustain it in the future. That would be bad enough, but the network effect is likely to give the phenomenon new legs and a new scale, making it worse. In such circumstances it is just not acceptable to say something like “it’s not our fault society is riddled with racism (or sexism)… all we are doing is devising systems which (hand-wringing) racists and sexists are unfortunately using”. The logic of that argument is that society needs to deal with racism and sexism while technologists are merely hapless, helpless victims of sad circumstance. Baloney is the least offensive word I can come up with to describe what I think of it.
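To make the mechanism concrete, here is a toy sketch of how a system trained on biased historical decisions simply reproduces them. Everything in it is invented for illustration: the group labels, the data and the numbers are hypothetical (though the gap loosely echoes the 16-point disparity the Harvard researchers found), and the “model” is deliberately the simplest possible one.

```python
# Hypothetical illustration: an algorithm trained on biased historical
# decisions replicates the bias. All data here is invented.

from collections import defaultdict

# Invented historical booking decisions: (guest_group, accepted).
# Group A was accepted 84 times out of 100; group B only 68 out of 100.
history = (
    [("group_a", True)] * 84 + [("group_a", False)] * 16
    + [("group_b", True)] * 68 + [("group_b", False)] * 32
)

# "Training": tally each group's historical acceptance rate.
totals, accepts = defaultdict(int), defaultdict(int)
for group, accepted in history:
    totals[group] += 1
    accepts[group] += accepted  # True counts as 1, False as 0

def predicted_acceptance(group):
    """Naive model: score a new applicant by their group's past rate."""
    return accepts[group] / totals[group]

# With no countervailing measures, the model faithfully reproduces
# the gap that was baked into the data it learned from.
print(predicted_acceptance("group_a"))  # 0.84
print(predicted_acceptance("group_b"))  # 0.68
```

The point of the sketch is not the arithmetic but the absence of any step where the system questions its inputs: the bias goes in as data and comes out as policy.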

In this connection I was pleasantly surprised to discover that (our old friend) the GDPR contains specific provisions requiring technology companies to take steps to prevent discrimination based on personal characteristics such as race or religious beliefs. It also creates a “right to explanation”, enabling citizens to question the logic of an algorithmic decision. How easy it will be to enforce is debatable, but it’s not a bad start.