I did my master’s in machine learning, so I’m a little touchy on the subject. It always stands out to me when someone says “big data punishes poor people”, because it sounds like “polynomials are antisemitic” or “bolt cutters are racist”.
Machine learning is a tool like any other, and it can be used for nefarious purposes. I don’t think it’s an unreasonable assertion that things like search bubbling contribute to echo-chamber effects, since they result in people seeing only data that reinforces their existing viewpoints (as a side effect of being more relevant); there’s a toy sketch of that feedback loop below. Casting a blanket statement like this over the whole field, however, strikes me as catchy but unnecessarily negative.
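To be concrete about the feedback loop I mean, here’s a toy sketch. Everything in it is invented for illustration (the 1-D “viewpoint” axis, the click model, the numbers); the point is only that a feed ranked purely by similarity to past clicks can expose a user to a much narrower band of viewpoints than a random feed would, with no malice required.

```python
# Toy sketch: "relevance" ranking vs. random serving, on a made-up
# 1-D viewpoint axis. All values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
items = rng.uniform(-1, 1, 5000)   # each item's position on a 1-D "viewpoint" axis
taste = 0.4                        # the user's own (fixed) viewpoint

def run(personalized: bool, steps: int = 50, k: int = 10) -> float:
    """Return the spread (std dev) of viewpoints shown to the user overall."""
    clicks, shown_all = [], []
    for _ in range(steps):
        if personalized and clicks:
            # "relevance": serve the k items most similar to past clicks
            anchor = np.mean(clicks)
            shown = items[np.argsort(np.abs(items - anchor))[:k]]
        else:
            shown = rng.choice(items, k, replace=False)
        shown_all.extend(shown)
        # the user clicks whichever shown item is closest to their viewpoint
        clicks.append(shown[np.argmin(np.abs(shown - taste))])
    return float(np.std(shown_all))

print(f"spread of viewpoints seen, random feed:       {run(False):.3f}")
print(f"spread of viewpoints seen, personalized feed: {run(True):.3f}")
```

The personalized feed’s spread should come out far smaller: once the ranker locks on to the user’s early clicks, it only ever serves more of the same.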
I hope the book doesn’t overlook the positive contributions that data mining has made: discovering genetic markers for diseases, finding new antibiotics and treatments for cancers, decreasing water consumption in agriculture, tracking diminishing animal populations, or even more mundane things like automatically subtitling videos for the hearing impaired.
The most interesting question I have to raise is this: is it _more_ humane to take the human, with all of their biases, out of the loop? Humans are REALLY good at seeing patterns. We’re so good at seeing patterns that we see them where there are none: we see Jesus in toast, we see faces in the sky, and we see people as members of a group rather than as individuals. That last one is the root of racism, and while we can’t switch off our perceptions, we can be made aware of them and do what we can to work around our ‘feelings’. Machines are getting good at recognizing patterns too, and in a lot of cases they now beat us. If we train a model on racist data, though, it will generate racist predictions. Can we efficiently sanitize data to be sure that it’s fair to everyone involved? Is it inevitable that people will abuse statistics to further their own ends? Equally curious: if the data suggests a 99% chance that someone will default on a loan, should we chide the operator of the tool for using it? What if they’re trying to protect their own best interests? I don’t know if there’s a winner there.
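On the “racist data in, racist predictions out” point, here’s a minimal sketch of why sanitizing data is harder than it sounds. Everything is synthetic and the feature names (`group`, `zip_code`, `income`) and bias mechanism are made up for illustration: even when the protected attribute is dropped from the training set, a correlated proxy feature leaks it right back in.

```python
# Minimal sketch (synthetic data, hypothetical feature names): bias in
# historical labels survives dropping the protected attribute, because a
# correlated proxy feature reconstructs it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                # protected attribute (0 or 1)
zip_code = group + rng.normal(0, 0.3, n)     # proxy: strongly correlates with group
income = rng.normal(0, 1, n)                 # legitimate predictor

# True creditworthiness depends only on income...
true_default = (income + rng.normal(0, 0.5, n) < 0).astype(int)
# ...but the historical labels were biased: ~30% of group 1 was marked
# as defaulting regardless of whether they actually did.
biased_label = np.where((group == 1) & (rng.random(n) < 0.3), 1, true_default)

# "Sanitized" training set: the protected attribute is excluded.
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, biased_label)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted default rate = {pred[group == g].mean():.2f}, "
          f"true default rate = {true_default[group == g].mean():.2f}")
```

The model never sees the `group` column, yet group 1’s predicted default rate should come out noticeably above its true rate, because `zip_code` does the discriminating for it. That’s the crux of the sanitization question above.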
There are a lot of answers I don’t have and, ironically, an inability to predict the future, but I do have an emotional response to the article: it’s unpleasant and bothersome. I can’t say it’s wrong, but I can say it paints an incomplete picture, one that furthers the author’s agenda of making a boogeyman out of an emerging technology. I don’t like that.
tl;dr: This is a nuanced topic, and I’m dubious that the author can cover it reasonably; I fear instead that it devolves into fear-mongering.