We’ve had two brushes with big data in the last couple of weeks.
Cathy’s been talking to other foundations about it…
On my first day back in the office after a meeting of international foundations on big data, The Guardian published an article on local authorities’ use of predictive algorithms in relation to child protection and the risk of gang exploitation.
This subject had come up in discussion as foundation colleagues from the US had cited examples where such approaches had led to damaging over-intervention in poor and African-American communities.
There was general concern about the risk of perpetuating or worsening inequality and unhealthy power dynamics.
Some of us came up with a list of questions foundations (and dare we say it, local authorities) might want to ask themselves before getting into bed with anything like this:
- Do communities targeted by such technologies have meaningful opportunities to object and/or to engage with the way projects are designed and data is used?
- Is the work likely to perpetuate or worsen existing inequalities and power dynamics?
- What unintended consequences might there be? How will you know, and will you be brave enough to change course or stop if they start to show up?
- What independent accountability and oversight mechanisms are there?
- Has a principle of ‘do no harm’ been applied?
These are all important questions, but as The Guardian showed, predictive technology is becoming normal. These tools are already out in the world, so the most important question may be the one about accountability and oversight.
Jenny has been meeting charities and techies…
In the same week, I attended a Society-in-the-Loop event here in London, which brought together representatives from the social sector, the social sciences and the technology industry to discuss how technology is developed and used. They reflected on how technology is inextricably bound to big-picture issues such as democracy, human rights and inequality, the role of the law, ethics, and the impact on vulnerable consumers of technology.
What struck me during these discussions, apart from the sheer brainpower in the room, was the sense of inevitability about it all. There was general agreement that technology had been hijacked: although egalitarian principles are intrinsic to it, those who are vulnerable and disadvantaged are currently ill-served by that technology.
Why is that? Well, some of the problems, it seems, start upstream with algorithms, artificial intelligence and machine learning. Tech attracts the super smart, and with that comes a certain elitism. Who is writing the code that’s used, say, in surveillance, in facial recognition or for deciding whether your telephone call warrants further action? These questions should not be asked only through the prism of race or gender but must also take in life experience.
During the day, the themes of accountability, transparency, explainability, extractiveness and ethics resurfaced across all the discussions.
How should we relate to this?
Speaking for ourselves, we lack the tech and data know-how to engage confidently with these important issues. Does that capability exist in the UK? Are there people out there asking the fundamental moral and ethical questions, especially in relation to communities already facing disadvantage? We would love to know the answer to those questions…