AI is revolutionising our roads, workplaces, homes – and warfare. And not everybody is happy about it.
In 2018, some 3,000 Googlers protested against their company's participation in Project Maven, a US Department of Defense effort that used TensorFlow to build a computer-vision system to help drones identify humans.
Google withdrew from Maven and challenged other tech firms building AI to follow its lead and steer clear of AI projects that could lead to "abuse and harmful outcomes."
The protest broke new ground: an unlikely act of political defiance and consciousness in a sector usually light on ethical baggage.
So, what’s an engineer who’s building general-purpose AI to do in 2019? Do you have the right to address the moral effects of what you build and hold your employer to account? Can you justify your work in terms of “national defense”? Or do you think only about the free workplace coffee and great stock options?
Lawyer, investigator and writer Cori Crider, who has studied drone attacks on the ground in Yemen and met the families of those killed in strikes, presented her case for techies standing up.
A specialist in the ethics of mass data sifting and human rights in counter-terrorism, Cori looked during our February 6 lecture at how the AI code you're punching could be putting us on a slippery slope to automated war. She also put the debate over AI in warfare into a wider context techies should consider: how, from policing to immigration, problematic AI often gets beta-tested on the weak and the poor first.
You can watch Cori's presentation again, above. ®