Remember when we were considering a global ban on using autonomous AI-powered bots in warfare? And that call back in 2023 to pause AI development so we could ensure its safety? Ahh, the good old days!
AI-powered drones now dominate battlefields, executing strategies that are themselves informed and heavily influenced by AIs. Anthropic's Claude is currently embedded in the US war machine. Soon its somewhat more cavalier cousin, ChatGPT, will be donning the fatigues. And Grok, the berserker of LLMs, can't be far behind. How long before drones are joined by autonomous soldier bots in human form factors, holding guns? "Move fast and break things" suddenly has an entirely new, far more sinister connotation.
No wonder AI is well suited to warfare: it's an amoral, fundamentally exploitative technology that has generated unfathomable wealth and power on the back of humanity's blood, sweat and tears. What's a few more drops? It's colonialism unshackled from physical constraints: no ships, no guns, no disease-soaked blankets required. So much for Asimov's Three Laws of Robotics!
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
*Asimov later added a "Zeroth Law" that superseded the others: a robot may not harm humanity, or, through inaction, allow humanity to come to harm.
Yet we also have drones dropping seed bombs, re-foresting entire bushfire-ravaged mountainsides in a day! Robots are helping first responders fight those fires, and AI-augmented legs could even help people run from them! AI is accelerating medical science and materials discovery, and taking some of the drudgery out of our work day.
At Do Gooder, like many progressive organisations, we've been thinking long and hard about how to harness AI for good. We didn't want to go all in on AI until we understood the benefits, costs and risks – of both using it and not using it. I believe we have reached the inflection point now. Despite the Faustian nature of the bargain, the costs of not using AI now outweigh the risks of using it.
Asimov's First Law – do no harm to humans (or allow harm to come to them through inaction) – whilst simplistic, seems as relevant today as it was in 1942. However, we can't just code our way out of this, removing humans (and responsibility?) from decision loops that are literally life and death. We need to ask not just can we, but should we?
Case in point: bulk templated emails from campaigns hitting the overcrowded inboxes of representatives are increasingly ineffective. At best they are treated like a petition, at worst ignored. Yet asking supporters to write a personal email to politicians results in woeful uptake, so campaigners default to templates more often than not. One easy answer is to have AI generate emails that are each different but all variations of a templated message – an approach now offered by some platforms.
We asked ourselves whether we are making democracy more participatory (our core purpose) by sending representatives these AI-generated emails masquerading as human-written ones.
The answer was clear. Short-term impact will crater once representatives realise they cannot trust any email from constituents – fuelling more distrust whilst giving others the excuse they need to ignore constituents entirely. So – trivial as it would be for us to build – we never will.
The AI-empowered tools we are building at Do Gooder are something very different, and we're looking forward to sharing more soon.
It's a BRaiVE NEW WORLD, no doubt about it – the task at hand is to make it a good one.

