BRaiVE NEW WORLD

Politics & democracy, AI, civic tech

War — what is it good for? Absolutely nothing... except inventing new ways to kill each other. 

Remember when we were considering a global ban on using autonomous AI-powered bots in warfare? And that call to pause AI development so we could ensure its safety back in 2023? Ahh, the good old days!

Drones now dominate battlefield strategies — strategies informed, and heavily influenced, by LLMs. Anthropic's Claude is currently embedded in the US war machine. Soon its somewhat more cavalier cousin, ChatGPT, will be donning the fatigues, and Grok, the berserker of LLMs, won't be far behind. How long before drones are joined by autonomous murder bots, some of them in human form factors, holding guns? "Move fast and break things" suddenly has an entirely new, far more sinister connotation.

No wonder AI is well suited to warfare: it's an amoral, fundamentally exploitative technology that has generated unfathomable wealth and power on the back of humanity's blood, sweat and tears. What's a few more drops? It's colonialism unshackled from physical constraints — no ships, no guns, no disease-soaked blankets required.

So much for Asimov's Three Laws of Robotics!

Yet we also have drones dropping seed bombs — reforesting entire bushfire-ravaged mountainsides in a day! Robots are helping first responders fight those fires, while AI-augmented legs are helping people run from them. AI is accelerating medical science and materials discovery, and taking some of the drudgery out of our work day. (The banner image above took me 5 prompts and 15 minutes in Google's Nano Banana 2, and I hardly use it!)

At Do Gooder, like most progressive organisations, we've been thinking long and hard about how to harness AI for good. We didn't want to go all in on AI until we understood the benefits, costs and risks — of both using it and not using it. In recent months, however, I believe we have reached an inflection point. Despite the Faustian nature of the bargain, the costs of not using AI now outweigh the risks of using it.

Asimov's first law — a robot may not harm a human, or through inaction allow a human to come to harm — is as relevant today as it was in 1942. Whilst the three laws may need a few system updates of their own, we can never hand ultimate responsibility to the machines. After all, it is a profoundly human thing to ask not just can we, but should we.

So when we asked ourselves whether we're making democracy more participatory (our core purpose) by sending random AI-generated emails to reps masquerading as human-written, the answer was an emphatic no. Short-term impact would soon be eroded by politicians realising they cannot trust any email to be genuine — fuelling more distrust whilst giving others the excuse they need to ignore all constituent emails.

So — trivial as it would be for us to build — we will never build it. The AI-empowered tools we are building at Do Gooder are something very different, and we're looking forward to sharing more soon.

It's a BRaiVE NEW WORLD, no doubt about it – the task at hand is to make it a good one.