02
Artificial Intelligence
by Minderoo Foundation
Artificial intelligence (AI) is already reshaping our world, whether we realise it or not.
From healthcare and education to business and national security, AI is transforming how we live and work.
It has the potential to solve big problems, make life easier and create new opportunities. But without the right safeguards, it could also pose serious risks.
If left unregulated, AI could widen inequalities, disrupt communities, put our kids at risk and accelerate environmental harm. In extreme cases, AI can be weaponised in ways that threaten global stability.
AI is moving fast, but we still have time to shape its future and ensure it works for – not against – us.
History has shown what happens when we ignore emerging risks. From seatbelts in cars to asbestos and lead paint in our homes, delayed regulation allowed harm to escalate before action was finally taken. AI is developing at an even faster pace, and we cannot afford to repeat these mistakes.
The choices we make now will shape whether AI is a force for good or a risk to our future.
Minderoo believes AI can be a force for good – one that helps society thrive. But for that to happen, it needs to be fair, balanced and developed with the right safeguards in place.
We are helping shape the future of AI by bringing together experts, decision-makers, partners and civil society to tackle the challenges posed by AI from every angle, ensuring policy decisions are people-centred and underpinned by evidence.
Some of our key partnerships and outcomes include:
2019 - US-China Track II Dialogue on AI and National Security: Supported our partners, the Brookings Institution and Tsinghua University, to establish the Track II Dialogue between US and Chinese experts to build consensus on the use of AI in national security.
2020 - Tech Impact Network: Committed AU$20 million to establish a global research network on the societal impacts of AI, spanning leading universities in Australia, the United States and the United Kingdom.
2023 - AI Corporate Governance: Partnered with UTS Human Technology Institute as the major funder of the AI Corporate Governance Program, ensuring corporate leaders have the tools to govern AI safely and responsibly.
2024 - Voluntary AI Safety Standard: Supported our partners, UTS Human Technology Institute, to help shape Australia’s Voluntary AI Safety Standard.
The need for people-centred AI is most critical in military decision-making, where human lives are at stake. In a time of rising geopolitical tension and rapid technology advancement, it is essential that militaries do not outsource decision-making to machines that lack human judgement and compassion.

Caption: Andrew Forrest attending the Munich Security Conference in February 2025. Credit: MSC/Lennart Preiss.
Through our partnership with the Brookings Institution and Tsinghua University, we supported the United States-China Track II Dialogue on AI and National Security, bringing together US and Chinese experts to build consensus. A key outcome was aligning the terms for the use of AI in national security, which was followed by a historic agreement between the two military superpowers to ensure that human beings, not artificial intelligence, would retain decision-making authority over the use of nuclear weapons.
“The mantra behind all of this, Minderoo’s work, our ability to make AI a friend of humanity, not a terrible enemy in the military, is a simple four word slogan: no harm to citizens,” Dr Forrest said.

Caption: The OceanOmics team planning for sample collection. Credit: Megan Beaudry.
Our oceans are vital to life on Earth, acting as a giant heat sink that absorbs excess heat and regulates global temperatures. But as carbon emissions rise, so too do ocean temperatures and acidity, with devastating effects on marine life and ecosystems. Monitoring these changes is especially challenging in Australia, where our marine estate spans an area larger than the entire European continent.
Now, technology is transforming how we understand our oceans. Minderoo’s OceanOmics program uses environmental DNA (eDNA), enhanced by artificial intelligence, to detect and monitor marine species from tiny genetic traces in water samples. AI is also scaling data collection, with autonomous marine drones operating longer and more efficiently than traditional methods, overcoming the immense challenge of distance and cost.
These innovations give policymakers the clearest insights yet into climate change’s impact, driving more effective marine conservation efforts.
Deepfakes are fooling us

Credit: Mininyx Doodle via Getty Images.
Imagine receiving a panicked phone call from a loved one. Their voice, every word, sounds just like them. They need urgent financial help. Without thinking twice, you send them the money. But the call wasn’t real.
Phone scams aren’t new, but AI has taken them to a terrifying new level. Scammers are now using AI voice cloning technology to impersonate loved ones, colleagues and public figures with eerie accuracy. With just a few seconds of recorded audio, they can replicate someone’s voice well enough to fake a desperate plea for help.
In the second quarter of 2024 alone, more than 10,000 scam phone calls were reported, with losses totalling more than $23 million.
This technology is advancing rapidly, making scams harder to detect. With AI getting better at mimicking reality, the need for stronger protections has never been greater.
Without stringent, people-centred rules, AI can be used to manipulate, deceive and exploit. But with the right regulations and public awareness, we can prevent this technology from being used the wrong way.
The question is: do we act now, or wait until the damage is done?
The hidden cost of AI

Credit: FrankRamspott via Getty Images.
You ask an AI chatbot a simple question. In seconds, it scans and processes vast amounts of data to deliver a response. But behind that interaction, something else is happening. Energy-hungry data centres are working overtime, consuming vast amounts of electricity and water.
Every query, every image generated and every AI model trained requires enormous computing power, contributing to carbon emissions and placing a growing demand on energy and natural resources.
Without proper safeguards, AI risks becoming another unchecked driver of climate change, just as we have seen with past industrial revolutions. We believe human health and environmental health are inseparable. That’s why we’re advocating for people-centred AI policies and clear rules to ensure AI innovation doesn’t come at the cost of our environment.
AI lifesaver in the outback

Credit: Yuichiro Chino via Getty Images.
Imagine living in a remote community, hundreds of kilometres from the nearest specialist. You’re feeling unwell, but getting a diagnosis could mean a long journey and weeks of waiting. Now, imagine if life-saving medical expertise were available instantly through AI.
This is becoming a reality in Western Australia’s Pilbara region, where an AI-powered retinal scanner developed by the Lions Eye Institute is revolutionising eye care. It allows doctors to detect diseases early, preventing blindness for those who might otherwise go undiagnosed.
Similarly, AI is stepping in to diagnose heart disease in remote communities. Even untrained individuals can perform heart ultrasounds with assistance from AI technology, ensuring patients receive the care they need without a cardiologist on site.
These breakthroughs show how AI, when used responsibly, can enhance lives and bridge the rural and remote healthcare gap. AI is not just about risk; it also presents a huge opportunity. People-centred policy, with the right guardrails, can ensure AI empowers people and protects the environment, creating a future where both can thrive.
Is AI a digital friend for our kids?

Credit: Maskot via Getty Images.
Imagine your child chatting with an AI-powered assistant. It’s friendly, knowledgeable and always available. But here’s the catch: kids don’t always know where the AI ends and reality begins.
A recent study found that many children see AI chatbots as lifelike and trustworthy, treating them as human friends rather than programmed tools. But AI doesn’t have feelings, human morals or a sense of right and wrong. Left unregulated, these systems could expose kids to harmful content, reinforce biases or even be used to manipulate young minds.
AI is developing quickly, but who’s making sure it’s safe for our kids? Without clear rules, we’re leaving the next generation vulnerable to a technology we don’t fully understand. Minderoo is pushing for people-centred AI – designed with children’s safety in mind, not as an afterthought.