Action by governments to grapple with the impact of AGI is now urgent.
Nations across the world are not yet exhibiting the necessary degree of urgency, and Australia can lead the way.

House of Representatives - 6 February, 2023
Recently, there have been media reports of students in Australia using artificial intelligence to cheat in their exams. AI technology, such as smart software that can write essays and generate answers, is becoming more accessible to students, allowing them to complete assignments and tests without actually understanding the material. This is causing concern, understandable concern, for teachers, who are worried about the impact on the integrity of the education system.
By using AI to complete their work, students are effectively bypassing the educational process and gaining an unfair advantage over their peers. This can lead to a lack of critical thinking skills and a decrease in the overall quality of education. Moreover, teachers may not be able to detect if a student has used AI to complete an assignment, making it difficult to identify and address cheating.
The use of AI to cheat also raises ethical questions about the responsibility of students to learn and understand the material they're being tested on. It also highlights the need for teachers to adapt their teaching methods and assessment techniques to address the challenges posed by new technologies.
Now, I have to admit I didn't write that. In fact, no human wrote that. The AI large language model ChatGPT wrote that. Last night I simply asked ChatGPT: 'In 90 seconds, please summarise recent media reports about students using artificial intelligence in Australia to cheat, and explain why teachers are worried about this.' I think it did a pretty good job, and it represents a significant step towards AGI—artificial general intelligence—which we need to think about.
To be clear, the development and implementation of artificial general intelligence in Australia brings both risks and benefits to the country. On the benefits side, AGI has the potential to revolutionise many industries, including health care, transportation and finance, by increasing efficiency, reducing costs and improving decision-making. AGI could also bring new opportunities and economic growth as companies invest in developing and implementing the technology.
However, AGI also brings a range of risks that must be carefully considered and managed. One of the main risks is the potential for job loss as machines and algorithms become better at performing tasks that were previously done by humans. There is also a risk that AGI could perpetuate existing biases and discrimination, particularly in decision-making processes such as hiring and lending. AGI raises significant ethical and moral questions, such as: who is responsible when a machine or algorithm causes harm or makes a decision that is harmful to society or individuals? There is also a risk that AGI could be used for malicious purposes such as cyberattacks and disinformation campaigns.
It is important for the Australian government and our society to carefully consider the risks and benefits of AGI and take a responsible approach to its development and implementation. This may include investing in training and education programs to prepare workers for the changing job market, as well as regulation and oversight to ensure that AGI is developed and used in a responsible and ethical manner. AGI brings both benefits and risks to Australia, and it is crucial for the government and society to carefully weigh the potential outcomes and take a responsible approach to its development and implementation.
I confess, I did not write that, either. I asked ChatGPT to explain in two minutes the risks to Australia from artificial general intelligence—another pretty good job! I don't think I've breached any standing orders, but I am sure I will be told later if I have. I don't think there's a rule that says that members can't write their speeches with ChatGPT. It's certainly not a practice I would recommend or propose to continue in the future, though the opposition may find it useful. They could find good applications for it! What do we stand for? What are our values? Why does the deputy leader think it is okay not to have any policies? See what it has to say on that. Which of our 22 different energy policies should we have stuck with? Is climate change real? Why did we think it was a good idea to dump Malcolm Turnbull? Who let Senator Rennick into our party room? What is wrong with Alan Tudge? I'll stop with the ChatGPT references now because I think I have made my point, but it is a serious point that I am trying to illustrate.
AGI—artificial general intelligence—presents a broader and deeper set of both risks and opportunities to society than any previous technology. Plausible risks include the disruptive, the catastrophic and the existential. It doesn't take long, if you start thinking, to realise that the disruptive and catastrophic risks from untamed AGI are real, plausible and easy to imagine. 'Existential', however, is an exceptionally strong word, I know. It sounds, and is, inherently alarmist. Existential risks are posed by events that would annihilate, or permanently and drastically curtail, the potential of intelligent life on earth. There are specialist scientists and brilliant thinkers who spend their lives studying these risks—things like asteroids, runaway climate change, supervolcanoes, nuclear devastation, solar flares or high-mortality pandemics—but artificial general intelligence is increasingly topping their list of worries.
I haven't asked ChatGPT about this, but I spent time over the summer talking with leading researchers, and I draw the rest of my remarks from an article by Wim Naude and Otto Barten. AGI has the potential to revolutionise our world in ways we can't yet imagine, but if AGI surpasses human intelligence it could cause significant harm to humanity if its goals and motivations are not aligned with our own.
If humans managed to control AGI before an intelligence explosion, it could transform science, economies, our environment and societies, with advances in every conceivable field of human endeavour. But the risk that increasingly worries people far cleverer than me is the likelihood that humans will not be able to control AGI, or that a malevolent actor may harness AGI for mass destruction.
Of course, many—optimists, if you like—doubt that these risks will materialise, and they remain optimistic that humans will find a way to manage them. But things are evolving rapidly. In just the last few weeks we've seen an explosion of articles in the media about ChatGPT, for instance, and a new, vastly superior version is coming soon. Things are evolving so rapidly that, just as the world has finally and belatedly started acting collectively on climate change, we have to get our collective act together on AGI—and urgently so.
Many think the challenges of collective action on AGI across nations are directly comparable to the decades-long efforts on nuclear nonproliferation or action on climate change through international climate agreements. So we have to start now. While the certainty and timing of the arrival of AGI remain in question, the level of risk it poses—the same arguments we've had about climate change—and the scope of policy development needed to manage it warrant immediate attention and action by the government and the parliament: a concerted, serious, urgent policy think, not in the next few years but certainly this term and preferably starting this year.
In every conceivable public policy domain one can foresee astounding benefits accompanied by serious risk. This includes the most serious national security and defence domains. The military applications of AI are well known, and it is widely acknowledged that AGI has the potential to transform warfare as we know it. If AGI surpasses human intelligence it could then pose a threat to our military, potentially rendering our current defensive capabilities obsolete.
Defence nerds—chairing the defence committee, I spent a bit of time with them—rightly tell us there's a rapid race among developed militaries around the world to pursue artificial intelligence, given the radically improved command-and-control enhancements it can bring. But, if we lose that battle, an AGI-enabled adversary could conquer Australia or unleash societal-level destruction without being restrained by globally agreed norms.
An AGI global diplomat—it's not inconceivable—could actually start resolving international conflicts and see ways through that humans haven't been able to see. But unequal access to AGI between nations could inflame international conflict through disinformation and asymmetric political warfare. That's in the risk column. Think about trade: the benefits, the growth in AGI-enabled or AGI-enhanced services for export. As with other technologies, if we race to get the technology, particularly in the services sector, we become more competitive and grow our economy through trade. But, conversely, those who lose out in the battle are the non-AGI-enabled industries, defeated by AGI-enabled international competitors.
In the benefits column we can see new employment opportunities through human-centred AGI—'cobots'—and new goods and services created by humans using and harnessing that technology. The worst, most dystopian fears are of mass unemployment as more and more of our jobs—in areas of society we thought could never be automated—suddenly can be automated and done more cheaply.
Online we're seeing deepfakes, as well as now publicly accessible art sites that can create unbelievable images simply from a typed prompt in the style of a particular artist or object. There is AGI enhancement of artistic creativity and endeavour. But, in the negative column, AGI could replace human artists in the literature, filmmaking, game-creation and visual-arts domains. Imagine if Netflix and co could lower their costs. Australian content would be the least of our issues.
AGI could generate cures and treatments for diseases, we could imagine, reducing pressure on health and aged-care services. The Minister for Health and Aged Care is sitting over there. I'm sure he'd appreciate a bit of lowering of the cost. But in the risks column, untamed health-care AGI could rationally decide that eliminating the ill and the aged is the best way to achieve its goals. There is a need for ethics in all these decision-making processes.
You could imagine an AGI road network coordinator improving the safety, productivity and environmental outcomes of road transport—big technology optimising things in ways that are currently beyond us, getting more from our assets. On the risk side: the AGI road network coordinator is hacked, allowing a large-scale coordinated takeover of networked autonomous vehicles. These are not fantasy scenarios. These are scenarios we can reasonably imagine our national security agencies, and others across the world, are plausibly contemplating.
In the benefits column, you could see improved judicial processes through coordination and data analysis, and improved efficiency of the penal system through tracking and other coordinated constraints. But there's also a risk column, and questions: Are the existing legal constraints, which have been designed for humans, ineffective for artificial agents? Who is legally accountable for the actions of an artificial intelligence system? Will we need separate legal personalities, just as companies gained centuries ago?
AGI could monitor and neutralise cyberthreats at a scale and speed beyond human capability. But in the risk column, again, they could be hacked if they're running critical infrastructure.
You can see the benefits from coordinated identification, tracking and disruption of antisocial individuals and groups, and our intelligence agencies and those across the world are looking at this. But you can also see the potential for radical groups to be empowered, if they gain access to this technology, through turbocharged influence campaigns, misinformation, deepfakes and coordinated weapons systems.
Even aside from the almost endless policy nerd analysis that you could do on this, AGI has the potential to change how we as humans relate to each other. What might AGI do for loneliness? What might it do to inequality? Governments will rapidly possess ever-greater artificial intelligence capabilities but citizens may be left behind, leaving populations in many parts of the world far more vulnerable to populist and authoritarian regimes, and manipulation. Unequal adoption of the technology within societies may see large swathes of our citizens left further behind.
I firmly believe that action by governments to grapple with the impact of AGI is now urgent. Nations across the world, in my view, are not yet exhibiting the necessary degree of urgency, and Australia can lead the way. We need to examine serious and fundamentally important questions right across all public policy domains in our society and economy.
What are the benefits of AGI? Think this through. What are the risks? How likely is it that we'll achieve AGI at all? When are we likely to achieve AGI, if we do? What risk control efforts are already underway? What public policy thinking have we done here and elsewhere in the world around this? Are those risk controls and efforts sufficient? Why should government get involved in controlling the risks of AGI? What could government do to harness benefits while controlling the risks of AGI?
These kinds of questions and deep dives will require us to consider the full toolkit of government interventions: research, policy, legislation and regulation, as well as direct public investment—grants, co-investment and equity investments for public goods where markets may fail, or are failing, to serve the collective interest, whether that's through research grants or similar. There are many forms that such a proper examination could take, and it won't be a static, point-in-time thing.
Thinking this through at a societal and governmental level will require ongoing and, possibly, institutionalised reflection. We could consider a white paper; an expert inquiry; a parliamentary inquiry; a commission, like the Climate Change Commission, to institutionalise this topic for the next few years at least; an international intergovernmental collaboration—that sounds catchy—that Australia leads the way in assembling; or some combination of the above. The key message I want to convey is that we have to start now.
I'm old-fashioned, as you know, on this side of the House. The Minister for Health and Aged Care is a former minister for climate change, and he's old-fashioned too. We on this side of the House believe you should listen to the science.
'Not at all,' says the member for Hinkler—at least a moment of honesty! They don't listen to the science. You did say this morning, actually, that you thought it was pretty dense, which I thought was peak irony. Anyway, we'll keep it nice. Public policy should be informed by evidence, and public-policy makers should listen to the science. That's what we're doing on climate change, like every other sensible, developed country in the world, dragging Australia back from being an outlier, bringing us back into the mainstream of international fora.
We have an opportunity to be one of the world's leaders in listening to the science. This stuff is important, it's compelling and it's urgent. I encourage the government, of which I'm a proud member, to acknowledge that this is something we need to think through. No society, no government, has got it right yet. This is not a criticism. It's a reflection I've made having looked at the Governor-General's speech: our enormous existing agenda—that which we have delivered already, as I outlined at the start of my remarks, and that which we already have in front of us.
But this is big. It's compelling, it's important and it is urgent.