The Australian federal government has outlined its intention to regulate artificial intelligence, saying there are gaps in existing law and that new forms of AI technology will need safeguards to protect society.
The National Science and Technology Council (NSTC) released advice as well as a discussion paper on AI. The council said that while the full risks and opportunities of AI are difficult to predict, in the near term generative AI technologies will likely impact everything from banking and finance to public services, education and the creative industries.

The Industry and Science Minister has outlined two goals for the government. The first is to ensure that businesses can confidently and responsibly invest in AI technologies. The second is to ensure there are appropriate safeguards, in particular for high-risk tools.

As a possible response, the federal government has proposed a three-tiered system that would classify AI tools as low, medium or high risk, with increasing obligations at higher risk classifications. An example of a high-risk tool could be an AI surgeon, which would require peer-reviewed impact assessments, public documentation, meaningful human intervention, recurring training and external auditing.

The government is also concerned with ensuring that any regulation helps to develop the industry rather than stifle it. It is calling for meaningful participation from both government and industry to establish flexible guardrails as generative AI technologies evolve.

This segment was created for the It’s 5:05 podcast