New AI guides for businesses aim to make privacy law compliance easier

Carly Kind recognises rising concerns about use of Australians' personal information by AI

The Office of the Australian Information Commissioner (OAIC) has published new guidance that seeks to remove doubt about how Australia’s existing privacy law applies to artificial intelligence (AI) and to make compliance easier for businesses.

“Robust privacy governance and safeguards are essential for businesses to gain advantage from AI and build trust and confidence in the community,” said Privacy Commissioner Carly Kind in the OAIC’s media release.

The regulator’s new guides aim to set out its expectations for businesses and to help them follow best practices for privacy compliance. The guidance addresses the applicable law, the state of the technology, and current practices. It also explores how to strengthen AI privacy protections for the benefit of society as a whole.

The OAIC’s guidance furthers its goals of promoting privacy in the context of emerging technologies and digital initiatives, and of improving compliance by stating clearly what good practice looks like.

One of the OAIC’s priorities is addressing privacy risks arising from AI, including the effects of the increasing accessibility of powerful generative AI capabilities across the economy, according to Kind in the media release.

“How businesses should be approaching AI and what good AI governance looks like is one of the top issues of interest and challenge for industry right now,” Kind said in the OAIC’s media release.

Two new guides

The first of the newly released guides aims to help businesses using commercially available AI products comply with their privacy obligations and to assist them in choosing products appropriate to their needs. The second guide offers privacy guidance for developers who use personal information to train generative AI models.

The OAIC recognises that Australians are increasingly concerned about developers’ use of their personal information, particularly when training generative AI products, Kind said in the media release.

The regulator expects organisations using AI to be cautious, to assess risks, and to make privacy an important consideration, and it may take action if organisations fall short of these expectations, she added in the media release.

“With developments in technology continuing to evolve and challenge our right to control our personal information, the time for privacy reform is now,” Kind said in the media release. “In particular, the introduction of a positive obligation on businesses to ensure personal information handling is fair and reasonable would help to ensure uses of AI pass the pub test.”
