Approximately 60 percent of surveyed licensees plan to expand their use of AI
The Australian Securities and Investments Commission (ASIC) has issued a call for financial services and credit licensees to bolster their governance frameworks to keep pace with the rapid expansion of artificial intelligence (AI) use in the industry.
This call follows ASIC’s inaugural market review, which assessed AI adoption across 23 licensees and identified a potential lag between the pace of AI use and the governance arrangements overseeing it.
ASIC’s review found that while current AI applications among licensees are conservative, largely supporting human decision-making and boosting efficiencies, the landscape is rapidly evolving. According to ASIC Chair Joe Longo, approximately 60 percent of surveyed licensees plan to expand their use of AI, potentially reshaping AI’s impact on consumers and underscoring the need for governance reforms.
“Our review shows AI use by the licensees has to date focused predominantly on supporting human decisions and improving efficiencies,” said Longo in a statement. “However, the volume of AI use is accelerating rapidly, with around 60 percent of licensees intending to ramp up AI usage, which could change the way AI impacts consumers.”
ASIC’s report reveals that nearly half of the licensees lack policies addressing consumer fairness or bias, and even fewer have frameworks in place for informing consumers when AI is used. Longo warned that insufficient governance in these areas could lead to substantial risks, including misinformation, unintended bias, manipulation of consumer sentiment, and data security issues.
“There is the potential for a governance gap—one that risks widening if AI adoption outpaces governance in response to competitive pressures,” Longo explained. “Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment, and data security and privacy failures, all of which have the potential to cause consumer harm and damage to market confidence.”
ASIC’s review also emphasised that licensees must pre-emptively address AI-related risks rather than wait for new regulation. Under the existing framework, licensees are already bound by consumer protection provisions, directors’ duties, and general licence obligations, which place the onus on institutions to ensure adequate oversight and risk management wherever AI is deployed. This includes maintaining robust due diligence to mitigate potential risks, especially where third-party AI providers are involved.
Longo reiterated ASIC’s stance on AI’s potential to bring benefits to both consumers and financial markets. However, he stressed that these benefits depend on robust and ethical governance frameworks being in place before AI technologies are fully operationalised. “We want to see licensees harness the potential for AI in a safe and responsible manner—one that benefits consumers and financial markets. This can only happen if adequate governance arrangements are in place before AI is deployed.”
Monitoring and responding to AI use in the financial sector is a priority for ASIC, as highlighted in its recent corporate plan, which lists the responsible management of AI as a key focus area. The agency has committed to ongoing oversight of AI’s impact on consumer outcomes and the integrity of the financial system, and it signalled that enforcement action would be taken where misconduct involving AI arises.
ASIC’s report called for licensees to take immediate steps to strengthen governance, positioning the financial sector to responsibly manage the opportunities and challenges posed by the rapidly advancing AI landscape.