Bridging the AI Governance Gap

As artificial intelligence (AI) reshapes industries, organizations face the challenge of balancing rapid adoption with responsible governance. According to a recent report from the Australian Securities and Investments Commission (ASIC), many organizations are adopting AI faster than they are updating their governance and risk management practices, particularly in banking, insurance, and financial services. This gap matters: without proper oversight, AI systems can introduce unintended bias, privacy failures, and compliance breaches that undermine consumer trust and regulatory alignment.

Beware the Gap

In October 2024, the Australian Securities and Investments Commission (ASIC) released a report, "Beware the gap: Governance arrangements in the face of AI innovation". The research analysed 624 AI use cases that 23 licensees in the banking, credit, insurance and financial advice sectors were using, or developing, as at December 2023. These were use cases that directly or indirectly affected consumers, including generative AI and advanced data analytics (ADA) models. The review focused on licensees' risk management and governance arrangements for AI.

Simply put, some licensees are adopting AI more rapidly than their risk and governance arrangements are being updated to reflect the risks and challenges of AI. There is a real risk that such gaps widen as AI use accelerates and this magnifies the potential for consumer harm.
— Joseph Longo, ASIC Chair

ASIC's findings confirm that many licensees are embracing AI technologies faster than they are updating the risk management and governance frameworks that should accompany them. That lag exposes consumers to growing harm as AI-related issues multiply.

The report highlights a rapid increase in AI adoption among licensees, with a shift towards more complex, opaque AI types like generative AI. Despite this trend, most organizations are cautiously integrating AI to support, not replace, human decision-making, with limited direct consumer interaction. However, as competitive pressures grow, many licensees plan to expand their AI usage, which could quickly increase consumer risks.

A key concern is that some licensees are not fully prepared to manage the challenges of expanding AI use. Many are updating governance practices as they scale AI, but in some cases, these updates lag behind the adoption. Because governance and risk management practices adapt slowly, the gap between AI use and governance is expected to grow, potentially leaving organizations unprepared to respond safely to AI innovations from competitors.

KEY STATISTICS FROM THE REPORT

  • 57% of all use cases were less than two years old or in development.

  • 61% of licensees in the review planned to increase AI use in the next 12 months.

  • 92% of generative AI use cases reported were less than a year old or yet to be deployed.

  • Generative AI made up 22% of all use cases in development.

  • Only 12 licensees had policies in place for AI that referenced fairness or related concepts such as inclusivity and accessibility.

  • Only 10 licensees had policies that referenced disclosure of AI use to affected consumers.

Increasing risks during rapid AI adoption

The accelerated pace of AI adoption, while promising for innovation and efficiency, presents challenges for regulatory compliance, consumer trust, and operational stability. As AI becomes integral to decision-making and consumer interactions, companies must consider the implications of using AI tools without proper governance. Unmanaged AI implementation can lead to biased algorithms, privacy issues, and compliance risks that negatively impact both reputation and consumer trust.

The findings from the ASIC report suggest significant implications beyond the financial sector for both business stability and consumer trust:

Risk of Eroding Consumer Trust: As AI adoption grows without adequate governance, consumers may face greater risks from unregulated or poorly monitored AI systems. Errors, biases, or opaque decision-making could undermine consumer confidence, leading to a loss of trust in both the technology and the business that deploys it.

Increased Vulnerability to Regulatory Scrutiny and Compliance Issues: Businesses using complex AI systems like generative AI without robust oversight may struggle to meet evolving regulatory standards. This could result in fines, legal challenges, or mandated operational changes, ultimately harming business reputation and increasing costs.

Operational Risks and Reputational Damage: Without adequate oversight, AI implementations can produce biased or harmful outcomes, leading to public backlash.

To foster responsible AI adoption, organizations need to prioritize governance structures that include:

  • An AI Inventory and regular assessment of AI tools.

  • Updated governance and risk management practices, incorporating AI-specific behaviours, benefits, and risks.

  • Transparent AI usage policies, made public.
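The first item above, an AI inventory with regular assessment, can be sketched as a simple use-case register. This is a minimal illustration only: the field names, the `governance_gaps` check, and the two sample entries are assumptions for the sketch, not taken from the ASIC report.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI inventory."""
    name: str
    owner: str
    model_type: str           # e.g. "generative", "advanced data analytics"
    consumer_facing: bool     # does the use case directly affect consumers?
    disclosed_to_consumers: bool
    fairness_assessed: bool
    last_reviewed: date

def governance_gaps(inventory):
    """Flag use cases missing the two controls the report found lacking:
    a fairness assessment and disclosure of AI use to affected consumers."""
    gaps = []
    for uc in inventory:
        missing = []
        if not uc.fairness_assessed:
            missing.append("fairness assessment")
        if uc.consumer_facing and not uc.disclosed_to_consumers:
            missing.append("consumer disclosure")
        if missing:
            gaps.append((uc.name, missing))
    return gaps

# Illustrative entries, not real use cases.
inventory = [
    AIUseCase("chat-assist", "Support", "generative",
              consumer_facing=True, disclosed_to_consumers=False,
              fairness_assessed=False, last_reviewed=date(2024, 6, 1)),
    AIUseCase("credit-scoring", "Risk", "advanced data analytics",
              consumer_facing=True, disclosed_to_consumers=True,
              fairness_assessed=True, last_reviewed=date(2024, 9, 1)),
]

for name, missing in governance_gaps(inventory):
    print(f"{name}: missing {', '.join(missing)}")
```

Reviewing such a register on a fixed cadence turns the abstract "inventory and regular assessment" recommendation into a concrete, auditable practice.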

How the AI 360 Review supports effective AI governance

The AI 360 Review helps you evaluate your current AI use and governance structures so you can manage AI effectively. By partnering with AI 360 Review, organizations can confidently manage AI risks, understand the implications of AI adoption across the organization, and keep pace with evolving AI regulations.

Artificial intelligence is disrupting organizations globally. The AI 360 Review tells you how your teams are using AI, where the gaps are, and where to look for real capability uplift. Contact us today to explore how our tool can help you achieve AI governance excellence.
