Sensemaking with AI: How Trust Influences Human-AI Collaboration
Artificial intelligence (AI) is transforming industries at an unprecedented pace and reshaping the future of work. But as we integrate AI into professional settings, a fundamental question remains: how do we build trust in AI?
AI360 Review co-founder Sarah Daly was recently published in the study Sensemaking with AI: How Trust Influences Human-AI Collaboration. The study explores this question across different industry contexts, revealing how trust shapes AI adoption, decision-making, and collaboration. The findings carry implications for organizations looking to harness AI's potential while ensuring it aligns with human values and professional norms.
Bridging the AI Governance Gap
As AI reshapes industries, organizations face the challenge of balancing rapid adoption with responsible governance. According to a recent report from the Australian Securities and Investments Commission (ASIC), many organizations are adopting AI faster than they are updating their governance and risk management practices, particularly in sectors such as banking, insurance, and financial services. This gap matters because, without proper oversight, AI systems can introduce unintended bias, privacy problems, and compliance failures that undermine consumer trust and regulatory alignment.