The vast majority of financial services and banking (FS&B) professionals (91%) have called for regulations and standards governing the use of artificial intelligence (AI) and generative AI, according to research from Alteryx.
Alteryx’s ‘Defining the Enterprise of the Future’ report found the majority (86%) had already implemented AI security, ethics and governance policies – 11% above the global average across all sectors.
Surveying 2,800 professionals across the world, including 700 in FS&B, the research found that 80% harboured concerns about using AI-produced answers, versus 73% across all sectors.
Furthermore, FS&B professionals were less optimistic about the impact of AI on enterprises, with just 37% saying it would have a positive impact.
Damage to workplace desirability (42%), damage to brand reputation (41%) and loss of intellectual property and data (36%) were all identified as potential threats.
The greatest perceived risk of not having an AI policy, however, was legal and ethical consequences (49%).
This was echoed by the 92% of FS&B organisations saying that AI policies were key to implementing responsible AI in their businesses.
Transparency and explainability (54%), accountability (52%), and inclusive growth, sustainable development and wellbeing (34%) were cited as the key ethical considerations shaping policy development for the industry.
Jason Janicke, SVP EMEA at Alteryx, said: “The evolution of AI capabilities, specifically generative AI, has presented the FS&B sector with significant opportunities to unlock the power of data to automate tasks for productivity and value creation while lowering costs.
“However, it has also prompted data privacy, ethics and cybersecurity concerns.
“Financial services and banking professionals are stewards of highly sensitive data, so any misuse of AI – even if unintentional – may leave organisations vulnerable.
“From opening new avenues for hackers to breach advanced cybersecurity software to users of these systems accidentally sharing confidential or proprietary data with the open AI community, it’s right for the sector to take a proactive approach to the implementation of effective guardrails that include practical checks on data quality, privacy, and governance.
“Data literacy upskilling will help establish a robust data-centric approach to AI, which is critical to safeguarding the business and maintaining security and trust amongst stakeholders and customers.”