How can we govern artificial intelligence - Global Banking | Finance (2024)

By Gaurav Kapoor, COO and Office of the CEO, MetricStream

Artificial intelligence (AI) has swept across almost every industry with the purpose of automating processes, increasing efficiency and improving our personal lives and businesses.

It is widely believed that AI promises objectivity, helping us avoid human bias, opinion and ideology. However, there have been many instances where the opposite has proved true and the technology has failed to behave impartially. One well-known example where AI failed to be impartial is Amazon’s AI recruiting tool, which was found to be biased against hiring women: it largely recommended only male CVs, and Amazon ultimately scrapped the technology to avoid further scrutiny.

With AI rapidly evolving and taking up more room in the business landscape, it is understandable that the European Commission is eager to draft regulation to help prevent the misuse of AI, but how can we effectively govern the bots?

A challenging question for a complicated process

On 19 February, during a press conference in Brussels, the European Commission took on the unenviable task of trying to regulate AI while the technology is constantly changing. What works to regulate AI one day may fail to stretch far enough a few weeks later, and could be completely irrelevant only a month after being introduced.

The need for policies is not in doubt: a KPMG study found that 80 per cent of risk professionals are not confident about the governance currently in place around AI. What concerns technology leaders, however, is that tighter regulation could stifle innovation in AI and hinder its enormous potential benefits for the world.

For example, CheXNet, an AI algorithm developed at Stanford, can detect pneumonia from chest X-rays, but for technologies like these to emerge, researchers need creative and scientific freedom.

Although AI holds great potential to be used for good, its accelerating adoption across industries raises numerous ethical concerns that governance needs to address.

Navigate evolving AI with forward-looking risk management

While the EU works hard to try and set policies in place, organisations should take the time to consider their own governance, risk and compliance (GRC) processes to ensure they are not caught out with their use of AI when legislation does finally arrive.

One way organisations can guard against unforeseen risk from evolving AI technology, as well as the ever-changing business landscape, is by implementing a governance framework around AI both within and outside the organisation. Much as internal controls and regulators in the financial services industry require companies to validate and ‘manage’ models on a regular basis, comparable controls for AI models are already being put in place.

This reflects the proliferation of AI in enterprises and the need for organisations to monitor where models are used in business decisions, guard against inherent biases, and check that the underlying datasets are sufficient for the models to operate accurately. Regulators are not far behind, demanding proof that the right controls are in place.
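As a minimal, hypothetical sketch of such a governance inventory (the record fields, review window and model names below are illustrative, not taken from any specific GRC product), an organisation might track where each AI model is used and flag those overdue for validation:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ModelRecord:
    name: str            # internal identifier for the AI model
    business_use: str    # the decision the model supports
    owner: str           # accountable team
    last_validated: date # date of the most recent validation review

def overdue_validations(inventory, max_age_days=180, today=None):
    """Return models whose last validation is older than the review window."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [m for m in inventory if m.last_validated < cutoff]

inventory = [
    ModelRecord("credit-scoring-v2", "loan approvals", "risk team", date(2023, 1, 10)),
    ModelRecord("cv-screener", "recruitment", "HR", date(2023, 6, 1)),
]
stale = overdue_validations(inventory, today=date(2023, 9, 1))
```

Even a simple register like this gives risk teams the proof points regulators increasingly ask for: which models exist, who owns them, and when they were last checked.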

The other element is to set up a forward-looking risk management program around AI. Such a program improves an organisation’s ability to manage both existing and emerging risks: it analyses past trends, predicts future scenarios, and proactively prepares for the impact of AI, both positive and negative, while monitoring it continuously.
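One very simple way to operationalise the "analyse past trends, flag emerging risk" step is a drift check on a monitored risk indicator. The sketch below is an assumed, minimal illustration (the z-score threshold and the incident-count example are invented for illustration, not a prescribed method):

```python
from statistics import mean, stdev

def emerging_risk_signal(history, recent, z_threshold=2.0):
    """Flag when a monitored risk indicator drifts well above its historical trend.

    history: past observations of the indicator (e.g. monthly incident counts)
    recent:  the latest observations to compare against that baseline
    """
    mu, sigma = mean(history), stdev(history)
    z = (mean(recent) - mu) / sigma if sigma else 0.0
    return z > z_threshold

# e.g. monthly counts of AI-related incident reports
history = [3, 4, 2, 5, 3, 4, 3, 4]
recent = [9, 11, 10]
alert = emerging_risk_signal(history, recent)
```

A real program would track many such indicators (model complaints, override rates, data-quality exceptions) and feed the alerts into the governance process described above.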

Once an organisation is set up in this way, it should be better prepared for any new regulation that may be introduced to govern AI and stop its misuse or bias.

Create one information hub for all

A study by Gartner found that poor data quality, created by multiple information silos across business units, operations and geographic locations, is responsible for an average of $15 million per year in losses.

Losses on this scale can be crippling, so restrictive silos should be abandoned and replaced with one centralised information hub. With such an information management system, senior management and risk professionals can be sure they are always aware of their risks, whether created by AI or otherwise. A centralised hub gives the organisation a clear view of the bigger picture, so it can respond to any threat appropriately and efficiently.

Hold integrity and ethics at the core of the business strategy

One explanation often offered for why AI fails to act impartially comes from an article in the Harvard Business Review: AI systems learn to make decisions from training data, which may encode biased human decisions or reflect historical and social inequalities.

Although organisations may not be responsible for creating the bias in the AI technology they purchase, they are responsible for their own character and reputation.

With scrutiny coming from regulators, and the consumer voice amplified by social media, organisations need to protect their reputations vigorously. To avoid bias creeping in through the use of AI, integrity and ethics should sit at the core of each company’s business model and strategy, alongside monetary success.

Having made integrity and ethics core to the brand, organisations should invest in integrated, holistic and regularly assessed GRC programs, as well as ethical technologies that keep the company on track and can spot any lapse in impartiality from its AI.
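One widely used screen for spotting such lapses is the "four-fifths rule", which flags a group whose favourable-outcome rate falls below 80 per cent of the best-treated group's rate. The sketch below is a minimal, assumed illustration of that check (the group names and decision lists are invented for the example):

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 decisions (1 = favourable)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the 'four-fifths rule' disparate-impact screen)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical decisions from an AI screening tool, split by group
decisions = {"group_a": [1, 1, 0, 1, 1], "group_b": [1, 0, 0, 0, 1]}
flags = four_fifths_check(decisions)
```

A flagged group is not proof of unlawful bias, but it is exactly the kind of signal a GRC program should surface for human review before decisions are acted on.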

It is clear that AI will only take up more space in our business processes, so it is easy to see why the European Commission is working hard to address the risks that come with the technology while trying not to stifle the development and innovation of AI to better our lives.

However, until the EU reaches a solution and adequate regulation is introduced, organisations should do their utmost to govern their own AI technologies and potential risks: with a governance model, a forward-looking risk management program, a centralised information hub, and integrity and ethics held at the core of the business strategy.



