ILA Directors' Day panellists see opportunities, but also accuracy and bias risks, in AI

By Najia Belbal

Artificial intelligence can open up new business opportunities in Luxembourg, particularly for the country’s financial industry, but the current limitations and flaws of generative AI present risks that boards and executives must learn to manage, according to a panel of experts during the 12th ILA Directors' Day conference on November 16.

The audience heard that very few business leaders in Luxembourg feel comfortable using AI, something that needs to change if the technology is to be subject to effective governance. They will also have to get to grips with planned new European legislation intended to provide a legal framework for AI use, while accommodating the requirements imposed on regulated industries.

The discussion of AI was one of the highlights of the Directors' Day event, introduced by ILA chair Virginie Lagrange and presented by Thomas Seale. Presentations and discussions covered hot topics such as sustainability and cognitive bias, the latest challenges for investment fund directors, IT security, and financial sector governance rules.

But with ChatGPT having seized the headlines this year as the foremost iteration of a technology that could transform entire economic sectors and have huge implications for employment, the session on the impact of AI on business and governance was a key topic of conversation as ILA members gathered for networking opportunities.

The panel was moderated by Ananda Kautz who, in her capacity as a board member of the Luxembourg Bankers' Association (ABBL), drew on a survey carried out by the association last year in conjunction with the University of Luxembourg, which found that 75% of the 40 financial institutions surveyed saw generative AI as an opportunity.

What are these opportunities? Solenne Niedercorn-Desouches, a fintech venture capital professional and host of the podcast Finscale, sees significant added-value opportunities for Luxembourg. “You can see all the impact on compliance, risk, and valuation functions,” she said. “Now you can use a tool that can help you calculate your net asset value in just a minute. It's exactly the same for sanctions screening.”

INSEAD professor Theodoros Evgeniou, who has been working on AI and machine learning for the past 25 years, cited a report from McKinsey this year arguing that generative AI could add trillions of dollars in value to the global economy, mostly through an increase in productivity.

The study found that workers who used generative AI were faster, and the quality of their work up to 40% better, than those who did not use it, with the positive impact most notable among less experienced consultants. Said Evgeniou: “Well-designed AI effectively decreases the knowledge gap, and pulls all of us upwards.”

However, he says, we are only at the beginning of a journey to understand generative AI’s power, reach, and capabilities: “The second order effects, meaning the kind of innovation that will come and the kind of businesses and markets that will be created, we can’t even imagine yet.”

Another finding from the ABBL survey was that only 8% of senior managers at financial institutions say they are very comfortable using AI technology. Said Niedercorn: “Choosing the right board members to tackle this specifically is very important; it's a very niche skill. First, it will enable companies not to miss commercial opportunities, and secondly, to have the knowledge to properly mitigate the risk.”

Evgeniou believes that everybody should know something about the subject: “You cannot have discussions about the governance of AI between an expert and somebody who is clueless. Unless everybody's up to speed, there is no debate.” According to Kautz, this means that all directors face the challenge of at least familiarising themselves with AI tools.

But AI also poses potential risks. Emilia Tantar, chief data and artificial intelligence officer at AI software creation and consultancy firm Black Swan Lux, raised the issue of trustworthiness in generative AI and the large language models that underpin it. “These knowledge bases are impressive, and you can navigate tonnes of data, but sometimes the links between the notions are not accurate,” she said. “This is a systemic risk that you introduce into your business.”

It has become well known that generative AI can ‘hallucinate’ – producing answers that take words semantically from the surrounding context, without the underlying bodies of knowledge being verified by a linguist or an anthropologist and, most importantly, often without being checked against domain knowledge. “You should ensure that there is also human oversight – you should not trust it blindly,” said Tantar, who added that these knowledge bases may also contain biases.

Both trustworthiness and bias are critical concepts that appear in the regulatory framework first proposed by the European Commission in 2021. The AI Act was adopted by the European Parliament in June of this year and is currently the subject of negotiations between the EU's legislative institutions, with the aim of reaching agreement on its final form by the end of the year.

Once approved, the AI Act will arguably constitute the world's first legislative framework for artificial intelligence, although Evgeniou notes that China already has some rules in place. Under the EU law, applications would be analysed and classified according to the risk they pose to users, with different risk levels determining the degree of regulation. The AI Act will link with existing EU legislation such as the General Data Protection Regulation and the Digital Services Act, as well as with other cyber-security requirements.
 
Tantar noted that the first AI Safety Summit, held in the UK in November, introduced the AI Safety Institute: “You should keep that in mind if you are in the US or the UK market. The AI Safety Institute is a voluntary effort, an institution to support you in assessing your systems when you enter the market.”

At the end of the panel discussion, Yannick Bruck, chief technology officer at the Luxembourg Stock Exchange, described a case study of how the exchange is developing and implementing AI amid a range of business challenges, regulatory requirements, and risk considerations – in particular, how to adopt the technology in a regulated industry.

Bruck says LuxSE has taken two approaches. The first is a very simple use case – automating the extraction of information from legal and financial documents. This technology has been around for some time, and it is relatively secure and robust because you know where the data is coming from.

The second approach is cloud-based, which Bruck says raises a series of questions: “I'm in a regulated industry. Am I allowed to use the technology? Where is the data stored? Who will be using my data and my prompts? Do I want those people to learn from what I'm doing? Because I'm sending prompts, I'm sending questions that are relevant to me. Does it make sense for me to share my internal intellectual property with whoever is behind this particular AI?”

In this case, LuxSE chose a cloud provider that had invested heavily in generative AI technologies, and Bruck was happy from a compliance perspective that the exchange’s proprietary data was not being shared.

Bruck agrees with the panel that even though AI will augment human intelligence, the technology will still need human oversight. He said: “If you see what is coming out of those models, you know you will still need people. Sometimes it generates brilliant ideas for people who are less experienced and for whom it makes a lot of sense to use them. And sometimes you're happy that somebody with expertise is there to say 5% of these ideas should be deleted because they make no sense.”