AI could serve as a value driver for finance teams if used with caution

One of the promises of artificial intelligence (AI) is the technology’s ability to boost productivity by tackling mundane and repetitive tasks. This should theoretically enable finance teams to focus on higher-order strategic thinking.

But what are the risks in this new technology landscape?

Especially in the finance department, there’s no margin for error when it comes to accuracy, security, privacy and ethics.

The critical importance of caution and trust

There’s wisdom in being cautious, says Matt Van Itallie, founder and chief executive officer of Sema, a software company that works with companies to analyze and manage compliance, legal and security risks in engineering tasks that incorporate generative AI. “We are strong proponents of adopting AI given the benefits. It just needs to be managed carefully,” he says.

Case in point: Sema recently conducted a global analysis of the risks posed by AI technologies used within businesses, interviewing more than 200 lawyers, developers, investors and technology leaders, and analyzing 20 million words across thousands of documents to better understand the steps its customers can take to implement AI responsibly.

In the end, Mr. Van Itallie and his team found 2,373 total risks in the information shared with them, including 13 “critical” instances that would need to be resolved within 60 days to address concerns such as “protecting intellectual property, data leakage and global compliance.”

It’s important, Mr. Van Itallie says, to know and evaluate the risks, some of which are “inherent” to AI – like the fact that laws and regulations still lag behind this technology.

When it comes to finance-related functions including payroll, forecasting financial outlooks or establishing an investment strategy for a company’s assets, there’s a need to manage stakeholder trust in the AI solutions being implemented.

“Trust is a big one. In order to feel confident in following any AI system’s outputs, it can’t be a black box,” says Joshua Pantony, CEO and co-founder of Boosted.ai, a platform that uses generative AI to cut down on the time- and data-intensive stock-picking process. “Explainability of what is going into everything is paramount for our users.”

Part of this “explainability,” he adds, is transparency: AI is fallible.

“AI systems, like humans, will get things wrong from time to time,” Mr. Pantony says. “That human oversight is still so important. We see the marriage of human intuition and AI’s processing power as being greater than the sum of its parts.”

Key components of an AI plan

Finance leaders and their teams need to have a plan as they make the most of the paradigm-shifting technology.

“AI has the power to fundamentally change the way finance operations personnel do their work and have access to information at their fingertips like never before,” says Priya Bajoria, senior vice-president of Financial Services, Canada, for Publicis Sapient, a digital consulting firm that specializes in digital transformation for major financial institutions, including three of the big Canadian banks.

Having worked with companies to implement AI for several years, Ms. Bajoria has a few key pointers.

“There is a need to set up guardrails to manage hallucinations and ensure that the output of the AI models can be trusted. Data security becomes paramount, especially around sensitive data not getting exposed while being sent to external APIs,” she says, referring to the interfaces that allow two different programs to speak to each other.
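Ms. Bajoria’s point about sensitive data and external APIs can be illustrated with a minimal sketch: scrubbing obvious identifiers from text before it leaves a company’s own systems, for example in a prompt sent to a third-party AI service. The patterns and function below are purely illustrative assumptions, not part of any product mentioned in this article; a real deployment would rely on a dedicated PII-detection tool and a far broader set of rules.

```python
import re

# Illustrative patterns only; production systems would cover many more
# identifier types (account numbers, names, addresses, and so on).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "sin": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),  # Canadian SIN format
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholders before the text
    is sent to an external API."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize payroll notes for jane.doe@example.com, SIN 123-456-789."
print(redact(prompt))
```

The idea is the guardrail Ms. Bajoria describes: the redaction happens inside the company’s own environment, so the external service never sees the raw identifiers.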

“Technology stacks have to be set up to deal with the scale and volume of data and need to be modernized to be on the cloud.”

Hanging onto any legacy technology, she adds, will limit how much AI you’ll be able to use.

You’ll also want to ensure you’re paying attention to what the government of Canada is doing, particularly when it comes to the proposed Artificial Intelligence and Data Act. One of the first such pieces of legislation globally, it is designed to regulate businesses that use AI and similar technologies in order to protect the Canadians who use their services or would be affected by their work.

The bottom line

The biggest risk of them all? Ignoring AI and machine learning altogether, especially when your competitors are already using these tools to make “better, faster and more data-driven decisions,” Mr. Pantony says.

“This technology is not going anywhere, and teams will need to find ways to adopt it that work for their purposes,” he says. “Ultimately we see AI as a tool, like Excel. Do you want to be learning how to use Excel in five years, or would you like to be an expert today? We see the latter as the more appealing option.”

This article was first reported by The Globe and Mail