As companies around the world adopt more innovative forms of AI, big questions are being asked about how those AI systems will be controlled and governed in practice. In this article, we hear from Giovanni Leoni, Head of Business Strategy & Development at Credo AI, about how companies are embedding responsible AI into their governance processes and business operations.
The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or Borealis AI.
What is the challenge facing companies as they try to apply governance to AI?
AI is rapidly moving into all areas of the business. And I think business leaders and executives understand that they are accountable for how those AI-enabled tools and solutions behave, because those tools are now an integral part of business processes.
At the same time, regulation is rapidly evolving, and that makes it challenging for companies to keep up. For many organizations, the answer is to expand the compliance team. But that creates potential bottlenecks to future growth – you can’t have every decision come through the compliance function when an increasing share of business processes needs to be governed. I think business leaders and executives are increasingly realizing that they can’t use a 19th-century governance process to manage 21st-century technology.
Do business leaders buy into the need for responsible and compliant AI?
From my experience at IKEA, I’d say that if you focus on aligning around a core set of organizational values, you can really have a good conversation with the business. When we go out to business stakeholders, we often open the door by saying that we want to see how we can bring their values, strategies and policies into their AI context. Then we work in partnership with them to explore what that means, to tailor it to their situation and to make ethical AI and data an integrated part of the business. I think if we approached it by saying that we are focused on compliance, we wouldn’t see the same adoption, and we certainly wouldn’t be doing enough.
What does that mean in terms of an overall corporate policy? How does that get translated into different lines of business?
I’ve found that a stepwise approach is often most effective. At the top, you create a digital ethics policy that is purposefully quite abstract and high-level. The idea is to recognize that things are changing very quickly and to give yourself the ability to adapt to those changes.
But then, when you get down to specific use cases, you really need to make it real for each space within the organization – HR, IT, Supply Chain and so on. That’s where you get into the details. But it still needs to stay lean, particularly as new legislation – like the EU’s AI Act – is developed.
That’s where I think Credo AI’s governance platform really shines. The platform integrates with your AI systems and technical tooling, enabling AI, data and business teams to track, prioritize and control AI projects across the enterprise so that their AI remains profitable, compliant and safe.
What challenges do you see with new technologies like large language models?
The governance of generative AI adds another level of complexity compared to, say, traditional machine learning, but I don’t think it’s a challenge we can shy away from. We need to embark on a journey where we start to talk about it, create good practices around it and take ownership of it.
In part, this is about building a better understanding of the field. Business leaders and users need to understand the limitations and risks. We need to bring people along with strong communication and help them set realistic expectations for the technology. Beyond understanding, we need to implement and embed AI Governance that provides an appropriate level of oversight and control over the AI systems and applications used within the organization.
I think we need to see broad collaboration – including users, developers, policymakers, regulators and others – to build on our knowledge base and start creating the right checks and balances. If we can find common ground and agree on some standards, we can start to build trust in the technology.
What is your advice to other business leaders going through a similar process?
To really build a better business using digital, you need to do it the right way: delivering functional value with higher productivity and efficiency, but also being values-led and ethically guided.
AI is eventually going to shape how we all work in terms of data, processes and technologies, and it will influence how we interface with coworkers and customers. We can’t have all of those decisions funnel through one unit at the end, or we will stifle innovation and business processes.
Compliance still needs to be assured, but the approach to it will be different given the nature of AI technology. Having organizational values and ethics as guiding stars, embedding intelligent AI Governance and making AI workflows compliant by design will enable companies to reap the full benefits of AI and to use it with confidence.
About Giovanni Leoni
Giovanni is the Head of Business Strategy and Development at Credo AI, a global company focused on AI Governance and Risk Management. Prior to this, he served as Global Head of Algorithm and AI Ethics at IKEA. He is a member of THINK AI Sweden and an Advisory Board Member of the Ethical AI Governance Group, a community platform promoting the adoption of responsible AI governance across the industry.
Advancing Responsible AI
Responsible AI is key to the future of AI. We have launched the RESPECT AI hub to share knowledge, algorithms and tooling that help advance responsible AI.