In corporate boardrooms and executive suites everywhere, artificial intelligence (AI) is marching to the top of the agenda. At the same time, regulators are turning their attention towards this technology too, enacting new rules – or modifying old ones – to govern how companies use it.

This leaves Corporate Compliance Officers in a challenging position. To maintain a strong compliance posture for your organisation, you will need to be able to do three things:

  • Stay abreast of new, AI-specific regulation across multiple jurisdictions;
  • Understand how existing regulations may already create compliance risk for your company’s use of AI; and
  • Work with senior executives and business-unit leaders to ensure their adoption of AI does not trip either of those regulatory tripwires.

Let’s start with a look at the regulatory pressures around artificial intelligence, both new and old.

The EU AI Act

At the end of 2023 the European Union reached agreement on the AI Act, the world’s first comprehensive regulatory framework governing the adoption of AI. Full implementation of the AI Act is still several years away, but already we can see the basic contours of what “AI compliance” will look like for companies doing business in Europe.

For example, all businesses using AI will need to assess its risks to privacy and cybersecurity, and there are obligations to put security measures in place to prevent the misuse of AI. Senior managers are advised to embrace a ‘Secure by Design’ approach to developing or deploying AI systems. Under this approach, manufacturers of AI systems must treat the security of their customers as a fundamental business requirement rather than a mere technical feature, and must prioritise security throughout the whole lifecycle of the product, from the inception of the idea to planning for the system’s end-of-life.

None of that is terribly surprising; it is similar to what happened when the EU General Data Protection Regulation (GDPR) first arrived in 2016. Companies subject to it then had two years to get their compliance house in order, and the AI Act seems to be following a similar timeline.

The US approach

On the other side of the Atlantic, various regulators in the United States are already trying to apply existing regulations to the misuse of AI.

For example, on March 7 the Justice Department said that “where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences” for individuals and corporations alike. Moreover, when evaluating a company’s compliance program as part of resolving a criminal case, prosecutors will now assess a company’s ability to manage AI-related risks as part of its overall compliance efforts.

Other US regulators have gone even further, with actual enforcement actions. In December 2023, the Federal Trade Commission barred a retail pharmacy chain from using AI-driven facial recognition to identify potential shoplifters, on the grounds that the retailer had inadequately tested the technology (which was riddled with false positives) and failed to train employees on how to use it.
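To make the testing point concrete, here is a minimal sketch in Python of the kind of pre-deployment measurement regulators apparently found lacking: estimating a face-matching system’s false-positive rate on a labelled evaluation set. The function and the data are invented for illustration; real validation would use large, demographically representative datasets.

    # Hypothetical sketch: estimating the false-positive rate of a face-matching
    # system on a labelled evaluation set before deployment. The data below are
    # illustrative placeholders, not real results.

    def false_positive_rate(predictions: list[bool], ground_truth: list[bool]) -> float:
        """Fraction of true non-matches that the system wrongly flagged as matches."""
        false_positives = sum(1 for p, t in zip(predictions, ground_truth) if p and not t)
        true_negatives = sum(1 for p, t in zip(predictions, ground_truth) if not p and not t)
        negatives = false_positives + true_negatives
        return false_positives / negatives if negatives else 0.0

    # Example: the system flagged three people, but only one was a genuine match.
    predicted = [True, True, True, False, False, False]
    actual    = [True, False, False, False, False, False]
    print(f"False-positive rate: {false_positive_rate(predicted, actual):.0%}")  # 40%

A system whose measured false-positive rate is unacceptably high should never reach the shop floor; demonstrating that such a measurement was made, and acted on, is exactly the kind of evidence enforcement actions like this one now demand.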

And while all this is happening, the Biden Administration has also directed all federal agencies to think about possible new regulations for oversight of AI. Those rules have barely begun to arrive, but they are on their way.

An effective compliance response

Given all the uncertainty here – when these rules will arrive, exactly what they will say, how vigorously regulators will enforce them, and so forth – Corporate Compliance Officers might feel overwhelmed and confused.

Fear not. Even amid all that uncertainty, a few basic steps can position your company as smartly as possible for what is to come.

First, Compliance Officers will need to form strong relationships with the Chief Technology Officer (CTO) and other executives making decisions about AI. They will need to know what ambitions for AI and automation the CTO, senior management, and business-unit leaders have, and to engage in frank conversations about the compliance, security, and ethical risks those ambitions might bring. If Compliance Officers are not in strategic councils and management meetings, they will be left playing a game of catch-up they cannot win.

Second, Compliance Officers will need strong capabilities in regulatory change management. Many countries are likely to follow the EU’s basic approach to AI, modelling their own regulations along the lines of the AI Act. This trend is known as ‘the Brussels effect’, reflecting how regulations such as the GDPR became a template for privacy rules elsewhere in the world.

That said, the United States, China, and other nations will have their own visions for AI too. Compliance Officers will require a way to track those emerging regulations – map them to existing policies, procedures, and controls – and address any internal control gaps identified.
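What that mapping might look like in practice can be sketched simply. The requirement identifiers and policy names below are hypothetical, purely to illustrate the idea of tracing each new obligation to a documented internal control and flagging the gaps:

    # Hypothetical sketch of a regulatory change-management check: map incoming
    # regulatory requirements to existing internal controls and surface the gaps.
    # All identifiers and policy names are invented for illustration.

    requirements = {
        "EU-AIA-RISK-ASSESS": "Perform AI privacy and cybersecurity risk assessment",
        "EU-AIA-MISUSE-CTRL": "Security measures preventing misuse of AI systems",
        "US-DOJ-AI-RISK":     "Manage AI-related risk within the compliance programme",
    }

    # Which requirements are already covered by a documented control.
    control_map = {
        "EU-AIA-RISK-ASSESS": ["POL-014 AI impact assessment procedure"],
        "US-DOJ-AI-RISK":     ["POL-022 Enterprise risk register"],
    }

    gaps = [req_id for req_id in requirements if not control_map.get(req_id)]
    for req_id in gaps:
        print(f"GAP: {req_id} - {requirements[req_id]} has no mapped control")

However the mapping is maintained, the point is the same: every emerging obligation should resolve to a named policy, procedure, or control, and anything that does not becomes a tracked remediation item.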

Third, Compliance Officers will need to ensure the company has strong ‘AI risk assessment’ capabilities. For example, the company will need a way to study and validate the data feeding into AI learning systems, and a means to test the output of AI systems for implicit bias, discriminatory behaviour, copyright infringement, and much more. Compliance Officers do not necessarily need to do the assessment and testing themselves (internal audit is a far better candidate), but they must be able to confirm the tasks are completed.
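As one illustration of output testing, the ‘four-fifths rule’ from US employment law is a common first screen for disparate impact: a group whose favourable-outcome rate falls below 80% of the best-performing group’s rate warrants investigation. The sketch below is a minimal, assumed example; the group labels and counts are placeholders, not real data.

    # Hypothetical sketch of one output test a company might run: the
    # "four-fifths rule" screen for disparate impact in an AI system's decisions.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """outcomes maps group -> (favourable decisions, total decisions)."""
        return {group: favourable / total
                for group, (favourable, total) in outcomes.items()}

    def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> bool:
        """Return False if any group's selection rate falls below 80% of the
        highest group's rate, signalling potential disparate impact."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return all(rate / best >= 0.8 for rate in rates.values())

    # Example: an AI screening tool's decisions, broken down by group.
    decisions = {"group_a": (50, 100), "group_b": (32, 100)}
    print(four_fifths_check(decisions))  # False: 0.32 / 0.50 = 0.64 < 0.8

A failed screen like this does not prove discrimination, but it is precisely the kind of red flag that an assessment programme should surface, escalate, and document.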

All of this only scratches the surface of AI compliance, of course. Already, however, we can see that the Chief Compliance Officer should play a central role in an organisation’s adoption of AI, because the technology will demand stronger oversight and compliance than any that has come before.

That, more than anything else, is the message that Compliance Officers need to convey to the board and senior leadership now, before it is too late.

Jan Stappers is Director of Regulatory Solutions at NAVEX