The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for businesses, in particular SMEs.
The AI Act is the first-ever comprehensive legal framework on AI worldwide. The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by addressing the risks posed by very powerful and impactful AI models.
Why do we need rules on AI?
The AI Act ensures that Europeans can trust what AI has to offer. Most AI systems pose limited to no risk and can contribute to solving many societal challenges. However, certain AI systems create risks that need to be addressed to avoid undesirable outcomes.
The European AI Office was established in February 2024 within the Commission. It oversees the AI Act’s enforcement and implementation with the member states. The EU says it aims to create an environment where AI technologies respect human dignity, rights, and trust. It also fosters collaboration, innovation, and research in AI among various stakeholders. Moreover, it engages in international dialogue and cooperation on AI issues, acknowledging the need for global alignment on AI governance. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.
How should the UK respond?
Scott Dawson, Head of Sales and Strategic Partnerships at DECTA, tells RBI: “Ideally, the role of regulation should be to facilitate innovation. The EU’s AI Act is a good example of regulation that has the potential to do just that. Classifying AI systems based on risk will allow fintech companies to benefit from the new capabilities of the technology while keeping a regulatory eye on the ‘black box’ problem.
“As AI models become more complex and opaque, their workings and reasoning are ever more difficult for any one human to understand. The act emphasises the need for transparent AI, ensuring companies can explain how algorithms arrive at decisions. Naturally, there are considerations presented by this approach. But by creating a conceptual structure for firms to innovate within, the EU is creating a regulatory framework we can pre-emptively manage.
Wait and see approach poses challenges
“While the UK hasn’t enacted similar legislation, its ‘wait and see’ approach poses challenges. This is especially so because fintech firms aiming for the EU market will need to comply with the Act’s requirements, including increased transparency and robust due diligence for AI used in areas like credit scoring. Without clear guidelines and alignment with other key jurisdictions, it will be difficult for firms in the UK to innovate in a manner that is effective and scalable in the future.
“During the UK’s AI Safety Summit at Bletchley Park last year, Rishi Sunak suggested the UK should be a leader in this space. However, it will need to make more decisive moves than just waiting and seeing if that is to be the case. If it does take the reins, regulating for the sake of innovation is very much an option. While the UK hasn’t adopted its own AI legislation, the EU Act’s influence is undeniable. The UK’s fintech sector must adapt to the new standards to ensure continued access to the EU market and foster trust in AI-driven financial services.”