February 2 marks the first compliance deadline under the EU AI Act, the sweeping regulatory framework the European Union approved last March. The act officially took effect on August 1, and this is the first in a series of compliance deadlines.
The specifics are spelled out in Article 5, but broadly, the AI Act is designed to cover a wide range of artificial intelligence applications, from consumer-facing products to systems deployed in physical environments.
The EU's approach defines four risk levels:

- Minimal risk (e.g., email spam filters): no regulatory oversight.
- Limited risk (e.g., customer service chatbots): light-touch regulatory oversight.
- High risk (e.g., AI for healthcare recommendations): heavy regulatory oversight.
- Unacceptable risk: applications that are banned outright in the European Union.
Among the unacceptable uses: AI used for social scoring, for manipulating people's decisions, or for exploiting vulnerabilities such as age or disability.
Nor can AI be used to collect biometric data in public spaces for law enforcement purposes, to infer people's emotions at school or work, or to expand facial-recognition databases by scraping images from security cameras.
Companies found using AI for any of the purposes above face significant penalties: fines of up to €35 million, or roughly 7% of annual revenue, whichever amount is greater.
“Organizations are expected to be fully compliant by February 2, but … the next big deadline that companies need to be aware of is in August,” Rob Sumroy, head of technology at the law firm Slaughter and May, said in an interview. “By then, we’ll know who the competent authorities are, and the fines and enforcement provisions will take effect.”
The European Commission is expected to release additional guidelines in early 2025, which should clarify how the new EU AI Act will apply in practice.
“It’s important for organizations to remember that AI regulation doesn’t exist in isolation,” Sumroy said. “Other legal frameworks, such as GDPR, NIS2, and DORA, will interact with the AI Act, creating potential challenges — particularly around overlapping incident notification requirements. Understanding how these laws fit together will be just as crucial as understanding the AI Act itself.”
Stay tuned for more updates!