The European Union is cracking down on potentially harmful AI systems, with regulators now empowered to ban technologies they deem an “unacceptable risk” to society. The move stems from the AI Act, the comprehensive regulatory framework approved by the European Parliament, which entered into force in August; its first compliance deadline falls on February 2.
Under the new rules, AI applications fall into four tiers based on risk level: minimal risk, limited risk, high risk, and unacceptable risk. Applications in the unacceptable tier, such as social scoring, manipulative or deceptive techniques, and predicting criminal behavior based on profiling, are banned outright.
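To make the tiered structure concrete, here is a minimal illustrative sketch in Python. The tier names come from the Act itself; the example use cases and the `is_prohibited` helper are hypothetical, a rough triage aid rather than any official classification tool:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk"            # e.g. spam filters: no new obligations
    LIMITED = "limited risk"            # e.g. chatbots: transparency duties
    HIGH = "high risk"                  # e.g. hiring tools: strict oversight
    UNACCEPTABLE = "unacceptable risk"  # banned outright under Article 5

# Hypothetical triage table for illustration only; classifying a real
# system requires legal analysis of the Act, not a dictionary lookup.
EXAMPLE_TRIAGE = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "predictive policing via profiling": RiskTier.UNACCEPTABLE,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def is_prohibited(use_case: str) -> bool:
    """Return True if the example use case falls in the banned tier."""
    return EXAMPLE_TRIAGE.get(use_case) is RiskTier.UNACCEPTABLE
```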
Companies found using these banned AI applications in the EU face fines of up to €35 million or 7% of their worldwide annual turnover from the preceding financial year, whichever is greater. Enforcement of these fines begins next August, adding to the urgency for organizations to comply with the new regulations.
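To see how that penalty ceiling scales, here is a quick sketch. The “whichever is greater” rule reflects the Act’s wording; the turnover figures below are invented for illustration:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices under the AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is greater."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical firms: the flat cap dominates for smaller companies,
# while the 7% share dominates once turnover exceeds EUR 500 million.
print(max_fine_eur(100_000_000))     # EUR 35M for a EUR 100M-turnover firm
print(max_fine_eur(10_000_000_000))  # EUR 700M for a EUR 10B-turnover firm
```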
In preparation for these changes, more than 100 companies signed the EU AI Pact, a voluntary pledge to begin applying the principles of the AI Act ahead of its enforcement deadlines. Notable holdouts such as Meta and Apple declined to join, though non-signatories remain bound by the law itself, and most companies are taking steps toward compliance.
The AI Act does carve out exceptions to some of its prohibitions, for example permitting law enforcement to use certain biometric identification systems in narrowly defined circumstances. Beyond the Act itself, organizations need to understand how these rules interact with existing laws such as GDPR, NIS2, and DORA.
As the compliance deadlines approach, companies should keep track of the evolving guidelines and standards, many of which are still being finalized, to ensure they are fully compliant with the new AI regulations. Grasping where the AI Act overlaps with these existing regimes will be key to avoiding enforcement trouble down the road.