The European Union's first compliance deadline under the AI Act has now passed, allowing regulators to ban AI systems deemed to pose unacceptable risk to individuals. As Kyle Wiggers reported for TechCrunch, the rules that took effect on February 2 sort AI applications into four risk tiers, with the highest tier prohibited outright. Banned applications include AI systems for social scoring, manipulation of human behavior, exploitation of vulnerabilities, and unauthorized biometric data collection.

Companies that violate these prohibitions face fines of up to €35 million or 7% of their annual global turnover, whichever is greater. Enforcement of these fines will not begin until August, when competent authorities are due to be designated.

More than 100 companies, including Amazon, Google, and OpenAI, signed the voluntary EU AI Pact last September, pledging to apply the Act's principles ahead of schedule; notably, Meta and Apple did not participate. The Act includes specific exemptions for law enforcement in cases of immediate threat to life or safety, subject to proper authorization. The European Commission plans to release additional implementation guidelines in early 2025 following stakeholder consultations.