The European Union is moving to weaken key parts of its artificial intelligence and data protection laws following pressure from technology companies and the US government. Robert Hart and Dominic Preston report for The Verge.
The European Commission has proposed changes that extend the deadline for high-risk AI regulations originally set to take effect next summer. These rules cover AI systems that pose serious risks to health, safety, or fundamental rights. Under the new proposal, the regulations would apply only once standards and support tools are available to AI companies.
The changes also modify the General Data Protection Regulation to allow AI companies to use personal data for training AI models, provided they meet other GDPR requirements. The proposal would also make it easier for companies to share anonymized and pseudonymized datasets.
Henna Virkkunen, executive vice president for tech sovereignty at the European Commission, says the bloc aims to cut red tape and support innovation while protecting fundamental rights. The proposal also simplifies AI documentation requirements for smaller companies and centralizes AI oversight in the EU’s AI Office.
Civil rights groups and politicians have criticized the Commission for weakening safeguards and yielding to Big Tech pressure. The proposal must now gain approval from the European Parliament and a qualified majority of the EU’s 27 member states, a process that could take months.