The European Union is navigating the choppy waters of AI regulation, aiming to strike a balance between safeguarding local startups and addressing the societal risks posed by rapid AI advances. At the center of the regulatory push is none other than ChatGPT. The aim is to protect users' data, integrity, and rights, but at what cost? Let's look at what the EU AI rules mean in detail.
In a marathon session, EU negotiators hashed out a deal on regulations for generative AI tools like ChatGPT and Google's Bard. On Thursday, a source close to the negotiations revealed that, after 10 hours of discussion, parliamentarians and EU member states had reached an agreement on rules for ChatGPT and other AI systems.
One Agreement, 27 Countries—Will the EU Regulate AI First?
The agreement, a crucial step toward the AI Act, involves the European Commission, the Parliament, and the 27 member countries. The discussion underscores the complexity of the AI regulation debate: the aim is not just local control but setting the global tone for regulating AI tools. And the clock is ticking, because the urgency stems from the looming European elections that could disrupt progress.
What’s intriguing is the timing: Google hinted at Gemini AI’s new capabilities, and OpenAI had a dramatic moment with Sam Altman’s ousting and reinstatement, all coinciding with these discussions.
Any Impact on Google’s AI Plans?
While the impact is not yet known, one thing is clear: tech companies will not turn their backs on AI’s possibilities in this era of innovation. To compete with ChatGPT, Google is bringing Gemini, its new AI model, to its suite of products, a move that aligns with the EU AI regulations. The technology will power its Bard chatbot and search-generative experiences.
Google’s move to license its LLMs via the cloud aligns with the AI regulations and reflects the expanding use of AI across industries. The AI alliance between Meta Platforms and IBM likewise points to a competitive landscape in which tech firms are exploring AI’s potential, underscoring the technological advances driving these policy discussions.
Tug-of-War Among EU Members
Like the US and UK, the EU faces a tussle between protecting its local AI startups (think Mistral AI and Aleph Alpha) and addressing societal risks. France and Germany have been particularly vocal about avoiding rules that might disadvantage their companies.
Negotiators are optimistic about striking a deal soon, but technical details demand further meetings. The proposed regulations would require AI developers, including those behind ChatGPT-like tools, to track training data, summarize their use of copyrighted material, and label AI-generated content. Furthermore, AI systems posing “systemic risks” would need to follow an industry code, collaborating with the Commission to monitor and report any incidents that arise. This tug-of-war among EU members reflects the struggle to find the right balance for AI regulation.