- Ireland’s Data Protection Commission probes Google’s PaLM2 for EU data law compliance.
- The US, UK and EU sign an AI Convention Treaty for human rights-focused AI governance.
- Google’s Gemma 2 models prioritize AI safety and transparency to filter harmful content.
Ireland’s Data Protection Commission (DPC) has launched a cross-border investigation into Google Ireland Limited to determine whether the company complied with EU data protection laws. The inquiry centers on the use of EU citizens’ personal data to train Google’s Pathways Language Model 2 (PaLM2), which was released on May 10, 2023.
PaLM2 offers advanced multilingual, reasoning, and coding capabilities, and was announced as faster and more efficient than Google’s earlier models. It comes in four versions, Gecko, Otter, Bison, and Unicorn, each suited to different uses. Google plans to keep updating PaLM2 as it is integrated into products; more than 25 Google products already use it, ranging from language tools in Google Docs to medical aids in Med-PaLM2.
The inquiry also examines whether a Data Protection Impact Assessment was required. Such an assessment evaluates whether a given use of personal data risks harming fundamental rights. The investigation aims to protect EU citizens’ personal data during AI development and forms part of a broader effort by EU data protection regulators to oversee how personal data is used in AI across Europe.
Global AI Treaty
Just days before Ireland’s announcement, a major international AI treaty was signed. In September, the US, UK, and EU signed the historic AI Convention Treaty, which aims to ensure that AI technology aligns with human rights and democracy. The treaty, negotiated among 57 nations and adopted in May 2024, emphasizes harmonizing technology with human rights.
Technological Advances
Earlier, on August 1, 2024, Google unveiled new AI models in its Gemma 2 series, which focus on safety and transparency. The series includes Gemma 2 2B, ShieldGemma, and Gemma Scope.
The Gemma 2 series enhances AI safety and efficiency; ShieldGemma, for instance, filters harmful content such as hate speech. These developments reflect a growing industry focus on ethical AI use. Together with new regulations, they highlight an ongoing global effort to manage AI’s impact responsibly and show how oversight and new technologies are shaping the future of AI.