Now AI is also in danger: a conspiracy to copy Google Gemini, with hackers probing it using over 1 lakh prompts

AI Security Threat: Google has made a big revelation. According to the company, hackers tried to create a copy of its AI chatbot, Google Gemini. For this, more than 1 lakh (100,000) carefully designed prompts were fed into the system. The purpose of these questions was not to directly steal code, but to understand how the AI thinks, responds and makes decisions. According to the company, such attacks are called “model extraction” or “distillation” attacks. In other words, the attacker prepares a map of the AI’s mind by continuously asking it questions from the outside.
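To make the idea concrete, here is a minimal sketch of what model extraction looks like in principle. It does not reproduce the actual attack on Gemini; instead it treats a small scikit-learn classifier as a black-box “teacher” and trains a “student” purely on the teacher’s answers. The models, dataset and numbers are illustrative assumptions, not details from the incident.

```python
# Minimal illustration of a model extraction ("distillation") attack:
# the attacker sees only the teacher's outputs, never its code or weights.
# The models and data here are illustrative assumptions, not Gemini details.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# The "teacher": a black box the attacker can only query.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
teacher = RandomForestClassifier(random_state=0).fit(X, y)

# Step 1: generate many probe inputs (the analogue of the 1 lakh prompts).
probes = np.random.RandomState(1).normal(size=(5000, 10))

# Step 2: the black box answers every probe; its answers become labels.
answers = teacher.predict(probes)

# Step 3: train a "student" only on (probe, answer) pairs, approximating
# the teacher's behaviour entirely from the outside.
student = LogisticRegression(max_iter=1000).fit(probes, answers)

# Measure how often the clone agrees with the teacher on fresh inputs.
test = np.random.RandomState(2).normal(size=(1000, 10))
agreement = accuracy_score(teacher.predict(test), student.predict(test))
print(f"student matches teacher on {agreement:.0%} of new inputs")
```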

How was cloning attempted through prompts?

Google said the attackers were repeatedly asking questions designed to expose Gemini’s internal logic. These included scenario-based questions, logic puzzles, context-based questions and tests covering different situations. Through these questions, the attackers were trying to learn how the model processes information, understands context and solves problems.

Since large language models answer everyone who asks, this openness is both their biggest strength and their biggest weakness. From a continuous stream of answers, hackers gradually learn the model’s patterns and try to build a similar model on that basis.
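In the LLM setting, that pattern-learning loop amounts to systematically varying prompts and logging every answer as training data. The sketch below is hypothetical: ask_model is a stand-in for a chat API, and the templates merely echo the kinds of questions described above; no real service is queried.

```python
# Hypothetical sketch of the probing loop described above.
# ask_model() is a placeholder for a real chat API; nothing is actually called.
import itertools
import json

def ask_model(prompt: str) -> str:
    # A real attacker would call the target model's API here.
    return f"(model's answer to: {prompt})"

# Templates probing logic, context handling and situational reasoning.
templates = [
    "If {a} implies {b}, and {b} is false, what about {a}?",
    "In the context of {a}, how would you respond to {b}?",
    "You are in situation {a}. What do you do when {b} happens?",
]
fillers = ["a contract dispute", "a chess endgame", "a medical triage call"]

# Every (template, filler-pair) combination becomes one training example.
with open("extraction_dataset.jsonl", "w") as f:
    for template, (a, b) in itertools.product(
        templates, itertools.permutations(fillers, 2)
    ):
        prompt = template.format(a=a, b=b)
        f.write(json.dumps({"prompt": prompt, "answer": ask_model(prompt)}) + "\n")
```

Scaled up to lakhs of prompts, a dataset like this is exactly the raw material a distillation attack needs.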

How was the attack revealed?

This suspicious activity was detected by Google’s internal security team, the Google Threat Intelligence Group, which constantly monitors digital threats. The team used tools such as behavior analytics, automated classifiers and anomaly detection. When thousands of prompts began arriving from certain accounts in an unusual pattern, the system flagged them. After investigation, the suspect accounts were blocked and additional security measures were put in place to make it harder to extract sensitive information through repeated questioning.
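Google has not published how its classifiers work, so the following is only a guess at the simplest shape such detection can take: flag any account whose prompt volume is a statistical outlier compared with the rest of the user base. It uses a robust median-based score, since one extreme account would otherwise distort a plain average.

```python
# Illustrative anomaly detection over per-account prompt volumes. This is an
# assumed, simplified approach, not Google's actual detection pipeline.
from statistics import median

# Hypothetical prompts-per-hour counts for a handful of accounts.
prompt_counts = {
    "user_a": 12, "user_b": 8, "user_c": 15, "user_d": 9,
    "user_e": 11, "user_f": 14, "suspect_x": 4200,
}

def flag_outliers(counts: dict[str, int], threshold: float = 10.0) -> list[str]:
    """Flag accounts far above the typical volume, using a median-based score
    so one extreme account cannot hide by inflating a plain average."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # robust measure of spread
    if mad == 0:
        return []
    return [acct for acct, n in counts.items() if (n - med) / mad > threshold]

print(flag_outliers(prompt_counts))  # -> ['suspect_x'] for these toy numbers
```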

Not only big companies but startups too are in danger

Experts point out that AI models are not closed systems like traditional software: they are interactive and constantly respond to users. This openness makes them easy targets for reverse-engineering. Google has warned that not only big companies but also small firms and startups are at risk. Many companies build their custom AI models on sensitive business data, and if those models are continuously probed with similar questions, that business information may leak.
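One common mitigation implied here is to cap how much any single account can query a model, making bulk extraction slow and expensive. The sketch below is a generic, assumed design (a rolling per-account query budget), not a documented feature of any vendor’s platform.

```python
# Hypothetical defence sketch: a rolling per-account query budget that makes
# bulk extraction costly. Parameters and design are assumptions, not any
# specific AI platform's documented feature.
import time
from collections import defaultdict

class QueryBudget:
    """Reject an account's requests once it exceeds max_queries per window."""

    def __init__(self, max_queries: int = 500, window_seconds: int = 86400):
        self.max_queries = max_queries
        self.window = window_seconds
        self.log: dict[str, list[float]] = defaultdict(list)

    def allow(self, account: str) -> bool:
        now = time.monotonic()
        # Drop timestamps that have fallen out of the rolling window.
        self.log[account] = [t for t in self.log[account] if now - t < self.window]
        if len(self.log[account]) >= self.max_queries:
            return False  # over budget: block, or escalate for human review
        self.log[account].append(now)
        return True

guard = QueryBudget(max_queries=3, window_seconds=60)  # tiny limits for the demo
for i in range(5):
    print(f"request {i}: {'allowed' if guard.allow('suspect_x') else 'blocked'}")
```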


The threat is not limited to Google

This is not just Google’s problem. Earlier, OpenAI also alleged that some companies were trying to copy its models using a similar distillation technique. Clearly, the entire AI sector is grappling with this new kind of cyber threat. As AI tools become part of everyday life and business, it is no longer just data that needs protecting but “intelligence” itself; in other words, even the thinking of an AI must now be safeguarded.
