Be careful if you use Gemini, your personal information is at risk from Google’s AI.

If You Use Gemini Be Warned Google's Ai Poses A Threat To Your Personal Information

AI Security: if you Google Of AI Assistant Gemini If you are using it, then now is the time to be alert. A recent security warning has raised serious questions regarding the privacy of users. To make Gemini smarter, Google had added features like Calendar access to make it easier to manage meetings and schedules. But now this facility seems to be becoming a new path for cyber criminals.

How did Calendar access become a risk?

After getting permission for Calendar, Gemini started giving users complete information about their appointments, free time and upcoming events. At first glance this feature seems quite beneficial, because there is no need to open the calendar again and again. But cyber security experts say that when an AI gets access to such deep personal information, the dangers also increase proportionately. Gemini’s ability to understand language and context becomes possible for abuse.

Hackers adopted a new method

Researchers at cyber security firm Miggo Security revealed that the hackers were using a special technique called Indirect Prompt Injection. In this method, a simple Google Calendar invite is sent to the user. This invite looks absolutely normal, but in its description there are hidden instructions written which are easily understood by AI rather than humans.

How your personal information can be leaked

When a user asks Gemini if ​​they are free on a particular day or time, the AI ​​scans the entire calendar. During this time, he also reaches that suspicious invite, which contains hidden instructions. Gemini then automatically summarizes the meetings and events and creates a new calendar event. From the outside this process seems absolutely normal, but during this there is a risk of the user’s personal information being silently exposed.

Also read: Elon Musk’s AI became a problem, 30 lakh obscene pictures in 11 days, created a stir after seeing content related to children

Google admitted the mistake, the flaw was fixed

After receiving information about this serious flaw, Miggo Security alerted Google’s security team. After investigation, Google accepted this weakness and fixed it. Experts believe that this case is a big warning, because now the threats related to AI are not limited to just code or software, but have reached the everyday life of common users.

What should users do

When using AI tools, it is important that users keep an eye on their permission settings and be wary of unknown invites or links. Technology provides convenience, but a little carelessness can prove costly.

Comments are closed.