Turns out it’s super easy to hack Gemini and hijack your smart home

How do you hack a whole house in 2025? You say please to Gemini, of course.

0comments
Google Gemini AI assistant
Modern AI models have done away with complex lines of code, and instead respond to user commands made in natural language. However, this means that it has become a lot easier to trick an AI model into executing malicious inputs, including controlling someone’s smart home.

Researchers brought Google’s attention to this matter back in February. The team was able to embed prompts in a Google Calendar invite, which led to Gemini carrying out actions that the original user had not asked for.

Gemini began to turn off the lights and fire up a boiler, just because the user had said thanks. Of course, a lot more dangerous actions could have been taken in a smart home had the hackers not been using this flaw to demonstrate vulnerabilities.

Would you trust AI with your home?



It’s a lot easier to “hack” generative AI models. You don’t need to use any advanced code, you just need to say please and thank you. Even ChatGPT’s base instructions from OpenAI are in simple English, not code.

The team of researchers made use of exactly this, telling Gemini that it must execute a certain task whenever the user said a specific phrase. Gemini, as expected, took the instructions to heart. It’s highly reminiscent of the earlier “jailbreaks” of ChatGPT, where someone would pretend to be from OpenAI, leading to GPT completely abandoning the restrictions placed on it.



Google has since patched these vulnerabilities, and claims that this scenario required some preparation that wouldn’t be possible in real-world situations.

The entire ordeal serves as a cautionary tale for the future that we are all headed into. Generative AI models have already made their way into our homes, our devices, our cars, various customer service roles, and even into our healthcare systems.

This is an entirely new sort of beast, which comes with its own challenges. Companies providing these AI models need to be extra careful with the security, lest a car be hijacked by someone who was polite to the AI piloting it.

For now, I think that current AI models are still a bit too rudimentary for me to trust them with my house. I’ll stick to switching off the lights myself, thanks.
Loading Comments...

Latest Discussions

Recommended Stories

FCC OKs Cingular\'s purchase of AT&T Wireless