At the ongoing Mobile World Congress (MWC) 2025 in Barcelona, Google showed off some new capabilities for its AI assistant Gemini. Screenshare allows you to share your smartphone screen with the assistant and ask questions based on the context of what it sees.
Google demonstrated the feature with a video that shows a user sharing their screen with Gemini and asking it for outfit ideas based on the image of a pair of jeans on a webpage. This isn't entirely new ground for Google. Lens can already identify things based on what you highlight on the screen, but pairing that capability with an AI voice assistant you can hold natural conversations with seems like the logical step forward.
It cuts out the tedium and automates much of the work for you. It also brings Gemini one step closer to ChatGPT, which is multimodal and supports inputs ranging from images to video. ChatGPT can also perform real-time video analysis in Advanced Voice Mode, and Google has taken notice.
The second new feature, shown off as part of Gemini Live, allows the assistant to process and analyze live video through the phone's camera.
The example video shows a user pointing the camera at a vase and asking Gemini for color suggestions.
Both features will roll out soon to Google One AI Premium subscribers. The plan also includes access to Gemini Advanced and 2TB of cloud storage.