<h1>Google Begins Rolling Out Screen-Sharing for Gemini Live</h1>
<p>Google is finally delivering on a promise it made at MWC 2025. The tech giant has started rolling out screen-sharing capabilities for Gemini Live, its AI-powered assistant, months after confirming the feature under the codename &#8220;Project Astra.&#8221; Some users are already spotting it in action.</p>
<h2>Gemini Live Can Now See Your Screen</h2>
<p>A Reddit user with a Xiaomi phone and a Gemini Advanced subscription recently demonstrated the new functionality in a video. The clip shows how users can share their screens with Gemini Live and receive real-time AI assistance based on what&#8217;s displayed.</p>
<p>The ability to analyze on-screen content adds a new layer of interaction between users and AI. Instead of simply answering voice or text queries, Gemini Live now visually processes the information, leading to deeper and more dynamic conversations.</p>
<p><a href="https://www.theibulletin.com/wp-content/uploads/2025/03/google-gemini-live-screen-sharing.jpg"><img class="aligncenter size-full wp-image-56960" src="https://www.theibulletin.com/wp-content/uploads/2025/03/google-gemini-live-screen-sharing.jpg" alt="google-gemini-live-screen-sharing" width="1052" height="808" /></a></p>
<h2>What Can Gemini Live Do with Screen Sharing?</h2>
<p>The feature enables several new kinds of interaction:</p>
<ul data-spread="false">
<li>Ask Gemini Live to summarize articles, webpages, or documents displayed on the screen.</li>
<li>Get explanations for complex terms or numerical data, such as GDP statistics on Wikipedia.</li>
<li>Have the AI read aloud from the screen or even turn the text into a melody.</li>
<li>Translate or rephrase content into another language without switching apps.</li>
</ul>
<p>This integration makes Gemini Live a more capable assistant: because it can &#8220;see&#8221; what&#8217;s happening on the device, it can interpret user intent with greater accuracy.</p>
<h2>A New Era of AI Assistance</h2>
<p>Gemini Live doesn&#8217;t just answer questions; it keeps track of context over longer interactions. If a user opens Chrome, navigates to a webpage, and then activates Gemini Live, they can ask follow-up questions about the content without needing to re-explain what&#8217;s on their screen.</p>
<p>For instance, a user reading a Wikipedia entry on economic indicators could ask for a quick summary of GDP, followed by an explanation of inflation, and then a comparison between different countries, all without restating the topic. This persistent contextual awareness sets Gemini Live apart from traditional voice assistants.</p>
<h2>Exclusive to Gemini Advanced Subscribers</h2>
<p>While the feature has started appearing for some users, it&#8217;s not available to everyone just yet. Google previously stated that screen-sharing capabilities would be exclusive to Gemini Advanced subscribers, a tier priced at $19.99 per month. The rollout appears to be gradual, with availability varying by region and device.</p>
<p>This latest development underscores Google&#8217;s push to integrate AI deeper into everyday smartphone interactions. As AI assistants become more sophisticated, the line between human and machine collaboration continues to blur.</p>