What are Google’s new AI features? Can Google Lens diagnose diseases now?

Google has pushed a new update that enhances the AI-based capabilities of its various apps. One of the biggest new features is that the AI-enhanced Google Lens can now identify medical skin issues. Google’s recent additions to its AI portfolio, particularly in Lens and Bard, reaffirm the tech giant’s ongoing commitment to generative AI. With advanced features like skin condition identification and Live View, Google Lens is emerging as a potent tool capable of assisting with a wide array of daily activities, from shopping to health awareness. The company announced the new features in a Keyword blog post.

Identify skin conditions with Google Lens

Users can capture or upload an image through the Lens app, which the technology then matches against visually similar examples of a range of skin ailments. This streamlines the process of recognising skin complications, offering users potential matches for conditions such as moles, sun spots, and rashes, as well as other issues like lip anomalies, nail streaks, and hair loss.
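
For readers curious what “finding visual similarities” involves in general terms, the sketch below shows a generic embedding-and-compare approach: a photo is converted into a numeric feature vector with an off-the-shelf torchvision model and ranked against a small local library of labelled reference images by cosine similarity. The folder layout (reference_images/<condition>/...) and the file user_photo.jpg are made-up placeholders, and this is purely an illustration of the general technique, not Google Lens’s actual pipeline.

```python
# Illustrative sketch only: a generic embedding-based "visual similarity" lookup,
# NOT Google Lens's pipeline. Assumes a hypothetical local folder of labelled
# reference images (reference_images/<condition_name>/*.jpg) as stand-in data.
from pathlib import Path

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained backbone used purely as a feature extractor.
weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep the embedding
backbone.eval()

preprocess = weights.transforms()  # resize/normalise as the backbone expects


def embed(path: Path) -> torch.Tensor:
    """Return a single embedding vector for an image file."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(image).squeeze(0)


# Embed the reference library once, keyed by file path.
reference = {p: embed(p) for p in Path("reference_images").rglob("*.jpg")}


def closest_matches(query_path: str, top_k: int = 3):
    """Rank reference images by cosine similarity to the query photo."""
    query = embed(Path(query_path))
    scored = [
        (F.cosine_similarity(query, vec, dim=0).item(), p.parent.name, p.name)
        for p, vec in reference.items()
    ]
    return sorted(scored, reverse=True)[:top_k]


if __name__ == "__main__":
    for score, condition, filename in closest_matches("user_photo.jpg"):
        print(f"{condition:20s} {filename:30s} similarity={score:.3f}")
```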

It’s important to emphasise that Lens’s outputs serve informational purposes only and do not represent a medical diagnosis. Prior to taking any serious medical actions, users are advised to seek professional medical counsel. While not as advanced as Google’s AI-driven diagnostic app available in the European Union, Lens still offers users a basic comprehension of potential skin complications.

The system has gone through rigorous testing, recording an 84 per cent success rate in identifying different ailments. It has secured approval in Europe, although it is yet to be evaluated by the FDA. However, a common critique is its reduced accuracy in detecting issues for individuals with darker skin tones due to underrepresentation in image databases. Google is working to address this by enhancing the diversity of its image databases.

A symbiosis of Google Lens and Bard

Google has not only updated Lens’s capabilities but has also integrated it with Bard, Google’s AI chatbot. Users can insert images into conversations with Bard, which can now identify brands or provide fashion advice. This integration enriches the user experience, as Bard leverages Lens’s capabilities to interpret images and respond accurately.

Live View: Google Lens breaks new ground

Adding to the suite of advancements in Google Lens is the debut of Live View. This AI-facilitated feature allows users to identify objects and text in their real-world environment through augmented reality, overlaying information such as an object’s name, category, and typical uses directly on the display. For example, pointing the camera at real-world text can produce a live translation on the smartphone screen.
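
To make the overlay idea concrete, the sketch below is a rough offline analogue: it runs the open-source Tesseract OCR engine (via the pytesseract package) over a still image standing in for a camera frame, then draws boxes around the detected text. The filename camera_frame.jpg is a placeholder, the translation step is omitted, and none of this reflects Google’s actual Live View implementation.

```python
# Illustrative only: detect text in a still image and draw overlay boxes,
# a crude offline analogue of Live View's on-screen text highlighting.
# Not Google's implementation; assumes the Tesseract OCR binary is installed.
import pytesseract
from PIL import Image, ImageDraw

frame = Image.open("camera_frame.jpg").convert("RGB")  # stand-in for a live camera frame
data = pytesseract.image_to_data(frame, output_type=pytesseract.Output.DICT)

overlay = ImageDraw.Draw(frame)
for text, x, y, w, h, conf in zip(
    data["text"], data["left"], data["top"],
    data["width"], data["height"], data["conf"]
):
    if text.strip() and float(conf) > 60:  # keep confident word detections only
        overlay.rectangle([x, y, x + w, y + h], outline="red", width=2)

frame.save("frame_with_text_overlay.jpg")
print("Detected:", " ".join(t for t in data["text"] if t.strip()))
```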

Live View is presently available in English on a selection of Android devices, and Google plans to extend support to more languages and devices in due course. This progress could revolutionise Google Lens, making it an even more versatile tool for shopping, education, and navigation.
