Integration of artificial intelligence into Google Lens: What is known
Google has introduced advanced visual search capabilities in its Google Lens app, which now uses artificial intelligence to answer questions about what the camera sees, UAportal reports.
What's new
Users can point the camera at an object of interest, ask a question about it, and receive a detailed answer generated by a neural network from data found on the Internet. This is a significant improvement over previous versions of Lens, which mostly returned images visually similar to the object in question.
For example, by pointing the camera at a houseplant and asking, "How often should I water it?", users receive specific care recommendations for that type of plant. Lens thus delivers practical guidance, not just visually similar results.
Where the information comes from
Lens draws its answers from a combination of data across various online sources, including websites, stores, and videos. This distinguishes it from Google's experimental Search Generative Experience, where answers to queries are generated entirely by neural networks.
In addition, the new features are compatible with the Circle to Search gesture, which lets users select an object or area within an image. The updated Lens is already available in the US on iOS and Android and supports English-language queries.