Google announced today that it is introducing a new “multisearch” feature that lets users search with text and images at the same time through Google Lens, the company’s image recognition technology.
Google first teased the functionality in September 2021 at its Search On event, saying it would launch in the coming months after testing and evaluation. Multisearch begins rolling out today as a beta feature in English in the US.
To get started, open the Google app on Android or iOS, tap the Lens camera icon, then either search one of your screenshots or snap a photo of the world around you. You can then add text by swiping up and tapping the “+ Add to your search” button. According to Google, users need the latest version of the app to take advantage of the new feature.
With multisearch, you can ask a question about an object in front of you, or refine your search by brand, color, or visual attribute. The feature reportedly delivers its best results for shopping searches.
The initial beta does support searches beyond shopping, but results won’t be as accurate for every query.
Here’s how the feature works in practice. Say you find a dress you like, but not in the color it comes in. You could pull up a photo of the dress, then add the word “green” to your search to find it in the color you want.
In another scenario, you’re in the market for new furniture but want to make sure it matches the rest of your décor.
You could snap a photo of your dining set and add the query “coffee table” to find a matching table. Or say you’ve just received a new plant and aren’t sure how to properly care for it: take a picture of the plant and add the text “care instructions” to your search to learn more about it.
The new feature lends itself to the kinds of queries Google currently struggles with: searches where what you’re looking for has a visual component that’s hard to describe with words alone. By combining the image and the text into one query, Google may have a better shot at delivering relevant results.
Google says its latest advancements in artificial intelligence made the new feature possible. The company is also exploring ways that MUM, its newest AI model in Search, could improve multisearch.
MUM, or Multitask Unified Model, can analyze information in a variety of formats at the same time, including text, photos, and videos, and draw insights and connections between themes, concepts, and ideas.
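MUM itself isn’t publicly available, so as a rough sketch of the underlying idea, the snippet below uses the open-source CLIP model to embed a photo and a text refinement in the same vector space, combine them into a single query, and rank candidate results. Everything here, from the naive vector averaging to the example candidate list, is an illustrative assumption, not Google’s actual pipeline.

```python
# Illustrative sketch only: combines an image and a text refinement into one
# query using the open CLIP model, then ranks candidate results by similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def multisearch_scores(image_path: str, refinement: str, candidates: list[str]):
    """Rank candidate result descriptions against a combined image + text query."""
    image = Image.open(image_path).convert("RGB")

    with torch.no_grad():
        # Embed the photo and the text refinement into the same vector space.
        image_emb = model.get_image_features(
            **processor(images=image, return_tensors="pt")
        )
        text_emb = model.get_text_features(
            **processor(text=[refinement], return_tensors="pt", padding=True)
        )

        # Naively average the two modalities into one normalized query vector
        # (an assumption for illustration; MUM's fusion is far more sophisticated).
        query = (image_emb / image_emb.norm(dim=-1, keepdim=True)
                 + text_emb / text_emb.norm(dim=-1, keepdim=True))
        query = query / query.norm(dim=-1, keepdim=True)

        # Embed candidate results and score them by cosine similarity.
        cand_emb = model.get_text_features(
            **processor(text=candidates, return_tensors="pt", padding=True)
        )
        cand_emb = cand_emb / cand_emb.norm(dim=-1, keepdim=True)
        scores = (query @ cand_emb.T).squeeze(0)

    return sorted(zip(candidates, scores.tolist()), key=lambda p: -p[1])

# Hypothetical usage, echoing the dress example above: a photo of a yellow
# dress refined with the word "green" should rank "green dress" highest.
# multisearch_scores("dress.jpg", "green",
#                    ["green dress", "yellow dress", "coffee table"])
```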