Google announces AI-powered updates to image search, Google Maps, and Google Translate

On February 8, 2023, Google announced the conversational AI service 'Bard' along with AI-powered updates to image search, Google Maps, and Google Translate. The updates bring an enhanced Google Lens, a more immersive Google Maps, and better contextual understanding in Google Translate.

Sources:
- Google AI makes Search more visual through Lens, multisearch
- New Google Maps features including immersive view, Live View updates, electric vehicle charging tools and glanceable directions
- New features make Translate more accessible for its 1 billion users
- Google is still drip-feeding AI into search, Maps, and Translate - The Verge

At a presentation held in Paris on February 8, 2023, Google announced 'Bard', a conversational AI service built on its LaMDA technology and positioned as a rival to ChatGPT, and unveiled AI-powered updates to various Google services.

◆ Google Lens update
As of this writing, you can search by uploading photos and images to the search bar. With this announcement, Google Lens will let you search the images and videos on your screen without switching away from the site or app you are viewing. This feature will come to Android in the coming months.

In addition, multisearch, which lets you search with an image and text at the same time, is available in the United States and the United Kingdom as of this writing. The feature has now been expanded to incorporate location information: for example, AI can narrow results to show only nearby Chinese restaurants that offer vegetarian dishes. Multisearch will roll out globally in the coming months.

◆ Google Maps update
A feature called 'Immersive View' will roll out in London, Los Angeles, New York, San Francisco, and Tokyo starting February 8. Immersive View fuses billions of Street View and aerial images to create a digital model of a city, on which weather, traffic conditions, and how busy a place is can be overlaid.

To create these digital models, Google uses an advanced AI technique called Neural Radiance Fields (NeRF) to convert ordinary 2D images into 3D representations. NeRF enables realistic rendering of lighting, textures, and backgrounds.
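The core idea behind NeRF can be sketched in a few lines: a learned function maps each 3D point to a color and a volume density, and an image pixel is produced by sampling that function along a camera ray and alpha-compositing the results front to back. The sketch below is a toy illustration of that volume-rendering step, not Google's implementation; `toy_radiance_field` is a hypothetical stand-in for the trained neural network, which in a real NeRF is fitted to a set of photographs.

```python
import numpy as np

def toy_radiance_field(points):
    # Hypothetical stand-in for NeRF's trained network: given 3D sample
    # points, return an RGB color and a volume density (sigma) for each.
    # Here density is high inside a unit sphere at the origin and color
    # encodes position; a real NeRF learns both from photographs.
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 1.0, 10.0, 0.0)
    rgb = np.clip(points * 0.5 + 0.5, 0.0, 1.0)
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    # Volume rendering along one camera ray: sample points, query the
    # field, and alpha-composite the samples from front to back.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = toy_radiance_field(points)
    delta = np.diff(t, append=far)           # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)     # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color

# Render one pixel: a ray starting at z = -3, looking toward the sphere.
color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
print(color)
```

Repeating this for every pixel of a virtual camera yields a novel view of the scene, which is how a collection of flat photos becomes an explorable 3D model.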

In addition, by combining AI with augmented reality (AR), the indoor Live View feature, which overlays walking directions on the camera view to guide you through buildings, restaurants, and similar venues, is being expanded to more than 1,000 airports and train stations, including in Tokyo.

◆ Google Translate update
Google Translate now produces more contextually accurate translations, choosing phrases, idioms, and words that match your intent, when translating languages such as English, French, and Japanese.

In addition, the Google Translate app's input screen has been enlarged, making voice input and image-based translation easier to access. This redesign is available only on Android as of this writing, but will come to iOS within a few weeks.

AI-powered image translation has also been enhanced as an Android-only feature: text captured with the device's camera is translated and then blended naturally back into the original image using AR.

Posted by log1r_ut in Web Service