I tried out the various new features of the Google Pixel 3, such as 'Active Edge', which lets you operate the phone by squeezing it, Google Lens, and more.
" Pixel 3 " which was released in Japan on November 1, 2018 is equipped with "Active Edge" which can be grasped and operated by Qualcomm's chipset " Snapdragon 845 ", and Artificial Intelligence (AI) supports various functions It is a Google genuine high-end smartphone that will do it. In Pixel 3 like that, I actually tried many things what I can do.
◆ Active Edge
The grip sensor "Active Edge", introduced on the previous-generation Pixel 2, is also built into the Pixel 3.
Squeeze the hand holding the Pixel 3 tightly ...
... and you can call up the Google Assistant. You can also call it from the lock screen, so various operations can be performed just by squeezing the phone and speaking, without unlocking it or launching an app first.
In the following movie, I call up the Google Assistant with Active Edge and chat with Cinnamoroll using the "Talk with Cinnamoroll" app for Google Home.
I tried using "Active Edge" which can be held and operated by Google Pixel 3 - YouTube
◆ Android 9 Pie
The Pixel 3, being a genuine Google smartphone, ships with Android 9 Pie, the latest version of Android at the time of writing. Android 9 Pie incorporates AI technology developed in cooperation with DeepMind, the creator of the Go AI "AlphaGo", to automatically manage battery life and screen brightness.
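The article does not detail how Adaptive Brightness learns, but the general idea can be illustrated with a toy sketch. This is not Android's actual implementation: the sketch simply remembers the brightness the user prefers at each ambient-light level and nudges the stored preference toward every manual correction.

```python
# Illustrative sketch (NOT Android's real implementation) of the idea
# behind Adaptive Brightness: remember, per ambient-light bucket, the
# brightness the user prefers, and blend each manual correction into the
# stored value with an exponential moving average.

class AdaptiveBrightness:
    def __init__(self, default=0.5, learning_rate=0.3):
        self.default = default          # brightness in [0, 1]
        self.learning_rate = learning_rate
        self.preferred = {}             # lux bucket -> learned brightness

    @staticmethod
    def _bucket(lux):
        # Group ambient light into coarse buckets (dark / indoor / outdoor ...).
        for limit in (10, 100, 1000, 10000):
            if lux < limit:
                return limit
        return 100000

    def suggest(self, lux):
        return self.preferred.get(self._bucket(lux), self.default)

    def user_adjusted(self, lux, brightness):
        # Blend the new manual setting into the stored preference.
        b = self._bucket(lux)
        old = self.preferred.get(b, self.default)
        self.preferred[b] = old + self.learning_rate * (brightness - old)

ab = AdaptiveBrightness()
ab.user_adjusted(lux=50, brightness=0.9)   # user brightens the screen indoors
print(round(ab.suggest(50), 2))   # → 0.62 (moved from 0.5 toward 0.9)
print(ab.suggest(5000))           # → 0.5  (outdoor bucket unchanged)
```

The real system reportedly uses a learned model rather than a lookup table, but the principle of adapting to the user's corrections is the same.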
Also, when tapping "Android version" that can be viewed from "terminal information" several times in succession ... ...
... a colorful 'P' fills the screen.
Tapping the screen changes the color. Furthermore, if you tap the 'P' itself several times in a row ...
... a hidden paint app launches. Your drawings can't be saved, but you can paint freely.
◆ Google Lens
Google Lens is a feature that uses machine-learning image analysis to translate text captured by the camera and look up information about whatever is in view. It had been available as an app for some time, but it did not support Japanese. With the Pixel 3's launch in Japan, however, it finally gained Japanese support.
To use Google Lens, first launch the camera.
Select "Lens" from "Others".
The first time, a "Welcome to Google Lens" screen appears, so tap "Continue".
Tap "Allow" as you are asked if you want Google to take photos and videos.
This is what Google Lens looks like when it starts. White dots appear across the screen as it recognizes what the camera is seeing.
Tapping Mario's amiibo here correctly brought up the Mario amiibo in an image search.
Scanning the outer box of the Google Pixel 3 displayed results with "Google Pixel 3" already entered in the search field.
When an editorial staff member shot the meat grilling nearby, Lens splendidly recognized that it was beef. When we checked with the staff, though, we were told "that's not Kobe beef," so even Google's AI could not go so far as to identify the variety of beef.
When I pointed Google Lens at the feet of the editorial staff who were eating the meat, it recognized their pants and shoes. Tapping the shoes of the staff member on the right displayed similar items, and when I asked the person actually wearing them, they confirmed that the sandals shown on the right were indeed the same ones. When you want to know the brand of clothes or shoes someone is wearing, being able to scan them on the spot with Google Lens seems quite useful.
Also, when you scan a barcode, Lens reads it and displays the correct product name, image, and price.
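Reliable barcode lookup is possible partly because retail barcodes (EAN-13, the standard that also covers Japanese JAN codes) carry a built-in check digit, so a scanner can verify it read the code correctly before querying product data. A minimal validator looks like this:

```python
# EAN-13 (and Japanese JAN) barcodes end with a check digit chosen so that
# the weighted digit sum (weights 1,3,1,3,... from the left) is a multiple
# of 10. A scanner can reject a misread before doing any product lookup.

def ean13_is_valid(code: str) -> bool:
    """Return True if `code` is a 13-digit string with a valid check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code))
    return total % 10 == 0

print(ean13_is_valid("4901234567894"))  # → True  (check digit matches)
print(ean13_is_valid("4901234567890"))  # → False (misread: checksum fails)
```

A single misread digit changes the weighted sum, so the scanner knows to try again rather than show the wrong product.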
Pointing it at a camera, several recognition points appeared.
Lens was reading the logo engraved on the camera.
When I scanned a page of English text in the Google Pixel 3 manual ...
... a URL appeared in the recognition results.
Accessing the displayed URL led to the official page on setting up the Pixel. The AI seems to have judged this from the keywords and content of the text.
This is a card from "Monopoly Game: Cheaters Edition", which has not been translated into Japanese.
The scanned characters could be selected and copied like text. With Google Lens, you can translate, copy, and paste English text on the spot without having to type it out. Google Lens was a fun feature just to play with, pointing it at all kinds of things to see what search results came up.
◆ Portrait mode
The Pixel 3 has only a single rear camera, but it can shoot in portrait mode. For example, in the case below, with the Mario figure in the foreground in focus ...
... you can also tap to shift the focus to the squid in the background.
The following two photos are the actual results. The background blur in each is so good that it's hard to believe they were shot with a single camera.
Also, when you photograph a person's face, AI picks out a shot in which the subject is smiling and facing the camera, even if other frames are blurred. Dual-camera models with two rear lenses, such as the iPhone XS / XS Max, are on the rise, while the Pixel 3 remains a single-camera model. Even so, with portrait mode and AI support, it can be called fully capable.
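How can one lens produce such background blur? The Pixel 3 estimates depth (reportedly from dual-pixel data and machine learning) and blurs the background in software. The sketch below is a deliberately simplified, hypothetical illustration of just the final compositing step, with the depth map simply given as input rather than estimated:

```python
# Hypothetical sketch of single-camera "portrait mode" compositing:
# blur the whole image, then keep the original pixels only where the
# depth map says the pixel is at the in-focus distance. The Pixel 3's
# real pipeline (dual-pixel depth estimation, learned segmentation) is
# far more sophisticated; this shows only the principle.

def box_blur(img):
    """3x3 box blur on a 2D grid of grayscale values, clamped at edges."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = n = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += img[yy][xx]
                        n += 1
            out[y][x] = total / n
    return out

def synthetic_bokeh(img, depth, focus_depth, tolerance=0.1):
    """Keep in-focus pixels sharp; replace the rest with the blurred image."""
    blurred = box_blur(img)
    return [[img[y][x] if abs(depth[y][x] - focus_depth) <= tolerance
             else blurred[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]

# Example: a tiny 3x3 "image" whose center pixel is the in-focus subject.
img = [[100, 0, 100], [0, 100, 0], [100, 0, 100]]
depth = [[1.0, 1.0, 1.0], [1.0, 0.2, 1.0], [1.0, 1.0, 1.0]]
out = synthetic_bokeh(img, depth, focus_depth=0.2)
print(out[1][1])  # → 100  (in-focus subject pixel kept sharp)
print(out[0][0])  # → 50.0 (background pixel blurred)
```

Because tapping the screen only changes `focus_depth`, refocusing after the fact (as with the Mario and squid shots above) falls out of this design naturally.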