Google's latest futuristic tech: glasses-free 3D video calls that feel like sitting face to face with a real person! Jeff Dean: "Magic mirror, magic mirror"
Meng Chen, Ming Min | Reported from Aofei Temple
QbitAI Report | WeChat official account QbitAI
Google I/O, suspended last year because of the pandemic, is back, and this time it is being held entirely online.
At the conference, Google unveiled a technology it had been developing in secret for more than five years:
A 3D video-call technology called Project Starline, which makes the person on the other side of the screen look as if they are sitting right in front of you, with volume, depth, and shadow.
And it is glasses-free 3D: no glasses or headset required.
Even Jeff Dean, head of Google AI, was amazed, calling it a "magic mirror."
During the pandemic, the ways people communicate have been put to the test: isolation has kept scattered families from reuniting, while remote work and online education have surged.
Starline is Google's answer.
About 100 Google employees have taken part in internal testing, and they report recalling details more vividly after meetings held over Starline than after regular video conferences.
As one tester put it: "I walked away from the meeting feeling like I had actually met the other person."
Light field display
Starline's setup consists of a 65-inch light field display plus more than a dozen cameras and sensors arranged around the booth.
These sensors capture a person from multiple angles; deep learning compresses the captured data in real time, transmits it to the other side, and reconstructs it there as a 3D image for display.
This is all done in real time, not rendered after the fact.
Combined with spatial audio, people on both ends of the screen can converse naturally in real time.
Data transmission is based on WebRTC, just like ordinary video conferencing; it is the custom compression algorithm that makes two-way, real-time transmission of 3D imagery feasible.
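Google has not published Starline's transport details beyond the fact that it rides on WebRTC. Purely as an illustration of how a WebRTC data channel can carry compressed 3D frames, here is a minimal browser-side sketch; the channel label, frame format, and the reconstructAndRender function are hypothetical stand-ins, not Google's implementation:

```typescript
// Hypothetical sketch: a WebRTC data channel carrying compressed 3D frames.
// Starline's real pipeline is unpublished; names and formats here are invented.
// (Signaling — exchanging the offer/answer with the remote peer — is omitted.)
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// Unordered, no-retransmit delivery: for real-time 3D, dropping a stale
// frame beats stalling the whole stream while waiting for a resend.
const channel = pc.createDataChannel("frames-3d", {
  ordered: false,
  maxRetransmits: 0,
});
channel.binaryType = "arraybuffer";

// Sender side: push each compressed frame as a binary message.
// (Real frames would likely be chunked to stay under SCTP message size limits.)
function sendFrame(compressed: ArrayBuffer): void {
  if (channel.readyState === "open") {
    channel.send(compressed);
  }
}

// Receiver side: decode each frame and hand it to the 3D renderer.
declare function reconstructAndRender(frame: ArrayBuffer): void; // hypothetical
channel.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  reconstructAndRender(event.data);
};
```

The unordered, zero-retransmit configuration reflects the usual real-time media trade-off: latency matters more than completeness, so stale frames are simply discarded.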
Although the two booths used in the demonstration were connected directly by fiber, Google engineers insisted that a standard office network would do the job.
A reporter from Wired magazine tried it in person. He said the image is presented at true-to-life scale, as if the other person were sitting inside a transparent box.
But shift too far along the sofa and the sense of volume disappears; it becomes just watching an ordinary big-screen TV.
The demo also included projecting a web page onto the light field display so that the two participants could collaborate on it in real time.
Development History
In the past few years, Google has put a lot of effort into bringing people closer together.
Google Glass and standalone VR headsets can, to an extent, place a person's image before your eyes, but the effect they achieve is very limited.
Neither Google Glass nor the Daydream VR headset was a commercial success, and both have been discontinued.
Starline has become a new research direction.
Since there is nothing extra to wear, the technical components can stay hidden, and people can focus on the person they are talking to.
But when it will reach ordinary households remains a big question.
Google hasn't said how much Starline will cost, but it won't be cheap.
For now, Starline is being tested only at small scale inside Google.
Google plans to run trials with a small number of partner companies this year, mainly in cloud services, telemedicine, and media, but declined to name them.
Some of the underlying technology, such as enhanced lighting and shadows, could soon make its way into ordinary video conferencing.
Other I/O highlights
Beyond the eye-catching "magic mirror," what else was announced at this year's Google I/O?
This year, Google officially unveiled its fourth-generation TPU, which will run in Google's data centers.
A fourth-generation TPU chip delivers twice the compute of its third-generation predecessor, and a pod (a cluster of chips) exceeds 1 exaflop, that is, 10^18 operations per second, roughly ten times the previous generation's pod.
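As a quick sanity check on those headline numbers, the arithmetic below assumes 4,096 chips per v4 pod; that count is Google's published figure for TPU v4, not something stated in this article:

\[
1\ \text{exaflop} = 10^{18}\ \text{FLOP/s},
\qquad
\frac{10^{18}\ \text{FLOP/s}}{4096\ \text{chips}} \approx 2.4 \times 10^{14}\ \text{FLOP/s} \approx 244\ \text{TFLOP/s per chip},
\]

which is consistent with a per-chip rating in the low hundreds of teraflops.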
AI was undoubtedly the highlight of this year's conference, and Google introduced two new models.
LaMDA is a dialogue model based on the Transformer architecture. It can "understand" open-ended human prompts and respond fluently while staying logically and factually consistent.
LaMDA is still experimental and will be opened to third-party testers in the near future. Google says it will eventually power Google Search, Google Assistant, and other products.
Google also released a brand-new model: MUM, the Multitask Unified Model.
Built for search, MUM can process text, images, video, and other information simultaneously and distill them into genuinely useful answers.
For example, when asked, "I have successfully climbed Mount Adams and want to climb Mount Fuji next year. What should I prepare?"
MUM can draw on photos, videos, route maps, and other material people have shared, combined with the local climate and terrain, to lay out a sensible climbing plan. Like LaMDA, it is still experimental.
Google also unveiled, for the first time, its Quantum AI campus in Santa Barbara, California, which houses a quantum data center, quantum hardware research labs, and Google's own quantum processor chip fabrication facility.
Google says it is pursuing an ambitious plan: building an error-corrected quantum computer out of one million physical qubits.
That would be a huge leap, considering today's quantum computers have fewer than 100 qubits.
Finally, Android 12 was, as always, a fixture of the event.
Android 12 introduces a new design language, Material You; Google's pitch is that "you" should shape what the operating system looks like.
In the new interface, users can freely define the system's whole color palette, not just individual accent colors.
Google also rewrote parts of the underlying interaction logic to make the system smoother and extend battery life.
Another focus of Android 12 is privacy. A new unified privacy dashboard makes all privacy settings clear at a glance.
In addition, whenever an app uses the camera or microphone, an indicator appears in the status bar, and a new global toggle can cut off camera or mic access entirely.
Android 12 also introduces the concept of a "Private Compute Core": data produced by on-device AI features is kept in a dedicated, isolated space that outside parties cannot access.
Android 12 also plans to add a digital car key feature, and Google has already begun working with BMW on it.
Google further announced that it will rebuild Wear OS together with Samsung and the Google-acquired Fitbit.
Google I/O online participation address:
https://events.google.com/io
Reference links:
[1] https://www.wired.com/story/google-project-starline/
[2] https://blog.google/technology/developers/io21-helpful-google/
- End -
This article is original content from [QbitAI], a contracted account under the NetEase News / NetEase Hao featured content incentive program. Unauthorized reproduction is prohibited.