Tsinghua's Jittor gets a major update: it now supports the popular differentiable rendering, and on many CV tasks it is faster than PyTorch
Xiaoxiao, reporting from Aofei Temple
QbitAI report | Public account QbitAI
Want to study differentiable rendering, but worried about finding a suitable framework?
Now, an official deep learning framework that supports differentiable rendering is here:
Jittor, the deep learning framework self-developed at Tsinghua University, has added a differentiable rendering library in its latest update.
Differentiable rendering is a hot area in computer graphics: the CVPR 2020 Best Paper Award went to work on differentiable rendering (Jittor has optimized the related open-source code).
Of course, as a deep learning framework focused on computer graphics, Jittor's update also keeps up with the trend, adding the latest modules such as Vision Transformer, with performance optimizations that outpace frameworks such as PyTorch.
Let's take a look.
Differentiable rendering, a powerful tool for image reconstruction
What exactly is rendering?
Simply put, "rendering" usually refers to the process of converting a 3D scene into a 2D image.
This is easy for the human eye: the real world is full of natural light, and from the light reflected in every direction the eye can perceive the depth and shape of objects.
A 3D scene inside a computer, however, has none of the rich lighting of the real world, so a naively generated 2D image lacks these cues and its shapes are prone to errors. Why not simply simulate all the light coming from every direction in the computer?
Because the amount of computation would be far too large.
Therefore, generating 2D images that are faster to produce and closer to what the human eye sees, that is, making computer-generated images better, is what makes "rendering" an important research field in graphics; it is widely used in areas such as animated film production.
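At its core, rendering maps 3D coordinates to 2D image coordinates. As a minimal sketch (a pinhole-camera model with an assumed focal length, not Jittor's renderer), perspective projection looks like this:

```python
# Toy illustration of the geometric core of rendering: projecting a 3D
# point onto a 2D image plane with a simple pinhole-camera model.

def project(point3d, f=1.0):
    """Perspective projection: (x, y, z) -> (f*x/z, f*y/z)."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
```

A full renderer adds lighting, materials, and visibility on top of this projection, which is where the heavy computation comes from.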
So, what about differentiable rendering?
This is something like the "reverse operation" of rendering: recovering the 3D scene information, including geometry, lighting, materials, and viewpoint, from 2D images.
Generating 3D scenes with deep learning also requires gradient-descent optimization, and that is exactly where differentiable rendering comes in: the renderer must be differentiable so that gradients of the 2D image loss can flow back to the 3D scene parameters.
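The article does not show Jittor's rendering API, so here is a framework-agnostic toy in NumPy that captures the idea: a "renderer" draws a soft-edged disk, and gradient descent recovers the disk's radius from a target image. The sigmoid edge and all parameter values are assumptions for illustration, not Jittor's implementation.

```python
import numpy as np

SIZE, SHARP = 32, 2.0  # image size and softness of the rendered edge

def render_silhouette(radius):
    """Differentiably 'render' a centered disk as a soft silhouette image.

    The sigmoid edge stands in for a real renderer: it makes every pixel a
    smooth function of the shape parameter, so gradients exist.
    """
    ys, xs = np.mgrid[0:SIZE, 0:SIZE].astype(float)
    d = np.sqrt((xs - SIZE / 2) ** 2 + (ys - SIZE / 2) ** 2)
    return 1.0 / (1.0 + np.exp(SHARP * (d - radius)))

target = render_silhouette(10.0)   # the "photo" we want to explain

radius, lr = 4.0, 0.02             # initial guess for the shape parameter
for _ in range(200):
    img = render_silhouette(radius)
    # chain rule for loss = sum((img - target)**2):
    dimg_drad = SHARP * img * (1.0 - img)          # d(pixel)/d(radius)
    grad = np.sum(2.0 * (img - target) * dimg_drad)
    radius -= lr * grad            # gradient descent on the 3D parameter

print(round(radius, 1))            # the radius is recovered: 10.0
```

In a real pipeline the scalar radius is replaced by mesh vertices, materials, and lighting, and the renderer itself provides the gradients, but the optimization loop has the same shape.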
In graphics, differentiable rendering is still quite a new direction, and so far only a few deep learning frameworks provide a dedicated library to support work on it.
After releasing its instance segmentation model library and 3D point cloud model library, Jittor has now officially released a differentiable rendering library, which supports loading and saving OBJ files and rendering triangular mesh models.
In addition, the library has two mainstream differentiable renderers built in, supports rendering with multiple materials, and runs 1.49x to 13.04x faster than PyTorch.
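For readers unfamiliar with the OBJ format mentioned above, here is a minimal standalone parser for its `v` (vertex) and `f` (face) records. This is a sketch of the file format only, not Jittor's own loader, and the sample mesh is made up for illustration:

```python
# Minimal Wavefront OBJ triangle-mesh parser (format illustration only).

OBJ_TEXT = """\
# a single triangle
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
"""

def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith('#'):
            continue                              # skip blanks and comments
        if parts[0] == 'v':                       # vertex position: x y z
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == 'f':                     # face: 1-based indices,
            idx = [int(p.split('/')[0]) - 1       # possibly "v/vt/vn" triplets
                   for p in parts[1:]]
            faces.append(tuple(idx))
    return vertices, faces

verts, faces = parse_obj(OBJ_TEXT)
print(len(verts), len(faces))   # 3 1
```

Note the indices in `f` records are 1-based in the file and are converted to 0-based here, a common source of off-by-one bugs when handling meshes.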
Of course, this Jittor update brings more surprises than that.
Good news for computer vision practitioners: training speed beats PyTorch
After reaching state-of-the-art performance in NLP, the Transformer has moved into the image domain, and Vision Transformer has now achieved top results in visual classification.
Jittor has now reproduced Vision Transformer, with training 20% faster than PyTorch.
This update also brings an accelerated reproduction of YOLOv3, with training speed improved by 11% over PyTorch.
MobileNet, which Jittor already supported, has also had its training and inference speed comprehensively improved, by 10% to 50% depending on image size and batch size.
That really is good news for visual classification practitioners.
Which deep learning framework should I choose for graphics?
Among the traditional mainstream frameworks, TensorFlow and PyTorch focus more on ease of use than on matching Caffe's speed.
Compared with TensorFlow, PyTorch is built at a higher level of abstraction; it is more user-friendly, but trains more slowly.
In addition, none of these deep learning frameworks targets graphics the way Jittor does, so they cannot keep up promptly with every new direction in the field, whether rendering or geometry processing.
Caffe's author Jia Yangqing also noted on Zhihu that Jittor focuses more on computational graph optimization and just-in-time (JIT) compilation.
In other words, in training speed and ease of use Jittor has an edge over PyTorch, and its interface mimics PyTorch's so that users can adapt to the new framework faster.
So how does this differentiable rendering library compare with Yuanming Hu's Taichi?
According to Liang Dun, one of the developers, the two broadly belong to different fields.
Taichi focuses on differentiable physical simulation, while Jittor has now added a differentiable rendering library.
Within rendering, Taichi does include a simple differentiable rendering component, which currently performs basic rendering by physically simulating light refraction.
In other words, rendering transforms between a 3D model and an image, while physical simulation transforms between a 3D model and forces.
If you want a systematic entry into CV, Jittor would be a good deep learning framework to start with.
About the team
The Jittor development team comes entirely from the Graphics Lab of the Department of Computer Science at Tsinghua University, led by Professor Hu Shimin of the same department.
The main developers are doctoral students from the lab: Liang Dun, Yang Guoye, Yang Guowei, Zhou Wenyang...
Liang Dun sees this Jittor upgrade as both innovative and forward-looking, with differentiable rendering an increasingly hot research field.
Vision Transformer training on Jittor is also faster than on many mainstream international platforms.
Students who are interested can update or install Jittor~
Jittor project address:
https://github.com/Jittor/jittor
Reference link:
https://mp.weixin.qq.com/s/MpxW82hqW7cgPL6IgncwRA
-over-
This article is original content from QbitAI, a signed account of NetEase News and NetEase's special content incentive plan. Unauthorized reproduction is prohibited.
The QbitAI annual intelligent business conference has kicked off, and the big names are lined up!
On December 16, Dr. Kai-Fu Lee, Academician Jianrong Tan, Professor Tang Jie of Tsinghua University, and guests from well-known AI companies such as Xiaomi, Meituan, Baidu, Huawei, iQiyi, XiaoIce, AsiaInfo, Inspur, Ronglian, Pengsi, Horizon Robotics, and G7 will gather at the MEET2021 conference. We look forward to friends interested in AI signing up to explore the development path of the intelligent industry under the new situation.
QbitAI · Signed Toutiao author
Tracking new trends in AI technology and products
One-click triple: "Share", "Like", and "Watching"
See advances in science and technology every day~