New PyTorch graph neural network library is up to 15 times faster than DGL: LeCun praised it, 2,000 stars on GitHub
Fang Lizi from Aofei Temple
Produced by Quantum Bit | Public Account QbitAI
“CNN is old, GNN should be established!”
When scientists discovered that graph neural networks (GNNs) can handle non-Euclidean data that traditional CNNs cannot, they found the key to many problems that deep learning could not solve before.
Today, there is a graph network PyTorch library that has earned more than 2,000 stars on GitHub and was even shared by Yann LeCun, the father of CNNs:
It is called PyTorch Geometric, PyG for short, and it brings together code implementations from 26 graph network papers.
The library is also very fast. Compared with DGL, another graph network library, PyG can be up to 15 times faster.
A comprehensive library
If you want to work with irregular data, use PyG, whether it is graphs, point clouds, or manifolds.
△ Right: irregular data in non-Euclidean space
This is a rich library: PyTorch implementations of many models, various useful transformations, and a large number of common benchmark datasets.
Speaking of implementations, here you can find fast implementations of (at least) 26 graph network papers presented at major conferences between 2017 and 2019, including Graph Convolutional Networks (GCN) by Kipf et al. and Graph Attention Networks (GAT) from Bengio's lab.
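For intuition about what a GCN layer does, here is a framework-free NumPy sketch of the Kipf & Welling propagation rule, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W). This is an illustrative toy, not PyG's actual implementation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: [N, N] adjacency matrix, H: [N, F_in] node features,
    W: [F_in, F_out] weight matrix (would be learned in practice).
    """
    N = A.shape[0]
    A_hat = A + np.eye(N)                   # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees (including self-loop)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Tiny 3-node path graph 0-1-2, 2-d features mapped to 4-d
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4)
```

The normalization by D^-1/2 on both sides keeps feature magnitudes stable regardless of node degree, which is the key trick of the GCN paper.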
How fast is it? The two authors of PyG ran a benchmark on an NVIDIA GTX 1080Ti against DGL, another graph network library. On all four datasets tested, PyG was faster than DGL. The gap was largest when running the GAT model on the Cora dataset: 200 epochs took DGL 33.4 seconds, while PyG needed only 2.2 seconds, roughly 15 times faster.
The implementation of each algorithm supports CPU and GPU computing.
How to use it
The authors of the library are two young researchers from the Technical University of Dortmund in Germany.
△ One of the authors
They say that with PyG, building graph networks is a breeze.
You see, implementing an edge convolution layer is just like this:
import torch
from torch.nn import Sequential as Seq, Linear as Lin, ReLU
from torch_geometric.nn import MessagePassing

class EdgeConv(MessagePassing):
    def __init__(self, F_in, F_out):
        super(EdgeConv, self).__init__()
        self.mlp = Seq(Lin(2 * F_in, F_out), ReLU(), Lin(F_out, F_out))

    def forward(self, x, edge_index):
        # x has shape [N, F_in]
        # edge_index has shape [2, E]
        return self.propagate(aggr='max', edge_index=edge_index, x=x)  # shape [N, F_out]

    def message(self, x_i, x_j):
        # x_i has shape [E, F_in]
        # x_j has shape [E, F_in]
        edge_features = torch.cat([x_i, x_j - x_i], dim=1)  # shape [E, 2 * F_in]
        return self.mlp(edge_features)  # shape [E, F_out]
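To see what the layer above computes without installing PyG, here is a framework-free NumPy sketch of the same max-aggregation message passing. The fixed random matrix `W` stands in for the learned MLP; everything else mirrors the shapes in the PyG snippet:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))           # stand-in for the MLP: [2*F_in, F_out]

def edge_conv(x, edge_index):
    """Max-aggregation message passing, mirroring EdgeConv's shapes."""
    src, dst = edge_index                  # each of shape [E]
    x_i, x_j = x[dst], x[src]              # per-edge target / source features, [E, F_in]
    msgs = np.concatenate([x_i, x_j - x_i], axis=1) @ W   # messages, [E, F_out]
    out = np.full((x.shape[0], W.shape[1]), -np.inf)
    for e, d in enumerate(dst):            # max over incoming messages per node
        out[d] = np.maximum(out[d], msgs[e])
    out[np.isinf(out)] = 0.0               # nodes that receive no messages
    return out

x = rng.standard_normal((3, 2))            # N=3 nodes, F_in=2
edge_index = np.array([[0, 1, 1, 2],       # source nodes
                       [1, 0, 2, 1]])      # target nodes
print(edge_conv(x, edge_index).shape)      # (3, 3)
```

PyG's `propagate` does the same gather-message-aggregate sequence, but with vectorized scatter operations on the GPU instead of the Python loop here.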
Before installing, make sure you have at least PyTorch 1.0.0, and that cuda/bin is on $PATH and cuda/include is on $CPATH:
$ python -c "import torch; print(torch.__version__)"
>>> 1.0.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
Then install the required packages via pip.
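At the time of writing, the project README listed roughly this install sequence (the extension packages compile against your local CUDA toolchain; check the repository for current instructions, as package names and steps may have changed):

```shell
pip install --upgrade torch-scatter
pip install --upgrade torch-sparse
pip install --upgrade torch-cluster
pip install --upgrade torch-spline-conv
pip install torch-geometric
```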
PyG project portal:
https://github.com/rusty1s/pytorch_geometric
PyG homepage portal:
https://rusty1s.github.io/pytorch_geometric/build/html/index.html
PyG paper portal:
https://arxiv.org/pdf/1903.02428.pdf
-over-