This book guides readers through implementing MatrixSlow, a computational-graph-based deep learning framework (in effect a simplified version of PyTorch, TensorFlow, or Caffe), using only native Python and the NumPy linear algebra library. The book is divided into three parts. The first part, on principles, implements the core infrastructure of the MatrixSlow framework and uses it to explain the concepts and principles of machine learning and deep learning, such as models, computational graphs, training, and gradient descent together with its many variants. The second part, on models, introduces a variety of representative models, including logistic regression, multi-layer fully connected neural networks, factorization machines, Wide & Deep, DeepFM, recurrent neural networks, and convolutional neural networks. Besides covering the principles and structures of these models and the connections between them, this part also uses the MatrixSlow framework to build and train them to solve practical problems. The third part, on engineering, discusses engineering issues related to deep learning frameworks, covering training and evaluation; model saving, import, and service deployment; distributed training; and more. A minimal sketch of the computational-graph idea appears after the table of contents below.

Table of Contents

Part I Principles

Chapter 1 Machine Learning and Models
1.1 Models
1.2 Parameters and Training
1.3 Loss Function
1.4 Training of Computation Graphs
1.5 Summary

Chapter 2 Computation Graphs
2.1 What Is a Computation Graph
2.2 Forward Propagation
2.3 Function Optimization and Gradient Descent
2.4 Chain Rule and Back Propagation
2.5 Executing Gradient Descent on Computation Graphs
2.6 Node Class and Its Subclasses
2.7 Building and Training ADALINE with Computation Graphs
2.8 Summary

Chapter 3 Optimizers
3.1 Abstract Implementation of the Optimization Process
3.2 BGD, SGD, and MBGD
3.3 Gradient Descent Optimizer
3.4 Limitations of Naive Gradient Descent
3.5 Momentum Optimizer
3.6 AdaGrad Optimizer
3.7 RMSProp Optimizer
3.8 Adam Optimizer
3.9 Summary

Part II Models

Chapter 4 Logistic Regression
4.1 Logarithmic Loss Function
4.2 Logistic Function
4.3 Binary Logistic Regression
4.4 Multi-Class Logistic Regression
4.5 Cross Entropy
4.6 Example: Iris
4.7 Summary

Chapter 5 Neural Networks
5.1 Neurons and Activation Functions
5.2 Neural Networks
5.3 Multi-Layer Fully Connected Neural Networks
5.4 The Significance of Multiple Fully Connected Layers
5.5 Example: Iris
5.6 Example: Handwritten Digit Recognition
5.7 Summary

Chapter 6 Non-Fully Connected Neural Networks
6.1 Logistic Regression with Quadratic Terms
6.2 Factorization Machine
6.3 Wide & Deep
6.4 DeepFM
6.5 Example: Titanic Survivors
6.6 Summary

Chapter 7 Recurrent Neural Networks
7.1 RNN Structure
7.2 RNN Output
7.3 Example: Sine Wave and Square Wave
7.4 Variable-Length Sequences
7.5 Example: Word Recognition with a 3D Electromagnetic Articulograph
7.6 Summary

Chapter 8 Convolutional Neural Networks
8.1 Mondrian and Monet
8.2 Filters
8.3 Trainable Filters
8.4 Convolutional Layer
8.5 Pooling Layer
8.6 CNN Structure
8.7 Example: Handwritten Digit Recognition
8.8 Summary

Part III Engineering

Chapter 9 Training and Evaluation
9.1 Training and Trainer

Chapter 10 Model Saving, Import, and Service Deployment

Chapter 11 Distributed Training
11.1 Principles of Distributed Training
11.2 Architecture Based on a Parameter Server
11.3 Principles of Ring AllReduce
11.4 Implementation of the Ring AllReduce Architecture
11.5 Distributed Training Performance Evaluation
11.6 Summary

Chapter 12 Industrial-Grade Deep Learning Framework
12.1 Tensors
12.2 Computational Acceleration
12.3 GPU
12.4 Data Interface
12.5 Model Parallelism
12.6 Static and Dynamic Graphs
12.7 Mixed Precision Training
12.8 Graph Optimization and Compiler Optimization
12.9 Mobile and Embedded Devices
12.10 Summary
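To make the computational-graph approach described above concrete, here is a minimal illustrative sketch in Python and NumPy of graph nodes that support forward propagation and gradient back propagation via the chain rule. The class and method names (Node, Variable, Multiply, Add, forward, backward) are assumptions made for this sketch only and are not MatrixSlow's actual API.

import numpy as np

class Node:
    # A graph node: remembers its parent nodes, caches its value,
    # and accumulates the gradient of the final output with respect to it.
    def __init__(self, *parents):
        self.parents = list(parents)
        self.value = None
        self.grad = 0.0

class Variable(Node):
    # A leaf node holding an input or trainable value.
    def __init__(self, value):
        super().__init__()
        self.value = np.array(value, dtype=float)

class Multiply(Node):
    def forward(self):
        a, b = self.parents
        self.value = a.value * b.value
    def backward(self):
        # Chain rule: d(a*b)/da = b and d(a*b)/db = a.
        a, b = self.parents
        a.grad += self.grad * b.value
        b.grad += self.grad * a.value

class Add(Node):
    def forward(self):
        a, b = self.parents
        self.value = a.value + b.value
    def backward(self):
        # Addition passes the incoming gradient through unchanged.
        for p in self.parents:
            p.grad += self.grad

# Forward pass for y = w * x + b, then a backward pass.
x, w, b = Variable(2.0), Variable(3.0), Variable(1.0)
prod = Multiply(w, x)
y = Add(prod, b)
prod.forward()
y.forward()      # y.value == 7.0
y.grad = 1.0     # dy/dy
y.backward()
prod.backward()
print(y.value, w.grad, x.grad, b.grad)  # 7.0 2.0 3.0 1.0

A gradient-descent step would then update each trainable Variable, for example w.value -= learning_rate * w.grad, which is the role played by the optimizers covered in Part I of the book.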