Reading experience of "Deep Reinforcement Learning in Action"
Advantages: The book's theoretical explanations are clear, and even in the code sections it walks through the program's execution flow. Overall, it is a very good book.
Flaws: The book's structure relies heavily on nesting: later chapters fold in material from earlier ones. Compared with typical domestic books, its line of reasoning is harder to follow, not because the content is especially deep, but because the style of presentation is unfamiliar.
Mind map
Deep Reinforcement Learning in Action
- What is Reinforcement Learning
  - The computer languages of the future will focus more on goals and less on procedures specified by the programmer
  - Deep neural networks have many layers
  - Reinforcement learning is a general framework for representing and solving control tasks
  - Deep Learning
  - Reinforcement Learning
  - Common tasks such as image classification belong to supervised learning
- Markov Decision Process
  - The PyTorch deep learning framework
  - Reward system
  - Greedy policy
  - Selection policy
  - Building a network with PyTorch (see the sketch after this section)
    - Automatic differentiation
    - The neural network outputs the expected reward for each possible action
  - Value and policy functions
    - Policy function
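A minimal PyTorch sketch of the ideas in this branch: a small network built with nn.Sequential that outputs one expected reward per action, epsilon-greedy action selection, and automatic differentiation to obtain gradients. The 64/150/4 dimensions and the epsilon value are illustrative assumptions, not code from the book.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumed): a 64-dimensional state and 4 actions.
state_dim, n_actions = 64, 4

# A small feed-forward network that maps a state to one expected reward
# (Q value) per possible action.
q_net = nn.Sequential(
    nn.Linear(state_dim, 150),
    nn.ReLU(),
    nn.Linear(150, n_actions),
)

state = torch.rand(state_dim)            # placeholder state vector
q_values = q_net(state)                  # expected reward for each action

# Epsilon-greedy selection: explore with probability epsilon, otherwise
# take the action with the highest predicted value.
epsilon = 0.1
if torch.rand(1).item() < epsilon:
    action = torch.randint(n_actions, (1,)).item()
else:
    action = torch.argmax(q_values).item()

# Automatic differentiation: PyTorch records the computation graph, and
# backward() fills in gradients for every parameter of q_net.
target = torch.rand(n_actions)           # placeholder training target
loss = ((q_values - target) ** 2).mean()
loss.backward()
```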
- Deep Q-Network
  - The Q function
  - State
  - Q-learning navigation
  - The Gridworld game
  - Hyperparameters
    - Parameters that control how a machine learning algorithm is trained
  - Discount factor
    - Controls how much the agent discounts future rewards when making decisions
  - Building the network
    - Three-layer network: 64 (input layer), 150 (hidden layer), 4 (output layer)
  - The Gridworld game engine
  - Building a neural network for the Q function
    - Create the neural network model, define the loss function and learning rate, build an optimizer, and set a few parameters (see the sketch below)
    - PyTorch code implementation
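A sketch of the "create the model, define the loss and learning rate, build an optimizer" step, using the 64/150/4 layout noted above; the choice of MSE loss and the hyperparameter values are assumptions rather than the book's exact settings.

```python
import torch
import torch.nn as nn

# Q-network mapping a Gridworld state to one value per action,
# with the 64 / 150 / 4 layer sizes listed above.
model = nn.Sequential(
    nn.Linear(64, 150),
    nn.ReLU(),
    nn.Linear(150, 4),
)

loss_fn = nn.MSELoss()                                   # loss function
learning_rate = 1e-3                                     # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Remaining training parameters (values are illustrative).
gamma = 0.9        # discount factor
epsilon = 1.0      # initial exploration rate, decayed during training
epochs = 1000      # number of training games
```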
  - Preventing catastrophic forgetting: experience replay
    - Catastrophic forgetting: very similar state-action pairs (aimed at the same goal) lead to different outcomes, so the algorithm fails to learn
    - Experience replay is a way to alleviate catastrophic forgetting, the main problem with online training algorithms (see the combined sketch after this section)
    - DQN code implementation and DQN loss plot
  - Improving stability with a target network
    - Using Q values from the target network to train the Q-network improves training stability
    - Code
      - Compared with the previous training run, training converges faster
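A combined sketch of the two stabilisation ideas above, experience replay and a target network. The buffer size, batch size, and hyperparameter values are illustrative assumptions, and the 64/150/4 network mirrors the layout listed earlier.

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(64, 150), nn.ReLU(), nn.Linear(150, 4))
target_net = copy.deepcopy(q_net)          # frozen copy, synced occasionally
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

replay = deque(maxlen=1000)                # experience replay buffer
gamma = 0.9

def store(state, action, reward, next_state, done):
    """Save one transition so it can be replayed later."""
    replay.append((state, action, reward, next_state, done))

def train_step(batch_size=64):
    if len(replay) < batch_size:
        return
    # Sampling a random minibatch breaks the correlation between
    # consecutive experiences, which is what drives catastrophic forgetting.
    batch = random.sample(replay, batch_size)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    next_states = torch.stack([b[3] for b in batch])
    dones = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    # Q values the network currently predicts for the actions taken.
    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Targets come from the *target* network, so they stay stable
    # between synchronisations.
    with torch.no_grad():
        q_next = target_net(next_states).max(dim=1).values
        q_target = rewards + gamma * (1 - dones) * q_next

    loss = nn.functional.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Every so often, copy the trained weights into the target network:
# target_net.load_state_dict(q_net.state_dict())
```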
- Policy Gradient Methods
  - Policy functions using neural networks
  - The policy gradient algorithm
    - Defining the objective
      - Neural networks require an objective function that is differentiable with respect to the network weights (parameters)
    - Reinforcing actions
      - A single action is sampled from the probability distribution output by the policy network
    - Log probability
    - Credit assignment
      - The Gridworld policy network used in training takes a 64-dimensional vector as input and outputs a 4-dimensional probability distribution over actions
  - Working with OpenAI Gym
    - OpenAI Gym is a suite of open-source environments with a common API, well suited for testing reinforcement learning algorithms
    - The CartPole environment belongs to OpenAI Gym's classic control suite
  - The REINFORCE algorithm
    - Creating the policy network
    - Agent interaction with the environment
    - Training the model
      - Compute the action probabilities, compute the future rewards, compute the loss, and backpropagate
    - The complete training loop, with a code implementation (see the sketch after this section)
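A compact REINFORCE sketch for CartPole following the steps above: sample an action, record its log probability, compute discounted future rewards, minimise -log_prob * return, and backpropagate. It assumes the classic Gym API in which step() returns four values; the layer sizes, gamma, and episode count are illustrative.

```python
import gym
import torch
import torch.nn as nn

env = gym.make("CartPole-v1")
# Policy network: 4 state inputs -> probability distribution over 2 actions.
policy = nn.Sequential(
    nn.Linear(4, 128), nn.ReLU(),
    nn.Linear(128, 2), nn.Softmax(dim=-1),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma = 0.99

for episode in range(500):
    state = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(state, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()                    # sample from the policy
        log_probs.append(dist.log_prob(action))   # keep its log probability
        state, reward, done, _ = env.step(action.item())
        rewards.append(reward)

    # Discounted future return at every time step (credit assignment).
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE loss: raise the log probability of actions followed by
    # high future returns, then backpropagate.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```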
- Actor-Critic Methods
  - Introduction
    - These algorithms improve sampling efficiency and reduce variance
  - Combining the value and policy functions
    - Q-learning learns directly from the information (rewards) available in the environment
  - Distributed training
    - Python's multiprocessing can be used to speed up the training algorithm
  - The advantage actor-critic algorithm (see the sketch after this section)
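A small sketch of the advantage actor-critic idea: one head of the network outputs action probabilities (the actor), another outputs a state-value estimate (the critic), and the advantage (return minus the critic's estimate) scales the policy loss, which is what reduces variance. The CartPole-like dimensions and the single hand-written transition are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared body with separate actor (policy) and critic (value) heads."""
    def __init__(self, state_dim=4, n_actions=2):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.actor = nn.Sequential(nn.Linear(128, n_actions), nn.Softmax(dim=-1))
        self.critic = nn.Linear(128, 1)

    def forward(self, state):
        h = self.shared(state)
        return self.actor(h), self.critic(h)

model = ActorCritic()
state = torch.rand(4)                      # placeholder state
probs, value = model(state)

dist = torch.distributions.Categorical(probs)
action = dist.sample()
reward, next_value = 1.0, 0.0              # placeholders for one transition
gamma = 0.99

# Advantage = bootstrapped return minus the critic's estimate; using it
# instead of the raw return lowers the variance of the policy gradient.
advantage = reward + gamma * next_value - value

actor_loss = -dist.log_prob(action) * advantage.detach()
critic_loss = advantage.pow(2)             # critic regresses toward the return
loss = (actor_loss + critic_loss).sum()
loss.backward()
```

For the distributed-training point above, the same loss could be computed in several worker processes (for example with torch.multiprocessing) that share the model's parameters; that part is not shown here.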
- Overall, the book describes the code development process and the programs' operating logic in detail