Hands-on deep learning (PyTorch version) - [Reading activities-experience sharing] PyTorch linear algebra related operations
This post was last edited by Misaka10032 on 2024-10-20 04:50
Introduction
In this chapter, we continue with the PyTorch preliminaries, covering operations related to linear algebra. I am using Google Colab here, and I will share the notebook for this chapter at the end of the post.
Main text
1. Scalar
A scalar is a tensor with only one element. We can use torch.tensor() to quickly create a scalar, and we can use scalars to perform some basic data operations.
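Since the original screenshots are not reproduced here, a minimal sketch of what this step looks like (standard PyTorch API, values chosen for illustration):

```python
import torch

# A scalar is a tensor with exactly one element.
x = torch.tensor(3.0)
y = torch.tensor(2.0)

# Basic arithmetic between scalars works elementwise.
print(x + y)   # tensor(5.)
print(x * y)   # tensor(6.)
print(x / y)   # tensor(1.5000)
print(x ** y)  # tensor(9.)
```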
2. Vector
A vector can be understood as an array of scalar values. We can use the following API to quickly create a vector:
When creating a vector with torch.arange, note that the end value is excluded. We can also use len() to get the length of the vector.
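A short sketch of the vector operations described above (the exact values are illustrative):

```python
import torch

# torch.arange(start, end) excludes the end value, like Python's range().
x = torch.arange(0, 4)
print(x)        # tensor([0, 1, 2, 3])

# Index an element and query the length.
print(x[3])     # tensor(3)
print(len(x))   # 4
print(x.shape)  # torch.Size([4])
```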
3. Matrix
A matrix can be understood as an extension of a vector to two dimensions, i.e., a two-dimensional array. If a matrix has the same number of rows and columns, we call it a square matrix.
We can use the following API to quickly create a matrix
The above uses torch.arange to create a vector of length 20, with values from 0 to 19, and then uses the reshape function to turn that vector into a 4 × 5 matrix (20 elements). (torch.range is deprecated; torch.arange is the recommended API and excludes the end value.)
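A sketch of this step (the 4 × 5 shape follows the description above):

```python
import torch

# Create a vector of 20 elements (0..19) and reshape it into a 4x5 matrix.
A = torch.arange(20).reshape(4, 5)
print(A)
print(A.shape)  # torch.Size([4, 5])
```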
We can use the matrix's .T attribute to transpose it: the rows of the matrix become columns, and the columns become rows. The matrix above therefore becomes a 5 × 4 matrix after transposition.
If the matrix is symmetric, its transpose equals the original matrix.
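A sketch of transposition and the symmetric-matrix property (the symmetric matrix B is an example I picked for illustration):

```python
import torch

A = torch.arange(20).reshape(4, 5)
print(A.T)        # rows and columns are swapped
print(A.T.shape)  # torch.Size([5, 4])

# A symmetric matrix equals its own transpose.
B = torch.tensor([[1, 2, 3],
                  [2, 0, 4],
                  [3, 4, 5]])
print((B == B.T).all())  # tensor(True)
```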
4. Tensors
The tensor described in the book is a bit abstract. Just as vectors generalize scalars and matrices generalize vectors, we can build data structures with even more axes. My personal understanding is that a tensor of order three is a stack of two-dimensional arrays. For example:
In the code above, we use arange to generate a range of elements, and then use the reshape function to turn this vector into a third-order tensor, i.e., two 3 × 4 matrices; we can then use such tensors to perform mathematical operations.
One thing to note is that an operation between a tensor and a scalar does not change the shape of the tensor.
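A sketch of this step, assuming a 2 × 3 × 4 shape as described (24 elements forming two 3 × 4 matrices):

```python
import torch

# 24 elements reshaped into a third-order tensor: two 3x4 matrices.
X = torch.arange(24).reshape(2, 3, 4)
print(X.shape)  # torch.Size([2, 3, 4])

# Elementwise arithmetic between tensors of the same shape.
Y = X.clone()
print((X + Y).shape)  # still torch.Size([2, 3, 4])
print(X * Y)          # elementwise (Hadamard) product
```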
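A small sketch showing that tensor–scalar operations preserve the tensor's shape:

```python
import torch

X = torch.arange(24).reshape(2, 3, 4)

# Adding or multiplying by a scalar applies elementwise;
# the shape of the result is unchanged.
print((X + 2).shape)  # torch.Size([2, 3, 4])
print((X * 2).shape)  # torch.Size([2, 3, 4])
```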
5. Dimensionality reduction of data
Data dimensionality reduction refers to converting high-dimensional data into low-dimensional data while trying to preserve the most important structure and information in the original data. The purpose is to reduce the complexity of the data and remove redundant information, which makes analysis, visualization, and modeling easier. Dimensionality reduction techniques are particularly useful for high-dimensional data (such as images, text, or gene expression data), because high-dimensional data can suffer from the "curse of dimensionality," leading to poor model performance or excessive computational cost. In this section of the book, reduction is done with operations like sum() and mean(), which collapse one or more axes of a tensor.
You can also specify which data axis to reduce the dimension of.
The code above demonstrates reduction along each of the matrix's two axes: axis 0 (collapsing the rows, leaving one value per column) and axis 1 (collapsing the columns, leaving one value per row).
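A sketch of summing over all elements and along each axis (reusing the 4 × 5 matrix from earlier):

```python
import torch

A = torch.arange(20, dtype=torch.float32).reshape(4, 5)

print(A.sum())        # sum over all elements: tensor(190.)
print(A.sum(axis=0))  # collapse axis 0 -> one value per column, shape [5]
print(A.sum(axis=1))  # collapse axis 1 -> one value per row, shape [4]
print(A.mean())       # tensor(9.5000)
```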
We can also keep the number of axes unchanged when summing.
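A sketch of a non-reducing sum using keepdim, which keeps the summed axis with size 1 so the result can be broadcast against the original matrix:

```python
import torch

A = torch.arange(20, dtype=torch.float32).reshape(4, 5)

# keepdim=True keeps the summed axis (with size 1), so A can be
# divided by the row sums via broadcasting.
sum_A = A.sum(axis=1, keepdim=True)
print(sum_A.shape)  # torch.Size([4, 1])
print(A / sum_A)    # each row now sums to 1
```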
6. Dot product
The dot product is the sum of the products of elements at the same positions in two vectors. In other words, the operation below multiplies the element at each position of one array by the element at the same position in the other array, then sums the results.
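A sketch of the dot product and its equivalent elementwise-multiply-then-sum form (values are illustrative):

```python
import torch

x = torch.arange(4, dtype=torch.float32)  # tensor([0., 1., 2., 3.])
y = torch.ones(4)

# The dot product multiplies corresponding elements and sums them.
print(torch.dot(x, y))   # tensor(6.)
print(torch.sum(x * y))  # equivalent: tensor(6.)
```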
I haven't yet worked out matrix–vector products, matrix multiplication, and norms; I didn't understand them after reading the book, so I need to brush up on the math first.