1. Tensor Creation and Reshaping

1.1 Creating Tensors with arange()

The torch.arange() function creates a 1D tensor containing a sequence of evenly spaced values (here, the integers 0 through 7). The reshape() method reorganizes those elements into a new shape with the same total number of elements.

import torch

X = torch.arange(8).reshape(2, 4)  # 8 elements rearranged into 2 rows of 4
print(X)
        

1.2 Creating Tensors with ones() and rand()

torch.ones() creates a tensor filled with 1s, while torch.rand() fills a tensor of the given shape with values drawn uniformly from [0, 1).

X = torch.ones(3)
print(X)

Y = torch.rand((3, 2, 1))
print(Y)
        

2. Tensor Expansion

Tensor expansion broadcasts a tensor along its size-1 dimensions so that it matches a larger shape. The result is a view of the original data rather than a copy, which makes it cheap. This is useful in many operations where tensors of different shapes need to be combined.
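A minimal sketch of this using expand(), which broadcasts along singleton dimensions without copying data:

```python
import torch

x = torch.tensor([[1.], [2.], [3.]])  # shape (3, 1)
expanded = x.expand(3, 4)             # shape (3, 4): each row repeats its single value
print(expanded.shape)                 # torch.Size([3, 4])
print(expanded)
```

Because expand() returns a view, writing to the expanded tensor is not allowed along the broadcast dimension; use repeat() when an actual copy is needed.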

3. Visualization Techniques

Visualizing tensors and their operations is crucial for understanding and debugging. This notebook introduces techniques for creating detailed visualizations of tensor operations.
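The notebook's own plotting code is not shown here; as an illustrative sketch (assuming matplotlib, which the notebook may or may not use), a small 2D tensor can be rendered as an annotated heatmap:

```python
import torch
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headlessly
import matplotlib.pyplot as plt

X = torch.arange(8).reshape(2, 4)

fig, ax = plt.subplots()
im = ax.imshow(X.numpy(), cmap="viridis")  # one colored cell per element
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        # overlay each element's value on its cell
        ax.text(j, i, str(X[i, j].item()), ha="center", va="center", color="white")
fig.colorbar(im, ax=ax)
fig.savefig("tensor_heatmap.png")
```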


4. Tensor Broadcasting

Broadcasting allows operations between tensors of different shapes. Shapes are compared from the trailing dimensions: each pair of dimensions must either be equal, or one of them must be 1 (or missing). PyTorch broadcasts tensors automatically whenever these rules are satisfied.

X = torch.rand((3, 2, 1))
Y = torch.zeros((2, 3))
# broadcast_tensors returns a tuple; both results have shape (3, 2, 3)
Zx, Zy = torch.broadcast_tensors(X, Y)
print(Zx.shape, Zy.shape)
        
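In practice the explicit broadcast_tensors call is rarely needed: elementwise arithmetic broadcasts implicitly under the same rules, as this short sketch shows:

```python
import torch

X = torch.rand((3, 2, 1))
Y = torch.zeros((2, 3))

# Shapes align from the right: (3, 2, 1) vs (2, 3) -> result shape (3, 2, 3)
Z = X + Y
print(Z.shape)  # torch.Size([3, 2, 3])
```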

5. Autograd and Backpropagation

PyTorch's autograd feature allows automatic computation of gradients, which is crucial for training neural networks.

5.1 Simple Backpropagation

X = torch.tensor([2., 2, 4], requires_grad=True)
Y = X.mean()      # (2 + 2 + 4) / 3
Z = Y.abs()
Z.backward()      # d|mean(X)|/dX_i = 1/3, since the mean is positive
print(X.grad)     # tensor([0.3333, 0.3333, 0.3333])
        

5.2 Multiple Backward Calls

Calling backward() a second time on the same graph raises a RuntimeError, because the intermediate buffers needed for the backward pass are freed after the first call. Passing retain_graph=True to the first backward() keeps the graph alive so it can be traversed again. Note that gradients accumulate in .grad across calls unless it is zeroed in between.

X = torch.tensor([2., 2, 3], requires_grad=True)
Y = X.mean()
Z = Y.abs()
Z.backward(retain_graph=True)  # keep the graph so a second backward is allowed
Z.backward()                   # gradients accumulate: 1/3 + 1/3 per element
print(X.grad)                  # tensor([0.6667, 0.6667, 0.6667])