# Tensor Calculus in Machine Learning in Python

In this tutorial, we will see what tensors are and work through basic tensor calculus: tensor addition, subtraction, the Hadamard product, the tensor product, and division. Understanding tensors is the first step in getting started with machine learning, as the tensor is the basic data structure used by neural networks. All the inputs, weights, biases, and outputs of the various layers are represented as tensors, and the TensorFlow library uses tensors as its primary way of representing data. So, without further delay, let's get started.

**Tensors**

Tensors are multidimensional arrays that generalize more familiar concepts such as scalars, vectors, and matrices. Scalar, vector, and matrix are the terms used in a mathematical context, whereas number, array, and 2d-array are the corresponding terms in computer science. The terms in these two groups are equivalent pairwise: a scalar is the same as a number, a vector is represented as an array, and a matrix corresponds to a 2d-array.

Of course, we know that tensor has a different definition in mathematics and physics, but here, in computer science, a tensor is a standard way of representing data. Numbers, arrays, and matrices are all specific instances of a tensor.

Number/Scalar – A number/scalar is a rank 0 tensor without any axes containing a single value.

Array/Vector – An array/vector is a rank 1 tensor with 1 axis containing a list of numbers.

Matrix/2d-Array – A matrix/2d-array is a rank 2 tensor with 2 axes.

Nd-Array – An Nd-array is a rank “N” tensor with “N” number of axes.

The rank of a tensor is its number of axes, also called "dimensions"; hence a rank 2 tensor is called a 2d-array. In Python, tensors are represented as N-dimensional arrays (ndarray) using the NumPy library. Let's look at the three basic instances of a tensor discussed above.

```python
import numpy as np

rank1_array = np.array([1,2,3,4,5,6,7,8,9,10])
rank0_num = rank1_array[0]
rank2_array = np.array([[1,2,3,4,5],[6,7,8,9,10]])

print("Number/Scalar: {}\nNo. of dimensions: {}\nShape: {}\n".format(rank0_num, rank0_num.ndim, rank0_num.shape))
print("Array/Vector: {}\nNo. of dimensions: {}\nShape: {}\n".format(rank1_array, rank1_array.ndim, rank1_array.shape))
print("Matrix/2d-Array: {}\nNo. of dimensions: {}\nShape: {}".format(rank2_array, rank2_array.ndim, rank2_array.shape))
```

```
Number/Scalar: 1
No. of dimensions: 0
Shape: ()

Array/Vector: [ 1  2  3  4  5  6  7  8  9 10]
No. of dimensions: 1
Shape: (10,)

Matrix/2d-Array: [[ 1  2  3  4  5]
 [ 6  7  8  9 10]]
No. of dimensions: 2
Shape: (2, 5)
```
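The same pattern extends to higher ranks. As a quick illustrative sketch (the values here are made up for the example), a rank 3 tensor can be built by stacking matrices:

```python
import numpy as np

# A hypothetical rank 3 tensor: 2 matrices of shape (3, 4) stacked together
rank3_array = np.arange(24).reshape(2, 3, 4)
print("No. of dimensions: {}\nShape: {}".format(rank3_array.ndim, rank3_array.shape))
# No. of dimensions: 3
# Shape: (2, 3, 4)
```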

**Tensor Calculus**

Let us first create two rank 2 tensors and then perform tensor operations on them.

```python
# Build two 3x3 tensors, reading their values from a buffer of random integers
A = np.ndarray(shape=(3,3), dtype=int, buffer=np.random.randint(0,10,size=(10)), offset=0)
B = np.ndarray(shape=(3,3), dtype=int, buffer=np.random.randint(0,10,size=(10)), offset=0)
print("A: {}\n\nB: {}".format(A, B))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]

B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
```

**Tensor Addition**

Adding two tensors of the same shape produces a third tensor of the same shape, in which each element is the element-wise sum of the corresponding elements of the parent tensors.

```python
Addition = A + B
print("A: {}\n+\nB: {}\n=\nAddition:\n{}".format(A, B, Addition))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]
+
B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
=
Addition:
[[11 13 11]
 [12 10  3]
 [10  7 12]]
```

**Tensor Subtraction**

Subtracting one tensor from another of the same shape produces a third tensor of the same shape, in which each element is the element-wise difference of the corresponding elements of the parent tensors.

```python
Subtraction = A - B
print("A: {}\n-\nB: {}\n=\nSubtraction:\n{}".format(A, B, Subtraction))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]
-
B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
=
Subtraction:
[[ 3 -1 -3]
 [ 0  0 -1]
 [ 0 -5  6]]
```

**Tensor Hadamard Product**

Multiplying two tensors of the same shape element-wise produces a third tensor of the same shape, in which each element is the product of the corresponding elements of the parent tensors. In mathematics the Hadamard product is written with the "∘" operator; in NumPy, the element-wise "*" operator computes it.

```python
Hadamard_mult = A * B
print("A: {}\n*\nB: {}\n=\nHadamard Multiplication:\n{}".format(A, B, Hadamard_mult))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]
*
B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
=
Hadamard Multiplication:
[[28 42 28]
 [36 25  2]
 [25  6 27]]
```

**Tensor Product**

The tensor product of a rank m tensor with a rank n tensor is a rank m+n tensor. Its mathematical operator is "⊗" (a circle enclosing a cross). In NumPy, np.tensordot with axes set to 0 computes the tensor product.

Let's understand the tensor product using two rank 1 tensors and then two rank 2 tensors; from there we can extend it to tensors of higher rank. The resulting tensors will have rank 2 and rank 4 respectively.

```
X = [x1  x2]        Y = [y1  y2]

X ⊗ Y = [x1*[y1 y2]   x2*[y1 y2]]
      = [[x1*y1, x1*y2],
         [x2*y1, x2*y2]]

P = [[p11  p12]     Q = [[q11  q12]
     [p21  p22]]         [q21  q22]]

P ⊗ Q = [[p11*Q,  p12*Q],
         [p21*Q,  p22*Q]]
      = [[[[p11*q11, p11*q12], [p11*q21, p11*q22]],
          [[p12*q11, p12*q12], [p12*q21, p12*q22]]],
         [[[p21*q11, p21*q12], [p21*q21, p21*q22]],
          [[p22*q11, p22*q12], [p22*q21, p22*q22]]]]
```

Now let’s see this in code.

```python
Tensor_pro = np.tensordot(A, B, axes=0)
print("A: {}\n*\nB: {}\n=\nTensor product:\n{}\nResulting tensor dimension: {}".format(A, B, Tensor_pro, Tensor_pro.ndim))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]
*
B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
=
Tensor product:
[[[[28 49 49]
   [42 35 14]
   [35 42 21]]

  [[24 42 42]
   [36 30 12]
   [30 36 18]]

  [[16 28 28]
   [24 20  8]
   [20 24 12]]]


 [[[24 42 42]
   [36 30 12]
   [30 36 18]]

  [[20 35 35]
   [30 25 10]
   [25 30 15]]

  [[ 4  7  7]
   [ 6  5  2]
   [ 5  6  3]]]


 [[[20 35 35]
   [30 25 10]
   [25 30 15]]

  [[ 4  7  7]
   [ 6  5  2]
   [ 5  6  3]]

  [[36 63 63]
   [54 45 18]
   [45 54 27]]]]
Resulting tensor dimension: 4
```
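The rank 1 case from the symbolic sketch can be checked the same way. Here is a minimal example with illustrative values chosen for this tutorial:

```python
import numpy as np

# Rank 1 tensor product: matches the [[x1*y1, x1*y2], [x2*y1, x2*y2]] pattern above
X = np.array([1, 2])
Y = np.array([3, 4])
TP1 = np.tensordot(X, Y, axes=0)
print(TP1)        # [[3 4]
                  #  [6 8]]
print(TP1.ndim)   # 2
```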

There are three commonly used values of the parameter "axes" of np.tensordot –

- axes = 0: Gives tensor product of two tensors
- axes = 1: Gives tensor dot product of two tensors
- axes = 2: Gives tensor double contraction product of two tensors
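A quick sketch of the other two cases, reusing the values of A and B from above so the snippet stands alone. For rank 2 tensors, axes=1 reduces to ordinary matrix multiplication, and axes=2 contracts both axes, giving a single scalar:

```python
import numpy as np

A = np.array([[7, 6, 4], [6, 5, 1], [5, 1, 9]])
B = np.array([[4, 7, 7], [6, 5, 2], [5, 6, 3]])

# axes=1: tensor dot product -- for rank 2 tensors this is matrix multiplication
dot = np.tensordot(A, B, axes=1)
print(np.array_equal(dot, A @ B))    # True

# axes=2: double contraction -- sums the element-wise (Hadamard) product
contraction = np.tensordot(A, B, axes=2)
print(contraction == np.sum(A * B))  # True
```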

**Tensor Division**

Dividing one tensor by another of the same shape produces a third tensor of the same shape, in which each element is the element-wise quotient of the corresponding elements of the parent tensors.

```python
Division = A / B
print("A: {}\n/\nB: {}\n=\nDivision:\n{}".format(A, B, Division))
```

```
A: [[7 6 4]
 [6 5 1]
 [5 1 9]]
/
B: [[4 7 7]
 [6 5 2]
 [5 6 3]]
=
Division:
[[1.75       0.85714286 0.57142857]
 [1.         1.         0.5       ]
 [1.         0.16666667 3.        ]]
```

These were the basic tensor calculus concepts. There are many more operations like different types of tensor products but the ones mentioned here are the most commonly used.

Want to add your thoughts? Need any further help? Leave a comment below and I will get back to you ASAP 🙂
