# Introduction and understanding of tensors | TensorFlow

Though many people know what TensorFlow is, far fewer know what a tensor is or how to use one. Read on to learn about the tensor data structure and how to work with tensors in TensorFlow.

## Introduction

TensorFlow is a very widely used library in the domain of machine learning and deep learning. It was developed by Google and is one of the most popular projects on GitHub. The main aim of this tutorial is to give a simple introduction to tensors; for full details, refer to the official TensorFlow documentation.

A tensor is basically an n-dimensional array of values. This is the simplest definition of a tensor. Similar to lists in Python, tensors can have multiple dimensions. Now let us look at the implementation. The first step is to install the library:

```
# Install the tensorflow module
!pip install tensorflow
```

After you have installed the library, import it into your notebook/program. Now, let us start with creating tensors. The usual way to create a tensor is with `tf.Variable`. In this call, we provide the data we want to store, along with its data type (via the `dtype` argument). Sample code is provided below.

## Implementation

```
import tensorflow as tf

first_value = tf.Variable(12, dtype=tf.int16)
decimal_value = tf.Variable(12.45, dtype=tf.float32)
text_value = tf.Variable("Hello. Let us learn tensorflow!", dtype=tf.string)
```

Printing these tensors is just like printing normal data in Python. We can also find the shape and rank of these tensors. The shape tells us the number of elements present in each dimension of the tensor, and the rank tells us the deepest level of nesting within the tensor. The code below shows the practical approach to these two concepts.

```
shape_val = tf.shape(first_value)
print(shape_val)
```

The output of the above code is `tf.Tensor([], shape=(0,), dtype=int32)`. The result is empty because the data is a scalar, not a 1-D or nested list: a scalar has zero dimensions, so the list of dimension sizes has nothing in it.
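For comparison, here is a minimal sketch (assuming TensorFlow is imported as `tf`; the name `matrix_value` is just illustrative) of what `tf.shape` returns for a nested list:

```
import tensorflow as tf

# A 2x3 matrix: two rows, three columns
matrix_value = tf.Variable([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)

# tf.shape returns a 1-D tensor listing the size of each dimension
print(tf.shape(matrix_value))  # tf.Tensor([2 3], shape=(2,), dtype=int32)
```

Here the shape has two entries, one per dimension, instead of being empty.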

Coming to the rank: it tells you the depth, i.e., the amount of nesting present in the tensor. The practical way to find the rank of a tensor is shown in the snippet below.

```
tensor3 = tf.Variable([[1,12,7,13],[11,93,110,121],[11,15,111,222],[161,127,128,192]], dtype=tf.int16)
tensor2 = tf.Variable([[11,232,23],[41,53,65]], dtype=tf.int16)
print(tf.rank(tensor2))

# The output is tf.Tensor(2, shape=(), dtype=int32): tensor2 has rank 2
# because it is a list of lists (2 rows, each containing 3 elements).
```
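To see how rank grows with nesting, here is a small illustrative sketch (the name `tensor_3d` is an assumption, not from the original):

```
import tensorflow as tf

# A list of lists of lists has rank 3
tensor_3d = tf.Variable([[[1, 2], [3, 4]], [[5, 6], [7, 8]]], dtype=tf.int32)

print(tf.rank(tensor_3d))   # rank is 3: three levels of nesting
print(tf.shape(tensor_3d))  # shape lists one size per dimension
```

Note the difference: the rank is a single number (the nesting depth), while the shape has one entry for each of those dimensions.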

The next big thing you will see about tensors is that they can be reshaped, as long as the new shape holds the same total number of elements. This is done using `tf.reshape`. The explanation is provided in the snippet below.

```
tensor3 = tf.Variable([[11,12,23,31],[12,91,1304,141],[11,353,114,222],[146,127,418,139]], dtype=tf.int16)
new_tensor = tf.reshape(tensor3, [2, 8, 1])
# The 4x4 tensor (16 elements) is reshaped to shape [2, 8, 1]:
# 2 blocks, each containing 8 single-element rows.
```
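A handy trick with `tf.reshape` is passing `-1` for one dimension, which tells TensorFlow to infer that size from the total element count. A minimal sketch (reusing the same data; `inferred` is an illustrative name):

```
import tensorflow as tf

tensor3 = tf.Variable([[11,12,23,31],[12,91,1304,141],[11,353,114,222],[146,127,418,139]], dtype=tf.int16)

# -1 asks tf.reshape to work out this dimension itself:
# 16 elements split into 2 rows means 8 columns each
inferred = tf.reshape(tensor3, [2, -1])
print(tf.shape(inferred))
```

This saves you from computing the remaining dimension by hand when the total size is fixed.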

Tensors can also be sliced. Slicing is helpful when you want to take out a part of a tensor you have created or are using. To further understand slicing, look at the snippet below.

```
tensor4 = tf.ones([4,4])  # tf.ones creates a tensor filled with ones
sliced_tensor = tensor4[0]
# tensor4[0] selects the first row of tensor4: a tensor of four ones
```
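Slicing also works along more than one axis at a time. A minimal sketch (the names `tensor5`, `first_row`, etc. are illustrative, not from the original):

```
import tensorflow as tf

tensor5 = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.int32)

first_row = tensor5[0]          # the first row: [1, 2, 3]
first_column = tensor5[:, 0]    # the first element of every row: [1, 4, 7]
sub_block = tensor5[0:2, 1:3]   # a 2x2 block from the top-right corner
```

The same `start:stop` syntax used for Python lists applies to each dimension, separated by commas.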