Cartoonizing An Image Using Machine Learning

Nowadays, people use many applications and software tools to edit their pictures, and these applications offer many editing features. On TV, many movies are animated by converting images into cartoons and giving life to them. Haven't you ever dreamt of yourself as a cartoon? That dream can come true right now. In machine learning, we use image processing techniques to convert an image into a cartoon.

STEPS IN CONVERTING AN IMAGE INTO A CARTOON

The steps in converting an image into a cartoon are as follows:

1. Importing Libraries and Loading Input Image
2. Creating Edge Mask
3. Reducing Colour Palette
4. Combining Edge Mask with the Colored Image

OpenCV

OpenCV is a Python library used to solve computer vision problems. OpenCV makes use of NumPy, another Python library, and all OpenCV array structures are converted to and from NumPy arrays. It can be used to recognize and detect objects and to produce high-resolution images.

There are many tools for cartoonizing an image; here, Google Colab is used to convert the input image into a cartoon image.

1. IMPORTING LIBRARIES AND LOADING INPUT IMAGE

This step involves importing the required Python libraries and loading the input image.

  • Importing Libraries: The OpenCV and NumPy libraries are imported as shown below.


  • Loading Input Image: The image is loaded by defining a read_file function, which uses cv2_imshow to display the selected image in Google Colab. The function is then called to load the image.


I used a photograph of a flower that I took myself as the input for cartoonizing.

[Input image: a photograph of a flower]

2. CREATING AN EDGE MASK

When creating an edge mask, the thickness of the edges in the image is given the first priority. Edges are detected with the cv2.adaptiveThreshold() function, which calculates a separate threshold for each smaller region of the image. In this way, we get different thresholds for different regions of the same image, which emphasizes the black edges around objects.


First the image is converted into grayscale, and the noise is reduced (for example with a median blur) so that fewer unwanted edges are detected.

The block size passed to cv2.adaptiveThreshold() defines the line size of the edges: a larger value produces thicker edges in the image.

The grayscale output is obtained as follows.


[Output: grayscale edge mask]

3. REDUCING COLOUR PALETTE

This step reduces the number of colors in the image, which creates a cartoon-like effect. Color quantization is performed with the K-means clustering algorithm, so the output is displayed with a limited number of colors. K-means is an unsupervised machine learning algorithm that groups data points into clusters.


In the name, K refers to the number of clusters and "means" refers to the cluster centroids (the averages of each cluster). The value of K determines the number of colors in the output picture; here, the number of colors is reduced to 9.


[Output: image with the colour palette reduced to 9 colours]

Bilateral Filter

The next method for reducing noise in the image is the bilateral filter, which smooths the image while preserving its sharp edges.

Consider a bilateral filter processing an edge area of the image. The filter replaces each pixel value with a weighted average of nearby pixel values, but it also takes the variation of pixel intensities into account in order to preserve edges: two pixels that occupy nearby spatial locations are averaged together only if they have some similarity in their intensity levels.

[Illustration: bilateral filtering near an edge]

There are three parameters that are important for bilateral filtering. They are:

  • d: the diameter of each pixel neighborhood.
  • sigmaColor: the standard deviation of the filter in the color space. A larger value means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color.
  • sigmaSpace: the standard deviation of the filter in the coordinate space. A larger value means that farther pixels will influence each other, as long as their colors are close enough.


[Output: image after bilateral filtering]

4. COMBINING EDGE MASK WITH THE COLORED IMAGE

Finally, the edge mask is combined with the color-processed image. Here the cv2.bitwise_and() function is used: a bitwise operation on the image produces the final output.


Folks, the output for our input image is shown below.

[Final output: the cartoonized flower image]

Now you can see how an image can be converted into a cartoon. So come on and have a try at converting your own images into cartoons. It will be fun and thrilling! Refer to the link below for the whole code used here: https://colab.research.google.com/driv..

