A GAN-powered project to colorize anime sketches without any color cue.
Project
This is a Conditional Generative Adversarial Network (CGAN) that accepts a 256x256 px black-and-white sketch image and predicts the colored version of the image without seeing the ground truth. The model is trained on the Anime Sketch-Colorization Pair Dataset available on Kaggle, which contains 14.2k pairs of sketch-color anime images.
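A minimal sketch of the input pipeline, assuming (as in the Pix2Pix tutorial) that each dataset sample stores the colored image and the sketch side by side in one file; the split point and the `[-1, 1]` scaling are assumptions matching a tanh-output generator, not confirmed details of this repo.

```python
import tensorflow as tf

def load_pair(image):
    # Assumed layout: colored version on the left, sketch on the right;
    # split the combined image down the middle.
    w = tf.shape(image)[1] // 2
    color, sketch = image[:, :w, :], image[:, w:, :]
    # Resize both halves to the 256x256 model input size.
    color = tf.image.resize(color, [256, 256])
    sketch = tf.image.resize(sketch, [256, 256])
    # Scale pixel values from [0, 255] to [-1, 1].
    color = color / 127.5 - 1.0
    sketch = sketch / 127.5 - 1.0
    return sketch, color
```

The function returns the (input, target) pair expected by the training loop: the sketch is what the generator sees, the colored half is the ground truth it learns to reproduce.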
- Github: https://github.com/metal0bird/GAN-powered-sketch-to-colour-model
- Stack: Python, TensorFlow
- Blogpost: Write and add
Ideation
This project stems from the idea of leveraging machine learning to automate the process of coloring black and white sketches. Conditional Generative Adversarial Networks (CGANs) offer a powerful approach to image-to-image translation, making them ideal for this task. By training a model on pairs of sketches and their corresponding colored images, the model can learn the relationship between lines and shapes in the sketch and the colors they represent in the final image.
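The learning signal described above is usually expressed as two losses: the discriminator learns to tell real colorizations from generated ones, while the generator is pushed both to fool the discriminator and to stay close to the ground-truth colors via an L1 term. A hedged sketch, assuming the standard Pix2Pix loss formulation (the `LAMBDA = 100` weight comes from that paper, not from this repo):

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100  # L1 weight, the value used in the Pix2Pix paper

def generator_loss(disc_fake_output, generated, target):
    # Adversarial term: the generator wants its colorizations
    # to be classified as real (all ones).
    adv = bce(tf.ones_like(disc_fake_output), disc_fake_output)
    # L1 term: pull the output toward the ground-truth colors.
    l1 = tf.reduce_mean(tf.abs(target - generated))
    return adv + LAMBDA * l1

def discriminator_loss(disc_real_output, disc_fake_output):
    # Real pairs should score as ones, generated pairs as zeros.
    real = bce(tf.ones_like(disc_real_output), disc_real_output)
    fake = bce(tf.zeros_like(disc_fake_output), disc_fake_output)
    return real + fake
```

The L1 term is what makes the mapping conditional in practice: without it, the generator could produce any plausible coloring rather than one tied to this particular sketch's ground truth.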
Building
The project utilizes TensorFlow 2.x, a popular deep learning framework, to construct and train the CGAN model. I drew inspiration from the TensorFlow Pix2Pix tutorial, which demonstrates how to build a model for predicting building photos from facade labels. This existing project provided a foundation for understanding the necessary architecture and training procedures for the sketch-to-color task.
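The Pix2Pix architecture this draws from is built out of repeated downsampling and upsampling blocks that form a U-Net generator. A minimal sketch of those two building blocks, following the TensorFlow tutorial's structure (filter sizes and layer choices here mirror the tutorial, not necessarily this repo's exact configuration):

```python
import tensorflow as tf

def downsample(filters, size):
    # Conv -> BatchNorm -> LeakyReLU, halving spatial resolution;
    # stacking these forms the encoder half of the U-Net.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, size, strides=2,
                               padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(),
    ])

def upsample(filters, size):
    # Transposed conv -> BatchNorm -> ReLU, doubling resolution;
    # the decoder half, whose outputs are concatenated with
    # skip connections from the matching encoder level.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(filters, size, strides=2,
                                        padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.ReLU(),
    ])
```

Chaining eight of each (with skip connections) takes a 256x256 sketch down to a 1x1 bottleneck and back up to a 256x256 colored output.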
Learning
This project taught me about GANs and their capabilities in image manipulation, spanning data preprocessing, network architecture tuning, and training and evaluating the model. It also pointed to future possibilities for this technology, such as video colorization and web-based applications.