tinyfilter[^1] is the computer vision equivalent of Andrej Karpathy's micrograd. It converts images into ASCII art using the principles of CNNs (convolutional neural networks).
Unlike other tools of its type, which map pixel darkness to an ASCII character, tinyfilter uses filters and convolution to detect features in an image and prints the ASCII characters that correspond to them. This produces much better results than other libraries, especially for smaller images.
To install tinyfilter locally, run the command below. When installing Python packages such as tinyfilter, I recommend using a virtual environment, though this is optional.
pip install tinyfilter
To print an image as ASCII characters using tinyfilter, run the following command in your terminal. (Replace "image.png" with the name of your image.)
tinyfilter image.png
You can also import tinyfilter inside a Python file or interpreter to do the same thing:
from tinyfilter import tiny_print
tiny_print('image.png')
NOTE: You do not need to specify how many columns wide you want your output to be. tinyfilter automatically prints the image at exactly the width, in columns, that your terminal window had when the function was called.
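Detecting the terminal width like this is commonly done with Python's standard library. Whether tinyfilter uses exactly this call is an assumption on my part, but `shutil.get_terminal_size` is the usual approach:

```python
import shutil

# Query the current terminal width in columns. shutil falls back to
# 80x24 when no terminal is attached (e.g. when output is piped).
columns = shutil.get_terminal_size().columns
print(columns)
```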
While other Python packages have features that tinyfilter doesn't yet support, tinyfilter clearly wins at one thing: recognizing the important features in an image and focusing on those. In the example above, tinyfilter and Ascii-magic both print images that are 80 columns wide. The difference is that tinyfilter's output is based on where there are edges in the image, while Ascii-magic only considers where the image is dark and where it is bright.
The numbers at the top of the images show how many ASCII characters wide the output is. The example shows how, despite losing large amounts of detail as the image gets smaller, tinyfilter retains the important elements of the original.
The balloon dog is a good example of edge detection (notice that tinyfilter prints nothing where the balloon is solid purple but prints a line where the image transitions to white).
The Einstein image is a good example of how tinyfilter can scale to large images.
To make sense of the terms in this section, you will need a little background on CNNs (convolutional neural networks). The design of tinyfilter is based on the technique these networks use, called convolution. Reading the first half of this source from IBM should get you up to speed.
The most important features of an image are its lines. That's what tinyfilter detects, using only 5 filters that I hard-coded as numpy arrays (shown below). When a filter is applied to an image, tinyfilter calculates whether the feature that filter detects is present. If it is, tinyfilter prints the ASCII character that corresponds to the feature.
import numpy as np

BACKSLASH_FILTER = np.array([[3, -1, -1], [-1, 3, -1], [-1, -1, 3]], dtype="int32")
FORWARDSLASH_FILTER = np.array([[-1, -1, 3], [-1, 3, -1], [3, -1, -1]], dtype="int32")
VERTICAL_BAR_FILTER = np.array([[-1, 3, -1], [-1, 3, -1], [-1, 3, -1]], dtype="int32")
HYPEN_FILTER = np.array([[-1, -1, -1], [3, 4, 3], [-1, -1, -1]], dtype="int32")
UNDERSCORE_FILTER = np.array([[-1, -1, -1], [-1, -1, -1], [3, 4, 3]], dtype="int32")
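To make the filter idea concrete, here is a minimal sketch of this kind of feature detection on a single 3x3 patch. The filter values are the ones listed above; the thresholding and character-selection logic are my own simplification, not tinyfilter's actual implementation:

```python
import numpy as np

# The five hard-coded filters from the section above, keyed by the
# ASCII character each one detects.
FILTERS = {
    "\\": np.array([[3, -1, -1], [-1, 3, -1], [-1, -1, 3]], dtype="int32"),
    "/": np.array([[-1, -1, 3], [-1, 3, -1], [3, -1, -1]], dtype="int32"),
    "|": np.array([[-1, 3, -1], [-1, 3, -1], [-1, 3, -1]], dtype="int32"),
    "-": np.array([[-1, -1, -1], [3, 4, 3], [-1, -1, -1]], dtype="int32"),
    "_": np.array([[-1, -1, -1], [-1, -1, -1], [3, 4, 3]], dtype="int32"),
}


def best_character(patch, threshold=0):
    """Return the ASCII character whose filter responds most strongly
    to a 3x3 patch, or a space if no response clears the threshold."""
    responses = {ch: int(np.sum(patch * f)) for ch, f in FILTERS.items()}
    ch, score = max(responses.items(), key=lambda kv: kv[1])
    return ch if score > threshold else " "


# A patch with a bright vertical stripe down the middle: the
# vertical-bar filter lines up with the stripe and dominates.
patch = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype="int32")
print(best_character(patch))  # "|"
```

Sliding this patch-wise response over every 3x3 window of an image is exactly the convolution step described in the CNN background above; the negative entries in each filter mean a flat (featureless) region scores near zero and prints as blank space.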
For more information about resources and their licenses, see THANKS.txt in this repository.
- Pillow is a dependency for tinyfilter
- numpy is a dependency for tinyfilter
- mypy was used for type checking
- Black was used for python code formatting
- This MIT lecture is a great resource for learning about CNNs and filters. I learned a lot from it and this project would not have been possible without it.
Footnotes
[^1]: For consistency, the first letter in "tinyfilter" is always lowercase, even when it begins a sentence.