Abstract
We propose a deep neural network for mapping the 2D pixel coordinates in an image to the corresponding RGB color values. The network is termed CocoNet, i.e. coordinates-to-color network. During training, the network learns to encode the input image within its layers; that is, it learns a continuous function that approximates the discrete RGB values sampled at the discrete 2D pixel locations. At test time, given a 2D pixel coordinate, the network outputs the RGB values of the corresponding pixel. By querying the network at every 2D pixel location, we can reconstruct the entire learned image. Note that an individual network must be trained for each input image, i.e. one network encodes a single image. Our neural image encoding approach has various low-level image processing applications, ranging from image denoising to image resampling and image completion. Our code is available at
https://github.com/paubric/python-fuse-coconet.
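The coordinates-to-color idea can be illustrated with a minimal sketch: a small MLP is fit to a single image so that normalized (x, y) pixel coordinates map to RGB values, and querying it at every coordinate reconstructs the image. The architecture sizes, activation, and training settings below are illustrative assumptions, not the paper's actual configuration; this is a NumPy toy, not the released implementation.

```python
import numpy as np

# Hypothetical minimal sketch of a coordinate-to-color network:
# one tiny MLP is trained on ONE image, mapping (x, y) -> (R, G, B).
# All sizes and hyperparameters here are illustrative assumptions.

rng = np.random.default_rng(0)

H, W = 8, 8
# Synthetic target "image": a smooth RGB gradient with values in [0, 1].
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
image = np.stack([xs, ys, (xs + ys) / 2], axis=-1)       # shape (H, W, 3)

coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)     # (H*W, 2) inputs
targets = image.reshape(-1, 3)                           # (H*W, 3) outputs

# Two-layer MLP: 2 -> 64 -> 3, tanh hidden activation.
W1 = rng.normal(0, 0.5, (2, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(coords)
loss0 = np.mean((pred0 - targets) ** 2)                  # MSE before training

lr = 0.05
for _ in range(500):                                     # plain full-batch gradient descent
    h, pred = forward(coords)
    err = 2 * (pred - targets) / len(coords)             # dMSE/dpred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)                     # backprop through tanh
    gW1 = coords.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(coords)
loss = np.mean((pred - targets) ** 2)                    # MSE after training
# Querying the network at every pixel coordinate reconstructs the learned image.
reconstruction = pred.reshape(H, W, 3)
print(f"MSE before: {loss0:.4f}  after: {loss:.4f}")
```

Because the network stores a continuous function over coordinates, it can also be queried at off-grid locations, which is what makes applications such as resampling and completion natural in this framing.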