Color constancy is the ability of the human visual system to ensure that the perceived color of objects remains relatively constant under varying illumination conditions. To achieve computational color constancy, it is first necessary to estimate the scene illumination and then to perform chromatic adaptation, which adjusts the scene colors so that they appear as they would under a desired illumination (usually daylight). If the scene illumination is assumed to be uniform, then a single global illumination can be estimated for the whole scene; otherwise, the illumination has to be estimated locally.
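The chromatic adaptation step described above is commonly implemented as a diagonal (von Kries) correction. The following sketch illustrates the idea; the function name and toy values are only for illustration:

```python
import numpy as np

def chromatic_adaptation(image, estimated_illum, target_illum=(1.0, 1.0, 1.0)):
    """Diagonal (von Kries) chromatic adaptation: scale each channel so that
    the estimated scene illumination maps to the target illumination."""
    gains = np.asarray(target_illum, float) / np.asarray(estimated_illum, float)
    return image * gains  # broadcasts over an H x W x 3 image

# A gray patch observed under a reddish illuminant becomes neutral after
# adapting it to a white (1, 1, 1) target illumination.
patch = np.ones((2, 2, 3)) * np.array([0.8, 0.5, 0.4])
corrected = chromatic_adaptation(patch, estimated_illum=(0.8, 0.5, 0.4))
```

The quality of the result therefore depends directly on the accuracy of the illumination estimate, which is what the methods below address.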
Here we offer the source code of our illumination estimation, tone mapping, and brightness adjustment methods. The code available here is research code and is therefore only of prototype quality.
Links to the papers that describe all these methods are given together with the brief descriptions that follow. Their pre-print versions are also available at the bottom of this page.
Additionally, here you can download the Cube benchmark dataset for illumination estimation.
The Cube dataset
The Cube dataset contains 1365 exclusively outdoor images taken with a Canon EOS 550D camera in parts of Croatia, Slovenia, and Austria during various seasons. The ordering of the images with respect to their creation time has been shuffled. In the lower right corner of each image the SpyderCube calibration object is placed; its two neutral 18% gray faces were used to determine the ground-truth illumination for each image. Due to the angle between these two faces, for images with two illuminations, e.g. one in the shadow and one under direct sunlight, it was possible to recover both of them simultaneously, and both are provided for each image. In all dataset images with two distinct illuminations, one of them is always dominant, so the uniform illumination assumption effectively remains valid. To correctly identify the dominant illumination, the two possible chromatically adapted versions of each image were checked manually, and after this was done for all images, the final ground-truth illumination was created.

The black level, i.e. the intensity that has to be subtracted from all images in order to use them properly, equals 2048. To determine the maximum allowed intensity of non-clipped pixels, intensity histograms of various dataset images were inspected. If m is the maximum intensity of a given dataset image in any of its channels, then the best practice is to discard all pixels that have a channel intensity greater than or equal to m-2.

Finally, before an image from the dataset is used to test the accuracy of an illumination estimation method, the calibration object has to be masked out to prevent a biased influence. A simple way to do this is to mask out the lower right rectangle starting at row 1050 and column 2050.
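The preprocessing steps above can be sketched as follows. The function name is ours, the toy image is much smaller than a real Cube PNG, and 0-based row/column indexing is assumed:

```python
import numpy as np

BLACK_LEVEL = 2048  # intensity to subtract from every Cube image

def prepare_cube_image(image):
    """Return a linear image and a mask of usable pixels for one Cube image.

    Follows the dataset description: subtract the black level, discard pixels
    with any channel intensity >= m - 2 (m = image maximum), and mask out the
    SpyderCube in the lower right rectangle (row >= 1050, column >= 2050).
    """
    linear = image.astype(np.int64) - BLACK_LEVEL
    linear = np.clip(linear, 0, None)

    m = image.max()
    valid = np.all(image < m - 2, axis=-1)  # drop (near-)clipped pixels
    valid[1050:, 2050:] = False             # mask the calibration object
    return linear, valid

# Toy example with a synthetic "image" standing in for a loaded Cube PNG.
raw = np.full((1200, 2200, 3), 3000, dtype=np.uint16)
raw[0, 0] = 65535  # a clipped pixel
linear, mask = prepare_cube_image(raw)
```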
The Cube dataset includes the raw CR2 images, minimally processed PNG images, and JPG images. The main part consists of the PNG images, which were obtained from the raw images by using dcraw with the options -D -4 -T and then applying only simple subsampling for debayering.
Green stability assumption
Although statistics-based illumination estimation methods are mostly treated as if they require no learning, they do have parameters whose tuning affects their accuracy. If no calibrated dataset with known ground-truth illuminations is available, this tuning cannot be done by minimizing the angular error. The green stability assumption is a heuristic that enables the tuning of parameter values in cases where only a non-calibrated dataset is available: it chooses the parameter values that minimize the standard deviation of the green chromaticity components of the illumination estimations on the dataset's images. The experimental results show that this leads to high accuracy. The code for recreating the numerical results from the arXiv pre-print is available at the bottom of the page.
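The selection criterion can be sketched as follows, here using Shades-of-Gray (whose Minkowski norm p is a typical tunable parameter) purely as an illustrative statistics-based method; the methods and parameters actually tuned are detailed in the paper:

```python
import numpy as np

def shades_of_gray(image, p):
    """Shades-of-Gray illumination estimate: per-channel Minkowski p-norm,
    returned as a normalized (r, g, b) chromaticity."""
    est = np.power(np.power(image.reshape(-1, 3), p).mean(axis=0), 1.0 / p)
    return est / est.sum()

def tune_by_green_stability(images, candidate_ps):
    """Pick the parameter value that minimizes the standard deviation of the
    green chromaticity component of the estimates over the whole dataset."""
    best_p, best_std = None, np.inf
    for p in candidate_ps:
        greens = [shades_of_gray(img, p)[1] for img in images]
        if np.std(greens) < best_std:
            best_p, best_std = p, np.std(greens)
    return best_p

# Random stand-ins for a non-calibrated dataset.
rng = np.random.default_rng(0)
images = [rng.uniform(0.1, 1.0, size=(8, 8, 3)) for _ in range(5)]
chosen_p = tune_by_green_stability(images, [1, 2, 4, 6])
```

No ground-truth illumination appears anywhere in the tuning loop, which is the point of the heuristic.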
Unsupervised learning for color constancy
Creating calibrated datasets for illumination estimation methods is a time-consuming process that involves a significant amount of manual work and is preferably, though not necessarily, done for each sensor individually. This allows state-of-the-art learning-based illumination estimation methods to fine-tune their parameter values and to achieve high accuracy. However, if the ground truth is not available, learning-based methods cannot be easily trained. One solution is to use some kind of unsupervised learning, an example being the Color Tiger method proposed in our VISAPP 2018 paper. Its extended version is publicly available as an arXiv pre-print, which also describes the Color Bengal Tiger method for unsupervised learning for inter-camera color constancy, as well as the Cube dataset. Additionally, at the bottom of this page, in the part with the source code, you can find the Matlab code for reproducing all results from the paper.
Flash and Storm
Flash and Storm is a pair of tone mapping operators based on an adjusted version of the Naka-Rushton equation: Flash is a global and Storm a local tone mapping operator. Both are designed to be hardware-friendly and to have low complexity while simultaneously producing high-quality results. They are explained in detail in the VISAPP 2018 paper "Flash and Storm: Fast and Highly Practical Tone Mapping based on Naka-Rushton Equation", which is available here.
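To illustrate the underlying response curve, here is a minimal global tone mapping sketch using the basic Naka-Rushton equation L / (L + sigma); the geometric-mean choice of sigma is a common convention and the specific adjustments used by Flash and Storm differ and are described in the paper:

```python
import numpy as np

def naka_rushton_tonemap(luminance, sigma=None):
    """Global tone mapping with the basic Naka-Rushton response curve
    L_out = L / (L + sigma), which compresses any dynamic range into [0, 1)."""
    L = np.asarray(luminance, dtype=float)
    if sigma is None:
        # Geometric mean luminance as the adaptation level (common choice).
        sigma = np.exp(np.mean(np.log(L + 1e-6)))
    return L / (L + sigma)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])  # high-dynamic-range values
ldr = naka_rushton_tonemap(hdr)
```

The curve is monotone, so the relative ordering of luminances is preserved while extreme values are strongly compressed, which is what makes it attractive for low-complexity hardware implementations.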
Puma
The Puma algorithm is a tone mapping operator based on an adjusted version of the Naka-Rushton equation and the Smart Light Random Memory Sprays Retinex algorithm.
Smart Light Random Memory Sprays Retinex
Color Ant
The Color Ant algorithm is a relatively simple learning-based algorithm that uses the k-NN algorithm at its core to perform illumination estimation. The paper "Using the red chromaticity for illumination estimation", which describes it, has been accepted for publication as part of ISPA 2015.
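The k-NN core can be illustrated as plain nearest-neighbor regression over per-image features; the features and distance actually used by Color Ant are described in the paper, and the values below are toy data:

```python
import numpy as np

def knn_estimate(feature, train_features, train_illums, k=3):
    """Estimate the illumination as the mean ground-truth illumination of the
    k training images whose features are nearest to the query feature."""
    distances = np.linalg.norm(train_features - feature, axis=1)
    nearest = np.argsort(distances)[:k]
    est = train_illums[nearest].mean(axis=0)
    return est / est.sum()  # normalized chromaticity

# Toy training set: 1-D features with their ground-truth illuminations.
train_features = np.array([[0.0], [1.0], [2.0], [10.0]])
train_illums = np.array([[1, 1, 1], [2, 1, 1], [2, 1, 1], [9, 1, 1]], dtype=float)
est = knn_estimate(np.array([1.1]), train_features, train_illums, k=3)
```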
Smart Color Cat
The Smart Color Cat algorithm is a learning-based algorithm that represents an upgrade of the Color Cat algorithm. It uses simpler features and can be trained and tested significantly faster. The paper "Using the red chromaticity for illumination estimation", which describes it, has been accepted for publication as part of ISPA 2015.
Color Dog
The Color Dog algorithm is a learning-based algorithm that alters the illumination estimations of other methods by using the information available from the illumination chromaticity distribution. Even though it is relatively simple, the method outperforms most other methods on various kinds of images. It is explained in detail in the VISAPP 2015 paper "Color Dog: Guiding the Global Illumination Estimation to Better Accuracy", which is about to be published.
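A rough sketch of how estimates of other methods can be guided by the illumination chromaticity distribution is to snap them to a small set of learned illumination centers and let them vote; this is only an illustration of the general idea, and Color Dog's actual learning and voting procedure is specified in the paper:

```python
import numpy as np

def nearest_center(estimate, centers):
    """Index of the learned illumination center closest to an estimate."""
    return np.argmin(np.linalg.norm(centers - estimate, axis=1))

def vote_for_center(base_estimates, centers):
    """Snap each base method's estimate to its nearest center and return the
    center that receives the most votes as the final illumination."""
    votes = np.zeros(len(centers), dtype=int)
    for est in base_estimates:
        votes[nearest_center(est, centers)] += 1
    return centers[np.argmax(votes)]

# Hypothetical centers (e.g. obtained by clustering training illuminations)
# and three base-method estimates for one image.
centers = np.array([[0.4, 0.3, 0.3], [0.3, 0.4, 0.3]])
estimates = np.array([[0.41, 0.30, 0.29], [0.39, 0.31, 0.30], [0.30, 0.41, 0.29]])
chosen = vote_for_center(estimates, centers)
```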
Color Cat
The Color Cat algorithm is a learning-based algorithm that uses the color distribution to perform illumination estimation. It is explained in detail in the IEEE Signal Processing Letters paper "Color Cat: Remembering Colors for Illumination Estimation", which can be downloaded here.
Color Badger
The Color Badger algorithm is an extension and improvement of the Light Random Sprays Retinex (LRSR) algorithm, intended to overcome LRSR's weaknesses in tone mapping. It can also be used as a local white balancing algorithm. It is explained in detail in the ICISP 2014 paper "Color Badger: A Novel Retinex-based Local Tone Mapping Operator", which can be downloaded here. The OpenCV C++ implementation of the TMQI that was used in the testing is also available.
Color Rabbit
The Color Rabbit algorithm is essentially a modification and a more accurate upgrade of the Color Sparrow algorithm. It is explained in detail in the DSP 2014 paper "Color Rabbit: Guiding the Distance of Local Maximums in Illumination Estimation", which can be downloaded here.
Improved White Patch
The Improved White Patch algorithm improves the White Patch algorithm by means of image pixel sampling. It is more accurate than, yet as fast as, the White Patch algorithm. The paper in which it is explained in detail was presented at ICIP 2014 and can be downloaded here.
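For context, plain White Patch takes the per-channel maximum as the illumination estimate; the sketch below contrasts it with a simple random-subset sampling variant. The exact sampling scheme of the Improved White Patch is the one described in the ICIP 2014 paper, not this one:

```python
import numpy as np

def white_patch(image):
    """Plain White Patch: the illumination is the per-channel maximum,
    returned as a normalized chromaticity."""
    est = image.reshape(-1, 3).max(axis=0)
    return est / est.sum()

def sampled_white_patch(image, n_samples=20, sample_size=100, rng=None):
    """Illustrative sampling variant: average the per-channel maxima of
    several random pixel subsets instead of using the global maximum."""
    rng = np.random.default_rng(rng)
    pixels = image.reshape(-1, 3)
    maxima = [pixels[rng.integers(0, len(pixels), sample_size)].max(axis=0)
              for _ in range(n_samples)]
    est = np.mean(maxima, axis=0)
    return est / est.sum()

# Synthetic scene under a reddish illuminant; both estimates should reflect it.
img = np.random.default_rng(1).uniform(0, 1, (32, 32, 3)) * np.array([0.9, 0.7, 0.5])
est_plain = white_patch(img)
est_sampled = sampled_white_patch(img, rng=0)
```

Sampling makes the estimate less sensitive to a few noisy or clipped bright pixels than the single global maximum.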
Light Random Sprays Retinex
Color Sparrow
The Color Sparrow algorithm is essentially a derivative of the Random Sprays Retinex, but it is nevertheless as fast as other well-known global illumination estimation algorithms. It is explained in detail in the 2nd Croatian Computer Vision Workshop paper "Using the Random Sprays Retinex algorithm for global illumination estimation", which can be downloaded here.