Color constancy

Introduction

Color constancy is the ability of the human visual system to perceive the colors of objects as relatively constant under varying illumination conditions. Computational color constancy is achieved by first estimating the scene illumination and then performing chromatic adaptation, which adjusts the scene colors so that they appear as they would under a desired illumination (usually daylight). If the scene illumination is assumed to be uniform, then a single global illumination can be estimated for the whole scene; otherwise, the illumination has to be estimated locally.
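
Once the illumination has been estimated, chromatic adaptation is commonly carried out with a von Kries-like diagonal model in which each channel is scaled independently. The following is a minimal C++ sketch of this step; the structure and function names are illustrative assumptions, not code from the methods below:

    #include <algorithm>
    #include <vector>

    // Minimal sketch of von Kries-style diagonal chromatic adaptation:
    // every channel is divided by the corresponding component of the
    // estimated illumination, so that the scene appears as if taken
    // under a neutral (white) illumination.
    struct RGB { double r, g, b; };

    void chromaticAdaptation(std::vector<RGB>& image, const RGB& illumination) {
        // Rescale by the largest illumination component so that the
        // overall image brightness is roughly preserved.
        double norm = std::max({illumination.r, illumination.g, illumination.b});
        for (RGB& p : image) {
            p.r *= norm / illumination.r;
            p.g *= norm / illumination.g;
            p.b *= norm / illumination.b;
        }
    }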

Here we offer the source code of our illumination estimation, tone mapping, and brightness adjustment methods. The code available here is research code and is therefore only of prototype quality.

Links to the papers that describe all these methods are given together with the brief descriptions that follow. Pre-print versions of the papers are also available at the bottom of this page.

Additionally, here you can download the Cube benchmark dataset for illumination estimation.

The Cube dataset

The Cube dataset contains 1365 exclusively outdoor images taken with a Canon EOS 550D camera in parts of Croatia, Slovenia, and Austria during various seasons. The ordering of the images with respect to their creation time has been shuffled. The SpyderCube calibration object is placed in the lower right corner of each image, and its two neutral 18% gray faces were used to determine the ground-truth illumination for each image. Due to the angle between these two faces, for images with two illuminations, e.g. one in the shadow and one under direct sunlight, it was possible to recover both of them simultaneously, and both are provided for each image. In all dataset images with two distinct illuminations, one of them is always dominant, so that the uniform illumination assumption effectively remains valid. To correctly identify the dominant illumination, the two possible chromatically adapted versions of each image were checked manually, and after this had been done for all images, the final ground-truth illumination was created.

The black level, i.e. the intensity that has to be subtracted from all images in order to use them properly, equals 2048. To determine the maximum allowed intensity values of non-clipped pixels in the dataset images, intensity histograms of various images were inspected. If m is the maximum intensity for a given dataset image in any of its channels, then the best practice is to discard all image pixels that have a channel intensity greater than or equal to m-2. Finally, before an image from the dataset is used to test the accuracy of an illumination estimation method, the calibration object has to be masked out to prevent a biased influence. A simple way to do this is to mask out the lower right rectangle starting at row 1050 and column 2050.
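
Putting these preparation steps together, preprocessing a Cube image might look as in the following hedged OpenCV C++ sketch; the function name and the exact OpenCV calls are illustrative assumptions, not a prescribed pipeline:

    #include <opencv2/opencv.hpp>

    // Illustrative preprocessing of a Cube dataset PNG image: build a
    // mask that excludes both (near-)clipped pixels and the SpyderCube
    // calibration object, then subtract the black level.
    cv::Mat preprocessCubeImage(const cv::Mat& raw16, cv::Mat& validMask) {
        const int blackLevel = 2048;

        // Find the maximum intensity m over all channels, then mark as
        // valid only the pixels whose every channel is <= m - 3, i.e.
        // discard pixels with any channel intensity >= m - 2.
        double m;
        cv::minMaxLoc(raw16.reshape(1), nullptr, &m);
        cv::inRange(raw16, cv::Scalar::all(0), cv::Scalar::all(m - 3), validMask);

        // Mask out the calibration object: the lower right rectangle
        // starting at row 1050 and column 2050.
        validMask(cv::Range(1050, validMask.rows),
                  cv::Range(2050, validMask.cols)).setTo(0);

        // Subtract the black level (cv::subtract saturates at zero for
        // unsigned image types).
        cv::Mat img;
        cv::subtract(raw16, cv::Scalar::all(blackLevel), img);
        return img;
    }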

The Cube dataset includes raw CR2 images, minimally processed PNG images, and JPG images. The main part consists of the PNG images, which were obtained from the raw images by using dcraw with the options -D -4 -T and then applying only simple subsampling for debayering.
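
For orientation, such simple subsampling debayering reduces every 2x2 Bayer block to a single half-resolution RGB pixel. A minimal sketch follows; the assumed RGGB pattern layout is an illustration and should be checked against the actual sensor data:

    #include <opencv2/opencv.hpp>

    // Illustrative half-resolution debayering by simple subsampling:
    // every 2x2 Bayer block yields one RGB pixel, with the two green
    // samples averaged. An RGGB pattern and a CV_16UC1 input are
    // assumed here for clarity.
    cv::Mat debayerBySubsampling(const cv::Mat& bayer16) {
        cv::Mat rgb(bayer16.rows / 2, bayer16.cols / 2, CV_16UC3);
        for (int y = 0; y < rgb.rows; ++y) {
            for (int x = 0; x < rgb.cols; ++x) {
                ushort r  = bayer16.at<ushort>(2 * y,     2 * x);
                ushort g1 = bayer16.at<ushort>(2 * y,     2 * x + 1);
                ushort g2 = bayer16.at<ushort>(2 * y + 1, 2 * x);
                ushort b  = bayer16.at<ushort>(2 * y + 1, 2 * x + 1);
                rgb.at<cv::Vec3w>(y, x) = cv::Vec3w(r, (ushort)((g1 + g2) / 2), b);
            }
        }
        return rgb;
    }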

You can download the Cube dataset here. For any questions do not hesitate to send an e-mail to nikola.banic@fer.hr.

Unsupervised learning for color constancy

Creating calibrated datasets for illumination estimation methods is a time-consuming process that involves a significant amount of manual work and is preferably, though not necessarily, done for each sensor individually. Such datasets allow state-of-the-art learning-based illumination estimation methods to fine-tune their parameter values and to achieve high accuracy. However, if the ground truth is not available, learning-based methods cannot be easily trained. A solution is to use some kind of unsupervised learning, one example being the Color Tiger method proposed in our VISAPP 2018 paper. Its extended version is publicly available as an arXiv pre-print and additionally describes the Color Bengal Tiger method for unsupervised learning for inter-camera color constancy. The paper also describes the Cube dataset. Additionally, at the bottom of this page, in the part with the source code, you can find the Matlab code for reproducing all results from the paper.
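
As a rough illustration of the general flavor of such unsupervised learning, and explicitly not the exact Color Tiger procedure: approximate illumination estimates obtained on unlabelled images by a cheap statistics-based method can be clustered, and the learned cluster centers can then serve as the set of possible illumination estimations:

    #include <opencv2/opencv.hpp>
    #include <vector>

    // Rough sketch only: cluster approximate illumination chromaticity
    // estimates gathered from unlabelled images and return the cluster
    // centers, which can later act as the possible illuminations. This
    // illustrates the general idea, not the exact Color Tiger method.
    std::vector<cv::Vec2f> learnIlluminationCenters(
            const std::vector<cv::Vec2f>& chromaticityEstimates, int k) {
        cv::Mat samples((int)chromaticityEstimates.size(), 2, CV_32F);
        for (int i = 0; i < samples.rows; ++i) {
            samples.at<float>(i, 0) = chromaticityEstimates[i][0];
            samples.at<float>(i, 1) = chromaticityEstimates[i][1];
        }
        cv::Mat labels, centers;
        cv::kmeans(samples, k, labels,
                   cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                    100, 1e-4),
                   5, cv::KMEANS_PP_CENTERS, centers);
        std::vector<cv::Vec2f> result;
        for (int i = 0; i < centers.rows; ++i)
            result.emplace_back(centers.at<float>(i, 0), centers.at<float>(i, 1));
        return result;
    }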

Puma

The Puma algorithm is a tone mapping operator based on an adjusted version of the Naka-Rushton equation and the Smart Light Random Sprays Retinex algorithm. It is explained in detail in the EUSIPCO 2016 paper "Puma: A High-Quality Retinex-Based Tone Mapping Operator", which is available here.
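
For orientation, the basic, unadjusted Naka-Rushton equation compresses an intensity I to I / (I + σ), where σ controls the adaptation level. A minimal per-pixel sketch of this textbook form, not of Puma's adjusted variant, is given below:

    // Minimal sketch of basic Naka-Rushton tone compression applied to
    // a single intensity value: out = I / (I + sigma), rescaled so that
    // the maximum scene intensity maps to 1. This is the unadjusted
    // textbook form, not the adjusted variant used by Puma.
    double nakaRushton(double intensity, double sigma, double maxIntensity) {
        double compressed = intensity / (intensity + sigma);
        double peak = maxIntensity / (maxIntensity + sigma);
        return compressed / peak;  // normalize the output to [0, 1]
    }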

Smart Light Random Memory Sprays Retinex

Smart Light Random Memory Sprays Retinex is an image enhancement method for local brightness adjustment and color correction. It is relatively fast and produces images of high quality. It is based on the Light Random Sprays Retinex algorithm, but fixes many of its flaws and weaknesses. The paper which describes it has been accepted for publication in the Journal of the Optical Society of America A. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Firefly

Firefly is a brightness adjustment algorithm designed to be very fast, suitable for hardware implementation, and to produce high-quality results. The paper which describes it has been accepted for publication as part of ICIP 2015. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Color Ant

The Color Ant algorithm is a relatively simple learning-based algorithm that uses the k-NN algorithm at its core to perform illumination estimation. The paper "Using the red chromaticity for illumination estimation" which describes it has been accepted for publication as part of ISPA 2015.
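
As a general illustration of k-NN-based illumination estimation, and not necessarily the exact Color Ant procedure, the estimate for a new image can be taken as the mean ground-truth illumination of its k nearest training images in some feature space; the feature extraction itself is left abstract here:

    #include <algorithm>
    #include <array>
    #include <vector>

    // Illustrative k-NN illumination estimation: the illumination of a
    // new image is the mean ground-truth illumination of the k training
    // images whose feature vectors are closest to it in Euclidean
    // distance. The feature choice is deliberately left abstract.
    struct TrainingSample {
        std::vector<double> features;   // features of the training image
        double illumR, illumG, illumB;  // its ground-truth illumination
    };

    std::array<double, 3> knnEstimate(const std::vector<TrainingSample>& train,
                                      const std::vector<double>& features, int k) {
        // Compute squared distances to all training samples.
        std::vector<std::pair<double, int>> dist;
        for (int i = 0; i < (int)train.size(); ++i) {
            double d = 0.0;
            for (size_t j = 0; j < features.size(); ++j) {
                double diff = train[i].features[j] - features[j];
                d += diff * diff;
            }
            dist.push_back({d, i});
        }
        std::sort(dist.begin(), dist.end());

        // Average the ground-truth illuminations of the k nearest samples.
        std::array<double, 3> e{0.0, 0.0, 0.0};
        int used = std::min(k, (int)dist.size());
        for (int n = 0; n < used; ++n) {
            const TrainingSample& t = train[dist[n].second];
            e[0] += t.illumR; e[1] += t.illumG; e[2] += t.illumB;
        }
        for (double& c : e) c /= used;
        return e;
    }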

Smart Color Cat

The Smart Color Cat algorithm is a learning-based algorithm that is an upgrade of the Color Cat algorithm. It uses simpler features and can be trained and tested significantly faster. The paper "Using the red chromaticity for illumination estimation" which describes it has been accepted for publication as part of ISPA 2015.
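
In line with the paper title, one simple feature of this kind is a histogram of the red chromaticity r = R / (R + G + B) computed over the image pixels. A hedged sketch with an arbitrary bin count follows:

    #include <algorithm>
    #include <array>
    #include <vector>

    // Illustrative red chromaticity histogram: for every pixel, the red
    // chromaticity r = R / (R + G + B) is computed and binned. Such a
    // histogram can serve as a simple feature vector for a learning-
    // based illumination estimation method; the bin count is arbitrary.
    std::vector<double> redChromaticityHistogram(
            const std::vector<std::array<double, 3>>& pixels, int bins) {
        std::vector<double> hist(bins, 0.0);
        for (const auto& p : pixels) {
            double sum = p[0] + p[1] + p[2];
            if (sum <= 0.0) continue;              // skip black pixels
            double r = p[0] / sum;                 // r lies in [0, 1]
            int bin = std::min(bins - 1, (int)(r * bins));
            hist[bin] += 1.0;
        }
        // Normalize so that the histogram sums to 1.
        double total = 0.0;
        for (double h : hist) total += h;
        if (total > 0.0) for (double& h : hist) h /= total;
        return hist;
    }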

Color Dog

The Color Dog algorithm is a learning-based algorithm that alters the illumination estimations of other methods by using the information available from the illumination chromaticity distribution. Despite being relatively simple, the method outperforms most other methods on various kinds of images. It is explained in detail in the VISAPP 2015 paper "Color Dog: Guiding the Global Illumination Estimation to Better Accuracy", which is about to be published.
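
A minimal sketch of this kind of correction, showing only the general idea and not the exact voting scheme of Color Dog: an initial illumination estimate is snapped to the nearest of a few learned illumination chromaticity centers:

    #include <array>
    #include <vector>

    // Illustrative correction of an initial illumination estimate using
    // learned illumination chromaticity centers: the estimate is simply
    // replaced by the closest center. General idea only, not the exact
    // Color Dog procedure.
    std::array<double, 2> snapToNearestCenter(
            const std::array<double, 2>& estimate,
            const std::vector<std::array<double, 2>>& centers) {
        std::array<double, 2> best = estimate;
        double bestDist = -1.0;
        for (const auto& c : centers) {
            double dr = c[0] - estimate[0];
            double dg = c[1] - estimate[1];
            double d = dr * dr + dg * dg;
            if (bestDist < 0.0 || d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }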

Color Cat

The Color Cat algorithm is a learning-based algorithm that uses the color distribution to perform illumination estimation. It is explained in detail in the IEEE Signal Processing Letters paper "Color Cat: Remembering Colors for Illumination Estimation", which can be downloaded here.

Color Badger

The Color Badger algorithm is an extension and improvement of the Light Random Sprays Retinex (LRSR) algorithm intended to overcome LRSR's weaknesses in tone mapping. Additionally, it can also be used as a local white balancing algorithm. It is explained in detail in the ICISP 2014 paper "Color Badger: A Novel Retinex-based Local Tone Mapping Operator", which can be downloaded here. The OpenCV C++ implementation of the TMQI that was used in the testing is also available.

Color Rabbit

The Color Rabbit algorithm is essentially a modification and a more accurate upgrade of the Color Sparrow algorithm. It is explained in detail in the DSP 2014 paper "Color Rabbit: Guiding the Distance of Local Maximums in Illumination Estimation", which can be downloaded here.

Improved White Patch

The Improved White Patch algorithm is an improvement of the White Patch algorithm by means of image pixel sampling. It achieves greater accuracy while retaining the speed of the White Patch algorithm. The paper in which it is explained in detail was presented at ICIP 2014 and can be downloaded here.
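
For illustration, while the plain White Patch method takes the per-channel maximum over all pixels, a subsampling variant in the spirit of the paper computes per-channel maxima over several random pixel samples and averages them; the sample counts below are arbitrary assumptions, not the published parameters:

    #include <algorithm>
    #include <array>
    #include <random>
    #include <vector>

    // Illustrative White Patch with pixel sampling: instead of one
    // global per-channel maximum, per-channel maxima are computed over
    // several random subsets of pixels and then averaged.
    std::array<double, 3> sampledWhitePatch(
            const std::vector<std::array<double, 3>>& pixels,
            int numSamples = 10, int sampleSize = 1000) {
        std::mt19937 rng(42);
        std::uniform_int_distribution<size_t> pick(0, pixels.size() - 1);

        std::array<double, 3> estimate{0.0, 0.0, 0.0};
        for (int s = 0; s < numSamples; ++s) {
            std::array<double, 3> maxima{0.0, 0.0, 0.0};
            for (int i = 0; i < sampleSize; ++i) {
                const auto& p = pixels[pick(rng)];
                for (int c = 0; c < 3; ++c)
                    maxima[c] = std::max(maxima[c], p[c]);
            }
            for (int c = 0; c < 3; ++c) estimate[c] += maxima[c] / numSamples;
        }
        return estimate;  // per-channel illumination estimate
    }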

Light Random Sprays Retinex

The Light Random Sprays Retinex algorithm is an extension and improvement of the Random Sprays Retinex algorithm, which allows a much lower execution time and a higher resulting image quality. It is explained in detail in the IEEE Signal Processing Letters paper "Light Random Sprays Retinex: Exploiting the Noisy Illumination Estimation", which can be downloaded here. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Color Sparrow

The Color Sparrow algorithm is essentially a derivative of Random Sprays Retinex, but it is nevertheless as fast as other well-known global illumination estimation algorithms. It is explained in detail in the 2nd Croatian Computer Vision Workshop paper "Using the Random Sprays Retinex algorithm for global illumination estimation", which can be downloaded here.


Publications

Source code