Color constancy

Introduction

Color constancy is the ability of the human visual system to perceive the colors of objects as relatively constant under varying illumination conditions. While there are some similarities between human vision color constancy (HVCC) and its computer vision counterpart, i.e. computational color constancy (CVCC), the two also differ significantly. This page contains material and resources related to computational color constancy. Achieving computational color constancy requires first estimating the scene illumination and then performing chromatic adaptation, which adjusts the scene colors so that they look as they would under a desired illumination (usually daylight). If the scene illumination is assumed to be uniform, then a single global illumination can be estimated for the whole scene; otherwise, the illumination has to be estimated locally.
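
As an illustration of the chromatic adaptation step, here is a minimal sketch of the diagonal (von Kries) model that is commonly used once an illumination estimate is available; the estimated and target illuminant vectors are assumed inputs, not part of any particular method described on this page:

```python
import numpy as np

def von_kries_correction(image, estimated_illuminant, target_illuminant=(1.0, 1.0, 1.0)):
    """Diagonal (von Kries) chromatic adaptation: scale each channel so
    that the estimated scene illuminant is mapped to the target one.

    image                : H x W x 3 float array with linear RGB values
    estimated_illuminant : length-3 vector produced by some illumination
                           estimation method
    target_illuminant    : length-3 vector of the desired illumination
                           (white, i.e. (1, 1, 1), by default)
    """
    e = np.asarray(estimated_illuminant, dtype=np.float64)
    t = np.asarray(target_illuminant, dtype=np.float64)
    gains = t / e  # per-channel scaling factors
    return image * gains.reshape(1, 1, 3)
```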

Here we offer the source code of our illumination estimation, tone mapping, and brightness adjustment methods. The code available here is research code and is therefore only of prototype quality.

Links to the papers that describe these methods, together with their brief descriptions, follow below. Pre-print versions of the papers are also available at the bottom of this page.

Additionally, here you can download the Cube and Cube+ benchmark datasets for illumination estimation.

The Cube+ dataset


The Cube+ dataset is an extension of the Cube dataset described below. It contains all 1365 images of the Cube dataset and 342 additional images. The new images include indoor images as well as nighttime outdoor images. The main reason for extending the dataset was to add more diversity to the ground-truth illuminations and thus make the extended dataset more challenging than the original Cube dataset. The number of new illuminations was chosen so that the overall distribution of illuminations in the extended dataset is similar to the one found in the NUS datasets.

The calibration object placed in the image scenes and the camera used to take the images are the same ones that were used to create the Cube dataset. The new images were taken and their ground-truth illumination was determined by following the same methodology that was used when creating the Cube dataset. For black level subtraction and clipped pixel removal, the same instructions that were given for the Cube dataset apply here.
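
For orientation, a minimal preprocessing sketch along these lines might look as follows; the black level of 2048 and the 14-bit saturation value are assumptions about the camera used and should be checked against the actual Cube dataset instructions:

```python
import numpy as np

# Assumed values, to be verified against the Cube dataset instructions:
BLACK_LEVEL = 2048          # assumed sensor black level
SATURATION = 2 ** 14 - 1    # assumed 14-bit sensor maximum

def preprocess(raw, clip_margin=0.95):
    """Subtract the black level and build a mask of usable pixels."""
    img = raw.astype(np.float64) - BLACK_LEVEL
    img = np.clip(img, 0, None)  # values below the black level are noise
    # Mark pixels near saturation as clipped so they can be ignored
    # during illumination estimation.
    valid = raw < clip_margin * SATURATION
    return img, valid
```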

The Cube+ dataset includes raw CR2 images, minimally processed PNG images, and JPG images. The main part consists of the PNG images, which were obtained from the raw images by using dcraw with the options -D -4 -T and then applying only simple subsampling for debayering.
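
A sketch of what that pipeline could look like, assuming an RGGB Bayer pattern with even image dimensions and the tifffile package for reading the 16-bit TIFF that dcraw produces:

```python
import numpy as np
import tifffile  # assumed available for reading 16-bit TIFF files

# The raw mosaic is first extracted without demosaicing, e.g.:
#   dcraw -D -4 -T IMG_0001.CR2
# which writes a linear 16-bit TIFF of the raw Bayer data.

def subsample_debayer(mosaic):
    """Half-resolution debayering by simple subsampling: one output
    pixel per 2x2 block, assuming an RGGB Bayer pattern."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2].astype(np.float64) +
         mosaic[1::2, 0::2]) / 2.0  # average of the two green sites
    b = mosaic[1::2, 1::2]
    return np.dstack((r, g, b))

mosaic = tifffile.imread("IMG_0001.tiff")  # hypothetical file name
rgb = subsample_debayer(mosaic)
```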

You can download the Cube+ dataset here. Note that the file names are not zero-padded, which may cause confusion with the image ordering: the first line in the ground-truth file(s) corresponds to the image stored in 1.png, the second line corresponds to the image in 2.png, etc. For any questions, do not hesitate to send an e-mail to nikola.banic@fer.hr.
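
Because of the non-zero-padded names, pairing images with their ground truth should go through the image number rather than a lexicographic sort of the file names. A short sketch, where the ground-truth file name is hypothetical and its three-numbers-per-line format is assumed:

```python
# "cube+_gt.txt" is a hypothetical name; adjust to the actual archive.
with open("cube+_gt.txt") as f:
    ground_truth = [tuple(map(float, line.split())) for line in f]

# Line i (0-based) corresponds to the image "{i + 1}.png". Sorting the
# file names lexicographically would break this pairing, since e.g.
# "10.png" sorts before "2.png" when names are not zero-padded.
illuminant_for_image_42 = ground_truth[42 - 1]
```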

The Cube dataset