Optical coherence tomography (OCT) images of the retina provide a structural representation of the tissue and give insight into the pathological changes present in neovascular age-related macular degeneration (nAMD). Because of the three-dimensionality and complexity of the images, manual analysis of pathological features is difficult, time-consuming, and prone to subjectivity. For this reason, expert ophthalmologists usually limit their analysis to qualitative aspects such as the detection of pathological changes. Computer analysis of 3D OCT images is needed to measure these features automatically, objectively, and repeatably. During our research, the lack of openly available databases of annotated images made it difficult to compare individually developed models. Furthermore, we consider a publicly available database of labeled retinal structures, combining images from different devices and from patients with various diseases as well as healthy subjects, necessary for developing algorithms robust enough to be implemented in software for OCT devices.
In collaboration with Sestre milosrdnice University Hospital Center (Zagreb, Croatia), images were annotated for 25 patients with nAMD. Macular SD-OCT volumes were recorded with the Zeiss Cirrus HD OCT 4000 device. Each OCT volume consisted of 128 B-scans with a resolution of 1024 x 512 pixels (pixel size 1.96 x 11.74 μm). The images are anonymized and do not contain any patient data.
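The stated dimensions imply a fixed volume geometry. The following sketch illustrates this with a synthetic array; the actual AROI release format is not described here, so the array is a placeholder, and only the dimensions and pixel size come from the text above.

```python
import numpy as np

# Placeholder for a loaded AROI volume (the real file format is not
# specified here); the dimensions below are the ones stated for the dataset.
N_BSCANS, HEIGHT, WIDTH = 128, 1024, 512      # B-scans per volume, pixels per B-scan
PIXEL_H_UM, PIXEL_W_UM = 1.96, 11.74          # axial x lateral pixel size in micrometres

volume = np.zeros((N_BSCANS, HEIGHT, WIDTH), dtype=np.uint8)  # stands in for real data

# Physical extent of a single B-scan implied by the stated pixel size
depth_mm = HEIGHT * PIXEL_H_UM / 1000.0   # axial extent, ~2 mm
width_mm = WIDTH * PIXEL_W_UM / 1000.0    # lateral extent, ~6 mm
print(volume.shape, round(depth_mm, 2), round(width_mm, 2))
```

This gives roughly a 2 mm x 6 mm field per B-scan, which matches typical macular SD-OCT acquisition settings.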
Retinal fluids and layers are annotated for 1136 B-scans (out of 3200 B-scans in total for the 25 patients). Not every B-scan is annotated: when adjacent B-scans are very similar, some are skipped. The following fluids are annotated: pigment epithelial detachment (PED), subretinal fluid and subretinal hyperreflective material (marked jointly as SRF), and intraretinal fluid (IRF).
In addition, four boundaries between the layers are annotated. The retina is a layered tissue inside the eye and is usually divided into ten layers. We selected boundaries that can be readily determined in virtually all images and that are relevant for localizing the observed fluids: the internal limiting membrane (ILM), the boundary between the inner plexiform layer and the inner nuclear layer (IPL/INL), the retinal pigment epithelium (RPE), and Bruch's membrane (BM).
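Four boundaries ordered from top to bottom partition each B-scan into regions, which is how boundary annotations are typically turned into a per-pixel layer segmentation. The sketch below is illustrative only (toy dimensions, made-up boundary positions, and a hypothetical label numbering, none of which are taken from the AROI release format):

```python
import numpy as np

# Toy B-scan size for illustration
HEIGHT, WIDTH = 16, 4
# Hypothetical boundary curves as per-column row indices, top to bottom:
# ILM, IPL/INL, RPE, BM (constant here for simplicity)
ilm     = np.full(WIDTH, 2)
ipl_inl = np.full(WIDTH, 5)
rpe     = np.full(WIDTH, 9)
bm      = np.full(WIDTH, 12)

rows = np.arange(HEIGHT)[:, None]                    # row indices, broadcast over width
labels = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)   # 0 = above ILM (vitreous)
labels[(rows >= ilm) & (rows < ipl_inl)] = 1         # ILM to IPL/INL
labels[(rows >= ipl_inl) & (rows < rpe)] = 2         # IPL/INL to RPE
labels[(rows >= rpe) & (rows < bm)] = 3              # RPE to BM
labels[rows >= bm] = 4                               # below BM
```

The four boundaries thus yield five regions per B-scan; fluid annotations (PED, SRF, IRF) would be overlaid as additional classes.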
To calculate inter-observer error, an additional expert annotated 75 B-scans. To calculate intra-observer error, the original expert re-annotated 75 B-scans without access to the previous annotations.
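Observer agreement on segmentation masks is commonly quantified with the Dice similarity coefficient; the sketch below shows that standard metric on toy masks (it is not necessarily the exact error measure used for AROI, and the masks are made up for illustration):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two slightly different annotations of the same fluid region
m1 = np.zeros((8, 8), dtype=bool); m1[2:6, 2:6] = True   # 16 pixels
m2 = np.zeros((8, 8), dtype=bool); m2[3:7, 2:6] = True   # same size, shifted one row
print(round(dice(m1, m2), 3))   # overlap 12 px -> 2*12/(16+16) = 0.75
```

Computed per structure (each fluid type or layer region) over the doubly annotated B-scans, such a score gives an upper bound on the agreement an automated method can be expected to reach.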
Data collection adhered to the tenets of the Declaration of Helsinki and the standards of Good Scientific Practice of Sestre milosrdnice University Hospital Center. The presented study was approved by the Ethics Committee of the Sestre milosrdnice University Hospital Center (Zagreb, Croatia) and the Faculty of Electrical Engineering and Computing (Zagreb, Croatia).
Using the database
The images included in the AROI database can be used, free of charge, for research and educational purposes. Copying, redistribution, and any unauthorized commercial use are prohibited. Any researcher reporting results obtained with the database should acknowledge it by citing the following publication:
Martina Melinščak, Marin Radmilović, Zoran Vatavuk & Sven Lončarić (2021) Annotated retinal optical coherence tomography images (AROI) database for joint retinal layer and fluid segmentation, Automatika, 62:3, 375-385, DOI: 10.1080/00051144.2021.1973298
In addition, we would like to hear about any publications that use the database. The person to contact for further information is Martina Melinščak.
How to download the database
Please send an e-mail to Martina Melinščak to request access to the dataset.
We would like to thank the Department of Ophthalmology, Sestre milosrdnice University Hospital Center (Zagreb, Croatia). Special thanks to Prof. Zoran Vatavuk, MD, and to Marin Radmilović, MD, who performed the image labeling.