DOE Artificial Retina Project

Archive Site Provided for Historical Purposes

Funding for this work ended in FY 2011.


Lab Spotlight: California Institute of Technology
Seeing is Processing

A novel software system not only processes incoming images in real time but also enhances what retinal implant recipients perceive

ARIVS palette

Typical palette of Artificial Retinal Implant Vision Simulator (ARIVS) image-processing modules that are applied in real time to the video camera stream driving the artificial retina. [Credit: California Institute of Technology]

The human retina is not just a detector of light that sends optical information to the brain. It also performs complex image processing to provide the brain with optimized visual information. Replacing diseased photoreceptors with the electrodes of an artificial retina therefore not only reduces the number of pixels but also disrupts this necessary image processing.

To restore that lost function, researchers at the California Institute of Technology’s Visual and Autonomous Exploration Systems Research Laboratory, under the direction of Wolfgang Fink, are developing software to pre-process the information from implant patients’ miniature cameras before it is fed to their retinal prostheses. Dubbed the Artificial Retinal Implant Vision Simulator (ARIVS), this software system provides real-time image processing and enhancement to improve the limited vision afforded by the camera-driven device. Preserving and enhancing contrast differences and transitions such as edges is especially important; fine picture details such as object texture matter less.

Since predicting exactly what blind subjects may be able to perceive is difficult, ARIVS offers a wide variety of image-processing filters. They include contrast and brightness enhancement, grayscale equalization for luminance control under severe lighting conditions, user-defined grayscale levels for reducing the data volume transmitted to the visual prosthesis, blur algorithms, and edge detection (see graphic at right). These filters are not unlike the battery of tests performed during a regular eye exam to determine the proper eyeglass prescription. In this case, retinal implant recipients can fine-tune, optimize, and customize their individual visual perception by choosing among the filters, adjusting the parameters of individual filters, or altering the sequence in which the filters are applied.
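The filter-chain idea described above can be sketched in a few lines of Python with NumPy. Everything here is illustrative — the function names, the gradient-based edge detector, and the four-level quantization are assumptions for the sketch, not the actual ARIVS implementation:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch grayscale values to span the full 0-255 range."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def quantize(img, levels=4):
    """Reduce the image to a user-defined number of grayscale levels,
    shrinking the data volume sent to the visual prosthesis."""
    step = 256 // levels
    return ((img // step) * step).astype(np.uint8)

def detect_edges(img):
    """Crude gradient-magnitude edge detector (a stand-in, not the
    project's actual edge-detection filter)."""
    gy, gx = np.gradient(img.astype(float))
    return stretch_contrast(np.hypot(gx, gy))

def run_pipeline(frame, filters):
    """Apply a user-chosen sequence of filters to one camera frame."""
    for f in filters:
        frame = f(frame)
    return frame

# Example: a low-contrast 8x8 test frame run through two filters,
# mimicking how a recipient might reorder or reparameterize the chain.
frame = np.tile(np.linspace(100, 140, 8, dtype=np.uint8), (8, 1))
out = run_pipeline(frame, [stretch_contrast, lambda im: quantize(im, levels=4)])
```

Because each filter is just a function from frame to frame, recipients (or clinicians) can reorder the list or swap parameters without touching the rest of the pipeline.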

An incomparably greater challenge exists in predicting how to electrically stimulate the retina of a blind subject via the retinal prosthesis to elicit a visual perception that matches an object or scene as captured by the camera system that drives the prosthesis. This requires the efficient translation of the camera stream, pre-processed by ARIVS, into patterns of electrical stimulation of retinal tissue by the implanted electrode array. The Caltech researchers on the U.S. Department of Energy’s team are addressing this challenge by developing and testing multivariate optimization algorithms based on evolutionary principles. These algorithms are used to modify the electrical stimulation patterns administered by the electrode array to optimize visual perception. Operational tests with Argus™ I users are currently under way.
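The general shape of such an evolutionary search can be sketched as follows. Everything in this toy version is a hypothetical illustration: in the real system the fitness signal would come from patient feedback on what they actually perceive, whereas here a synthetic target pattern stands in so the sketch is runnable. Population size, mutation scheme, and electrode count are likewise assumptions:

```python
import random

def pattern_error(pattern, target):
    """Mean squared mismatch between a stimulation pattern and the
    (synthetic) target perception; lower is better."""
    return sum((p - t) ** 2 for p, t in zip(pattern, target)) / len(target)

def evolve(target, n_electrodes=16, pop_size=20, generations=200, seed=0):
    """Toy evolutionary search for an electrode stimulation pattern.

    Stimulation amplitudes are normalized to [0, 1]. The fitter half of
    each generation survives; the other half is replaced by mutated
    copies of surviving parents.
    """
    rng = random.Random(seed)

    def mutate(pattern, rate=0.2, sigma=0.1):
        # Perturb each amplitude with small Gaussian noise, clamped to [0, 1].
        return [min(1.0, max(0.0, p + rng.gauss(0, sigma)))
                if rng.random() < rate else p
                for p in pattern]

    # Random initial population of stimulation patterns.
    pop = [[rng.random() for _ in range(n_electrodes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: pattern_error(p, target))
        parents = pop[: pop_size // 2]           # elitist selection
        children = [mutate(rng.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda p: pattern_error(p, target))

# Desired per-electrode perception (alternating off/on), as a stand-in
# for the camera-derived target scene.
target = [0.0, 1.0] * 8
best = evolve(target)
```

The key design point is that the search needs only a scalar quality score per candidate pattern — which is what makes patient-in-the-loop feedback a usable fitness function even though no analytic model maps stimulation to perception.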


The Artificial Retina Project was part of the
Biological and Environmental Research Program
of the U.S. Department of Energy Office of Science



Last modified: Monday, July 10, 2017