We will develop an Optogenetic Brain System (OBServe) to optimally activate visual responses in the primary visual cortex (V1) from a video camera. There are currently no methods for generating optimized spatiotemporal cortical patterns that evoke naturalistic visual responses and perception. There are also no cortical prosthetic systems that account for the effects of eye movements, which are known to drive more than half of V1's neural activity, and none that operate at peak visual acuity with maximal contrast sensitivity and full stereoscopic control. Visual prosthetic microstimulation has been limited by the deleterious effect of co-activating antagonistic neurons that cancel each other, resulting in poor perceived contrast. These gaps in knowledge have prevented the development of prosthetics that help patients with retinal ganglion cell damage or optic nerve/tract dysfunction. We have developed the first approach that avoids mixed ON/OFF visual pathway activation, and unwanted mixing of excitatory and inhibitory cells, in human cortex. We propose transformative advances in viral transfection and imaging methodology, computational theory, and cortical prosthetic neuroengineering design to overcome these gaps. Our approach will be to optogenetically stimulate LGN neuronal boutons in layer 4 of V1, which are purely excitatory inputs. We will identify the ON/OFF and ocular dominance preference of each LGN input at each retinotopic stimulation position. This will optimize targeting of the appropriate inputs and achieve maximal contrast at the highest attainable acuity, with stereoscopic fidelity.
We will build a computational model of how LGN inputs spatiotemporally activate V1 when a functioning visual system views a stimulus, and then optogenetically stimulate the LGN inputs to transform images from a video camera into the appropriate naturalistic responses, shaped by our computational model, achieving the same perceptual response in real time. We will also account for the effects of eye movements with a novel model of oculomotor influence on early visual responses. We will employ an innovative approach to read out visual neurons for full feedback control of the prosthetic input, continually adjusting the parity between stimulation and cortical activation state, which has been shown to be critical in previous implantable cortical prosthetic systems. We will develop novel transfection of V1 pyramidal neurons to label them with one of 10,000 stochastically chosen color codes from an array of bioluminescent calcium-activated proteins. We will then monitor colored photons over time, using a hyperspectral CCD similar to a mobile phone camera, to determine the activity of each neuron via a color lookup table. Our breakthrough dimensionality-reducing similarity matching method will then interpret the streamed data from this chip in real time, decoding it for real-time feedback to achieve full read/write control of 1000+ neurons initially. The device will be fully encapsulated within a 1 cm³ case with no percutaneous wiring, will communicate externally via telemetry, and will be powered by a battery charged through an induction loop.
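The color-lookup readout above can be illustrated with a minimal sketch. Assuming each labeled neuron carries a known spectral code (a row of the lookup table) and that a pixel's photon spectrum is approximately a nonnegative mixture of those codes, per-neuron activity can be recovered by least squares. The neuron counts, band counts, and random codes below are illustrative assumptions, not the proposal's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each transfected neuron expresses one stochastic
# spectral "color code", sampled across the hyperspectral CCD's bands.
n_neurons, n_bands = 5, 16
codes = rng.random((n_neurons, n_bands))
codes /= np.linalg.norm(codes, axis=1, keepdims=True)  # lookup-table rows

def decode_frame(spectrum, codes):
    """Estimate per-neuron activity from one pixel's photon spectrum.

    The mixed spectrum is modeled as a nonnegative combination of code
    spectra; least squares (clipped at zero) recovers the weights, i.e.
    each neuron's bioluminescent output in this frame.
    """
    acts, *_ = np.linalg.lstsq(codes.T, spectrum, rcond=None)
    return np.clip(acts, 0.0, None)

# Simulate one noiseless frame in which neurons 0 and 3 are active.
true_acts = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
frame = codes.T @ true_acts
est = decode_frame(frame, codes)
print(np.round(est, 2))  # recovers [2. 0. 0. 1. 0.]
```

In practice the proposal's streaming similarity-matching method would replace the batch least-squares step; this sketch only shows why distinct spectral codes make the per-neuron demixing well posed.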

The goal of this project is to deliver a viable Optogenetic Brain System (OBServe), ready for FDA Investigational Device Exemption (IDE) approval. OBServe will be based on an innovative coplanar LED/hyperspectral-imager Application-Specific Integrated Circuit (ASIC) that will optically stimulate lateral geniculate nucleus (LGN) inputs and read out the resultant activity from primary visual cortex (V1) for real-time feedback control. It will combine a novel optogenetic approach, viral (genetic) therapy, and computational technologies. The self-contained, biocompatible implant will not require percutaneous connections for data or power. Of the ~39 million people with blindness worldwide, only about half can potentially be helped by retinal implants. OBServe will thus serve as a critical new therapeutic in neuro-ophthalmology, providing a treatment pathway for the ~19 million patients with no other recourse, such as those with severe traumatic ocular injury, macular degeneration, diabetic retinopathy, or glaucoma.

We will leverage experience in the retinal implant field to enhance OBServe technology at the cortical level and to optimize contrast sensitivity, acuity, and form vision. OBServe will, for the first time, provide stereoscopic activation and will also address the prodigious effects of eye movements, which drive more than half of the visual activity in V1. We will develop the methods and computational theory necessary to map visual cortical circuits precisely in individual patients with blindness, using solely prosthetic visual stimulation, an achievement never before attained. This will allow us, for the first time, to transform visual images from a camera into optogenetic cortical stimulation (optostim), prosthetically inputting vision into at least 1000 neurons initially and then increasing this number by orders of magnitude. We will also read those neurons' activity in real time with a hyperspectral CCD chip, using a breakthrough multi-color bioluminescent viral calcium-indicator system to provide streaming feedback control.
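The streaming feedback control described above amounts to a closed loop: decoded V1 activity is compared against the commanded stimulation pattern, and optostim drive is nudged until the two reach parity. The proportional rule, gain, and saturating cortical response below are illustrative assumptions, not the proposal's actual controller.

```python
import numpy as np

def parity_update(drive, target, measured, gain=0.3, lo=0.0, hi=1.0):
    """One closed-loop step: adjust optostim drive toward parity between
    the commanded pattern (target) and decoded V1 activity (measured).

    A plain proportional rule, clipped to the LED's valid drive range.
    """
    return np.clip(drive + gain * (target - measured), lo, hi)

def plant(drive):
    """Toy cortical response: evoked activity saturates with LED drive."""
    return 1.5 * drive / (1.0 + drive)

target = np.array([0.4, 0.6, 0.2])   # desired per-site activity
drive = np.zeros_like(target)
for _ in range(200):                 # streaming feedback iterations
    drive = parity_update(drive, target, plant(drive))
print(np.round(plant(drive), 2))     # settles near the target pattern
```

Because the toy response is monotone and the gain is small, the loop converges without knowing the response curve in advance, which is the point of feedback here: the controller compensates for an unknown, nonlinear mapping from stimulation to cortical activation.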

OBServe will be an FDA-approved medical brain implant that allows blind patients to read newspapers, books, and magazines and to view TV, movies, and cell phone screens, restoring normal oculomotor and visual function within the activated field of view (FOV). Once this is achieved, we will validate and manufacture OBServe as a clinical medical device and develop the accompanying surgical, regulatory, and manufacturing protocols. We will also test OBServe's precise spatial and stereoscopic acuity, its contrast sensitivity, its utility in correctly discriminating complex objects and foveally fixated written symbols, and its functionality across different ambient environmental conditions in non-human primate (NHP) preclinical testing, to prepare for First-In-Human (FIH) clinical tests.