Title:
SYSTEMS AND METHODS FOR ARTIFICIAL SIGHT PROSTHETICS
Document Type and Number:
WIPO Patent Application WO/2020/191408
Kind Code:
A1
Abstract:
Systems and methods for artificial sight in accordance with embodiments of the invention are illustrated. One embodiment includes a retinal prosthesis system including an external controller, a scene imager, an eye imager, an implanted controller, and a stimulation interface in communication with the implanted controller, where the stimulation interface is positioned to stimulate a plurality of retinal ganglion cells (RGCs) of the eye, where the external controller is configured to obtain image data describing a scene from the scene imager, obtain eye position data from the eye imager, determine a field of view (FOV) in the scene based on the eye position data; where the implanted controller is configured to obtain the FOV from the external controller, continuously select stimulation pulses from a dictionary based on the FOV, and stimulate the plurality of RGCs using the stimulation interface in accordance with the selected stimulation pulses.

Inventors:
SHAH NISHAL (US)
CHICHILNISKY EDUARDO (US)
Application Number:
PCT/US2020/024298
Publication Date:
September 24, 2020
Filing Date:
March 23, 2020
Assignee:
UNIV LELAND STANFORD JUNIOR (US)
International Classes:
A61N1/36; A61F2/14; A61N1/05
Foreign References:
US20060058857A12006-03-16
US20120143080A12012-06-07
Other References:
ASHER ET AL.: "Image processing for a high-resolution optoelectronic retinal prosthesis", IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, vol. 54.6, 2007, pages 993 - 1004, XP011181882, Retrieved from the Internet DOI: 10.1109/TBME.2007.894828
SHAH ET AL.: "Optimization of electrical stimulation for a high-fidelity artificial retina", 2019 9TH INTERNATIONAL IEEE /EMBS CONFERENCE ON NEURAL ENGINEERING (NER), 20 March 2019 (2019-03-20), pages 714 - 718, XP033551652, Retrieved from the Internet
See also references of EP 3941572A4
Attorney, Agent or Firm:
FINE, Isaac, M. (US)
Claims:

1. A retinal prosthesis system comprising:

an external controller;

a scene imager in communication with the external controller;

an eye imager in communication with the external controller;

an implanted controller in communication with the external controller via a first communication channel; and

a stimulation interface in communication with the implanted controller via a second communication channel, where the stimulation interface is positioned to stimulate a plurality of retinal ganglion cells (RGCs) of the eye, and the stimulation interface comprises a dense electrode grid;

where the external controller is configured to:

obtain image data describing a scene from the scene imager;

obtain eye position data from the eye imager;

determine a field of view (FOV) in the scene based on the eye position data; and

transmit the FOV to the implanted controller;

where the implanted controller is configured to:

obtain the FOV from the external controller;

continuously select stimulation pulses from a dictionary based on the FOV; and

stimulate the plurality of RGCs using the stimulation interface in accordance with the selected stimulation pulses.

2. The system of claim 1, wherein the stimulation of the RGCs is performed at between 10 kHz and 100 kHz.

3. The system of claim 1, wherein the dictionary is a master dictionary; and the external controller is further configured to transmit the master dictionary to the implanted controller.

4. The system of claim 3, wherein to compile the master dictionary, the external controller is configured to:

stimulate the plurality of RGCs using the stimulation interface via the implanted controller;

record responses of the plurality of RGCs based on the stimulation using the stimulation interface, via the implanted controller; and

generate a set of dictionary elements indicating neural activity in response to specific stimulation provided by the stimulation interface.

5. The system of claim 3, wherein the master dictionary is a reduced master dictionary.

6. The system of claim 1, wherein:

the dictionary is a FOV dictionary;

wherein to compile the FOV dictionary, the external controller is configured to select elements from a master dictionary which are most relevant to the FOV; and

the external controller is further configured to transmit the FOV dictionary to the implanted controller.

7. The system of claim 6, wherein to select elements from the master dictionary, the external controller is configured to:

calculate a vector $q^* = \arg\max_q f(S, Dq)$ subject to $q \geq 0$ and $\sum_i q_i \leq B$; and

select dictionary elements of $q^*$ which are non-zero.

8. The system of claim 6, wherein the internal controller is further configured to use a precompiled dictionary in an interim period between receiving the FOV and receiving the FOV dictionary.

9. The system of claim 1, wherein the first communication channel is a wireless communication channel utilizing a near-field magnetic communication technology.

10. The system of claim 1, wherein the first communication channel is a wireless communication channel utilizing a radio frequency communication technology conforming to the Bluetooth standard.

11. The system of claim 1, wherein the second communication channel is a wired communication channel configured to power the dense electrode grid.

12. The system of claim 1, wherein the second communication channel is a wireless communication channel.

13. The system of claim 1, wherein stimulation of the plurality of RGCs occurs with sufficient speed to trigger the temporal dithering effect.

14. The system of claim 1, wherein to select stimulation pulses from the dictionary and the FOV, the implanted controller is further configured to utilize a greedy algorithm.

15. The system of claim 1, wherein to select stimulation pulses from the dictionary and the FOV, the implanted controller is further configured to utilize a dynamic programming algorithm.

16. The system of claim 1, wherein the implanted controller is configured to be implanted in the episcleral layer of an eye.

17. The system of claim 1, wherein the implanted controller is incorporated into a contact lens.

18. A method for providing artificial sight, comprising:

obtaining image data describing a scene from a scene imager;

obtaining eye position data from an eye imager;

determining a field of view (FOV) in the scene based on the eye position data using a retinal prosthesis;

continuously selecting stimulation pulses from a dictionary and the FOV using the retinal prosthesis; and

stimulating a plurality of retinal ganglion cells using a stimulation interface of the retinal prosthesis in accordance with the selected stimulation pulses.

19. The method of claim 18, wherein the dictionary is a FOV dictionary which is compiled by selecting elements from a master dictionary which are most relevant to the FOV.

20. The method of claim 18, wherein selecting stimulation pulses from the dictionary based on the FOV is achieved using a greedy algorithm.

Description:
Systems and Methods for Artificial Sight Prosthetics

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] The current application claims the benefit of and priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application No. 62/821,763, entitled "Systems and Methods for Dictionary-Based Artificial Sight Prosthetics," filed March 21, 2019. The disclosure of U.S. Provisional Patent Application No. 62/821,763 is hereby incorporated by reference in its entirety for all purposes.

STATEMENT OF FEDERALLY SPONSORED RESEARCH

[0001] This invention was made with Government support under contract 1430348 awarded by the National Science Foundation and under contract EY021271 awarded by the National Institutes of Health. The Government has certain rights in the invention.

FIELD OF THE INVENTION

[0002] The present invention generally relates to artificial sight, and, more specifically, to artificial retina vision restoration apparatuses using dictionary-based image reconstruction and temporal dithering.

BACKGROUND

[0003] The eye is a highly complex biological organ that grants an organism the ability to receive visual information by converting light into electrical impulses (spikes) in neurons. In the human eye, light passes through the cornea, then through a dilated pupil, and then through an adjustable lens that can be used to manipulate focus. The focused light travels through a cavity filled with vitreous humor until it hits photoreceptors, the photosensitive cells of the retina (i.e. rods and cones). Depending on the amplitude and wavelength of the light, the photoreceptors generate and transmit electrical signals to the brain via retinal ganglion cells, where they are processed to produce the sense of "sight." The inability to see is called blindness. Blindness may result from any number of different sources including, but not limited to, damage to the photoreceptors, the eye, the optic nerve, brain damage, and/or other forms of damage to the structures associated with sight.

[0004] Digital cameras are image-forming optical systems that are capable of receiving light and generating an image of a scene. Cameras include at least one lens and a photosensitive sensor. Computational processing techniques are used to resolve the received light into an image.

SUMMARY OF THE INVENTION

[0005] Systems and methods for artificial sight in accordance with embodiments of the invention are illustrated. One embodiment includes a retinal prosthesis system including an external controller, a scene imager in communication with the external controller, an eye imager in communication with the external controller, an implanted controller in communication with the external controller via a first communication channel, and a stimulation interface in communication with the implanted controller via a second communication channel, where the stimulation interface is positioned to stimulate a plurality of retinal ganglion cells (RGCs) of the eye, and the stimulation interface comprises a dense electrode grid, where the external controller is configured to obtain image data describing a scene from the scene imager, obtain eye position data from the eye imager, determine a field of view (FOV) in the scene based on the eye position data; and transmit the FOV to the implanted controller, where the implanted controller is configured to obtain the FOV from the external controller, continuously select stimulation pulses from a dictionary based on the FOV, and stimulate the plurality of RGCs using the stimulation interface in accordance with the selected stimulation pulses.

[0006] In another embodiment, the stimulation of the RGCs is performed at between 10 kHz and 100 kHz.

[0007] In a further embodiment, the dictionary is a master dictionary; and the external controller is further configured to transmit the master dictionary to the implanted controller.

[0008] In still another embodiment, to compile the master dictionary, the external controller is configured to stimulate the plurality of RGCs using the stimulation interface via the implanted controller, record responses of the plurality of RGCs based on the stimulation using the stimulation interface, via the implanted controller, and generate a set of dictionary elements indicating neural activity in response to specific stimulation provided by the stimulation interface.

[0009] In a still further embodiment, the master dictionary is a reduced master dictionary.

[0010] In yet another embodiment, the dictionary is a FOV dictionary, wherein to compile the FOV dictionary, the external controller is configured to select elements from a master dictionary which are most relevant to the FOV, and the external controller is further configured to transmit the FOV dictionary to the implanted controller.

[0011] In a yet further embodiment, to select elements from the master dictionary, the external controller is configured to calculate a vector $q^* = \arg\max_q f(S, Dq)$ subject to $q \geq 0$ and $\sum_i q_i \leq B$, and select dictionary elements of $q^*$ which are non-zero.

[0012] In another additional embodiment, the internal controller is further configured to use a precompiled dictionary in an interim period between receiving the FOV and receiving the FOV dictionary.

[0013] In a further additional embodiment, the first communication channel is a wireless communication channel utilizing a near-field magnetic communication technology.

[0014] In another embodiment again, the first communication channel is a wireless communication channel utilizing a radio frequency communication technology conforming to the Bluetooth standard.

[0015] In a further embodiment again, the second communication channel is a wired communication channel configured to power the dense electrode grid.

[0016] In still yet another embodiment, the second communication channel is a wireless communication channel.

[0017] In a still yet further embodiment, stimulation of the plurality of RGCs occurs with sufficient speed to trigger the temporal dithering effect.

[0018] In still another additional embodiment, to select stimulation pulses from the dictionary and the FOV, the implanted controller is further configured to utilize a greedy algorithm.

[0019] In a still further additional embodiment, to select stimulation pulses from the dictionary and the FOV, the implanted controller is further configured to utilize a dynamic programming algorithm.

[0020] In still another embodiment again, the implanted controller is configured to be implanted in the episcleral layer of an eye.

[0021] In a still further embodiment again, the implanted controller is incorporated into a contact lens.

[0022] In yet another additional embodiment, a method for providing artificial sight, including obtaining image data describing a scene from a scene imager, obtaining eye position data from an eye imager, determining a field of view (FOV) in the scene based on the eye position data using a retinal prosthesis, continuously selecting stimulation pulses from a dictionary and the FOV using the retinal prosthesis, and stimulating a plurality of retinal ganglion cells using a stimulation interface of the retinal prosthesis in accordance with the selected stimulation pulses.

[0023] In a yet further additional embodiment, the dictionary is a FOV dictionary which is compiled by selecting elements from a master dictionary which are most relevant to the FOV.

[0024] In yet another embodiment again, selecting stimulation pulses from the dictionary based on the FOV is achieved using a greedy algorithm.

[0025] Additional embodiments and features are set forth in part in the description that follows, and in part will become apparent to those skilled in the art upon examination of the specification or may be learned by the practice of the invention. A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings, which form a part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] The description and claims will be more fully understood with reference to the following figures and data graphs, which are presented as exemplary embodiments of the invention and should not be construed as a complete recitation of the scope of the invention.

[0027] FIG. 1 conceptually illustrates an artificial sight system in accordance with an embodiment of the invention.

[0028] FIG. 2 conceptually illustrates an artificial sight system in accordance with an embodiment of the invention.

[0029] FIG. 3 is a block diagram illustrating an implanted controller of an artificial sight system in accordance with an embodiment of the application.

[0030] FIG. 4 illustrates a scene broken down into three FOVs based on fixation points in accordance with an embodiment of the invention.

[0031] FIG. 5 is a flow chart illustrating an artificial sight process for generating master dictionaries in accordance with an embodiment of the invention.

[0032] FIG. 6 is a flow chart illustrating an artificial sight process for selecting dictionary elements for stimulating retinal cells in accordance with an embodiment of the invention.

[0033] FIG. 7 is a chart illustrating a timeline of events during an artificial sight process in accordance with an embodiment of the invention.

[0034] FIG. 8 illustrates six charts demonstrating a stimulation frequency of electrodes, the performance of a greedy algorithm as compared to an optimal algorithm, a sample target stimulus, and the varying fidelity of artificial sight given activation of different amounts of electrodes in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

[0035] Sight is one of the most important senses available to a person, if not the most important, as vision is the primary way in which humans perceive their environment. For example, when communicating with another person, a significant amount of information is obtained via nonverbal communication (e.g. body language). Indeed, sight is valuable for nearly every human activity. Systems and methods described herein are capable of granting or restoring sight to individuals that have at least partially intact retinas but are otherwise partially or completely blind.

[0036] Retinal prosthetics are devices that use electrodes to stimulate retinal ganglion cells in a specific manner to evoke visual perception of the surrounding environment in a blind patient. In order to create high quality artificial vision, retinal prosthetics need to accurately reproduce the diverse and specific firing patterns of approximately 20 different retinal ganglion cell (RGC) types, each conveying a distinct visual feature to the brain. However, a single stimulation electrode can simultaneously activate multiple adjacent cells, regardless of their type, making it difficult to replicate the asynchronous neural code employed by human physiology. Systems and methods described herein can address these problems by identifying the RGCs of different types in a patient's retina, measuring their responses to electrical stimulation, and using this information to optimize the artificial visual signal by real-time control of stimulation using personalized dictionaries and algorithms.

[0037] Further, while modern retinal prosthetic systems perform visual signal processing on external control devices, the lag time for sending stimulation information from outside the body to inside the body can cause significant degradation in sight quality, particularly because normal vision is closely locked to rapid eye movements, which may disrupt wireless communication between implanted and external devices. However, computational processing generates significant amounts of heat, and excessive amounts of heat can damage the structure of the eye, making it difficult to perform significant computations in an implanted device. Systems and methods described herein are capable of assigning processing tasks across external and implanted controllers in order to optimize processing and communication speeds to decrease the lag time and heat load applied to the eye and retina. In numerous embodiments, implanted components do not require more than 1 milliwatt per square millimeter on the surface of the retina (1 mW/mm²).

[0038] While current artificial retina systems have achieved limited success with enabling patients to generally navigate some areas via artificial sight, higher fidelities have not been achieved. Systems and methods described herein attempt to resolve these problems using efficient visual stimulation strategies based on tuning the device to the neural circuitry of the retina and using dictionary-based approaches for rapid computation. Turning now to the drawings, systems for artificial sight are discussed below.

Artificial sight systems

[0039] Systems for artificial sight are retinal prosthetic systems that can generate dictionaries for RGC stimulation. In numerous embodiments, systems for artificial sight utilize both implanted and external controllers to increase the overall efficiency of the system and increase visual fidelity. Systems for artificial sight can balance the heat generated by processing with the need for rapid processing and rapid communication between components.

[0040] Turning now to FIG. 1, a system for artificial sight in accordance with an embodiment of the invention is illustrated. System 100 includes an external controller 110. In many embodiments, external controllers carry processing circuitry necessary to perform various artificial sight processes. In numerous embodiments, external controllers are capable of receiving image information describing a scene from a scene imager, and eye position information from an eye imager. In many embodiments, the eye imager is capable of measuring and/or tracking the pupil and/or iris of a patient, fiduciary marks on a contact lens, blood vessels within the eye, the stimulation interface, and/or the implanted controller in order to accurately measure the eye position information. External controllers can use image information and eye position information to compute the field of view in a visual scene and compile dictionaries based on the received image information and eye position information. External controllers can transmit and receive data via at least one input/output interface. External controller input/output interfaces can transmit information across at least one communication channel. Channels can be either wired or wireless. External controllers are capable of utilizing at least two channels, where one channel is used to communicate with an input/output interface of an implanted controller, and where one channel is used to communicate with imagers. In some embodiments, external controllers are capable of communicating with configuration systems via a third channel. However, in some embodiments, communication to and from configuration systems can be accomplished via the same channel as used by the imagers. In a variety of embodiments, the external controller uses a wireless communication channel for exchanging data with the implanted controller, and a wired communication channel for exchanging data with the imagers. The wireless communication channel can be any number of different wireless channels, such as, but not limited to, radio communication (e.g. via the Bluetooth standard, or any other wireless communication method), magnetic communication methods (e.g. near-field magnetic induction communication), ultrasound communication, optical communication, and/or any other low power communication method that is safely usable in conjunction with an implanted controller adjacent to the human eye. In numerous embodiments, the external controller is further capable of transmitting power wirelessly to an implanted controller.

[0041] External controller 110 is connected to a scene imager 120 via a communication channel 112. As discussed above, communication channel 112 can be wired or wireless. Communication channel 112 is capable of relatively high-bandwidth communications to enable external controller 110 to receive images of the scene at a high frame rate of approximately 10-240 Hz. Scene imagers can be any image sensor capable of capturing images of an external environment. In numerous embodiments, the scene imager is a video camera.
In a variety of embodiments, scene imagers are pointed in an outward facing direction from the patient such that the imager captures images of scenes in front of the patient. In many embodiments, additional sensors are used to augment scene imagers with alternate types of data. For example, GPS coordinates, LIDAR, alternate photosensors, sonar, and/or any other sensor configuration can be used as appropriate to the requirements of specific applications of embodiments of the invention. In some embodiments, scene imagers include at least two cameras located at a fixed distance apart in order to acquire depth information regarding the scene.

[0042] External controller 110 is also connected to an eye imager 130 via a communication channel 114. Communication channel 114 is similar to communication channel 112, and enables relatively high-bandwidth communication between the external controller and the eye imager. Eye imagers can be any image capture device capable of accurately recording the motion of an eye. In numerous embodiments, the eye imager is a video camera capable of recording sufficient data as to be used for motion tracking of the eye and/or the point of focus of the eye. This data is collectively described as eye position data, and can include raw image data to be processed by an external controller, and/or processed eye position data, depending on the capabilities of the specific implementation of the eye imager architecture. In numerous embodiments, eye position data can further include pupil diameter, eye accommodative state, and/or focal depth. In many embodiments, eye imagers are capable of high-resolution tracking. However, in various embodiments, eye imagers are incapable of measuring small, micro-saccadic eye movements at high frequency. In this case, external controllers and/or eye imagers can measure saccades and add simulated micro-saccades which statistically approximate natural micro-saccadic eye movement. In numerous embodiments, both the scene imager and eye imager are implemented on the same platform as the external controller. For example, in some embodiments, a pair of glasses can be used to house an outward facing scene imager, an inward facing eye imager, and the external controller, as well as any necessary cables and/or transmitters/receivers necessary to implement the communication channels. However, any number of different physical platform configurations can be utilized as appropriate to the requirements of specific applications of embodiments of the invention.

[0043] External controller 110 is connected to implanted controller 140 via a communication channel 116. In numerous embodiments, communication channel 116 is a relatively lower-bandwidth communication channel. Due to the low power requirements of the implanted controller, as well as because the implanted controller may be subject to movement due to natural biological movements (e.g. eye movements, muscle movement, etc.), it can sometimes be difficult to establish a high-bandwidth connection that has reliably low latency. In a variety of embodiments, communication channel 116 is implemented using a low-power Bluetooth connection. In numerous embodiments, communication channel 116 is implemented using a near field communication channel, such as, but not limited to, a near-field magnetic induction communication channel, a radio frequency based communication channel, and/or an ultrasound channel.

[0044] Implanted controllers are capable of implementing numerous different artificial sight processes. In many embodiments, implanted controllers are implanted into a patient by mounting the implanted controller onto the exterior of the eye. In some embodiments, the implanted controller is implanted into the episcleral layer of the eye. In a variety of embodiments, the implanted controller is integrated into a contact lens or an intraocular lens. Implanted controllers can obtain dictionaries from external controllers as well as field of view information, and use the received data to continuously select stimulation pulses based on the dictionary. In numerous embodiments, implanted controllers are implemented as an application-specific integrated circuit (ASIC). However, implanted controllers can be implemented using field-programmable gate arrays (FPGAs), or as energy efficient, low-heat general purpose processing devices equipped with machine-readable instructions.

[0045] Implanted controller 140 is connected to stimulation interface 150 via a communication channel 142. Communication channel 142 is relatively high-bandwidth and can be implemented using wired or wireless communications methods. In some embodiments, power is transmitted across communication channel 142, or via an alternative power transmission channel. In many embodiments, stimulation interfaces include a dense grid of small electrodes that are surgically connected to the RGC layer of the retina. Where many retinal prosthetics use sparse grids (e.g. 60 electrodes) of large electrodes (e.g. 200 μm), dense electrode grids may have on the order of 1000 electrodes per square millimeter. In some embodiments, the electrodes are placed in a rectangular and/or hexagonal arrangement, where each electrode is between 8 and 15 micrometers in diameter, and each electrode is spaced between approximately 10-50 micrometers apart.

[0046] In a variety of embodiments, the electrodes may have diameters of 5-20 μm and spacing of 10-60 μm. In numerous embodiments, the grid is connected to the retina with a semi-regular microwire bundle of approximately the same density as the electrode grid itself. In many embodiments, an interposer device is used to "zoom in" the density of electrodes on the interface to a higher density.
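For illustration, a minimal sketch of how electrode density scales with pitch, assuming idealized rectangular and hexagonal packings and illustrative pitch values within the 10-60 μm spacing discussed above:

```python
import math

def electrodes_per_mm2(pitch_um: float, layout: str = "rectangular") -> float:
    """Approximate electrode density for a given center-to-center pitch.

    A square grid allots pitch^2 of area per electrode; a hexagonal grid
    allots (sqrt(3)/2) * pitch^2, i.e. roughly 15% denser.
    """
    pitch_mm = pitch_um / 1000.0
    cell_area = pitch_mm ** 2
    if layout == "hexagonal":
        cell_area *= math.sqrt(3) / 2
    return 1.0 / cell_area

# Illustrative pitches spanning the stated 10-60 um spacing.
for pitch in (20, 30, 60):
    print(f"{pitch} um pitch: "
          f"{electrodes_per_mm2(pitch):6.0f}/mm^2 (square), "
          f"{electrodes_per_mm2(pitch, 'hexagonal'):6.0f}/mm^2 (hex)")
```

At a 20-30 μm pitch this lands in the low thousands of electrodes per square millimeter, consistent with the "on the order of 1000 electrodes per square millimeter" figure above.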

[0047] In numerous embodiments, stimulation interfaces include recording circuitry enabling them to record the response of RGCs to electrical stimulation provided by the stimulation interface, and/or their spontaneous electrical activity. These recordings can be used to create the dictionary entries that specify the probability of firing of each RGC in response to a pattern of stimulation. To accomplish this, in some embodiments, the recording circuitry is capable of recording voltages with approximately 10 bits of resolution over a range on the order of hundreds of microvolts, with approximately 10 microvolts of front-end noise, and at a sampling rate of approximately 20 kHz. However, the sensitivity and capabilities of the recording circuitry can be modified, and/or any of a number of different electrode arrangements can be used as appropriate to the requirements of specific applications of embodiments of the invention.

[0048] Interfaces can selectively apply variable stimulation to any of the electrodes in the electrode array based on instructions received from the implanted controller. Stimulation pulses can be monophasic, biphasic, triphasic, or multiphasic in time, with amplitudes from approximately 0.1-10 mA and durations of approximately 25-100 μs. In numerous embodiments, interfaces are also capable of recording electrical impulses from adjacent RGCs and transmitting the information back to the implanted controller, which in turn can transmit the recorded responses to the external controller. In some embodiments, the implanted controller and the stimulation interface are implemented as part of the same piece of hardware, and therefore the implanted controller is inside the eye itself, rather than internal to the body but on the external face or side of the eye.

[0049] External controller 110 is further connected to a network 160 via a communication channel 118 that gives access to a configuration server 170. Communication channel 118 is any communication channel that can be used to access configuration server 170. For example, in numerous embodiments, communication channel 118 is a connection to the Internet, which enables the passage of configuration data from configuration server 170 to the external controller 110. However, any number of different network and communication channel infrastructures can be used to connect the external controller to the configuration server as appropriate to the requirements of specific applications of embodiments of the invention. Configuration servers are used to provide updates to external controllers, eye imagers, or scene imagers as needed. In some embodiments, updates can be provided to implantable controllers as well. Received updates can include, but are not limited to, pre-processed dictionaries, calibration information for any component, or any other information required by the system as appropriate to the requirements of specific applications of embodiments of the invention. In some embodiments, the configuration server generates an initial global dictionary which can be used to generate smaller, faster, and/or more personalized dictionaries. In a variety of embodiments, the configuration server obtains pre-processed dictionaries from other devices. In numerous embodiments, the configuration server utilizes electrical recording and stimulation data to generate dictionaries.

[0050] While a specific implementation of an artificial sight system in accordance with an embodiment of the invention is illustrated in FIG. 1, any number of different implementations, such as, but not limited to, those that utilize alternate communication methods, can be utilized as appropriate to the requirements of specific applications of embodiments of the invention. For example, an alternative artificial sight system in accordance with an embodiment of the invention is illustrated in FIG. 2. Specific implementations of implanted controllers are discussed below.

Implanted controllers

[0051] Implanted controllers are controllers that are implanted into the patient's body abutting or proximal to the eye. Implanted controllers, as discussed above, are able to quickly send instructions to stimulation interfaces based on received dictionaries. Turning now to FIG. 3, a conceptual illustration of an implantable controller in accordance with an embodiment of the invention is illustrated. Implantable controller 300 includes a processor 310. Processor 310 can be any type of processing circuitry capable of implementing logic operations. Implantable controller 300 further includes a higher bandwidth input/output interface 320. The higher bandwidth input/output interface can be used for communicating with other implanted components such as stimulation interfaces. Implantable controller 300 further includes a lower bandwidth input/output interface 330. Lower bandwidth input/output interfaces are used for communicating with external devices, such as external controllers. The terms "higher bandwidth" and "lower bandwidth" reflect relative channel capacities, as it is currently difficult to implement high-bandwidth, stable communications to internal devices using communication systems that conform to the power and space requirements associated with processing in and near the human eye. Further, due to eye movements disrupting communication over the channel, latency can be introduced. In many embodiments, the lower bandwidth channel has variable latency which is addressed by the communication systems. However, new communication methods can be integrated into the system which may increase the overall channel capacity and/or reduce latency of the low bandwidth channel, while maintaining the system architecture and methods described herein.

[0052] Implanted controller 300 further includes a memory 340. Memory 340 can be implemented using a volatile or non-volatile memory storage method. Memory 340 contains a retinal stimulation application 342 capable of configuring the processor to perform artificial sight processes. In many embodiments, memory 340 contains precompiled dictionaries 344 received from the external controller. While a specific implementation of an implanted controller as a general processing device in accordance with an embodiment of the invention is illustrated in FIG. 3, similar processing capabilities can be achieved with specialized circuitry designed to implement pre-determined logic operations similar to those described by the retinal stimulation application, implemented, for example, as an ASIC, in order to increase energy efficiency. A description of artificial sight processes is found below.

Artificial sight processes

[0053] Artificial sight processes can replace the function of degraded neural circuitry by providing electrical stimulation patterns to RGCs that are understood by the patient's brain as coherent, accurate visual stimuli reflecting the surrounding environment. In numerous embodiments, different steps of artificial sight processes are distributed across at least implanted controllers and external controllers. In many embodiments, a master dictionary of stimulation patterns and corresponding evoked neural responses is generated for a user. By rapidly selecting elements from the dictionary based on a field of view (FOV) and providing stimulation to the user's RGCs, neural responses that reflect a desired image can be transmitted to the user's brain, thereby enabling them to "see" the FOV. As used herein, FOV refers to a region of a scene around a fixation point of the viewer which is capable of being rendered by the stimulation interface. An example of an image of a scene and various FOVs as selected by a user's eye movements in accordance with an embodiment of the invention is illustrated in FIG. 4. As the user's eye focuses on different points in the scene, different FOVs can be selected based on the fixation points. By providing the stimulation at a high rate (e.g. approximately 10 kHz), the user's brain will integrate multiple stimulations as arriving at the same time, referred to as "temporal dithering," which enables multiple dictionary elements to be effectively combined to create richer visual sensations. In many embodiments, master dictionaries are generated using a calibration system in a clinical setting.

[0054] In a variety of embodiments, master dictionaries contain too many elements to be searched quickly enough to achieve temporal dithering. In these cases, the computation can be simplified to reduce computational complexity. For example, the master dictionary can be reduced by removing redundant or infrequently used elements, and/or the master dictionary can be broken into individual FOV dictionaries that are designed for evoking a particular set of neural responses corresponding to a particular FOV. In many embodiments, external controllers can be provided the master dictionary for the particular patient, and/or can generate FOV dictionaries from the master dictionary by selecting entries that are appropriate and useful for the current FOV of the patient. In many embodiments, the stimulation interface does not cover every RGC, and therefore the FOV may be considerably smaller than the size of the image captured by the scene imager. These smaller dictionaries can increase the efficiency of the system by reducing the time to loop over dictionary entries and determine the next entry to use during a given stimulation period, typically using a "greedy" or a dynamic programming approach. Further, to provide additional computational complexity reduction, the resolution of the target image can be reduced, and/or computations can be split over multiple controllers. Processes for generating master dictionaries are described below.

A. Master dictionaries

[0055] A master dictionary can be generated by providing stimulation through a particular electrode in a stimulation interface, recording neural activity using the stimulation interface, calibrating the neural activity, and obtaining from this calibration the probability of spiking by each RGC that is recorded. This probability can constitute a single dictionary entry. Given that there can be a dictionary entry for every possible current level through each electrode in the stimulation interface, and all combinations thereof, there may not be sufficient time to generate all possible entries for a particular patient. However, with sufficient dictionary entries, performance of the system is not impeded, or is only minimally impeded.

[0056] Turning now to FIG. 5, a process for generating personalized master dictionaries in accordance with an embodiment of the invention is illustrated. Process 500 includes implanting (510) a stimulation interface. In many embodiments, the stimulation interface has already been implanted. Process 500 further includes stimulating (520) RGCs using at least one electrode of the stimulation interface. Responses to the stimulus from RGCs nearby the stimulating electrode can be recorded using electrodes in the stimulation interface. If a sufficient number of responses has been collected (540), a personalized master dictionary based on the recorded responses can be generated (550). In many embodiments, the master dictionary is a data structure that stores the expected response to a particular stimulus pattern. If an insufficient amount of data has been collected (540), additional stimuli can be performed and additional responses can be recorded. Once generated, the master dictionary can be transferred (560) to an external controller.
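For illustration, a minimal sketch of the calibration loop of FIG. 5, assuming hypothetical stimulate/record interfaces; each dictionary element's per-cell spike probability is estimated by averaging binary responses over repeated trials:

```python
import numpy as np

rng = np.random.default_rng(0)

def stimulate_and_record(electrode: int, amplitude_ua: float, n_cells: int):
    """Stand-in for the hardware stimulate/record cycle: returns one binary
    spike vector (one entry per recorded RGC). Replace with real interface
    calls; the response model here is purely illustrative."""
    p = np.clip(amplitude_ua / 10.0 * rng.random(n_cells), 0, 1)
    return (rng.random(n_cells) < p).astype(int)

def build_master_dictionary(electrodes, amplitudes_ua, n_cells, n_trials=25):
    """Estimate per-cell spike probabilities for each (electrode, amplitude)
    stimulation pattern; each estimate becomes one dictionary element."""
    dictionary = {}
    for e in electrodes:
        for a in amplitudes_ua:
            trials = np.stack([stimulate_and_record(e, a, n_cells)
                               for _ in range(n_trials)])
            dictionary[(e, a)] = trials.mean(axis=0)  # p_d, shape (n_cells,)
    return dictionary

master = build_master_dictionary(electrodes=range(8),
                                 amplitudes_ua=[0.5, 1.0, 2.0],
                                 n_cells=12)
print(len(master), "dictionary elements")
```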

[0057] In some embodiments, the master dictionary and/or dictionaries (e.g. the sub-dictionaries discussed below as "dictionaries") can be pruned. In various embodiments, a smaller version of the master dictionary is computed that works across many natural images. For example, by selecting the most frequently used electrodes across a battery of many common targets, a workable reduced master dictionary can be generated. The number of elements selected for the reduced master dictionary can be based on a tunable threshold value based on performance, computational availability, the common scenes for a particular patient, and/or any other metric as appropriate to the requirements of specific applications of embodiments of the invention.
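One possible pruning scheme is sketched below, assuming per-electrode usage tallies have already been collected across a battery of common targets; the reduce_dictionary helper, its toy inputs, and the 50% threshold are illustrative:

```python
from collections import Counter

def reduce_dictionary(master, usage_counts, keep_fraction=0.5):
    """Keep only dictionary elements whose electrode is among the most
    frequently used across a battery of common target images."""
    ranked = [e for e, _ in Counter(usage_counts).most_common()]
    kept = set(ranked[:max(1, int(len(ranked) * keep_fraction))])
    return {key: p for key, p in master.items() if key[0] in kept}

# Toy master dictionary keyed by (electrode, amplitude), and hypothetical
# usage tallies gathered by running the selection algorithm offline.
master = {(e, a): [0.1 * e] for e in range(8) for a in (0.5, 1.0)}
usage = {0: 120, 1: 95, 2: 80, 3: 12, 4: 7, 5: 40, 6: 3, 7: 1}
reduced = reduce_dictionary(master, usage, keep_fraction=0.5)
print(sorted({e for e, _ in reduced}))  # electrodes retained: [0, 1, 2, 5]
```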

B. Selecting dictionary elements for stimulation

[0058] Elements from a dictionary can be selected to visualize a particular FOV. Sight can be restored by enabling a user to view what they are "looking" at by artificially capturing the scene in front of them (or any arbitrary scene) and choosing the dictionary elements to recreate the neural responses that would have been generated by a healthy eye. Turning now to FIG. 6, a process for selecting dictionary elements for stimulation in accordance with an embodiment of the invention is illustrated.

[0059] Process 600 includes obtaining (610) image data describing a scene. Image data can be obtained from scene imagers. An FOV is determined (620) from the scene based on the gaze of the user. In numerous embodiments, the gaze of the user can be determined using the eye position data from the eye imager and any gaze tracking process. In numerous embodiments, the FOV is based on the amount of the scene around the fixation point of the user that can successfully be converted into electrical stimulation by a stimulation interface. A single image can have multiple FOVs based on the range of the user’s eye movements. In some embodiments, different FOVs are identified in the same scene.
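For illustration, a minimal sketch of FOV extraction, assuming the fixation point is given in scene-image pixel coordinates and a fixed window stands in for the retinal area covered by the stimulation interface:

```python
import numpy as np

def extract_fov(scene: np.ndarray, fixation_xy, fov_shape=(40, 40)):
    """Crop the region of the scene around the fixation point that the
    stimulation interface can render, clamping at the image borders."""
    h, w = fov_shape
    x, y = fixation_xy
    top = int(np.clip(y - h // 2, 0, scene.shape[0] - h))
    left = int(np.clip(x - w // 2, 0, scene.shape[1] - w))
    return scene[top:top + h, left:left + w]

scene = np.zeros((480, 640))
fov = extract_fov(scene, fixation_xy=(600, 20))  # fixation near a corner
print(fov.shape)  # (40, 40), clamped inside the image
```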

[0060] In some embodiments, the eye imager cannot record the eye position with high enough accuracy to measure microsaccades. In this case, simulated microsaccades can be sampled with statistics that match the statistics of microsaccades in healthy patients, but not necessarily the exact underlying trajectory. The FOV can be moved around the recorded eye location with these simulated microsaccades. In other embodiments, a similar strategy can be applied to other kinds of eye movements, such as jitter, which are also hard to measure. In other embodiments, the simulated eye movements can be chosen to simultaneously obey the statistics of eye movements in healthy patients and make the resulting FOV easy to reproduce in a prosthesis.
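A sketch of how simulated microsaccades might be sampled is shown below; the rate and amplitude parameters are illustrative placeholders for statistics fitted to healthy eyes:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_microsaccades(n_steps, rate_hz=1.5, dt_s=0.01,
                            amp_px=(2.0, 8.0)):
    """Jitter the recorded fixation with simulated microsaccades whose
    rate and amplitude statistics (not trajectories) mimic healthy eyes.

    Returns per-step (dx, dy) offsets to add to the recorded eye position.
    """
    offsets = np.zeros((n_steps, 2))
    pos = np.zeros(2)
    for t in range(n_steps):
        if rng.random() < rate_hz * dt_s:      # a microsaccade occurs
            angle = rng.uniform(0, 2 * np.pi)  # isotropic direction
            amp = rng.uniform(*amp_px)         # amplitude in pixels
            pos = amp * np.array([np.cos(angle), np.sin(angle)])
        offsets[t] = pos
    return offsets

offsets = simulated_microsaccades(n_steps=500)
print(offsets.shape, np.abs(offsets).max())
```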

[0061] As noted above, in many embodiments, a master dictionary can be reduced in size sufficiently to achieve temporal dithering. However, in many embodiments, FOV dictionaries are used to achieve additional performance. In these situations, a FOV dictionary can be generated (630) from a master dictionary using the external controller. To create a FOV dictionary, the degree of matching between a stimulus and the desired response (e.g. "seeing" the FOV) can be calculated using the following notations and assumptions:

[0062] Notations

• $S$: target visual stimulus (dimensions: #pixels × 1), where $S$ is a static frame of the visual stimulus (FOV)

• $R$: neural responses (dimensions: #cells × 1)

• $f(S, R)$: the utility function reflecting the degree of matching between $S$ and $R$

• $A$: reconstruction matrix (dimensions: #pixels × #cells). Columns of $A$ correspond to the reconstruction filters of different cells.

• $M$: responses to artificial stimulation

• $D = \{(e_d, a_d, p_d)\}_{d=1}^{|D|}$: stimulation dictionary, where each element $d$ consists of stimulation electrodes ($e_d$), current amplitudes ($a_d$), and a response probability vector ($p_d \in [0,1]^{\#\text{cells} \times 1}$). The response for cell $i$ is $R_{d,i} \sim \mathrm{Bernoulli}(p_{d,i})$ when current pattern $d$ is stimulated.

[0063] Assumptions

• A dynamic stimulus can be approximated with a sequence of static frames.

• A sequence of retinal responses, produced in rapid succession by electrical stimulation, is summed by the brain to produce a single perceived image. Therefore, the reconstructed stimulus for the sequence $R_1, \ldots, R_T$ is given by $R = \sum_{t=1}^{T} R_t$.

• Responses generated by a stimulation pattern, $d_t$, are independent of the responses to the previous stimulation patterns, $d_1, \ldots, d_{t-1}$, as long as no stimulus is delivered to a cell during its spiking refractory period.
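For illustration, one way the notation above might map onto a concrete data structure; the DictionaryElement class and its fields are assumptions, not a prescribed layout:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class DictionaryElement:
    """One entry d of the stimulation dictionary D."""
    electrodes: tuple        # e_d: stimulation electrode indices
    amplitudes: tuple        # a_d: current amplitudes for those electrodes
    spike_probs: np.ndarray  # p_d in [0, 1]^(n_cells,)

    def sample_response(self, rng) -> np.ndarray:
        """Draw R_d with R_{d,i} ~ Bernoulli(p_{d,i})."""
        return (rng.random(self.spike_probs.shape)
                < self.spike_probs).astype(float)

rng = np.random.default_rng(0)
d = DictionaryElement(electrodes=(3,), amplitudes=(1.2,),
                      spike_probs=np.array([0.9, 0.1, 0.4]))
print(d.sample_response(rng))
```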

[0064] In general, a well-selected stimulation pattern is one which has a high degree of matching between stimulus $S$ and neural response $R$, given by $f(S, R)$. $f(S, R)$ can be calculated in any number of ways depending on the requirements of specific applications of embodiments of the invention. In the response space, $f(S, R)$ can be calculated as $f(S, R) = o(h(S), R)$, where $o$ is a notion of similarity between responses and $h(S)$ is an encoding model that predicts neural responses for a given stimulus. In various embodiments, $o$ can be calculated using an indicator function, Hamming distance, Victor-Purpura distance, deep learning methods using neural data, and/or any other method as appropriate to the requirements of specific applications of embodiments of the invention. Further, $h(S)$ can be a linear/non-linear Poisson model, a generalized linear model, an encoding model with spatial nonlinearities and adaptation, a deep learning model, and/or any other model as appropriate to the requirements of specific applications of embodiments of the invention.

[0065] In the stimulus space, $f(S, R)$ can be calculated as $f(S, R) = \rho(S, g(R))$, where $\rho$ is a notion of similarity between stimuli, and $g(R)$ is a decoding model that predicts how a brain might interpret the neural responses. In numerous embodiments, $\rho$ is calculated as the negative mean-squared error (MSE): $\rho(S, \hat{S}) = -\|S - \hat{S}\|_2^2$. The perceived visual image can be approximated by linearly reconstructing neural responses: $g(R) = \hat{S} = AR$, with $A$ an optimal linear filter. In various embodiments, $\rho$ is calculated as a structural similarity index (SSIM), or another measure of perceptual similarity of images as appropriate to the requirements of specific applications of embodiments of the invention. In numerous embodiments, $\rho$ can be calculated based on a particular task; for example, when walking on a street, the similarity of edges that indicate pavement can be measured. $g(R)$ can be calculated using an inverted encoding model or a neural network. In various embodiments, $g(R)$ can be calculated using an optimal linear filter. However, any number of different methods of calculating $\rho$ and $g(R)$ can be used as appropriate to the requirements of specific applications of embodiments of the invention. Further, when the stimulus or response space is not known, $f(S, R)$ can be calculated as $f(S, R) = \langle m(S), n(R) \rangle$, where the visual stimulus is transformed to the latent space by $m(\cdot)$, neural responses are transformed to the latent space by $n(\cdot)$, and $\langle \cdot, \cdot \rangle$ is the inner product in Euclidean space.
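A minimal sketch of the negative-MSE utility with linear reconstruction, $f(S, R) = -\|S - AR\|_2^2$, with illustrative array shapes and random inputs:

```python
import numpy as np

def utility_neg_mse(S: np.ndarray, R: np.ndarray, A: np.ndarray) -> float:
    """f(S, R) = rho(S, g(R)) with g(R) = A @ R (linear reconstruction)
    and rho the negative squared error between target and percept."""
    S_hat = A @ R                       # decoded percept, one value/pixel
    return -float(np.sum((S - S_hat) ** 2))

n_pixels, n_cells = 16, 5
rng = np.random.default_rng(0)
A = rng.normal(size=(n_pixels, n_cells))   # reconstruction filters
S = rng.normal(size=n_pixels)              # target FOV (flattened)
R = rng.integers(0, 2, size=n_cells)       # cumulative spike counts
print(utility_neg_mse(S, R, A))
```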

[0066] A FOV dictionary can then be generated by selecting a number of elements from the master dictionary that best fit the FOV. In order to select the elements, the number of times each element of the master dictionary could be used in the FOV can be counted. For example, let $q$ be a vector of the number of times each dictionary element is called; $D$ be a matrix (#cells × #dictionary elements), where each column is the probability of stimulation of different cells for the corresponding dictionary element; $Dq$ the expected responses; and $B$ the desired size of the FOV dictionary, then:

$$q^* = \arg\max_q f(S, Dq) \quad \text{with } q \geq 0; \; \sum_i q_i \leq B.$$

The FOV dictionary can then be given by all stimulation elements which have a non-zero entry in $q^*$. Once generated, the FOV dictionary can be transmitted (640) to the internal controller.
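The optimization over $q$ is not tied to a particular solver; the sketch below uses a simple greedy ascent as a stand-in, with the budget $B$ and toy inputs assumed for illustration:

```python
import numpy as np

def fov_dictionary(S, D, A, budget):
    """Greedy stand-in for q* = argmax_q f(S, D @ q), q >= 0, sum(q) <= B:
    repeatedly add the element whose expected response most reduces the
    reconstruction error, then keep the elements with nonzero counts."""
    n_elems = D.shape[1]
    q = np.zeros(n_elems)
    for _ in range(budget):
        errs = [np.sum((S - A @ (D @ q + D[:, j])) ** 2)
                for j in range(n_elems)]
        q[int(np.argmin(errs))] += 1
    return np.flatnonzero(q)  # indices forming the FOV dictionary

rng = np.random.default_rng(0)
n_pixels, n_cells, n_elems = 16, 6, 30
A = rng.normal(size=(n_pixels, n_cells))   # reconstruction filters
D = rng.random((n_cells, n_elems))         # columns are p_d vectors
S = rng.normal(size=n_pixels)              # target FOV
print(fov_dictionary(S, D, A, budget=5))
```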

[0067] The internal controller then continuously selects (650) dictionary elements based on the FOV of the user. Given the size of the stimulation interface, it may not be possible to recreate an entire FOV. Indeed, the scene imager itself may have a larger field of view than the user. Based on the gaze of the user, an FOV is selected for reproduction, and the dictionary elements selected recreate that portion of the scene. In numerous embodiments, the internal controller stores the master dictionary or reduced master dictionary. In various embodiments, the internal controller instead stores a FOV dictionary. However, in many embodiments, the internal controller can store more than one type of dictionary. The problem of which dictionary elements to select reduces to choosing a sequence of dictionary elements for stimulation $d_1, \ldots, d_T$ in a way that temporal dithering can be exploited to provide multiple independent stimulations within a single biological integration window to trigger a perception of the FOV. Generally, this can be formalized as:

$$d_1, \ldots, d_T = \arg\max_{d_1, \ldots, d_T} \mathbb{E}\left[ f\!\left(S, \sum_{t=1}^{T} R_{d_t}\right) \right],$$

where $\mathbb{E}$ is an expected value function. [0068] One approach to solving the above problem is with a "greedy" algorithm, which can be formalized as:

$$d_t = \arg\max_d \mathbb{E}\left[ f\!\left(S, \sum_{s=1}^{t-1} R_{d_s} + R_d\right) \right].$$

The above equation can be implemented in any number of ways depending on the utility function selected to be maximized. For example, when the utility function is selected to be the negative MSE, and $g(R) = AR$, a stimulation choice at time step $t$ is given by:

$$d_t = \arg\max_d \, \mathbb{E}\left[ -\left\| S - A\!\left( \sum_{s=1}^{t-1} R_s + R_d \right) \right\|_2^2 \right],$$

[0069] where the expectation is over $R_1, \ldots, R_{t-1}$, the responses elicited in previous time steps (with $R_i \sim \mathrm{Bernoulli}(p_{d_i}) \; \forall i \in \{1, \ldots, t-1\}$). For the choices of linear reconstruction and mean-squared error, the greedy algorithm can be implemented by decomposing the objective function value into its bias and variance components. Because the response at each time step is assumed to be independent of preceding responses, the total variance decomposes as the sum of the variance in all time steps:

$$\mathbb{E}\left[ -\left\| S - A \sum_{s=1}^{t} R_{d_s} \right\|_2^2 \right] = -\left\| S - \bar{S}_t \right\|_2^2 - V_t,$$

where $A_l$ is the $l$th row of $A$; $A p_{d_t}$ and $v_{d_t} \left( = \sum_l A_l \, \mathrm{diag}\!\left(p_{d_t} \odot (1 - p_{d_t})\right) A_l^\top \right)$ are the contributions to the mean and variance in perception for the $d_t$th dictionary element; and $\bar{S}_t \left( = \bar{S}_{t-1} + A p_{d_t} \right)$ and $V_t \left( = V_{t-1} + v_{d_t} \right)$ are the cumulative mean and variance in perception due to stimulation patterns chosen in time steps $\{1, \ldots, t\}$. To satisfy the independence assumption, stimulating a cell during its refractory period is generally avoided. To accomplish this, dictionaries generated using the greedy algorithm can be restricted to stimulation patterns that do not activate cells that were already targeted in the preceding 5 ms. In some embodiments, the window is between 1 ms and 10 ms. In numerous embodiments, generated dictionary elements that are determined to have low value are discarded.
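For illustration, a minimal sketch of a single greedy step under the bias/variance decomposition above, assuming a boolean refractory mask has already been computed for elements that would re-stimulate recently targeted cells; the columns of D hold the expected responses $p_d$:

```python
import numpy as np

def greedy_step(S, mean_so_far, var_so_far, D, A, refractory_mask):
    """Pick the dictionary element maximizing expected negative MSE.

    The expectation splits into a bias term (computed from the cumulative
    mean response plus p_d) and accumulated variance; independence across
    time steps lets variances add. Refractory-blocked elements are skipped."""
    col_sq = np.sum(A ** 2, axis=0)           # ||A[:, i]||^2 for each cell
    best_d, best_util = None, -np.inf
    for d in range(D.shape[1]):
        if refractory_mask[d]:
            continue
        mean_d = mean_so_far + D[:, d]
        var_d = var_so_far + np.sum(col_sq * D[:, d] * (1 - D[:, d]))
        util = -np.sum((S - A @ mean_d) ** 2) - var_d
        if util > best_util:
            best_d, best_util = d, util
    return best_d

rng = np.random.default_rng(0)
n_pixels, n_cells, n_elems = 16, 6, 40
A = rng.normal(size=(n_pixels, n_cells))
D = rng.random((n_cells, n_elems))
S = rng.normal(size=n_pixels)
mask = np.zeros(n_elems, dtype=bool)          # no elements blocked yet
print(greedy_step(S, np.zeros(n_cells), 0.0, D, A, mask))
```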

[0070] However, in numerous embodiments, instead of utilizing a greedy algorithm, a dynamic programming approach can be taken in order to solve for multiple steps at a time. For example, in order to calculate $k$ steps ahead, assume stimulation at time step $t$; let $T(k, d)$ be the best utility for a stimulation sequence that ends at time step $t+k$ with stimulation $d$, and $P(k, d)$ the corresponding cumulative responses. The induction step can then be calculated as:

$$T(k, d) = \max_{d'} \, \mathbb{E}\, f(S, P(k-1, d') + R_d)$$

$$P(k, d) = \max_{d'} \left[ P(k-1, d') + R_d \right]$$

After iterating up to step $k = K$, choose the $K$-step stimulation sequence that attains $\max_d T(K, d)$. This can be quickly looked up by keeping track of the maximizing $d'$ for each $d$ and step $k$. However, as one of ordinary skill in the art can appreciate, any number of different methods can be used to resolve $d_1, d_2, \ldots, d_T$

as appropriate to the requirements of specific applications of an embodiment of the invention.
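For illustration, a minimal sketch of the $K$-step lookahead, substituting expected responses for the expectation over sampled spikes to keep the recursion tractable; the layout of T, P, and the backtracking step is an assumption:

```python
import numpy as np

def dp_lookahead(S, D, A, K):
    """K-step dynamic program over dictionary elements.

    T[(k, d)] approximates the best utility of a sequence ending at step k
    with element d; P[(k, d)] holds the corresponding cumulative (expected)
    response. Expected responses stand in for the expectation over spikes."""
    n_elems = D.shape[1]
    P = {(-1, None): np.zeros(D.shape[0])}   # no responses before step 0
    T, best_prev = {}, {}
    prev_states = [None]
    for k in range(K):
        for d in range(n_elems):
            # Best predecessor d' for ending step k with element d.
            cands = [(p, -np.sum((S - A @ (P[(k - 1, p)] + D[:, d])) ** 2))
                     for p in prev_states]
            p_best, u_best = max(cands, key=lambda c: c[1])
            T[(k, d)] = u_best
            P[(k, d)] = P[(k - 1, p_best)] + D[:, d]
            best_prev[(k, d)] = p_best
        prev_states = list(range(n_elems))
    # Backtrack the maximizing sequence d_1, ..., d_K.
    d = max(range(n_elems), key=lambda j: T[(K - 1, j)])
    seq = [d]
    for k in range(K - 1, 0, -1):
        d = best_prev[(k, d)]
        seq.append(d)
    return seq[::-1]

rng = np.random.default_rng(0)
A = rng.normal(size=(12, 5))
D = rng.random((5, 10))
S = rng.normal(size=12)
print(dp_lookahead(S, D, A, K=3))
```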

[0071] In numerous embodiments, at every time step (at a rate of approximately 10 kHz), an appropriate electrical stimulation pattern is selected from the dictionary. In many embodiments, dictionary elements (electrical stimulation patterns) are selected at each time step (roughly 0.1 ms), and each element is selected from the currently used dictionary. The element selected is the most effective for visual perception among the choices in the dictionary (see the equations above). This is a form of temporal dithering because the biological visual system integrates information over much longer time scales (e.g. 10-100 ms), and therefore many stimulation patterns (dictionary elements) can be presented and summed to produce a perceived image in the patient.

[0072] This process can result in an "unlocked frame rate" of between approximately 10 and 240 Hz. In numerous embodiments, the temporal dithering of the stimulation is on the order of 10-100 kHz. By rapidly selecting dictionary elements and summing the corresponding stimulation, patients can see in much higher fidelity in real time. In numerous embodiments, this also means that the vast number of possible stimulation patterns do not need to be fully explored during calibration (e.g. when time constraints may be present due to having the patient in a clinic). For example, with 1000 electrodes and 100 current levels on each electrode, the number of possible dictionary elements is $100^{1000}$, which would be currently infeasible and/or impractical given the computation and storage requirements. However, with temporal dithering as described herein, calibration using just 100 × 1,000 = 100,000 entries is achievable, and this dictionary combined with temporal dithering can be used to capture a large fraction of the possible images that a patient can see. The selected dictionary elements can be used to stimulate (660) RGCs using a stimulation interface to give the impression of vision of the FOV.
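A worked check of these figures, using only the numbers stated above:

```python
# Worked check of the calibration and dithering arithmetic above.
n_electrodes, n_levels = 1000, 100

exhaustive = n_levels ** n_electrodes          # every joint pattern: 100^1000
print("digits in exhaustive count:", len(str(exhaustive)))   # 2001 digits
print("single-electrode entries:", n_levels * n_electrodes)  # 100,000

# Stimulation patterns summed within one biological integration window,
# given ~0.1 ms time steps (10 kHz) and 10-100 ms integration:
step_ms = 0.1
for window_ms in (10, 100):
    print(f"{window_ms} ms window: {int(window_ms / step_ms)} patterns")
```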

[0073] As noted above, scenes can be subdivided into multiple FOVs depending on the gaze of the user. However, in some embodiments, for some scenes, generation of a FOV dictionary can cause lag. That is, the FOV dictionary is not ready in time to reproduce smooth vision. During this delay, the implanted controller can utilize a stored, precomputed dictionary corresponding to a precomputed target with the closest location to the recorded FOV and/or a generic match for the FOV (e.g. a neutral, uniform head model instead of a person in the FOV). After the external controller generates the more accurate dictionary, it can be sent to the implanted controller to continue the stimulation and fine-tune visual perception. A timeline illustrating the above concept in accordance with an embodiment of the invention is illustrated in FIG. 7. In many embodiments, for every new scene, there are multiple eye movements, each lasting for a variable duration. For each eye movement, there will be a sequence of stimulation patterns delivered from the dictionary in rapid sequence (e.g. 10 kHz). In some embodiments, there are two stages of stimulation. In the first stage, stimulation can initially be performed from a precomputed dictionary that is not ideal, while a more accurate FOV dictionary is computed. In the second stage, the stimulations can be performed from the more accurate dictionary.

[0074] Precomputed dictionaries used during the period that the correct dictionary for the FOV is being determined can be selected based on an approximate FOV, and can rely on a low-resolution version of the approximate image. This is analogous to transmitting low spatial frequency image information initially, and then later transmitting the high spatial frequency information to complete the image, in compression schemes such as JPEG.

[0075] Turning now to FIG. 8, exemplary data in accordance with an embodiment of the invention is illustrated. Chart A illustrates the frequency of stimulating different electrodes (size of circles), overlaid with axons (lines) and somas (colored circles) inferred from electrophysiological imaging. Chart B illustrates reconstruction error as a function of the fraction of electrodes included in the dictionary, for multiple targets and averaged over 20 targets (top, thick line), and the lower bound on the error of any algorithm for the subsampled dictionaries, for individual targets and averaged across targets (bottom, thick line). Chart C illustrates a target stimulus. Charts D-F illustrate reconstruction using the dictionary with the most frequently used 20%, 60% and 100% of electrodes, respectively.

[0076] The results indicate that efficient hardware can be designed by limiting stimulation to the most frequently stimulated electrodes. For example, in many embodiments, an artificial sight process includes evaluating which electrodes are most useful, and ignoring the electrodes that are not, thus cleaning out many entries in the master dictionary, and therefore entries in FOV dictionaries, to increase downstream efficiency. In many embodiments, at least 50% of electrodes can be ignored at the start, thus permanently shrinking the dictionary.

[0077] Across multiple target images, the distribution of stimulated electrodes was spatially non-uniform. More frequently chosen electrodes had a larger number of axons nearby (Chart A). These electrodes typically stimulated multiple cells simultaneously, indicating that the algorithm exploits non-selective stimulation patterns. This suggests that previous approaches to optimal stimulation based on maximizing selectivity are not always the most effective. To understand whether this finding can guide efficient hardware design, the dictionary was restricted to the most frequently chosen electrodes. The greedy algorithm was applied with the reduced dictionary, and reconstruction error was averaged across 20 random targets. A minimal increase in error was observed even after reducing the number of available electrodes by 50%. On reducing the number of available electrodes further, the error increased gradually. This increase was not due to the greedy stimulation choices, as a lower bound for the best algorithm showed similar behavior (Chart B, lower curve). These results suggest that, as noted above, an efficient device could maintain a reduced dictionary in which approximately 50% of the stimulating units are turned off, reducing memory access and static power. Note that all stimulating units are required during the initial calibration phase to select the most frequently used subset of electrodes.

[0078] While the above processes have been described with specific respect to sight, the same processes could be applied to any of the different senses. For example, in numerous embodiments, instead of a visual stimulus, an auditory, olfactory, or somatosensory stimulus can be used in a system connected to neurons associated with the respective sense. Further, the above processes are not restricted to operation on individual cell spiking. In many embodiments, neural responses are measured as the aggregate activity of large populations, such as, but not limited to, EEG recordings, LFP recordings, ECOG recordings, optical recordings, and/or any other neural response recording method as appropriate to the requirements of specific applications of embodiments of the invention. Additionally, in a variety of embodiments, the stimulation need not be electrical in nature. Indeed, any number of neural stimulation methods can be used, such as, but not limited to, optical, magnetic, sonic, and/or any other neural stimulation method that can be used to stimulate neurons. One of ordinary skill in the art can appreciate that the described temporal dithering methods can be utilized in any of a number of different ways using a variety of application-specific inputs without departing from the scope or spirit of the invention.

[0079] Although specific systems and methods for artificial vision are discussed above, many different modifications can be implemented in accordance with many different embodiments of the invention. It is therefore to be understood that the present invention may be practiced in ways other than specifically described, without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.