

Title:
METHOD AND EQUIPMENT FOR CENTRAL NERVOUS SYSTEM CHARACTERIZATION FROM RETINA OCT IMAGING DATA
Document Type and Number:
WIPO Patent Application WO/2018/127815
Kind Code:
A1
Abstract:
Data processing method and computer equipment for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT, said method comprising: processing data from the collected OCT data to compute a texture parameter or parameters from collected fundus imaging data; classifying the computed texture parameter or parameters into a central nervous system health status for characterizing said parametric indicator. The method is able to analyse optical coherence tomography data of one or more tissues of the human and animal central nervous system. The presented method overcomes the need for expensive and complex imaging facilities to assess the health status of the central nervous system in humans and animals in health and disease. It allows healthy controls and patients to be classified into the correct group and changes to be monitored over time in a fraction of the time and at a fraction of the cost. Moreover, the technique may be widely adopted because of the low cost and compact nature of the acquisition device compared to currently used instrumentation, namely magnetic resonance imaging and computed tomography devices.

Inventors:
DIAS CORTESÃO DOS SANTOS BERNARDES RUI MANUEL (PT)
DE SÁ E SOUSA DE CASTELO BRANCO MIGUEL (PT)
ROSA GOMES AMBRÓSIO ANTÓNIO FRANCISCO (PT)
RIBEIRO SILVA GILBERTO MIGUEL (PT)
Application Number:
PCT/IB2018/050046
Publication Date:
July 12, 2018
Filing Date:
January 03, 2018
Assignee:
UNIV DE COIMBRA (PT)
International Classes:
A61B3/10; A61B3/12; G06T7/40; G06T7/529; G16H50/20; G16H50/30; A61B5/00
Foreign References:
EP 2763103 A2 (2014-08-06)
US 2012/0150029 A1 (2012-06-14)
US 2014/0122029 A1 (2014-05-01)
Other References:
BOGLÁRKA ENIKO VARGA ET AL: "Investigating Tissue Optical Properties and Texture Descriptors of the Retina in Patients with Multiple Sclerosis", PLOS ONE, 1 November 2015 (2015-11-01), San Francisco, XP055462171, Retrieved from the Internet [retrieved on 20180323], DOI: 10.1371/journal.pone.0143711
BERNARDES, R.; SANTOS, T.; MADURO, C.; SERRANHO, P.; LOBO, C.; CUNHA-VAZ, J.: "OCT for monitoring retinal leakage", 10TH EURETINA CONGRESS, 2010
BERNARDES, R.; SANTOS, T.; SANTOS, A.; LOBO, C.; CUNHA-VAZ, J.: "Blood-retinal Barrier Function Assessment Using High-definition Spectral Domain OCT", 2010 ISIE/IMAGING CONFERENCE, 2010
BERNARDES, R.; SERRANHO, P.; RODRIGUES, P.; GONÇALVES, V.; CUNHA-VAZ, J.: "Blood-retinal barrier function status from OCT data", ACTA OPHTHALMOLOGICA 89, 2011, pages 248
BERNARDES, R.: "Optical Coherence Tomography: health information embedded on OCT signal statistics", 33RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE EMBS, IN 33RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE EMBS, 2011
BERNARDES, R.; SANTOS, T.; SERRANHO, P.; LOBO, C.; CUNHA-VAZ, J.: "Noninvasive Evaluation of Retinal Leakage Using Optical Coherence Tomography", OPHTHALMOLOGICA, vol. 226, no. 2, 2011, pages 29 - 36
BERNARDES, R.; SANTOS, T.; SERRANHO, P.; HOME, M.; DURBIN, M.; CUNHA-VAZ, J.: "Information on Neuronal Ageing from OCT Evaluation of the Retina", ARVO 2012 ANNUAL MEETING, 2012
BERNARDES, R.; CUNHA-VAZ, J.: "In Optical Coherence Tomography", 2012, SPRINGER BERLIN HEIDELBERG, article "Evaluation of the Blood-Retinal Barrier with Optical Coherence Tomography", pages: 157 - 174
SERRANHO; PEDRO: "Optical Coherence Tomography - Automatic Retina Classification Through Support Vector Machines", EUROPEAN OPHTHALMIC REVIEW, vol. 6, no. 4, 2012, pages 200 - 203
BERNARDES, R.; SANTOS, T.: "Optical Coherence Tomography: signal signature on neuronal ageing and blood-retinal barrier status", EVER 2012, 2012
BERNARDES, RUI; CORREIA, ANTONIO L; D'ALMEIDA, OTILIA; BATISTA, S.; SOUSA, LIVIA; CASTELO-BRANCO, MIGUEL.: "Optical properties of the human retina as a window into systemic and brain diseases", ANNUAL MEETING OF THE ASSOCIATION FOR RESEARCH IN VISION AND OPHTHALMOLOGY, IN INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCES, 2014
DUQUE, C.; JANUARIO, C.; LEMOS, J.; FONSECA, P.; CORREIA, A.; RIBEIRO, L.; BERNARDES, R.; FREIRE, A.: "Optical coherence tomography in LRRK2-associated Parkinson Disease", NEUROLOGY, vol. 84, no. 14, 6 April 2015 (2015-04-06)
R. M. HARALICK; K. SHANMUGAM: "Textural features for image classification", IEEE TRANSACTIONS ON SYSTEMS, 1973, pages 610 - 621, XP011192771, DOI: doi:10.1109/TSMC.1973.4309314
L.-K. SOH; C. TSATSOULIS: "Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices", IEEE TGRS, vol. 37, no. 2, 1999, pages 780 - 795, XP011021254
D. A. CLAUSI: "An analysis of co-occurrence texture statistics as a function of grey level quantization", CANADIAN JOURNAL OF REMOTE SENSING, vol. 28, 2002, pages 45 - 62
Attorney, Agent or Firm:
TEIXEIRA DE CARVALHO, Anabela (PT)
Claims:
C L A I M S

1. Data processing method for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT, said method comprising: processing data from the collected OCT data to compute a texture parameter or parameters from collected fundus imaging data;

classifying the computed texture parameter or parameters into a central nervous system health status for characterizing said parametric indicator.

2. Method according to the previous claim wherein the processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters comprises segmenting the collected OCT data by retinal layer for classifying the computed texture parameter or parameters from said segmented retinal layers.

3. Method according to the previous claim wherein the segmented retinal layers comprise the ganglion cell layer, in particular the retinal layers for classifying consist of the ganglion cell layer.

4. Method according to any of the previous claims wherein the processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters comprises the steps of:

segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said co-occurrence matrix texture parameter or parameters.

5. Method according to the previous claim wherein the processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters comprises the steps of:

segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

obtaining OCT histogram data for the entire scanned retina, for each of the individual layers and for sets of consecutive layers;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said OCT histogram data and said co-occurrence matrix texture parameter or parameters.

6. Method according to the previous claim wherein the obtaining OCT histogram data further comprises, for each histogram, the steps of:

fitting a sum of Gaussian curves to said each histogram and determining the respective parameters, in particular the amplitude, the centre and the standard deviation;

computing skewness and kurtosis from said each histogram;

computing root-mean-square-error as an indicator of the goodness of fit.

7. Method according to claim 4 or 5 wherein the co-occurrence matrix is a Gray-Level Co-Occurrence Matrix, GLCM.

8. Method according to any of the claims 2-7 wherein the segmenting the collected data by retinal layer comprises splitting the collected data into aggregates sharing common characteristics belonging to the same anatomic layer of the retina.

9. Method according to any of the claims 2-8 wherein the computing a fundus image comprises averaging the collected OCT data for each segmented retinal layer for each A-scan.

10. Method according to any of the previous claims wherein the central nervous system health status comprises the assessment of drug treatment effect to the central nervous system.

11. Method according to any of the previous claims wherein the central nervous system health status comprises the distinction between different stages of disease of the central nervous system.

12. Method according to any of the previous claims wherein the central nervous system health status comprises the distinction between healthy ageing and unhealthy ageing of the central nervous system.

13. Method according to any of the previous claims wherein the central nervous system health status comprises the distinction between healthy and unhealthy central nervous system.

14. Method according to any of the previous claims wherein the central nervous system health status comprises Parkinson, Multiple Sclerosis and/or Alzheimer disease status.

15. Method according to the previous claim wherein the central nervous system health status comprises Multiple Sclerosis and the texture parameter or parameters comprise Sum Of Squares.

16. Method according to the previous claim wherein the texture parameter or parameters comprise Cluster Shade.

17. Method according to any of the claims 15-16 wherein the texture parameter or parameters comprise Sum of Variances.

18. Method according to any of the claims 15-17 wherein the texture parameter or parameters comprise Maximum Probability.

19. Method according to any of the claims 15-18 wherein the texture parameter or parameters comprise Sum Average.

20. Method according to any of the claims 15-19 wherein the texture parameter or parameters comprise Cluster Prominence.

21. Computer equipment for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT, said device comprising data processing means arranged for calculating said parametric indicator by:

processing data from the collected OCT data to compute a texture parameter or parameters from collected fundus imaging data;

classifying the computed texture parameter or parameters into a central nervous system health status for characterizing said parametric indicator.

22. Computer equipment according to the previous claim, wherein the data processing means are arranged for processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters by the steps of:

segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said co-occurrence matrix texture parameter or parameters.

23. Computer equipment according to the previous claim, wherein the data processing means are arranged for processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters by the steps of:

segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

obtaining OCT histogram data for the entire scanned retina, for each of the individual layers and for sets of consecutive layers;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said OCT histogram data and said co-occurrence matrix texture parameter or parameters.

24. Computer equipment according to the previous claim, wherein the data processing means are arranged for obtaining OCT histogram data by, for each histogram, the steps of:

fitting a sum of Gaussian curves to said each histogram and determining the respective parameters, in particular the amplitude, the centre and the standard deviation;

computing skewness and kurtosis from said each histogram;

computing root-mean-square-error as an indicator of the goodness of fit.

25. Computer equipment according to any of the claims 22-23, wherein the co-occurrence matrix is a Gray-Level Co-Occurrence Matrix, GLCM.

26. Computer equipment according to any of the claims 21-25, wherein the data processing means are arranged for segmenting the collected data by retinal layer by splitting the collected data into aggregates sharing common characteristics belonging to the same anatomic layer of the retina.

27. Computer equipment according to any of the claims 21-26, wherein the data processing means are arranged for computing a fundus image by averaging the collected OCT data for each segmented retinal layer for each A-scan.

Description:
METHOD AND EQUIPMENT FOR CENTRAL NERVOUS SYSTEM CHARACTERIZATION FROM RETINA OCT IMAGING DATA

Technical field

[0001] The present disclosure relates to a data processing method and computer equipment for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT.

Background

[0002] The central nervous system (CNS) changes over time either due to natural ageing (healthy ageing) or due to CNS diseases. Alzheimer's disease, Parkinson's disease and Multiple Sclerosis are common diseases of the CNS with a significant impact on the lives of patients and their families. Animal models of disease and healthy ageing allow therapies to be tested and changes in the CNS to be monitored.

[0003] The retina is the visible part of the CNS and provides a non-invasive, contactless window into the brain. Changes in the retinal tissue may reflect changes in the brain and vice-versa. Imaging the brain of humans or animals requires the use of complex and expensive MRI (magnetic resonance imaging) and CT (computed tomography) equipment. Optical coherence tomography (OCT) is a relatively recent technique that allows imaging the human and animal eyes in vivo and in situ.

[0004] Prior related works in the field of the invention resort to the analysis of histograms only, do not account for the different retinal layers and do not make use of pattern analysis of the herein computed fundus images. References to related prior works include:

1. Bernardes, R.; Santos, T.; Maduro, C.; Serranho, P.; Lobo, C.; Cunha-Vaz, J.. OCT for monitoring retinal leakage, 10th EURETINA Congress, Paris, 2010 (Communication).

2. Bernardes, R.; Santos, T.; Santos, A.; Lobo, C.; Cunha-Vaz, J.. Blood-retinal Barrier Function Assessment Using High-definition Spectral Domain OCT, 2010 ISIE/Imaging Conference, Fort Lauderdale, FL, 2010 (Poster).

3. Bernardes, R.; Serranho, P.; Rodrigues, P.; Gonçalves, V.; Cunha-Vaz, J.. 2011. "Blood-retinal barrier function status from OCT data", Acta Ophthalmologica 89, S248: 0 - 0. doi: 10.1111/j.1755-3768.2011.4115.x

4. Bernardes, R.. "Optical Coherence Tomography: health information embedded on OCT signal statistics", 33rd Annual International Conference of the IEEE EMBS, In 33rd Annual International Conference of the IEEE EMBS, Boston, Massachusetts USA, 2011 (Poster).

5. Bernardes, R.; Santos, T.; Serranho, P.; Lobo, C; Cunha-Vaz, J.. 2011. "Noninvasive Evaluation of Retinal Leakage Using Optical Coherence Tomography", Ophthalmologica 226, 2: 29 - 36. doi: 10.1159/000326268

6. Bernardes, R.; Santos, T.; Serranho, P.; Home, M.; Durbin, M.; Cunha-Vaz, J.. Information on Neuronal Ageing from OCT Evaluation of the Retina, ARVO 2012 Annual Meeting, Fort Lauderdale, FL,2012 (Poster).

7. Bernardes, R.; Cunha-Vaz, J.. 2012. Evaluation of the Blood-Retinal Barrier with Optical Coherence Tomography. In Optical Coherence Tomography, ed. Rui Bernardes and Jose Cunha-Vaz, 157 - 174. ISBN: 978-3-642-27409-1. Berlin, Heidelberg: Springer Berlin Heidelberg, doi: 10.1007/978-3-642-27410-7_8

8. Serranho, Pedro. 2012. "Optical Coherence Tomography - Automatic Retina Classification Through Support Vector Machines", European Ophthalmic Review 6, 4: 200 - 203.

9. Bernardes, R.; Santos, T.. Optical Coherence Tomography: signal signature on neuronal ageing and blood-retinal barrier status, EVER 2012, Nice, 2012 (Communication).

10. Bernardes, Rui; Correia, Antonio L; d'Almeida, Otilia; Batista, S.; Sousa, Livia; Castelo-Branco, Miguel.. "Optical properties of the human retina as a window into systemic and brain diseases", Annual Meeting of the Association for Research in Vision and Ophthalmology, In Investigative Ophthalmology & Visual Sciences, Orlando, Florida, USA, 2014 (Poster)

11. Duque, C; Januario, C; Lemos, J.; Fonseca, P.; Correia, A.; Ribeiro, L; Bernardes, R.; Freire, A.. "Optical coherence tomography in LRRK2-associated Parkinson Disease", Neurology April 6, 2015 vol. 84 no. 14 Supplement P2.147

12. R. M. Haralick and K. Shanmugam, "Textural features for image classification," IEEE Transactions on systems, SMC-3, 610-621 (1973).

13. L.-K. Soh and C. Tsatsoulis, "Texture analysis of SAR sea ice imagery using gray level co-occurrence matrices," IEEE TGRS, 37(2), 780-795 (1999).

14. D. A. Clausi, "An analysis of co-occurrence texture statistics as a function of grey level quantization," Canadian Journal of remote sensing, 28, 45-62 (2002).

[0005] These references are hereby incorporated by reference in their entirety.

[0006] These facts are disclosed in order to illustrate the technical problem addressed by the present disclosure.

General Description

[0007] The present disclosure relates to a data processing method and computer equipment for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT.

[0008] The present disclosure also relates to a method for the analysis of optical coherence data, particularly optical coherence tomographic data generated by means of optical coherence tomography systems, to provide information on the health status of the imaged human or animal central nervous system (CNS), healthy ageing changes in the CNS, changes associated with CNS diseases, changes due to drug administration and changes due to treatment of the CNS, to provide a means to discriminate between unhealthy humans or animals and respective healthy controls, and to provide a means to discriminate between different CNS diseases.

[0009] The present disclosure also relates to a method for the characterization of the health status, disease staging, disease progression, disease monitoring and assessment of drug and treatment effects of the central nervous system from optical coherence tomography data in humans and animals.

[0010] The present disclosure also relates to a method able to analyse optical coherence tomography data of one or more tissues of the human and animal central nervous system. The presented method overcomes the need for expensive and complex imaging facilities to assess the status of the central nervous system in humans and animals in health and disease. It allows healthy controls and patients to be classified into the correct group and changes to be monitored over time in a fraction of the time and at a fraction of the cost. Moreover, the technique may be widely adopted because of the low cost and compact nature of the acquisition device compared to currently used instrumentation, namely magnetic resonance imaging and computed tomography devices.

[0011] The disclosure comprises the analysis of gathered OCT data by means of statistical analysis and classification methods.

[0012] Unless otherwise defined herein, scientific and technical terms used in connection with the present invention have the meanings commonly understood by those of ordinary skill in the art. Further, unless otherwise required by context, singular terms include pluralities and plural terms include singular.

[0013] The term "retina" as used herein refers to the tissue in the ocular fundus extending from the end of the vitreous to the anterior of the retinal pigment epithelium (RPE) both from humans and animals. The retina is the visible part of the central nervous system in humans and animals.

[0014] The term "data" as used herein refers to the information provided by the OCT at every individual or collective imaged sites of the central nervous system.

[0015] The term "OCT" as used herein refers to both the technique and the equipment used to collect data from the central nervous system.

[0016] The term "scan" as used herein refers to the gathering of data from the imaged region of the central nervous system by the OCT.

[0017] The term "OCT volume" as used herein refers to data gathered in a scan by the OCT.

[0018] The term "histogram" as used herein refers to the distribution of data for the imaged region of the central nervous system, in particular from the retina or from specific layers of the retina.

[0019] The term "segmentation" as used herein refers to the split of data into aggregates sharing common characteristics, e.g. belonging to the same anatomic layer of the retina.

[0020] The term "classification" as used herein refers to the attribution of a label to a case, e.g. classification of a retina into the Alzheimer group means the case presents mostly characteristics of the Alzheimer group.

[0021] The term "software" as used herein refers to the set of computer instructions executed by a computer.

[0022] The term "fundus image" as used herein refers to an image computed from OCT gathered data by means of mathematical operations.

[0023] The term "A-scan" as used herein refers to the values of the OCT data along the direction of the OCT laser beam.

[0024] The term "B-scan" as used herein refers to the set of A-scans along the fastest scanning direction of the OCT.

[0025] The abbreviations used herein have their usual meaning in the art. For clarity, abbreviations are as follows: "OCT" means optical coherence tomography; "MRI" means magnetic resonance imaging; "CT" means computed tomography; "ILM" means inner limiting membrane; "RPE" means retinal pigment epithelium and "AD" means Alzheimer disease.

[0026] It is disclosed a data processing method for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT, said method comprising:

processing data from the collected OCT data to compute a texture parameter or parameters from collected fundus imaging data;

classifying the computed texture parameter or parameters into a central nervous system health status for characterizing said parametric indicator.

[0027] In an embodiment, the processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters comprises segmenting the collected OCT data by retinal layer for classifying the computed texture parameter or parameters from said segmented layers.

[0028] In an embodiment, the segmented layers comprise the ganglion cell layer, in particular the segmented layers consist of the ganglion cell layer.

[0029] In an embodiment, the processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters comprises the steps of: segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

obtaining OCT histogram data for the entire scanned retina, for each of the individual layers and for sets of consecutive layers;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said OCT histogram data and said co-occurrence matrix texture parameter or parameters.

[0030] In an embodiment, the texture parameter or parameters may be calculated using other well-known methods for calculating texture parameter or parameters, e.g. based on Wavelet or Fourier transforms.

[0031] In an embodiment, the segmenting the collected data by retinal layer comprises splitting the collected data into aggregates sharing common characteristics belonging to the same anatomic layer of the retina.

[0032] In an embodiment, the computing a fundus image comprises averaging the collected OCT data for each segmented retinal layer for each A-scan.

[0033] In an embodiment, the co-occurrence matrix is a Gray-Level Co-Occurrence Matrix, GLCM.

[0034] In an embodiment, the obtaining OCT histogram data further comprises, for each histogram, the steps of:

fitting a sum of Gaussian curves to said each histogram and determining the respective parameters, in particular the amplitude, the centre and the standard deviation;

computing skewness and kurtosis from said each histogram;

computing root-mean-square-error as an indicator of the goodness of fit.

[0035] In an embodiment, the central nervous system health status comprises the assessment of drug treatment effect to the central nervous system. In an embodiment, the central nervous system health status comprises the distinction between different stages of disease of the central nervous system. In an embodiment, the central nervous system health status comprises the distinction between healthy ageing and unhealthy ageing of the central nervous system. In an embodiment, the central nervous system health status comprises the distinction between healthy and unhealthy central nervous system. In an embodiment, the central nervous system health status comprises Parkinson, Multiple Sclerosis and/or Alzheimer disease status.

[0036] In an embodiment, the texture parameter or parameters comprise Sum Of Squares. In an embodiment, the texture parameter or parameters comprise Cluster Shade. In an embodiment, the texture parameter or parameters comprise Sum of Variances. In an embodiment, the texture parameter or parameters comprise Maximum Probability. In an embodiment, the texture parameter or parameters comprise Sum Average. In an embodiment, the texture parameter or parameters comprise Cluster Prominence. In an embodiment, the central nervous system health status comprises Multiple Sclerosis and the texture parameter or parameters comprise Sum Of Squares.

[0037] It is also disclosed a computer equipment for the characterization of a parametric indicator for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT, said device comprising data processing means arranged for calculating said parametric indicator by:

processing data from the collected OCT data to compute a texture parameter or parameters from collected fundus imaging data;

classifying the computed texture parameter or parameters into a central nervous system health status for characterizing said parametric indicator.

[0038] In an embodiment, the data processing means are arranged for processing data from collected non-invasive retina imaging data to compute a texture parameter or parameters by the steps of:

segmenting the collected OCT data by retinal layer;

computing a fundus image by averaging the collected OCT data for each segmented retinal layer;

decimating the computed fundus images to account for difference in sampling spacing along OCT B-scans and the spacing between consecutive OCT B-scans; splitting the decimated images into geometric rectangular regions;

calculating texture parameter or parameters from a co-occurrence matrix to identify texture patterns in the image of each geometric rectangular region at a plurality of scales and directions;

obtaining OCT histogram data for the entire scanned retina, for each of the individual layers and for sets of consecutive layers;

characterizing said parametric indicator for central nervous system health status by statistical feature calculation from said OCT histogram data and said co-occurrence matrix texture parameter or parameters.

Brief Description of the Drawings

[0039] The following figures provide preferred embodiments for illustrating the description and should not be seen as limiting the scope of the invention.

[0040] Figure 1 shows optical coherence tomography data. Top-left: B-scan (#66) of the right eye of a patient diagnosed with multiple sclerosis. Bottom: Plot of the A-scan (#100) earmarked (B-scan above). A-scan values up to sample 364 correspond to OCT readings within the vitreous. A-scan values from sample 366 to 524 correspond to OCT readings within the retina and A-scan values from sample 526 to 1024 correspond to OCT readings within the choroid. The light travels from top to bottom (B-scan) and left to right (A-scan). Top-right: inset of the earmarked area (B-scan). Top-left and top-right images were directly exported from the OCT Explorer software and show the twelve segmented interfaces defining the eleven layers.

[0041] Figure 2 shows the computed fundus image (mean value fundus) from the volumetric macular cube scan of the right eye of a healthy control subject. Each of the 7 x 7 blocks shows an individually analysed area whose results were later aggregated into larger regions (shaded areas). Image axes are: x-axis (horizontal) - temporal (left) to nasal (right); y-axis (vertical) - superior (top) to inferior (bottom). All left eyes were horizontally flipped to match the right ones and to allow metrics to keep the same relative position.

Detailed Description

[0042] The present disclosure relates to a data processing method and computer equipment for the characterization of a parameter for central nervous system health status based on data collected from non-invasive retina imaging by optical coherence tomography, OCT.

[0043] The present disclosure also relates to a method for the analysis of optical coherence data, particularly optical coherence tomographic data generated by means of optical coherence tomography systems, to provide information on the health status of the imaged human or animal central nervous system (CNS), healthy ageing changes in the CNS, changes associated with CNS diseases, changes due to drug administration and changes due to treatment of the CNS, to provide a means to discriminate between unhealthy humans or animals and respective healthy controls, and to provide a means to discriminate between different CNS diseases.

[0044] According to an embodiment, data from OCT scans is exported to be made available for treatment and analysis by means of a computer and software running on the computer.

[0045] According to an embodiment, each OCT volume is processed by computer software to segment the data by retinal layer.

[0046] For each retinal layer a fundus image is computed through the averaging of OCT data for each A-scan.

[0047] According to an embodiment, each of the computed fundus images is further decimated to account for differences in sampling spacing along the B-scans and the spacing between consecutive B-scans.

[0048] According to an embodiment, each decimated image is further split into geometric rectangular regions.
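For illustration only, the following Python sketch outlines one way the steps of paragraphs [0046] to [0048] could be implemented; the array layout, the decimation factor and the 7 x 7 grid size are assumptions for the sketch, not requirements of the disclosure.

import numpy as np

def mean_value_fundus(volume, top, bottom):
    """Average the A-scan samples between two layer interfaces (one value per A-scan)."""
    n_bscans, n_ascans, _ = volume.shape
    fundus = np.zeros((n_bscans, n_ascans))
    for b in range(n_bscans):
        for a in range(n_ascans):
            z0, z1 = int(top[b, a]), int(bottom[b, a])
            fundus[b, a] = volume[b, a, z0:z1].mean() if z1 > z0 else 0.0
    return fundus

def decimate_columns(fundus, factor=4):
    """Keep every `factor`-th A-scan so row and column spacing match (e.g. 512/128 = 4)."""
    return fundus[:, ::factor]

def split_blocks(image, n_rows=7, n_cols=7):
    """Split the decimated fundus image into a grid of rectangular regions."""
    rows = np.array_split(image, n_rows, axis=0)
    return [block for row in rows for block in np.array_split(row, n_cols, axis=1)]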

[0049] According to an embodiment, each individual region is analysed by means of the co-occurrence matrix to identify patterns in the image at different scales and directions.

[0050] Computed parameters from the co-occurrence matrix include the energy, contrast and homogeneity. In addition, ratios of those parameters with regard to the different directions are computed to detect the preferred direction of each.
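As a non-limiting sketch of the parameters mentioned in paragraph [0050], the snippet below computes energy, contrast and homogeneity from a grey-level co-occurrence matrix at four directions and derives simple per-direction ratios; it assumes scikit-image and a block already quantized to the chosen number of grey levels.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def directional_glcm_features(block_q, levels=16, distance=1):
    """block_q: 2-D integer array with values in [0, levels - 1]."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]   # 0, 45, 90, 135 degrees
    glcm = graycomatrix(block_q, distances=[distance], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    feats = {}
    for prop in ("energy", "contrast", "homogeneity"):
        values = graycoprops(glcm, prop)[0]             # one value per direction
        feats[prop] = values
        # Ratio of each direction to the mean hints at a preferred orientation.
        feats[prop + "_ratio"] = values / (values.mean() + 1e-12)
    return feats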

[0051] The parameters computed from the co-occurrence matrix express the anatomical organization of the respective layer of the retina.

[0052] According to an embodiment, histograms of OCT data are computed for the entire scanned retina, for each of the individual layers and for sets of consecutive layers.

[0053] According to an embodiment, a sum of Gaussian curves is fitted to each of the above histograms and the respective parameters determined, i.e. the amplitude, the centre and the standard deviation. Skewness and kurtosis are computed from the histogram. The root-mean-square-error is computed as an indicator of the goodness of fit.
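A minimal sketch of the histogram features of paragraph [0053], assuming SciPy and, purely for illustration, a two-component Gaussian sum with naive initial guesses:

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skew, kurtosis

def two_gaussians(x, a1, c1, s1, a2, c2, s2):
    return (a1 * np.exp(-((x - c1) ** 2) / (2 * s1 ** 2)) +
            a2 * np.exp(-((x - c2) ** 2) / (2 * s2 ** 2)))

def histogram_features(samples, bins=64):
    counts, edges = np.histogram(samples, bins=bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    # Naive starting point: tallest bin plus a broad secondary component.
    p0 = [counts.max(), centres[np.argmax(counts)], np.std(samples),
          counts.max() / 2, np.mean(samples), np.std(samples)]
    params, _ = curve_fit(two_gaussians, centres, counts, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((two_gaussians(centres, *params) - counts) ** 2))
    return {"gaussian_params": params,          # amplitudes, centres, standard deviations
            "skewness": skew(samples),
            "kurtosis": kurtosis(samples),
            "rmse": rmse}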

[0054] According to an embodiment, the full set of features computed from the whole set of histograms and from the full set of computed fundus images are specific for each retina, either human or animal.

[0055] According to an embodiment, the full set of features, or part of it, allows distinguishing between healthy controls and patients, between patients suffering from different neurological diseases, between patients in different stages of the same disease, between healthy controls at different stages of ageing, and between patients at different stages of ageing, all of the above for humans and animals.

[0056] According to an embodiment, the full set of features, or part of it, allows identifying changes in the structural arrangement of the central nervous system, either associated with healthy ageing or associated with neurological diseases.

[0057] According to an embodiment, the full set of features, or part of it, may be used to compare an eye, and consequently the central nervous system, with that of the healthy population, of the Parkinson's disease, Multiple Sclerosis and Alzheimer's disease populations, or of any central nervous system disease for which a normative database was established comprising the said set of features, or any other features extracted from the optical coherence tomography data following similar processes.

[0058] An embodiment comprises a method for the analysis of optical coherence tomography data of the human or animal central nervous system using an instrument capable of emitting light wherein said light is directed to the retina or any other part of the central nervous system.

[0059] An embodiment comprises a method wherein one or more tissues are selected from the human or animal eye or any other part of the central nervous system.

[0060] An embodiment comprises a method to compute parameters from the histograms of collected data from the central nervous system.

[0061] An embodiment comprises a method wherein one or more regions of the central nervous system are used.

[0062] An embodiment comprises a method to compute texture parameters of fundus images computed from collected data from the central nervous system.

[0063] An embodiment comprises a method wherein one or more regions of the central nervous system are used.

[0064] An embodiment comprises a method to distinguish between healthy and unhealthy central nervous system. An embodiment comprises a method to distinguish between healthy ageing and unhealthy ageing of the central nervous system. An embodiment comprises a method to distinguish between different stages of diseases of the central nervous system. An embodiment comprises a method to assess drug and treatment effects on the central nervous system.

[0065] Data from 39 patients diagnosed with MS and 38 healthy controls, imaged by the Cirrus SD-OCT 5000 (Carl Zeiss Meditec, Dublin, CA, USA) were gathered from a database.

[0066] As opposed to other texture methods applied to OCT, we apply the texture analysis to mean value fundus (MVF) images calculated as the average of A-scan values between two retinal layer interfaces. The analysis, characterization and classification of objects can make use of texture because it is an intrinsic property of any object. Furthermore, we address insights complementary to those obtained using thickness analysis only, which will help to characterise structural alterations from axonal damage and correlate them with neurological diseases. Besides thickness analysis, OCT acquires a significant amount of data that has not been fully exploited, namely the complete structural characterization of the retinal layers in health and disease.

[0067] Data from 54 healthy controls were gathered from the database and split into two groups, by age, to demonstrate that the texture difference between MS patients and healthy controls is specific to the disease. We carried out two experiments: relapsing-remitting multiple sclerosis (RRMS), with or without optic neuritis (ON), versus healthy controls; and healthy participants only, to assess changes due to the natural ageing process.

[0068] The following pertains to RRMS and matched healthy controls. We analysed data from both eyes of 39 patients and 38 age-matched healthy controls. For demographics see table 1. Altogether, we handled 76 eyes and scans in each group. Eighteen eyes (23.7%) from the RRMS group were affected with ON.

Table 1. Demographic data of the multiple sclerosis study

                              MS           HC
N                             39           38
Age - mean (std) (yrs)        38.8 (7.3)   36.3 (9.2)
Age - min (max) (yrs)         27 (54)      21 (55)
Male (Female)                 14 (25)      11 (27)
Right (Left) Eyes             39 (37)      38 (38)
Optic Neuritis - yes (no)     18 (58)      -
Total acquisitions            76           76

MS - multiple sclerosis; HC - healthy controls

[0069] Inclusion and exclusion criteria were defined. All patients had a definite diagnosis of MS according to the 2010 McDonald criteria and a relapsing-remitting disease course. Exclusion criteria for all participants were a history of neurological (other than MS in the patient group) or systemic disease, significant visual impairment or other ocular or medical conditions with known effects on the retina. For MS patients, a relapse or steroid treatment within the eight weeks preceding evaluation was also considered an exclusion condition. All patients were under treatment with disease-modifying drugs. One MS patient's eye scan was rejected because of poor scan quality and another patient's eye scan was randomly selected to replace it, hence the difference in the number of people between the two groups and the unbalanced number of right and left eyes in the MS group.

[0070] The following pertains to the study on healthy ageing. We retrieved data from a total of 54 healthy participants divided by age into two groups (table 2).

Table 2. Demographic data of the healthy ageing study

                              Group 1      Group 2
N                             27           27
Age - mean (std) (yrs)        31.9 (7.5)   63.2 (9.0)
Age - min (max) (yrs)         23 (46)      50 (79)
Male (Female)                 14 (13)      13 (14)
Right (Left) Eyes             27 (27)      27 (27)
Total acquisitions            54           54

[0071] The following pertains to optical coherence tomography. The acquisition protocol established the use of the 512 x 128 and the 200 x 200 macular cube protocols to scan the 6000 x 6000 x 2000 μm³ volume centred on the fovea. The 512 x 128 macular cube protocol data was used in this work because it is the one most used in clinical practice.

[0072] The following pertains to layer segmentation. The segmentation process was performed automatically by the OCT Explorer software (Retinal Image Analysis Lab, Iowa Institute for Biomedical Imaging, Iowa City, IA, USA) (20; 21; 22). The software accurately segments twelve interfaces (Fig. 1), leading to eleven retinal layers: 1) RNFL, 2) GCL, 3) IPL, 4) inner nuclear layer (INL), 5) outer plexiform layer (OPL), 6) outer nuclear layer (ONL), 7) inner segment/outer segment junction (IS/OS), 8) outer segment (OS), 9) outer photoreceptor (OPR), 10) subretinal virtual space and 11) retinal pigment epithelium (RPE). Furthermore, all the segmentations were visually inspected and manually corrected whenever necessary.

[0073] The following pertains to texture analysis. Herein, we apply texture analysis based on the grey-level co-occurrence matrix (GLCM) [12], calculated from projection images computed for each retinal layer, to identify which retinal layers present the biggest differences to the healthy population. We computed the mean value fundus images based on the macular cube for each of the layers aforementioned. MVF images were sampled to present the same sampling rate in both the vertical and horizontal directions, eliminating the bias due to distinct sampling rates. All left eyes were horizontally flipped to match the right ones and to allow metrics to keep the same relative position across eyes.

[0074] Due to the potential difference in orientation of structures in the different retinal areas around the fovea, as is the case for the RNFL, computed fundus images were split into 7 x 7 blocks where the central block is centred on the fovea (Fig. 2). Grey-level co-occurrence matrices were calculated for each block after reducing the number of grey levels from the original 256 to 16 levels. Four distinct orientations were considered for the GLCM - 0, 45, 90 and 135 degrees - as in [12], by considering symmetry, that is, angles 180 degrees apart are taken to be the same. Also, the distance was made unitary because exploratory analysis demonstrated this to be the distance providing the largest texture metric values for the images at hand. For each metric and block, the supremum over the four orientations was considered as the specific metric for the block.
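For illustration, a sketch of this per-block procedure, assuming scikit-image; the quantization rule and the metric subset shown are assumptions for the sketch (the full set of twenty parameters follows [12], [13] and [14]).

import numpy as np
from skimage.feature import graycomatrix, graycoprops

ANGLES = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]       # 0, 45, 90, 135 degrees

def quantize(block, levels=16):
    """Map a block with values in [0, 255] to integer levels in [0, levels - 1]."""
    return np.clip(block.astype(float) / 256.0 * levels, 0, levels - 1).astype(np.uint8)

def block_metrics(block, levels=16,
                  metrics=("contrast", "energy", "homogeneity", "correlation", "dissimilarity")):
    """Symmetric GLCM at unit distance; supremum over the four orientations per metric."""
    glcm = graycomatrix(quantize(block, levels), distances=[1], angles=ANGLES,
                        levels=levels, symmetric=True, normed=True)
    return {m: graycoprops(glcm, m)[0].max() for m in metrics}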

[0075] Twenty texture parameters were computed from each GLCM as follows: 1) homogeneity, 2) contrast, 3) correlation, 4) energy, 5) sum average, 6) sum of squares, 7) sum of variances, 8) sum entropy, 9) difference variance, 10) difference entropy, 11) information measure of correlation 1, 12) information measure of correlation 2, 13) autocorrelation, 14) cluster prominence, 15) cluster shade, 16) dissimilarity, 17) entropy, 18) maximum probability, 19) inverse difference normalized and, 20) inverse difference moment normalized. The definition of texture features #1 to #12 can be found in [12], features #13 to #18 in [13] and features #19 to #20 in [14]. The average of each texture parameter was calculated considering the nine blocks (3 x 3) composing each quadrant of the image (Fig. 2), leading to a total of 80 parameters per image.
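A sketch of the quadrant aggregation described above, under the assumption that the block metrics are arranged in a 7 x 7 x n_params array and that each quadrant corresponds to a 3 x 3 corner of the grid around the central (foveal) block; the corner indices are an illustrative choice.

import numpy as np

def quadrant_averages(block_values):
    """block_values: array shaped (7, 7, n_params), one metric vector per grid block."""
    corners = [(0, 0), (0, 4), (4, 0), (4, 4)]   # top-left block of each assumed 3 x 3 quadrant
    feats = []
    for r, c in corners:
        feats.append(block_values[r:r + 3, c:c + 3, :].mean(axis=(0, 1)))
    return np.concatenate(feats)                 # length = 4 * n_params (80 for 20 parameters)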

[0076] The following pertains to classification. The rationale for the use of a supervised classification approach in this work is that even though individual features (texture metrics) may not allow distinguishing between MS patients and healthy controls, a combination of two or more of these features may eventually do so. In this case, it would mean that there is embedded information related to the disease within the retinal tissue as imaged by the OCT. The higher the accuracy in correctly classifying individual cases, the more distinct the information is. Because we aim to identify which retinal layer(s) is (are) more affected, the classification is carried out only at the individual layer level, that is, only texture features of a particular retinal layer are considered at a time.

[0077] Herein, we use support vector machines (SVM) with radial basis function (RBF) kernel. Each variable is z-score transformed at the preprocessing stage to eliminate differences in scale that may interfere with the system performance.

[0078] To assess the performance of the system in classifying unknown cases, 2-fold cross-validation was used. As such, cases for each of the classes were split into two groups, with 50% of the cases being used for training and 50% for testing. The determined class for each case of the testing set is compared to the real (known) class, and the accuracy of the system is computed accordingly.
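As an illustrative sketch of this classification stage (z-scoring, RBF-kernel SVM, repeated 2-fold cross-validation) using scikit-learn; the hyperparameters are library defaults, not values reported in this work.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_layer(X, y, n_repeats=10, seed=0):
    """X: (n_eyes, n_features) texture features for one retinal layer; y: 0/1 group labels."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    accs = []
    for i in range(n_repeats):
        cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=seed + i)
        accs.extend(cross_val_score(clf, X, y, cv=cv, scoring="accuracy"))
    return np.array(accs)                        # distribution of accuracies over splits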

[0079] A backwards elimination process was used to determine the set of features that carry most of the information.
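A generic sketch of a backwards elimination loop of the kind described, not the exact procedure used here: starting from all features, the feature whose removal degrades cross-validated accuracy the least is dropped at each step.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def backwards_elimination(X, y, cv=2):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    remaining = list(range(X.shape[1]))
    history = []
    while len(remaining) > 1:
        scores = []
        for f in remaining:
            kept = [c for c in remaining if c != f]
            scores.append(cross_val_score(clf, X[:, kept], y, cv=cv).mean())
        best = int(np.argmax(scores))            # feature whose removal is least harmful
        history.append((len(remaining) - 1, scores[best], remaining[best]))
        del remaining[best]
    return history                               # (n_features kept, accuracy, feature dropped)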

[0080] Because of the many possible combinations to create groups for training and testing, using 50% of the available cases, the classification process was run multiple times for each layer to provide a glimpse on the distribution of the system performance (accuracy) and the number of parameters required.

[0081] The following pertains to results. The supervised classification system used in this work, and the results achieved, are intended primarily to identify which of the retinal layers are more affected in patients diagnosed with MS. The rationale for the use of a classification system is that the higher the accuracy achieved, the more distinct the data should be between the MS and healthy control groups.

[0082] From the initial 80 parameters computed for each of the fundus images, for the RNFL, GCL, IPL, INL, OPL and ONL, a backwards elimination approach was followed to determine the set of features providing the best accuracy for each layer. The number of features required to achieve the best classification accuracy is also of importance: the smaller the number of features required, the better, as it indicates the selected features are sufficiently distinct to allow discrimination between groups. Results for the discrimination between the MS and the healthy control groups are presented in table 3.

[0083] The accuracy of the discrimination is over 75% for the GCL and under 72% for any of the remaining layers (table 3). The maximum accuracy for the classification based on the GCL is 78.9%, achieved using seven features, while the accuracy achieved using the least number of features (4) was 76.2%. The particular importance of these data is conveyed by the fact that the nerve fibres do demonstrate changes, as expressed by the 68.45% (median) accuracy, above that of the IPL, INL, OPL and ONL, yet nearly 9% below that of the GCL. This shows that changes within the GCL in MS go well beyond the loss of nerve fibres.

Table 3. Discrimination by retinal layer between eyes from patients diagnosed with multiple sclerosis and healthy controls

          Accuracy (%)                          Features (#)
Layer     min    Q1     Q2     Q3     max       min   Q1     Q2     Q3     max
RNFL      66.6   67.63  68.45  70.00  71.2      10    14.00  16.00  30.50  36
GCL       75.1   75.68  77.35  78.18  78.9      4     6.25   12.50  19.00  28
IPL       64.3   65.98  66.70  67.18  67.5      2     3.25   5.00   5.75   27
INL       63.8   64.98  67.45  68.15  68.8      2     3.00   5.00   11.25  21
OPL       57.9   59.93  60.50  61.38  63.0      3     16.50  19.00  33.25  42
ONL       59.1   60.35  61.30  61.55  64.3      8     13.50  17.00  25.75  36

[0084] It is also important to note the age matching between the MS and healthy groups. Even though this age matching allows the age factor to be put aside for the results above, we verified whether the achieved results are specific to the disease and whether they could depend on age, considering the relatively large age range in both the MS and healthy control groups, 27 to 54 and 21 to 55 years respectively. In consequence, a study was carried out considering only healthy controls, in two distinct age groups. Results can be found in table 4.

[0085] The distribution of the maximum accuracy achieved and the number of features used, after running the classification process multiple times, by a support vector machine (SVM) with radial basis function (RBF) is presented. The performance was tested using the k-fold cross-validation with k=2. RNFL- retinal nerve fibre layer, GCL - ganglion cell layer, IPL - inner plexiform layer, INL - inner nuclear layer, OPL - outer plexiform layer and, ONL- outer nuclear layer, min - minimum, max - maximum, and Qi - quartile i.

Table 4. Discrimination by retinal layer between eyes from healthy controls of two age groups

          Accuracy (%)                          Features (#)
Layer     min    Q1     Q2     Q3     max       min   Q1     Q2     Q3     max
RNFL      77.8   81.63  82.10  82.20  83.3      6     8.00   9.50   11.00  12
GCL       66.3   68.00  68.70  69.40  70.0      3     5.00   9.00   12.50  18
IPL       72.6   73.20  73.80  74.95  76.7      6     14.00  18.00  22.00  28
INL       74.6   75.63  78.00  79.05  81.5      8     11.25  15.50  16.00  18
OPL       76.7   77.25  78.05  79.13  81.5      8     13.00  14.50  20.75  25
ONL       84.6   85.58  86.30  86.70  87.6      9     9.50   12.00  12.75  15

[0086] The distribution of the maximum accuracy achieved and the number of features used, after running the classification process ten times, by a support vector machine (SVM) with radial basis function (RBF) is presented. The performance was tested using k-fold cross-validation with k=2. RNFL - retinal nerve fibre layer, GCL - ganglion cell layer, IPL - inner plexiform layer, INL - inner nuclear layer, OPL - outer plexiform layer and, ONL - outer nuclear layer, min - minimum, max - maximum, and Qi - quartile i.

[0087] The accuracy of the discrimination ranges from 66.3% to 84.6% (minimum accuracy found over ten runs of the classification process), respectively for the ganglion cell and outer nuclear layers (GCL and ONL). Furthermore, it is of particular importance to notice how the signature of the ageing effect is spread across the different retinal layers. Moreover, the GCL, the layer that best discriminates between patients diagnosed with MS and controls, is the steadiest one in healthy ageing. These results confirm the particular incidence of changes in the GCL in MS. While the GCL allowed for a classification accuracy of over 75.0% between MS patients and healthy controls for an average age difference between groups of 2.5 years, it allows for a maximum accuracy of 70.0% for the discrimination between healthy control groups with a mean age difference of 31.3 years.

[0088] The classification processes were run 100 times to confirm the differences found, for the GCL only, forcing the backwards elimination down to the last feature. The maximum accuracy was chosen among the last six features to avoid over-fitting. Results can be found in table 5. These confirm the ability to discriminate between patients diagnosed with multiple sclerosis and healthy controls, and that these changes in the GCL are specific to the disease, as confirmed by the lower ability to discriminate between two groups of healthy controls even with an average age difference of over 30 years.

Table 5. Discrimination accuracy based on ganglion cell layer

Features (#)   MS vs HC                HC (Group 1 vs Group 2)
1              73.0% - 73.2% (n=4)     65.2% - 66.5% (n=4)
2              -                       65.0% - 68.7% (n=8)
3              72.9% - 77.4% (n=10)    64.6% - 70.6% (n=22)
4              73.7% - 77.8% (n=13)    64.4% - 70.2% (n=15)
5              73.6% - 78.9% (n=28)    66.3% - 70.4% (n=24)
6              72.0% - 79.6% (n=45)    64.3% - 70.7% (n=27)

[0089] Discrimination accuracy based on ganglion cell layer texture analysis between patients diagnosed with multiple sclerosis and age-matched healthy controls (MS vs HC) (see table 1), and between two healthy control groups (HC (Group 1 vs Group 2)) (see table 2). Classification achieved by a support vector machine (SVM) with radial basis function (RBF) is presented. The performance was tested using k-fold cross-validation with k=2.

[0090] There is still a long way ahead to understand all the links between what can be seen and measured in the retina and the real meaning for the diseased brain. Because of the significant impact of diseases such as Parkinson's, Alzheimer's and MS, these are the main targets of the present disclosure.

[0091] Our research group started to explore new ways of extracting information from the volumetric scans of OCT, first for diabetic retinopathy, and later we began to study the application of a similar concept to neurological disorders. These approaches were based on the distribution of OCT readings within the retina, that is, from the inner limiting membrane (ILM) to the retinal pigment epithelium (RPE), overall for the central macula.

[0092] The human retina is becoming an important source of information on changes occurring in the central nervous system. Moreover, imaging the retina is far easier than imaging the brain, which is not directly accessible by optical means. Despite the accumulated evidence on measurable changes in the retina associated with neurological disorders, the traditional approach of relying on thickness measurements, either the full retina thickness or the thickness of some of the retina's layers, independently or in an aggregated way, seems to be leaving out potentially relevant information. In this disclosure, we show that well-known metrics in the field of computer vision can be applied to exploit data gathered from the ocular fundus by the OCT. Surprisingly, only three features computed from grey-level co-occurrence matrices of the ganglion cell layer are sufficient to discriminate between eyes of patients diagnosed with MS and eyes of healthy controls with an accuracy of up to 77.4%. On the other hand, we demonstrated that this layer is the one most preserved in healthy ageing, which suggests it should be further studied in MS because it may convey information on the onset of the disease and the response to treatments.

[0093] It is to be appreciated that certain embodiments of the disclosure as described herein may be incorporated as code (e.g., a software algorithm or program) residing in firmware and/or on computer useable medium having control logic for enabling execution on a computer system having a computer processor, such as any of the servers described herein. Such a computer system typically includes memory storage configured to provide output from execution of the code which configures a processor in accordance with the execution. The code can be arranged as firmware or software, and can be organized as a set of modules, including the various modules and algorithms described herein, such as discrete code modules, function calls, procedure calls or objects in an object-oriented programming environment. If implemented using modules, the code can comprise a single module or a plurality of modules that operate in cooperation with one another to configure the machine in which it is executed to perform the associated functions, as described herein.

[0094] The term "comprising" whenever used in this document is intended to indicate the presence of stated features, integers, steps, components, but not to preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. The disclosure should not be seen in any way restricted to the embodiments described and a person with ordinary skill in the art will foresee many possibilities to modifications thereof. The above described embodiments are combinable. The following claims further set out particular embodiments of the disclosure.