Title:
AUTOMATIC BORDER DELINEATION AND DIMENSIONING OF REGIONS USING CONTRAST ENHANCED IMAGING
Document Type and Number:
WIPO Patent Application WO/1996/038815
Kind Code:
A1
Abstract:
The present invention is a novel system and method for automatically identifying borders of regions of interest within an image of a patient's organ or tissue. The system generates images - before, during and after the administration of a contrast agent. Once the set of images has been taken, the system begins automatic processing of the images. The steps of the processing include the identification of baseline image frames, identification of baseline intensities for each given pixel in the ROI, baseline subtraction on a per-pixel basis, determining a probability of signal-to-noise ratio for each pixel, and thresholding each pixel to determine if a pixel belongs to an area inside the border region or an area outside the border region. To determine exactly which pixels are at the border, the method refines the set by locally minimizing a total cost function that relates a low value to points typically found on a contrast enhanced image. The border of the region of interest is thereby determined.

Inventors:
LEVENE HAROLD (US)
Application Number:
PCT/US1996/008257
Publication Date:
December 05, 1996
Filing Date:
May 30, 1996
Assignee:
MOLECULAR BIOSYSTEMS INC (US)
LEVENE HAROLD (US)
International Classes:
A61B5/055; A61B8/00; A61B6/03; G06T1/00; G06T5/00; G06V10/28; (IPC1-7): G06T5/00
Domestic Patent References:
WO1991019457A11991-12-26
Foreign References:
EP0521559A11993-01-07
US4802093A1989-01-31
Other References:
SEBASTIANI G ET AL: "Analysis of dynamic magnetic resonance images", IEEE TRANSACTIONS ON MEDICAL IMAGING, JUNE 1996, IEEE, USA, vol. 15, no. 3, ISSN 0278-0062, pages 268 - 277, XP000600099
UNSER M ET AL: "Automated extraction of serial myocardial borders from M-mode echocardiograms", IEEE TRANSACTIONS ON MEDICAL IMAGING, MARCH 1989, USA, vol. 8, no. 1, ISSN 0278-0062, pages 96 - 103, XP000117589
MAES L ET AL: "Automated contour detection of the left ventricle in short axis view and long axis view on 2D echocardiograms", PROCEEDINGS. COMPUTERS IN CARDIOLOGY (CAT. NO.90CH3011-4), CHICAGO, IL, USA, 23-26 SEPT. 1990, ISBN 0-8186-2225-3, 1991, LOS ALAMITOS, CA, USA, IEEE COMPUT. SOC. PRESS, USA, pages 603 - 606, XP000222135
Claims:
IN THE CLAIMS:
1. A method for automatically determining the border of a patient's tissue found in an operator-selected region of interest, said border determined from a set of contrast enhanced, grey scale images, the steps of said method comprising: A) obtaining a set of grey scale images of the patient's tissue, some of which contain contrast agent for image enhancement; B) identifying a region of interest in which the patient tissue is located; C) from the set of grey scale images collected from step (A), obtaining a baseline intensity value; D) subtracting the baseline intensity value from the contrast enhanced images; E) establishing a threshold based on signal to noise ratio; F) establishing a reference point as a first border point in the region of interest; G) from a set of candidate points adjacent to said reference point established in step (F), automatically selecting which candidate point is most likely to be a border point; and H) substituting the candidate point selected in step (G) as the new reference point and continuing with step (G) until the entire border is determined.
2. The method as recited in claim 1 wherein said patient tissue is the endocardium.
3. The method as recited in claim 2 wherein step (A) further comprises: (A)(i) selecting a point in the cardiac cycle at which to obtain a set of grey scale images; and (A)(ii) obtaining a set of grey scale images, some of which contain contrast agent for image enhancement.
4. The method as recited in claim 3 wherein step (A)(ii) further comprises: (A)(ii)(a) obtaining a set of grey scale images prior to the introduction of contrast agent; (A)(ii)(b) obtaining a set of grey scale images during the introduction of contrast agent; and (A)(ii)(c) obtaining a set of grey scale images after the contrast agent has been introduced.
5. The method as recited in claim 1 wherein step (A) further comprises: (A)(i) obtaining a set of grey scale images of the patient's tissue, some of which contain contrast agent for image enhancement; and (A)(ii) correcting for motion of the patient's tissue in the set of grey scale images obtained in step (A)(i).
6. The method as recited in claim 1 wherein the identifying step of step (B) is operator-selected.
7. The method as recited in claim 1 wherein the baseline intensity value of step (C) is obtained on a pixel-by-pixel basis.
8. The method as recited in claim 1 wherein the baseline intensity value of step (C) is obtained by an operator selecting baseline image frames by visual inspection.
9. The method as recited in claim 7 wherein the baseline intensity value of step (C) is obtained by an operator selecting baseline image frames from a graph of mean pixel intensity within the region of interest over time.
10. The method as recited in claim 7 wherein the baseline intensity value of step (C) is automatically obtained by performing linear regression analysis on pixel intensity over time.
11. The method as recited in claim 2 wherein step (D) further comprises: (D)(i) subtracting the baseline intensity value from the contrast enhanced images on a pixel-by-pixel basis; (D)(ii) determining, on a pixel-by-pixel basis, whether a given pixel is in the heart chamber; and (D)(iii) determining the center of mass of the heart chamber.
12. The method as recited in claim 11 wherein step (F) further comprises: (F)(i) locating the set of points defined by the maximum and minimum x and y coordinates of the set of points in the heart chamber; (F)(ii) picking one of the points located in step (F)(i) as a reference point.
13. The method as recited in claim 2 wherein step (G) further comprises: (G)(i) selecting a set of neighboring points to the reference point; (G)(ii) calculating a cost function for each of the neighboring points selected in step (G)(i); and (G)(iii) selecting a new reference point likely to be a border point based on the cost values generated in step (G)(ii).
14. The method of claim 1 wherein end systole and end diastole points are used to determine regional wall motion.
15. The method of claim 1 wherein end systole and end diastole points are used to determine ejection fraction.
16. The method of claim 1 wherein end systole and end diastole points are used to determine fractional shortening.
17. The method of claim 1 wherein the imaging is performed from a view selected from the group consisting of sagittal, transverse, longitudinal, parasternal short axis, apical long axis, parasternal long axis, suprasternal long axis, subcostal short axis, subcostal four chamber, apical two chamber, and apical four chamber.
18. The method of claim 1 wherein the border delineates the left ventricle.
19. The method of claim 1 wherein the border delineates a venous thrombus.
20. The method of claim 1 wherein the processing is performed in real time.
Description:
AUTOMATIC BORDER DELINEATION AND DIMENSIONING OF REGIONS USING CONTRAST ENHANCED IMAGING

FIELD OF THE INVENTION

The present invention relates in general to a method for processing ultrasound

images of a patient's organs and tissue and, in particular, to a method for delineating

borders and dimensioning regions of the organs and tissues of the patient in such images.

BACKGROUND OF THE INVENTION

In medical diagnostic imaging, it is important to image regions of interest (ROIs)

within a patient and analyze these images to provide effective diagnosis of potential disease conditions. A necessary component of this diagnosis is the ability to discriminate

between various structures of the patient's tissues - including, but not limited to, organs,

tumors, vessels and the like - to identify the particular ROI for diagnosis.

The problems of structure identification are exacerbated in cases where the ROI is located in a tissue or organ that is moving significantly during the course of imaging. One

organ that experiences a good deal of movement during imaging is the heart. Several

imaging modalities are currently used. For example, it is known to use single photon emission computed tomography ("SPECT"), positron emission tomography ("PET"), computed tomography ("CT"), magnetic resonance imaging ("MRI"), angiography and

ultrasound. An overview of these different modalities is provided in: Cardiac Imaging - A

Companion to Braunwald's Heart Disease, edited by Melvin L. Marcus, Heinrich R.

Schelbert, David J. Skorton, and Gerald L. Wolf (W. B. Saunders Co., Philadelphia,

1991).

One modality that has found particular usefulness is contrast enhanced ultrasound imaging. Briefly, this technique utilizes ultrasonic imaging, which is based on the

principle that waves of sound energy can be focused upon a "region of interest" ("ROI")

and reflected in such a way as to produce an image thereof. The ultrasonic transducer utilized is placed on a body surface overlying the area to be imaged, and sound waves are directed toward that area. The transducer detects reflected sound waves and the attached scanner translates the data into video images.

When ultrasonic energy is transmitted through a substance, the amount of energy reflected depends upon the frequency of the transmission and the acoustic properties of the substance. Changes in the substance's acoustic properties (e.g. variance in the acoustic impedance) are most prominent at the interfaces of different acoustic densities and compressibilities, such as liquid-solid or liquid-gas. Consequently, when ultrasonic energy is directed through tissue, organ structures generate sound reflection signals for detection by the ultrasonic scanner. These signals can be intensified by the proper use of a contrast agent.

There are several types of contrast agents including liquid emulsions, solids, encapsulated fluids and those which employ the use of gas. The latter agents are of particular importance because of their efficiency as a reflector of ultrasound. Resonant gas bubbles scatter sound a thousand times more efficiently than a solid particle of the same size. These types of agents include free bubbles of gas as well as those which are encapsulated by a shell material.

Contrast enhanced images have the property that the agent's presence in a particular

ROI produces a contrast visually recognizable from surrounding regions that are not

suffused with the agent. One example of this type of imaging is myocardial contrast echocardiography ("MCE"). In MCE, an intravascular injection of a contrast agent

washes into the patient's heart while, simultaneously, ultrasound waves are directed to and reflected from the heart - thereby producing a sequence of echocardiographic images. In the field of echocardiography, important diagnostic measures include: (1) analysis of regional wall motion; and (2) the determination of the ejection fraction. Abnormal systolic function is a diagnostic indication of cardiac disease; and measurements

of the ejection fraction and regional wall motion are most useful in detecting chronic

ischemia. The ejection fraction is a global measure of systolic function, while regional wall motion is a local measure.

The "ejection fraction" ("EF") is a widely used measure of the contractile ability of the ventricle. EF is defined as the ratio of the total ventricular stroke volume ("SV") to the end-diastolic ventricular volume ("EDV"). In equation form, we have:

EF = SV/EDV = (EDV - ESV)/EDV

where ESV is the end-systolic ventricular volume.
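The relation above can be checked with a short numeric sketch; the function name and the example volumes are illustrative, not taken from the patent:

```python
def ejection_fraction(edv: float, esv: float) -> float:
    """EF = SV/EDV = (EDV - ESV)/EDV, with volumes in consistent units."""
    if edv <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv - esv) / edv

# Example: an EDV of 120 mL and an ESV of 50 mL give an EF of about 0.58.
```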

Accurate determination of EF and wall motion, however, is based on a precise identification of certain heart structures of the patient, such as the left ventricle and the endocardial border in the left ventricle. Currently, identification of the endocardial border is made from non-contrast enhanced images. Endocardial borders in these non-contrast enhanced images are either manually traced by trained echocardiographers or determined

by image processing methods tailored specifically for non-contrast enhanced images. Such an image processing method is described in A Second-generation Computer-based

Edge Detection Algorithm for Short-axis, Two-dimensional Echocardiographic Images: Accuracy and Improvement in Interobserver Variability, by Geiser et al. and published in Vol. 3, No. 2, March-April 1990 issue of the Journal of the American Society of

Echocardiography (pp. 79-90).

In Geiser et al.'s method, the computerized image processing starts with a human operator selecting three image frames from a cardiac cycle: the opening end-diastolic frame, the end-systolic frame, and the closing end-diastolic frame. Once selected, the operator defines the endocardial and epicardial borders on each of the three selected frames. After the borders are defined for the first three frames, they are refined and the borders in the other frames from other points within the cardiac cycle are automatically determined by Geiser et al.'s process.

The disadvantage with Geiser et al.'s process for identification of the endocardium is that it is performed without contrast enhancement of the heart's image. Without contrast enhancement, several imaging problems occur. For example, the fibers within the myocardium create more or less backscatter depending upon their orientation relative to the incident ultrasound beam - fibers that are parallel to the beam scatter less, so in these regions it is more difficult to differentiate the endocardium from the hypoechoic chamber region. These regions occur in the lateral regions of the image. Merely increasing the gain is not a satisfactory solution, because many instruments have gain dependent lateral resolution, so that the proper identification of the border is adversely affected.

One way to avoid this difficulty is to image with contrast enhancement. The use of contrast agents, such as ALBUNEX ® (a registered trademark of Molecular

Biosystems, Inc.), in echocardiograms has enhanced the image resolution of patient heart structures. By adding contrast agent into the heart's chamber, the chamber initially becomes greatly illuminated in comparison to the myocardium (including the endocardium). Later, once the agent has washed out of the chamber, the myocardium

remains illuminated relative to the chamber due to the perfusion of agent into the

myocardium tissue. In either case, the border region between the myocardium and the

chamber is greatly differentiated - even in the lateral regions, where the problem of differentiation is greatest without contrast enhancement.

Although the use of contrast agents has aided in the differentiation of the endocardium border, the typical method of border delineation remains a manual process of "eyeballing" the border by a trained cardiologist. However, there are still problems with manual methods of border identification. Specifically, a single frame of echocardiographic image data is selected during the time of approximate maximum ventricular opacification by the contrast agent. A trained echocardiographer then manually traces, in the echocardiographer's best judgment, what appears to be the endocardial border in that single frame. The echocardiographer's judgment is based on the perceived differences in the texture of the brightness in the image. For those frames where contrast agents have perfused into the myocardium while

agent is still in the left ventricle chamber, the difference in texture may be less apparent. Hence, this manual process leaves much to chance in accurately determining the endocardium border.

Additionally, in the single chosen frame, the ventricle may not be completely

opacified - while all areas of the left ventricle may be opacified at some point during the

injection of contrast agent, it is not likely that all areas are simultaneously opacified. For example, attenuation and the effects of shadowing may produce an image whereby one region of the left ventricle is at maximum brightness while, in other regions, no contrast is observed at all.

Either of these problems may cause a border region of the left ventricle to be difficult to identify, leading to uncertainty in the diagnosis process. Specifically, improper

identification of the border region during the end-diastole or end-systole might lead to either an over or under estimation of the motion of the ventricle. If the ejection fraction or regional wall motion is over-estimated, the cardiologist might rule out a suspicion of ischemia, when it is in fact present. On the other hand, if the ejection fraction or regional wall motion is under-estimated, then the cardiologist might suspect ischemia where none is present and send the patient on to a more expensive diagnostic procedure (e.g. angiography or nuclear imaging) or an expensive and invasive therapeutic procedure (e.g. angioplasty).

Thus, it is desirable to develop a method for the accurate identification of the borders of patient tissues, such as the endocardial border of the heart.

It is, therefore, an object of the present invention to provide a method for such accurate border identification.

It is another object of the present invention to provide an improved method of diagnosis of ejection fraction and regional wall motion.

SUMMARY OF THE INVENTION

Other features and advantages of the present invention will be apparent from the

following description of the preferred embodiments, and from the claims.

The present invention is a novel system and method for automatically identifying borders of regions of interest within an image of a patient's organ or tissue. Initially, the operator of the system identifies a given set of images that will be taken for the system to

automatically analyze. For example, if the organ in question is the heart, then the set of

images selected for analysis will usually be images that are taken at the same point in the

cardiac cycle. Once the criteria for image set inclusion are determined (e.g. images from the same point in the cardiac cycle), the system begins to generate images - before, during and after the administration of a contrast agent. Once the set of images has been taken, the system begins its automatic processing. Broadly, the steps of the processing include the identification of baseline image frames, identification of baseline intensities for each given pixel in the ROI, baseline subtraction on a per-pixel basis, determining a probability of signal-to-noise ratio for each pixel, and thresholding each pixel to determine if a pixel belongs to an area inside the border region or an area outside the border region. To determine exactly which pixels are at the border, the method refines the set by locally minimizing a total cost function that relates a low value to points typically found on a

contrast enhanced image. The border of the region of interest is thereby determined.
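As a rough illustration only, the baseline-subtraction and thresholding stages summarized above might be sketched as follows; the function name, the use of the per-pixel standard deviation as the noise estimate, and the threshold value are assumptions for illustration, not details from the patent:

```python
import numpy as np

def threshold_enhancement(frames, baseline_idx, snr_threshold=3.0):
    """Per-pixel baseline estimation, baseline subtraction, and
    signal-to-noise thresholding into pixels inside/outside the enhanced
    region. `frames` is a (time, height, width) array; `baseline_idx`
    lists the frames identified as baseline."""
    frames = np.asarray(frames, dtype=float)
    baseline = frames[baseline_idx].mean(axis=0)   # per-pixel baseline intensity
    noise = frames[baseline_idx].std(axis=0) + 1e-9
    enhancement = frames.mean(axis=0) - baseline   # mean baseline-subtracted signal
    return (enhancement / noise) > snr_threshold   # True = inside the enhanced area
```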

For a full understanding of the present invention, reference should now be made to the following detailed description of the preferred embodiments of the invention and to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

The file of this patent contains at least one drawing executed in color. Copies of

this patent with color drawings will be provided by the Patent and Trademark Office upon request and payment of the necessary fee.

Figure 1 depicts the manner in which ultrasound images are taken of a patient's heart by an ultrasound image processor that is used in accordance with the principles of

the present invention.

Figure 2 is a high level block diagram of one embodiment of an image processor unit that is used in accordance with the principles of the present invention.

Figures 3-7 depict a flow chart of the presently claimed border delineation method.

Figures 8(A) and 8(B) depict how the present system may select candidate heart chamber border pixels.

DETAILED DESCRIPTION OF THE INVENTION

Although the present invention encompasses general methods for the imaging and diagnosis of any patient tissues or organs capable of being imaged, the present description will be given from the standpoint of imaging the human heart. In many ways, the problems involved with imaging the human heart for purposes of border delineation and dimensioning are more difficult than with other organs.

One reason is that the regions on both sides of the border may be contrast enhanced. However, the chief reason is motion. The human heart, in the course of normal function, moves a great deal. As most border delineation methods require a

number of images (some having the heart perfused with a contrast agent) to accurately determine the border, the movement of the heart tissue from frame-to-frame presents a problem when correlating the parts of heart tissue - especially when tissues do not

necessarily occupy the same pixel position in different frames. The present description of the method for imaging the heart may then be simplified in order to image other patient organs and tissues that do not experience such difficulties. Thus, the present invention should not be limited to merely imaging the human heart, but encompasses all tissues

capable of being imaged.

Likewise, the present description is based upon administration of a contrast agent used with ultrasound imaging methodology. Again, the present invention should not be limited to merely ultrasound; but also encompasses other methodologies that may (or may not) use a contrast agent that is uniquely suited to that particular methodology. Ultrasound methodology is described in greater detail in co-pending and co-assigned patent application Serial Number 08/428,723 entitled "A METHOD FOR PROCESSING REAL-TIME CONTRAST ENHANCED ULTRASONIC IMAGES", filed on April 25, 1995 by Levene et al., and herein incorporated by reference.

Ultrasound imaging systems are well known in the art. Typical systems are manufactured by, for example, Hewlett Packard Company; Acuson, Inc.; Toshiba America Medical Systems, Inc.; and Advanced Technology Laboratories. These systems are employed for two-dimensional imaging. Another type of imaging system is based on

three-dimensional imaging. An example of this type of system is manufactured by, for example, TomTec Imaging Systems, Inc. The present invention may be employed with either two-dimensional or three-dimensional imaging systems.

Likewise, ultrasound contrast agents are also well-known in the art. They include,

but are not limited to, liquid emulsions, solids, encapsulated fluids, encapsulated

biocompatible gases and combinations thereof. Fluorinated liquids and gases are especially useful in contrast compositions. The gaseous agents are of particular importance because of their efficiency as a reflector of ultrasound. Resonant gas bubbles scatter sound a thousand times more efficiently than a solid particle of the same size. These types of agents include free bubbles of gas as well as those which are encapsulated

by a shell material. The contrast agent may be administered via any of the known routes. These routes include, but are not limited to intravenous (IV), intramuscular (IM), intraarterial (IA), and intracardiac (IC).

It is appreciated that any tissue or organ that receives a flow of blood may have images processed in the manner of the invention. These tissues/organs may include, but are not limited to the kidneys, liver, brain, testes, muscles, and heart.

The angles and directions used to obtain views of the organs during imaging are well known in the art. For most organs, the various views used are derived only from the planes of the organ, as there is no problem with lungs or ribs defining an acoustic window. Therefore, the views are termed sagittal, transverse, and longitudinal.

When imaging the heart, there are three orthogonal planes, the long axis, the short axis, and the four chamber axis. There are also apical, parasternal, subcostal, or suprasternal acoustic windows. The common names for the views that are derived from

these are the parasternal short axis, apical long axis, parasternal long axis, suprasternal long axis, subcostal short axis, subcostal four chamber, apical two chamber, and apical four chamber. Short axis views may bisect the heart at different planes, at the level of the

mitral valve, at the level of the papillary muscles, or at the level of the apex, for example. Lastly, the apical four chamber view with the transducer slightly tilted gives the five chamber view, where the aorta is visualized with the usual four chambers. For a further

description of these various views, see Echocardiography, 5th edition, edited by Harvey Feigenbaum (Lea & Febiger, Philadelphia, 1994).

Referring now to Figure 1, a cut-away view of patient 30 attached to echocardiographic transducer 36 is shown. A transducer is placed on the patient,

proximate to heart muscle 32. Images may alternatively be acquired transthoracically or

transesophageally. An injection (34) of contrast agent is made into the patient's vein so that the contrast agent reaches the heart and interacts with the ultrasound waves generated by transducer 36. Sound waves reflected and detected at transducer 36 are sent as input into image processing system 38.

As the contrast agent enters into various heart regions, image processing system

38 detects an increased amplitude in the reflected ultrasound waves, which is characterized by a brightening of the image. Tissue areas that do not brighten when expected may indicate a disease condition in the area (e.g. poor or no circulation, presence of thrombus, necrosis or the like).

Referring now to Figure 2, an embodiment, in block diagram form, of image processing system 38 is depicted. Image processing system 38 comprises diagnostic ultrasound scanner 40, optional analog-to-digital converter 42, image processor 44, digital-to-analog converter 56, and color monitor 58. Ultrasound scanner 40 encompasses any means of radiating ultrasound waves to the region of interest and

detecting the reflected waves. Scanner 40 could comprise transducer 36 and a means of

producing electrical signals in accordance with the reflected waves detected. It will be appreciated that such scanners are well known in the art.

The electrical signals generated by scanner 40 could either be digital or analog. If

the signals are digital, then the current embodiment could input those signals into image processor 44 directly. Otherwise, an optional A/D converter 42 could be used to convert

the analog signals.

Image processor 44 takes these digital signals and processes them to provide video images as output. The current embodiment of image processor 44 comprises a central processing unit 46, trackball 48 for user-supplied input of predefined regions of interest, keyboard 50, and memory 52. Memory 52 may be large enough to retain several video images and store the border delineation method 54 of the present invention. CPU 46 thus analyzes the video images according to stored border delineation method 54.

After a given video image is processed by image processor 44, the video image is output in digital form to D/A converter 56. D/A converter 56 thereby supplies color monitor 58 with an analog signal capable of rendering on the monitor. It will be appreciated that the present invention could alternatively use a digital color monitor, in which case D/A converter 56 would be optional.

Having described a current embodiment of the present invention, the border delineation method of the present invention will now be described. Figures 3-7 are flowcharts describing the border delineation method as currently embodied. The method starts at step 100 with the operator selecting a point of interest in the cardiac cycle where

the set of images to be processed will always occur. The same point in the cycle is

primarily used to image the heart at the same point in its contraction and to reduce the

amount of heart distortion and drift from frame-to-frame because the heart is presumably in the same place at the same point in the cardiac cycle.

Of all the points in the cardiac cycle, the most frequently used are the end-systolic

and the end-diastolic points. These points are particularly useful in imaging the heart because they represent the point of maximum contraction and maximum relaxation of the heart in the cardiac cycle. These cardiac points are useful because they are used to measure the contractile ability of the heart, i.e., the ejection fraction of the heart.

Once having decided the point (or points) of the cardiac cycle to capture images,

grey scale (not contrast-enhanced) ultrasound imaging is started at step 102. As images are being generated, a decision is made as to whether to process the current image. If the image is at the point of interest in the cardiac cycle, then the image is processed at steps 104 thru 108. Otherwise, it is not processed. Non-contrast enhanced imaging is continued until a sufficient number of initial baseline images are taken at step 110. These initial images, together with later images taken after the contrast agent has "washed out", form the basis of the entirety of the baseline images.
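The decision of whether a frame lies at the chosen point in the cardiac cycle might, under one simple assumption, be made by comparing frame timestamps against trigger times (e.g. from an ECG); this sketch, including the function name and the tolerance parameter, is illustrative rather than the patent's stated method:

```python
def select_cycle_frames(frame_times, trigger_times, tolerance=0.02):
    """Return indices of frames acquired within `tolerance` seconds of any
    trigger time marking the chosen cardiac-cycle point (e.g. end-systole)."""
    return [i for i, ft in enumerate(frame_times)
            if any(abs(ft - tt) <= tolerance for tt in trigger_times)]
```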

Once the requisite number of initial baseline frames have been taken, then a contrast agent is administered to the patient at step 114 and "washes into" the chambers of the heart first, then slowly perfuses into the tissues of the heart muscles themselves. The images are then captured at the selected point(s) in the cardiac cycle until the contrast agent is no longer present in the heart's chamber at steps 116 thru 122. This could be determined by selecting a "trigger" region of interest (T-ROI) that is used to identify whether the contrast agent is in the heart chamber. A most advantageous T-ROI to be

selected would be somewhere in the heart chamber because the heart chamber receives the contrast agent prior to perfusion in the heart muscle.
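The trigger test just described can be sketched as a mean-intensity comparison inside the T-ROI; the k-sigma criterion and function name here are assumptions for illustration:

```python
import numpy as np

def contrast_present(frame, t_roi, baseline_mean, baseline_std, k=3.0):
    """True while the mean intensity inside the trigger ROI exceeds its
    baseline level by more than k standard deviations, i.e. while contrast
    agent is presumed to remain in the heart chamber.
    `t_roi` is a boolean mask selecting the trigger region."""
    roi_mean = np.asarray(frame, dtype=float)[t_roi].mean()
    return bool(roi_mean > baseline_mean + k * baseline_std)
```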

After the contrast agent has "washed out" of the heart, several post-contrast, baseline image frames are taken, and the ultrasound imaging is terminated at steps 124 and 126. It should be appreciated that the step of obtaining the post-contrast baseline values could be omitted to allow a real-time processing implementation. After obtaining

baseline frames, image motion correction is performed to improve the quality of the images at step 128. This may be done either manually or in an automated fashion. If done manually, for example, the operator would indicate on each image to what extent and in what direction one image would need to move to register with a reference image. Such a manual method is described in "Digital Subtraction Myocardial Contrast Echocardiography: Design and Application of a New Analysis Program for Myocardial Perfusion Imaging," M. Halmann et al., J. Am. Soc. Echocardiogr. 7:355-362 (1994).
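A minimal sketch of the manual correction just described, assuming the operator supplies an integer pixel shift for each frame; np.roll plus edge fill is one simple way to apply such a shift and is an illustrative choice, not the cited papers' method:

```python
import numpy as np

def register_by_translation(image, dx, dy, fill=0.0):
    """Shift `image` by (dx, dy) pixels, as an operator might specify to
    register it with a reference frame; wrapped-around edges are filled
    with a constant rather than allowed to alias."""
    shifted = np.roll(np.asarray(image, dtype=float), (dy, dx), axis=(0, 1))
    if dy > 0:
        shifted[:dy, :] = fill
    elif dy < 0:
        shifted[dy:, :] = fill
    if dx > 0:
        shifted[:, :dx] = fill
    elif dx < 0:
        shifted[:, dx:] = fill
    return shifted
```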

Examples of automated methods are described in, for example, "Quantification of Images Obtained During Myocardial Contrast Echocardiography," A.R. Jayaweera et al., Echocardiography 11:385-396 (1994) and "Color Coding of Digitized Echocardiograms: Description of a New Technique and Application in Detecting and Correcting for Cardiac Translation," J.R. Bates et al., J. Am. Soc. Echocardiogr. 7:363-369 (1994).

After motion correction is performed, the operator then preselects a general region of interest on a given frame in order to give the process an initial region in which to locate the border at step 130. This may be accomplished by having the operator circle the region of interest with a light pen on an interactive video screen or by

drawing with a mouse or using keys. This selected region is used only to restrict the

search area for the endocardium border in order to reduce the processing time. A properly selected region should include the left ventricle surrounded by myocardial tissue. The analysis then begins on each pixel within the ROI.

The set of true baseline frames is selected from the set of initial, pre-contrast frames and the set of post-contrast frames. Steps 134, 136, and 138 depict three different ways in which this set may be formed. First, the operator could manually select all of the baseline frames. Second, the operator could identify an area clearly within the left ventricle

and the mean pixel intensity is calculated as a function of time. The operator can then

identify baseline frames from a plot of intensity versus time. Lastly, the system could automatically determine the baseline for each pixel starting at step 138. A linear regression is performed on all of the data points and the standard deviation of the fit is calculated at steps 140 and 142. The analysis may be

performed with a varying number of frames at the beginning and a varying number of frames at the end of the sequence, until the best fit is determined. It will be appreciated that such regression analysis is well known to those skilled in the art.

After the linear regression analysis has been performed, the standard deviation of the pixel intensity is calculated. For any given pixel, the data points over time are compared against the computed standard deviation in step 144. If the pixel intensity is within the standard deviation for a putative baseline value, then the pixel data point is considered a baseline value.

Otherwise, the pixel data point is outside the standard deviation, and the data point is removed from any further consideration at step 146. The linear regression

analysis is then re-calculated, including the standard deviation. This defines an iterative process for each pixel over time.
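This iterative per-pixel baseline selection might be sketched as follows. This is an illustrative reading of the method, not the patented implementation; the function name, convergence guard, and iteration cap are hypothetical:

```python
import numpy as np

def find_baseline_points(times, intensities, max_iter=20):
    """Iteratively fit a line to a pixel's intensity-vs-time data,
    permanently discarding points that fall outside one standard
    deviation of the fit, until the set of baseline points stabilizes."""
    t = np.asarray(times, dtype=float)
    y = np.asarray(intensities, dtype=float)
    keep = np.ones(t.size, dtype=bool)
    slope = intercept = sigma = 0.0
    for _ in range(max_iter):
        slope, intercept = np.polyfit(t[keep], y[keep], 1)
        fit = slope * t + intercept
        sigma = np.std(y[keep] - fit[keep])
        # points already removed stay removed ("removed from any
        # further consideration" in the text)
        new_keep = keep & (np.abs(y - fit) <= sigma)
        if new_keep.sum() < 3 or np.array_equal(new_keep, keep):
            break  # too few points left, or the baseline set converged
        keep = new_keep
    return keep, slope, intercept, sigma
```

In practice the contrast "bump" frames are rejected on the first pass, since their residuals greatly exceed the standard deviation of the mixed fit.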

After all the baseline pixel data have been identified, then the pixels of the chamber are determined, at step 148. By clearly identifying the pixels of the chamber, the method may then discard these pixels from further consideration in delineating the border pixels.

The first step in accomplishing this goal is baseline subtraction. For each pixel in the ROI, another linear regression analysis is performed on the baseline pixel intensity over time at step 152. This provides a linear best-fit curve having a derived slope and intercept at step 154. For each non-baseline frame occurring at a given time, t_i, the baseline intensity at that time is derived from the linear curve. The baseline value is then subtracted from the non-baseline pixel intensity at step 156.

Once this estimated baseline intensity is subtracted from the observed pixel intensity, the signal derived solely from the contrast agent, S_i, is determined. In instances where attenuation causes shadowing in the image, the observed pixel intensity may fall below the estimated baseline intensity. In such a case, S_i is taken to be zero.
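The baseline subtraction with its zero clamp could be sketched as below; the helper name is hypothetical, and the slope and intercept inputs are assumed to come from the per-pixel linear fit described in the text:

```python
def contrast_signal(intensity, t, slope, intercept):
    """Subtract the estimated baseline (from the per-pixel linear fit)
    from the observed intensity at time t; clamp negative results to
    zero, since attenuation shadowing can drive the observed value
    below the estimated baseline."""
    baseline = slope * t + intercept
    return max(0.0, intensity - baseline)
```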

For each non-baseline or contrast frame, k, a composite signal-to-noise ratio (S/N) is determined from the signal, S_k, and the signals from the temporally adjacent heart cycles, S_(k-1) and S_(k+1). A peak signal may arise from spurious noise, so the signals are weighted according to the equation:

(S/N)_k = [w_1·S_(k-1) + w_2·S_k + w_3·S_(k+1)] / σ

where σ is the calculated standard deviation of the baseline data and w_j (j = 1, 2, 3) are the weights of the signals. It will be appreciated that more than three addends could be used to form the signal-to-noise ratio. The purpose of the weighting terms in the calculation of the signal-to-noise ratio is to reduce the influence of noise by smoothing within a small time region. It is difficult to determine, a priori, what the optimal values for the weighting terms will be

in these calculations. The optimal values can be determined by a "receiver operating

characteristic" (ROC) analysis where, for each variation of the weighting factors, the sensitivity and specificity are determined by comparison to a "gold standard" method (e.g., where the opinion of a group of human experts forms the gold standard in a given case). It will be appreciated that the methods of ROC analysis are well known to those in the art of

biomedical analysis. An exposition of ROC analysis is provided in "Receiver Operating Characteristic Curves: A Basic Understanding," by Vining et al., RadioGraphics, Vol. 12, No. 6 (November 1992), herein incorporated by reference.
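As a rough illustration of the weighted three-term ratio, the following sketch assumes a list of per-cycle contrast signals and a precomputed baseline standard deviation; the default weight values are placeholders that would, per the text, be tuned by ROC analysis:

```python
def weighted_snr(signals, k, sigma, weights=(0.25, 0.5, 0.25)):
    """Composite signal-to-noise ratio for contrast frame k, smoothing
    over the temporally adjacent heart cycles k-1 and k+1.  sigma is
    the standard deviation of the pixel's baseline data."""
    w1, w2, w3 = weights
    s = w1 * signals[k - 1] + w2 * signals[k] + w3 * signals[k + 1]
    return s / sigma
```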

The signal-to-noise ratio is then treated as a standardized, normal variable and the

probability of obtaining the observed (S/N)_k from random noise fluctuations, P[(S/N)_k], may be calculated as the area in the upper tail of the standard normal distribution:

P[(S/N)_k] = (1/√(2π)) ∫_(S/N)_k^∞ e^(−t²/2) dt

As will be appreciated, as the signal-to-noise ratio increases, the probability that the signal results from random noise decreases. For each pixel, that probability is

determined for each non-baseline frame and the minimum probability for that pixel is taken at step 162. In order to determine which pixels are then in the heart chamber, the maximum signal-to-noise ratio over the non-baseline frames is determined. Because there is a

greater degree of brightening in the ventricle than in the myocardium resulting from contrast

agent enhancement, a probability threshold may be established to distinguish the two regions, with probabilities above the threshold identifying pixels in the myocardium and probabilities below the threshold identifying pixels in the left ventricle. This comparison is accomplished at step 164 and continues until all the pixels in the ROI have been analyzed.
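Treating the S/N as a standard normal variable, the noise probability and the chamber/myocardium thresholding might look like the following sketch; the threshold value and function names are illustrative only:

```python
import math

def noise_probability(snr):
    """Probability that a standardized-normal S/N value at least this
    large arises from random noise alone (upper Gaussian tail)."""
    return 0.5 * math.erfc(snr / math.sqrt(2.0))

def classify_pixel(max_snr, threshold=1e-4):
    """Label a pixel as ventricle when the noise probability at its
    maximum S/N over the contrast frames falls below the threshold;
    otherwise label it myocardium."""
    return "ventricle" if noise_probability(max_snr) < threshold else "myocardium"
```

As the text notes, the larger the S/N, the smaller the probability that the brightening is mere noise, so the strongly enhanced ventricle pixels fall below the threshold.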

After every pixel in the ROI has been determined as part of the heart chamber or not, it is now possible to determine, among all the pixels not in the heart chamber, which are border pixels. This can be done by any suitable technique which is known in the art. For example, see "A Novel Algorithm for the Edge Detection and Edge Enhancement of Medical Images," I. Crooks et al., Med. Phys. 20:993-998 (1993) and "Multilevel Nonlinear Filters for Edge Detection and Noise Suppression," H. Hwang et al., IEEE Trans on Signal Processing 42:249-258 (1994).

In a preferred embodiment, cost weighting is used. In that case, if a small area within the chamber near the border is misclassified, an edge detection method would otherwise render the chamber area smaller than it should be; the cost function for those points can be made high so that the border is still correctly placed.

To aid in this final determination, a binary image is made, with pixels above the brightening threshold given an intensity of zero and pixels below the threshold an intensity of one. The center of mass of the ventricle pixels, (x_c, y_c), is then determined at step 172 and referred to as the center of the left ventricle:

x_c = (1/m) · Σ_(i=1 to m) x_i

y_c = (1/m) · Σ_(i=1 to m) y_i

where m is the number of ventricle pixels.
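The center-of-mass computation over the binary image can be expressed compactly; in this sketch the ventricle pixels are assumed to be the nonzero entries of the binary array:

```python
import numpy as np

def ventricle_center(binary):
    """Center of mass (x_c, y_c) of the ventricle pixels, taken as
    the nonzero entries of the binary image."""
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()
```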

The envelope (or border) of the ventricle pixels is now determined from the binary image. The ventricular pixels are searched to find the points that have the minimum and maximum y value and the minimum and maximum x value, thus defining a maximum of four points. It should be appreciated that the orientation of the images is not important. At each of these four locations, there may be one or more points; it is most convenient to pick a location with only one point, but it is not necessary. In the case of all

four locations with multiple points, any one of the points at any of the locations will suffice as the reference point of step 178. The point is identified as the first point belonging to the border and it serves as the starting point of the envelope tracing method.

ENVELOPE TRACING METHOD

Generally speaking, the envelope is traced by determining which adjacent point,

among all the adjacent points of the reference point, is most likely to be a border point. The most recently selected adjacent point then becomes the new reference point, and the process repeats until the border is completely traced.

For the identification of the next border point, the starting point is referred to as the reference point. The angle, θ_1, of the reference point, (x_2, y_2), relative to the center of the ventricle, (x_c, y_c), is determined as follows:

θ_1 = tan⁻¹[(y_2 − y_c)/(x_2 − x_c)]

From the reference point, a set of potential "adjacent border points" is established by casting radial lines out from the reference point. Figures 8A and 8B depict the selection of candidate border points in the myocardium. Figure 8A shows a color picture of a heart chamber (colored red in the Figure) surrounded by the dark myocardium. Figure 8B shows an enlarged view of the region in Figure 8A that is bordered by the white box. As depicted in Figure 8B, as the method of the present invention advances, the border is gradually and automatically filled out (depicted as the white solid curve). The last border point selected is depicted as the white circle. From this last border point, the radial lines are sent out to help determine the next border point. A candidate border point is found along each radial line, with the ventricular pixel nearest to the reference point chosen. Radial lines are radiated out over 180 degrees, from θ_1 to θ_1 + 180 degrees. The cost function is then calculated for each candidate point. If the cost of all points is above a threshold cost, then the angular range of radial lines is increased. The candidate point with the lowest cost is chosen as the adjacent border point and

becomes the reference point as the tracing continues until the border forms a closed loop.
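One step of this radial candidate search might be sketched as follows; the ray count, search radius, and function names are hypothetical, and the cost function is passed in abstractly rather than fixed to the factors described below:

```python
import math

def next_border_point(binary, ref, center, cost_fn, n_rays=19, max_r=50):
    """One step of the envelope trace: cast radial lines over a
    180-degree fan starting at the reference point's angle from the
    ventricle center, take the nearest ventricle pixel (value 1) on
    each ray as a candidate, and return the candidate with the
    lowest cost."""
    h, w = len(binary), len(binary[0])
    base = math.atan2(ref[1] - center[1], ref[0] - center[0])
    candidates = []
    for i in range(n_rays):
        theta = base + math.pi * i / (n_rays - 1)  # base .. base + 180 deg
        for r in range(1, max_r):
            x = int(round(ref[0] + r * math.cos(theta)))
            y = int(round(ref[1] + r * math.sin(theta)))
            if 0 <= x < w and 0 <= y < h and binary[y][x] == 1:
                candidates.append((x, y))
                break  # nearest ventricle pixel on this ray
    return min(candidates, key=cost_fn) if candidates else None
```

A caller would widen the angular fan when every candidate's cost exceeds the threshold, as the text describes.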

The cost function may have global and local factors. Global factors, for example, may emphasize a smoothness in the change of the area of the left ventricle over the cardiac cycle. Local factors emphasize regional border characteristics. The cost factors

are independent and weighted as follows:

C_j = Σ_i w_i·C_ij

where C_j is the total cost associated with candidate pixel j; C_ij is cost factor i for candidate pixel j; and w_i is the weighting factor for cost factor i. As with the weighting factors mentioned above for the signal-to-noise probability computations, these weights may also be determined by the well-known methods of receiver operating characteristic analysis. Individual cost factors may, for example, include the following (which correspond to steps 180, 182, and 184):
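The weighted combination itself is a plain inner product of cost factors and weights, which might be written as:

```python
def total_cost(cost_factors, weights):
    """Weighted sum C_j = sum_i w_i * C_ij of the independent cost
    factors for one candidate pixel; the weight values would in
    practice be tuned by ROC analysis as described in the text."""
    return sum(w * c for w, c in zip(weights, cost_factors))
```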

1. Contour definition.

The distance between adjacent border points is inversely proportional to how well the contour is defined - large distances between points will make the endocardial border appear jagged. For the candidate point, this cost factor, ci, is given as:

where (x_c, y_c) is the candidate point; (x_r, y_r) is the reference point; and (x_R, y_R) is the previous reference point.

2. Border Sharpness.

The magnitude of the first derivative, or gradient, of the pixel intensity about the candidate point is a measure of the change from ventricular pixels to myocardial pixels about this point. The magnitude of the gradient, G(p_5), may be determined using the Sobel operators, defined as follows:


G(p_5)_x = (p_7 + 2p_8 + p_9) − (p_1 + 2p_2 + p_3)

G(p_5)_y = (p_3 + 2p_6 + p_9) − (p_1 + 2p_4 + p_7)

G(p_5) = [G(p_5)_x² + G(p_5)_y²]^(1/2)

where p_5 is the candidate point and p_1 through p_9 are the neighboring pixels in a 3×3 matrix format. The cost factor, c_2, for the candidate point is:

c_2 = G_1 / G_2

where G_1 is the magnitude of the gradient for the reference point and G_2 is the magnitude of the gradient for the candidate point.
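A direct transcription of these Sobel expressions for a 3×3 neighborhood (row-major p_1 through p_9) might read as follows; the function name and the row-major layout convention are assumptions for illustration:

```python
import math

def sobel_gradient(p):
    """Gradient magnitude and angle at the center of a 3x3 pixel
    neighborhood p (row-major: p[0][0] = p_1 ... p[2][2] = p_9),
    using the Sobel operators given in the text."""
    gx = (p[2][0] + 2 * p[2][1] + p[2][2]) - (p[0][0] + 2 * p[0][1] + p[0][2])
    gy = (p[0][2] + 2 * p[1][2] + p[2][2]) - (p[0][0] + 2 * p[1][0] + p[2][0])
    mag = math.hypot(gx, gy)          # G(p_5)
    angle = math.atan2(gy, gx)        # gradient angle, phi
    return mag, angle
```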

3. Contour Regularity.

The angle of the gradient of the pixel intensity about the border should be slowly changing for a smooth contour. The cost factor, c_3, for the candidate point is given as:

c_3 = 1 + 10·|sin(φ_c − φ)|

where φ_r is the angle of the gradient at the reference point; φ_c is the gradient angle for the candidate point; and φ is the angle between the line from the reference point to the center of the ventricle and the candidate point. The angle of the gradient is given by:

φ = tan⁻¹[G(p_5)_y / G(p_5)_x]

It will be appreciated that although only three cost factors are herein discussed, many other cost functions may be employed to identify potential border points. Thus, the present invention should not be limited to the use of these particular cost functions. Indeed, the present invention encompasses any cost method that aids in the

automatic determination of a border point. Moreover, the present invention encompasses the use of any subcombination of cost functions described herein.

After the endocardial border is fully identified in this manner, a summary image

may be presented. The background of the summary image may consist of an average of the baseline frames. Superimposed upon this background is the border, which may be highlighted in a different color. A possible format to display the border is depicted in Figure 8B as the solid white border line. The border is thus shown as the continuous broad white band that encloses the left ventricle chamber.

There has thus been shown and described a novel system and method for the delineation of a border region of a patient tissue or organ which meets the objects and advantages sought. As stated above, many changes, modifications, variations and other uses and applications of the subject invention will, however, become apparent to those skilled in the art after considering this specification and accompanying drawings which disclose preferred embodiments thereof. All such changes, modifications, variations and other uses and applications which do not depart from the spirit and scope of the invention are deemed to be covered by the invention which is limited only by the claims which follow.




 