Title:
DEVICE AND SYSTEM FOR DETECTING SOUNDS FROM A SUBJECT'S BODY
Document Type and Number:
WIPO Patent Application WO/2023/281515
Kind Code:
A1
Abstract:
A device and system for detecting sounds from a subject's body are disclosed. The device may include: a support configured to be removably attached to a subject's body or a subject's clothing; an acoustic sensor connected to the support and configured to detect sounds from within the subject's body and generate an output signal; an acoustic waveguide connected to the support and configured to guide the sounds from within the subject's body to the acoustic sensor; a digital storage unit connected to the support; and a processor connected to the support and configured to at least one of: save at least a portion of the output signal in the digital storage unit; preprocess the output signal; and analyze the output signal.

Inventors:
GOREN ALON DAVID (IL)
BEKER AMIR (IL)
HAUPTMAN YIRMI (IL)
ATTAR ELI (IL)
Application Number:
PCT/IL2022/050737
Publication Date:
January 12, 2023
Filing Date:
July 07, 2022
Assignee:
CARDIOKOL LTD (IL)
International Classes:
A61B7/04; A61B5/00; A61B7/00; A61B7/02; G10L25/66
Domestic Patent References:
WO2020204639A12020-10-08
Foreign References:
US10159459B22018-12-25
US20190362740A12019-11-28
US10320491B22019-06-11
Attorney, Agent or Firm:
KOZLOVSKY, Pavel et al. (IL)
Claims:
CLAIMS

1. A device for recording, detecting and analyzing sounds from a subject’s body, the device comprising: a support configured to be removably attached to a subject’s body or a subject’s clothing; an acoustic sensor connected to the support and configured to detect sounds from within the subject’s body and generate an output signal; an acoustic waveguide connected to the support and configured to guide the sounds from within the subject’s body to the acoustic sensor; a digital storage unit connected to the support; and a processor connected to the support and configured to: receive the output signal, and at least one of: save at least a portion of the output signal in the digital storage unit, preprocess the output signal by detecting one or more subsets of data values in the output signal indicative of one or more abnormal/pathological sound patterns and save only the detected one or more subsets of data values in the digital storage unit, and analyze the output signal to detect one or more abnormal/pathological biomarkers indicative of at least one of: a health, physical, fitness-related, or wellness-related condition of the subject and save information related to the detected abnormal/pathological biomarkers in the digital storage unit.

2. The device of claim 1, wherein the acoustic sensor is configured to detect sounds of a pattern type defined in frequency bands and time-domain characteristics so as to detect sounds generated by different organs or processes of the subject’s body.

3. The device of claim 1, wherein the acoustic sensor is configured to detect sounds of a pattern type defined in frequency bands and time-domain characteristics so as to detect sounds generated by a specific organ or a specific subgroup of organs of the subject’s body.

4. The device of any one of claims 1-3, wherein the acoustic sensor is configured to detect sounds associated with the subject’s speech or sounds that are byproducts of the subject’s speech.

5. The device of any one of claims 1-4, comprising two or more acoustic sensors and two or more acoustic waveguides, each of the two or more acoustic waveguides for one of the two or more acoustic sensors.

6. The device of claim 5, wherein the two or more acoustic sensors are configured to detect sounds of the same frequency range.

7. The device of claim 5, wherein each of the two or more acoustic sensors is configured to detect sounds of a different frequency range as compared to the other acoustic sensors of the two or more acoustic sensors.

8. The device of claim 7, wherein frequency ranges of the two or more acoustic sensors partly overlap with each other.

9. The device of any one of claims 5-8, wherein the two or more acoustic sensors are configured to detect sounds arriving from a same at least one direction, or location from within the subject’s body.

10. The device of any one of claims 5-8, wherein each of two or more acoustic sensors is configured to detect sounds arriving from a different at least one direction, or location from within the subject’s body as compared to other acoustic sensors of the two or more acoustic sensors.

11. The device of any one of claims 1-10, wherein the processor is configured to detect at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers based on normal sound patterns and normal biomarkers, respectively.

12. The device of claim 11, wherein the normal sound patterns and the normal biomarkers are subject- specific and are predefined based on accumulated sound data collected from the subject.

13. The device of claim 11, wherein the normal sound patterns and the normal biomarkers are specific to a population or subpopulation to which the subject being monitored belongs and are predefined based on accumulated sound data collected from a plurality of individuals belonging to the population or subpopulation.

14. The device of any one of claims 1-13, wherein the processor is configured to detect the one or more abnormal/pathological biomarkers indicative of the health condition of the subject using one or more pre-trained machine learning models.

15. The device of any one of claims 1-14, wherein the acoustic sensor is configured to continuously detect sounds from within the subject’s body.

16. The device of any one of claims 1-15, wherein the processor is configured to control the acoustic sensor to detect sounds from within the subject’s body during predetermined time intervals according to a predetermined time schedule.

17. The device of claim 16, wherein the processor is configured to update the time schedule based on at least one of occurrence and duration of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers in the output signal.

18. The device of any one of claims 1-17, further comprising a communication unit connected to the support and configured to transmit data from the digital storage unit to a remote storage device or a remote computing device or remote alarming device.

19. The device of claim 18, wherein the communication unit is configured to transmit the data on demand.

20. The device of any one of claims 18-19, wherein the communication unit is configured to transmit to a remote computing device a notification indicative of the detection of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers that require immediate attention.

21. The device of any one of claims 1-20, further comprising a notification unit connected to the support and configured to generate one or more notifications indicative of the detection of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers that require immediate attention.

22. The device of claim 21, wherein the notification unit is configured to generate at least one of one or more visual notifications, one or more sound notifications and one or more mechanical notifications.

23. The device of any one of claims 1-22, wherein the processor is configured to perform a sound detection test upon attachment of the device to the subject’s body or the subject’s clothing and initiation thereof, the sound detection test comprises: analyzing the output signal from the acoustic sensor, and determining whether or not the sounds from within the subject’s body are being properly detected by the acoustic sensor.

24. The device of claim 23, wherein upon determination of improper detection of the sounds, the communication unit is configured to transmit a respective notification to a remote computing device, wherein the respective notification comprises instructions describing how to change a location of the device on the subject’s body so as to cause the device to properly detect the sounds from within the subject’s body.

25. The device of claim 23, wherein upon determination of improper detection of the sounds, the notification unit is configured to generate respective at least one of one or more visual notifications, one or more sound notifications and one or more mechanical notifications.

26. The device of claim 25, further comprising one or more additional sensors connected to the support and configured to generate one or more additional sensor output signals.

27. The device of any one of claims 1-26, further comprising a power source connected to the device and configured to supply power to electronic components of the device.

28. The device of any one of claims 1-27, comprising a frame connected to electronic components of the device and configured to removably connect the electronic components of the device to the support.

29. The device of any one of claims 1-28, further comprising a covering configured to be removably connected to the support and cover components of the device and accommodate the components between the support and the covering.

30. The device of any one of claims 1-29, comprising a clip connected to the support and configured, when actuated, to push the acoustic sensor and the acoustic waveguide towards the support.

31. The device of any one of claims 1-30, wherein the acoustic sensor comprises a piezoelectric element within a housing having the waveguide as one of the surfaces of the housing.

32. The device of claim 31, further comprising at least one gel pad acoustically coupling the piezoelectric element and the waveguide.

33. The device of any one of claims 1-30, wherein the acoustic sensor comprises a microphone within a housing having the waveguide as one of the surfaces of the housing.

34. A system for detecting sounds from a subject’s body, the system comprising: a swallowable capsule comprising an acoustic transducer configured to generate a sound signal after the swallowable capsule has been swallowed by the subject; and the device according to any one of claims 1-30, wherein the acoustic sensor of the device is configured to detect the sound signal from within the subject’s body and generate the output signal further based on the detected sound signal.

35. The system of claim 34, wherein the swallowable capsule further comprises a capsule acoustic sensor configured to detect sounds from within the subject’s body and generate a capsule output signal.

36. The system of claim 35, wherein the swallowable capsule further comprises a transmitter to transmit the capsule output signal, and wherein the communication unit of the device is configured to receive the capsule output signal.

37. A kit comprising: two or more devices according to any one of claims 1-33.

Description:
DEVICE AND SYSTEM FOR DETECTING SOUNDS FROM A SUBJECT’S BODY

FIELD OF THE INVENTION

The present invention relates to the field of devices for detecting sounds from a subject’s body, and more particularly, to wearable devices thereof.

BACKGROUND OF THE INVENTION

Continuous, long-term detection and analysis of sounds from within a subject’s body may provide information concerning biomarkers indicative of a health/physical/fitness-related/wellness-related condition of the subject. Moreover, simultaneous, continuous and long-term detection and analysis of sounds generated by different organs of the subject’s body may provide new information concerning correlation between functions of these organs to further enhance the information concerning the subject’s health/physical/fitness-related/wellness-related condition.

SUMMARY OF THE INVENTION

Some embodiments of the present invention may provide a device for recording and detecting sounds from a subject’s body, the device may include: a support configured to be removably attached to a subject’s body or a subject’s clothing; an acoustic sensor connected to the support and configured to detect sounds from within the subject’s body and generate an output signal; an acoustic waveguide connected to the support and configured to guide the sounds from within the subject’s body to the acoustic sensor; a digital storage unit connected to the support; and a processor connected to the support and configured to: receive the output signal, and at least one of: save at least a portion of the output signal in the digital storage unit, preprocess the output signal by detecting one or more subsets of data values in the output signal indicative of one or more abnormal/pathological sound patterns and save only the detected one or more subsets of data values in the digital storage unit, and analyze the output signal to detect one or more abnormal/pathological biomarkers indicative of a health/physical/fitness-related/wellness-related condition of the subject and save information related to the detected abnormal/pathological biomarkers in the digital storage unit.

In some embodiments, the acoustic sensor is configured to detect sounds of a predefined wide frequency range so as to detect sounds generated by different organs or processes of the subject’s body.

In some embodiments, the acoustic sensor is configured to detect sounds of a predefined narrow frequency range so as to detect sounds generated by a specific organ or a specific subgroup of organs of the subject’s body. In some embodiments, the acoustic sensor is configured to detect the subject’s speech.

In some embodiments, the device includes two or more acoustic sensors and two or more acoustic waveguides, each of the two or more acoustic waveguides for one of the two or more acoustic sensors. In some embodiments, the two or more acoustic sensors are configured to detect sounds of the same frequency range.

In some embodiments, each of the two or more acoustic sensors is configured to detect sounds of a different frequency range as compared to other acoustic sensors of the two or more acoustic sensors. In some embodiments, frequency ranges of the two or more acoustic sensors partly overlap with each other.

In some embodiments, the two or more acoustic sensors are configured to detect sounds arriving from the same direction/location from within the subject’s body simultaneously.

In some embodiments, each of two or more acoustic sensors is configured to detect sounds arriving from a different direction/location from within the subject’s body as compared to other acoustic sensors of the two or more acoustic sensors.

In some embodiments, the processor is configured to detect at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers based on normal sound patterns and normal biomarkers, respectively.

In some embodiments, the normal sound patterns and the normal biomarkers are subject- specific and are predefined based on accumulated sound data collected from the subject.

In some embodiments, the normal sound patterns and the normal biomarkers are specific to a population or subpopulation to which the subject being monitored belongs and are predefined based on accumulated sound data collected from a plurality of individuals belonging to the population or subpopulation.

In some embodiments, the processor is configured to detect the one or more abnormal/pathological biomarkers indicative of the health condition of the subject using one or more pre-trained machine learning models.

In some embodiments, the acoustic sensor is configured to continuously detect sounds from within the subject’s body.

In some embodiments, the processor is configured to control the acoustic sensor to detect sounds from within the subject’s body during predetermined time intervals according to a predetermined time schedule. In some embodiments, the processor is configured to update the time schedule based on at least one of occurrence and duration of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers in the output signal.

In some embodiments, the device includes a communication unit connected to the support and configured to transmit data from the digital storage unit to a remote storage device or a remote computing device.

In some embodiments, the communication unit is configured to transmit the data on demand.

In some embodiments, the communication unit is configured to transmit to a remote computing device a notification indicative of the detection of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers that require immediate attention.

In some embodiments, the device includes a notification unit connected to the support and configured to generate one or more notifications indicative of the detection of at least one of the one or more abnormal/pathological sound patterns and the one or more abnormal/pathological biomarkers that require immediate attention.

In some embodiments, the notification unit is configured to generate at least one of one or more visual notifications, one or more sound notifications and one or more mechanical notifications.

In some embodiments, the processor is configured to perform a sound detection test upon attachment of the device to the subject’s body or the subject’s clothing and initiation thereof, the sound detection test includes: analyzing the output signal from the acoustic sensor, and determining whether or not the sounds from within the subject’s body are being properly detected by the acoustic sensor.

In some embodiments, upon determination of improper detection of the sounds, the communication unit is configured to transmit a respective notification to a remote computing device, wherein the respective notification includes instructions concerning how to change a location of the device on the subject’s body so as to cause the device to properly detect the sounds from within the subject’s body.

In some embodiments, upon determination of improper detection of the sounds, the notification unit is configured to generate respective at least one of one or more visual notifications, one or more sound notifications and one or more mechanical notifications.

In some embodiments, the device includes one or more additional sensors connected to the support and configured to generate one or more additional sensor output signals.

In some embodiments, the device includes a power source connected to the device and configured to supply power to electronic components of the device. In some embodiments, the device includes a frame connected to electronic components of the device and configured to removably connect the electronic components of the device to the support.

In some embodiments, the device includes a covering configured to be removably connected to the support and cover components of the device and accommodate the components between the support and the covering.

In some embodiments, the device includes a clip connected to the support and configured, when actuated, to push the acoustic sensor and the acoustic waveguide towards the support.

Some embodiments of the present invention may provide a system for detecting sounds from a subject’s body, the system includes: a swallowable capsule including an acoustic transducer configured to generate a sound signal after the swallowable capsule has been swallowed by the subject; and the device according to any one of claims 1-30, wherein the acoustic sensor of the device is configured to detect the sound signal from within the subject’s body and generate the output signal further based on the detected sound signal.

In some embodiments, the swallowable capsule further includes a capsule acoustic sensor configured to detect sounds from within the subject’s body and generate a capsule output signal.

In some embodiments, the swallowable capsule further includes a transmitter to transmit the capsule output signal, and wherein the communication unit of the device is configured to receive the capsule output signal.

Some embodiments of the present invention may provide a kit including two or more devices as described hereinabove.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of embodiments of the invention and to show how the same can be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings in which like numerals designate corresponding elements or sections throughout.

In the accompanying drawings:

Figs. 1A, 1B, 1C and 1D are schematic illustrations of a device for detecting sounds from a subject’s body, according to some embodiments of the invention;

Fig. 1E is a schematic illustration of a piezoelectric element, according to some embodiments of the invention;

Fig. 2 is a schematic illustration of a system for detecting sounds from a subject’s body, according to some embodiments of the invention;

Fig. 3 is a schematic illustration of a device for detecting sounds from a subject’s body and an array of acoustic sensors connectable to the device, according to some embodiments of the invention; and

Figs. 4A-4C are schematic illustrations of a piezoelectric element within a housing serving as the acoustic sensor, according to some embodiments of the invention.

It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

DETAILED DESCRIPTION OF THE INVENTION

In the following description, various aspects of the present invention are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention can be practiced without the specific details presented herein. Furthermore, well-known features may have been omitted or simplified in order not to obscure the present invention. With specific reference to the drawings, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention can be embodied in practice.

Before at least one embodiment of the invention is explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments that can be practiced or carried out in various ways as well as to combinations of the disclosed embodiments. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting. Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining", “enhancing” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. Any of the disclosed modules or units can be at least partially implemented by a computer processor.

Reference is now made to Figs. 1A, 1B, 1C and 1D, which are schematic illustrations of a device 100 for detecting sounds from a subject’s body, according to some embodiments of the invention. Figs. 1A and 1B schematically show different views of device 100.

Device 100 may include a support 110. In some embodiments, support 110 may be flat (or substantially flat) as depicted in the Figures. Flat may mean not having protrusions or recesses. In some embodiments, support 110 may be flexible and still remain flat. In some embodiments, support 110 may be removably attachable to a subject’s body. For example, support 110 may include a flat sticky surface 112 to removably stick support 110 to the subject’s body. Support 110 may be attached to the subject’s body using components such as a clip, a belt, a bio-compatible sticker or glue, a pressure grip or any other suitable component known in the art.

In some embodiments, support 110 may be removably attachable to a subject’s clothing. For example, support 110 may include one or more fasteners (e.g., such as tape, scotch tape, stitch, stitched pocket, etc.) to removably attach support 110 to subject’s clothing. Support 110 may have different geometric shapes.

Device 100 may include an acoustic sensor 120. Acoustic sensor 120 may be connected to support 110. Acoustic sensor 120 may detect sounds from within the subject’s body. In some embodiments, acoustic sensor 120 may detect sounds from within the subject’s body in a vicinity of acoustic sensor 120. Acoustic sensor 120 may generate an output signal indicative of the detected sounds. Acoustic sensor 120 may have different geometric shapes. Acoustic sensor 120 may be of various types, such as, for example, a directional acoustic sensor, an omnidirectional acoustic sensor, a cardioid acoustic sensor, etc.

In some embodiments, device 100 may include an acoustic waveguide 122. Acoustic waveguide 122 may be connected to support 110, for example between support 110 and acoustic sensor 120. Acoustic waveguide 122 may guide the sounds from the subject’s body to acoustic sensor 120. According to some embodiments, acoustic waveguide 122 may isolate sounds from the subject’s body from ambient sounds. Acoustic waveguide 122 may achieve this isolation by restricting the transmission of energy (e.g. sounds from the subject’s body) to one direction, which may reduce losses in the energy otherwise caused by interaction with ambient sources in other directions. In some embodiments, acoustic waveguide 122 may include a sleeve. The sleeve of acoustic waveguide 122 may, for example, be made from a polymer or a metal. Acoustic waveguide 122 may have different geometric shapes. For example, acoustic waveguide 122 may have a circular, elliptical, rectangular or any other shape. In some embodiments, a gel having desired acoustic properties may be used to enhance the acoustic coupling of acoustic sensor 120 and acoustic waveguide 122 to the subject’s body. For example, the gel may have an acoustic impedance similar to human tissue. The gel may displace air between the subject’s body and the acoustic sensor 120 and acoustic waveguide 122, thereby creating a vacuum effect to improve signal acquisition. According to some embodiments, a gel pad is included in the housing which couples a piezoelectric element to the housing.

In some embodiments, the gel pad acoustically coupling the piezoelectric element and the waveguide can be made of one of: a PZT film, crystal, ceramic, or PVDF.

In some embodiments, device 100 may include an acoustic membrane (not shown) to couple the detected sounds from within the subject’s body to acoustic sensor 120. In some embodiments, the acoustic membrane is instead of acoustic waveguide 122. In some embodiments, the acoustic membrane is in addition to acoustic waveguide 122.

In various embodiments, device 100 may include a seal and/or insulator 126 (e.g. schematically shown in Fig. 1A by a dashed circle). Seal and/or insulator 126 may, for example, include a sleeve, a coating layer or material or any other suitable component or device known in the art. For example, if seal and/or insulator 126 is a gel-like material, acoustic sensor 120 may be immersed in the material. Seal and/or insulator 126 may, for example, reduce noise and/or increase the signal-to-noise ratio (SNR) of signals generated by acoustic sensor 120. In some embodiments, seal and/or insulator 126 may be used instead of acoustic waveguide 122. In some embodiments, seal and/or insulator 126 may be used in addition to acoustic waveguide 122.

In some embodiments, acoustic sensor 120 may detect sounds of a predefined wide frequency range. For example, acoustic sensor 120 may be capable of sensing sounds generated by different organs of the subject’s body (e.g., heart, lungs, large intestine, etc.). For example, the wide frequency range may include 0.1 Hz to 40 kHz. In some embodiments, different types of processing of the output signal may be required for different sound frequency ranges.

In some embodiments, acoustic sensor 120 may detect sounds of a predefined narrow frequency range. For example, acoustic sensor 120 may be capable of sensing sounds generated by a specific organ or by a subgroup of organs of the subject’s body. For example, the narrow range may include any sub-band of the wide frequency range of 0.1 Hz to 40 kHz, for example, 0.1 Hz to 20 kHz, 10 Hz to 2000 Hz, etc.

In some embodiments, acoustic sensor 120 may detect the subject’s speech. In some embodiments, acoustic sensor 120 may detect sounds caused by the subject’s breath. In some embodiments, acoustic sensor 120 may detect sounds caused by the subject’s cough.

In some embodiments, device 100 may include two or more acoustic sensors 120. In some embodiments, device 100 may include two or more acoustic waveguides 122, each for one of the acoustic sensors 120. In some embodiments, two or more acoustic sensors 120 may detect sounds of the same frequency range (e.g., the same wide frequency range or the same narrow frequency range). In some embodiments, some of the two or more acoustic sensors 120 may detect sounds of a different frequency range as compared to other acoustic sensors of the two or more acoustic sensors 120. For example, the frequency range of each of the two or more acoustic sensors 120 may be selected based on a specific organ or a subgroup of organs of the subject’s body to be sensed with the respective acoustic sensor. For example, a first acoustic sensor may be capable of detecting sounds from a subject’s heart and operate in a first frequency range of 20-200 Hz, and a second acoustic sensor may be capable of detecting sounds from the subject’s lungs and operate in a second frequency range of 25-1500 Hz. In some embodiments, the frequency ranges of the two or more acoustic sensors 120 may partly overlap with each other. In some embodiments, the two or more acoustic sensors 120 may be configured to detect sounds arriving from the same direction from within the subject’s body. In some embodiments, some of the two or more acoustic sensors 120 may be configured to detect sounds arriving from a different direction from within the subject’s body as compared to other acoustic sensors of the two or more acoustic sensors 120. In some embodiments, some of the two or more acoustic sensors 120 may have a different shape as compared to other acoustic sensors of the two or more acoustic sensors 120. In some embodiments, some of the two or more acoustic sensors 120 may be of a different type as compared to other acoustic sensors of the two or more acoustic sensors 120. For example, Fig. 1D shows an example of device 100 having multiple acoustic sensors 120.
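
The following is a minimal, purely illustrative sketch of splitting one wide-band body-sound recording into per-organ frequency bands, using the 20-200 Hz (heart) and 25-1500 Hz (lung) ranges mentioned above as example band edges. The sampling rate, filter order and use of SciPy Butterworth filters are assumptions for demonstration and do not describe the patented implementation.

```python
# Illustrative sketch: band-splitting a single wide-band recording per organ.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 8000  # assumed sampling rate of the acoustic sensor output (Hz)

ORGAN_BANDS = {
    "heart": (20.0, 200.0),    # example heart-sound band from the text above
    "lungs": (25.0, 1500.0),   # example lung-sound band from the text above
}

def split_into_organ_bands(signal, fs=FS, bands=ORGAN_BANDS):
    """Return a dict of band-limited copies of the input signal."""
    out = {}
    for organ, (lo, hi) in bands.items():
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        out[organ] = sosfiltfilt(sos, signal)
    return out

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    # synthetic mixture: 50 Hz "heart-like" tone + 600 Hz "lung-like" tone + noise
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
    x += 0.1 * np.random.randn(t.size)
    for organ, y in split_into_organ_bands(x).items():
        print(organ, "band RMS:", round(float(np.sqrt(np.mean(y ** 2))), 3))
```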

Having two or more acoustic sensors 120 within device 100 may have several advantages. For example, when using two or more acoustic sensors 120, two or more different organs/processes within the subject’s body can be simultaneously and/or consequently monitored and correlation between these processes can be determined and new biomarkers may be created. In another example, when using two or more acoustic sensors 120, it is possible to monitor sounds generated by, e.g., a blood flow at different locations within the subject’s body. In another example, when using two or more acoustic sensors 120, each of the two or more acoustic sensors 120 may be directed to a different direction as compared to other acoustic sensors. In another example, when using two or more acoustic sensors 120, the respective output signals may be used to determine and separate the sound sources within the subject’s body. In another example, when using two or more acoustic sensors 120, a signal-to-noise ratio (SNR) of the output signals may be enhanced.
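
As a small numerical illustration of the SNR benefit noted above, the sketch below averages the outputs of N simulated sensors that observe the same body sound with independent noise. The sensor counts, noise level and signal are arbitrary assumptions chosen only to demonstrate the effect.

```python
# Illustrative sketch: SNR improvement from averaging multiple sensor outputs.
import numpy as np

rng = np.random.default_rng(0)
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 80 * t)          # shared "body sound" seen by all sensors

def snr_db(sig, noisy):
    noise = noisy - sig
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(noise ** 2))

for n_sensors in (1, 2, 4, 8):
    sensors = [clean + 0.5 * rng.standard_normal(t.size) for _ in range(n_sensors)]
    combined = np.mean(sensors, axis=0)      # simple coherent average of the outputs
    print(n_sensors, "sensors -> SNR:", round(snr_db(clean, combined), 1), "dB")
```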

According to some embodiments, a frequency of the output signal may be modulated. Device 100 may include electronic components such as amplifier(s), filter(s), analog-to-digital converter(s) and any other suitable electronic components known in the art.

Device 100 may include a processor 130. Processor 130 may be connected to support 110. Processor 130 may receive the output signal(s) from acoustic sensor(s) 120.

In some embodiments, processor 130 may save at least a portion of the output signal(s) in a digital storage unit 132. For example, processor 130 may compress the output signal(s) and save the compressed output signal(s) in digital storage unit 132.

In some embodiments, processor 130 may preprocess the output signal(s). In some embodiments, processor 130 may save only the preprocessed output signal(s) in digital storage unit 132. For example, processor 130 may detect one or more subsets of data values in the output signal(s) indicative of one or more abnormal/pathological sound patterns and save in digital storage unit 132 only the detected subset(s) of data values.
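
A hedged sketch of this preprocessing step is shown below: the output signal is scanned in short frames, and only frames whose energy deviates strongly from the overall statistics are kept for storage. The frame-energy criterion is a stand-in assumption for whatever abnormal/pathological pattern detector is actually used; frame length and threshold are illustrative.

```python
# Illustrative sketch: keep only "abnormal" segments of the output signal.
import numpy as np

def abnormal_segments(signal, fs, frame_s=0.5, z_thresh=3.0):
    """Return (start, end) sample indices of frames flagged as abnormal."""
    frame = int(frame_s * fs)
    n = len(signal) // frame
    energies = np.array([np.mean(signal[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    mu, sigma = energies.mean(), energies.std() + 1e-12
    flagged = np.where((energies - mu) / sigma > z_thresh)[0]
    return [(i * frame, (i + 1) * frame) for i in flagged]

if __name__ == "__main__":
    fs = 2000
    x = 0.05 * np.random.randn(fs * 20)                              # 20 s of quiet background
    x[fs*7:fs*7 + fs] += np.sin(2 * np.pi * 120 * np.arange(fs) / fs)  # 1 s loud event
    for start, end in abnormal_segments(x, fs):
        print("would save samples", start, "to", end, "to the digital storage unit")
```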

In some embodiments, processor 130 may analyze the output signal(s) to detect one or more abnormal/pathological biomarkers indicative of a health/physical/fitness-related/wellness-related condition of the subject. In some embodiments, processor 130 may save in digital storage unit 132 information related to detected abnormal/pathological biomarker(s). In some embodiments, processor 130 may detect the abnormal/pathological biomarker(s) in the output signal(s) using one or more pre-trained artificial intelligence (AI) models. In some embodiments, processor 130 may detect the abnormal/pathological biomarker(s) in the output signal(s) using one or more pre-trained machine learning models.
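
The sketch below illustrates, under stated assumptions, how features of the output signal could be fed to a pre-trained classifier. The two features (RMS energy and spectral centroid), the use of scikit-learn logistic regression, and the tiny synthetic "pre-training" set are all assumptions standing in for the actual pre-trained models; in a real device the model would be trained on clinical recordings and loaded from storage.

```python
# Illustrative sketch: classify a frame of the output signal with a stand-in model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(frame, fs):
    """Very small feature vector: RMS energy and spectral centroid."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    centroid = float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))
    rms = float(np.sqrt(np.mean(frame ** 2)))
    return [rms, centroid]

rng = np.random.default_rng(1)
fs = 2000
# Stand-in for pre-training: synthetic "normal" vs. "abnormal" frames.
normal = [features(0.1 * rng.standard_normal(fs), fs) for _ in range(50)]
abnormal = [features(np.sin(2*np.pi*300*np.arange(fs)/fs) + 0.1*rng.standard_normal(fs), fs)
            for _ in range(50)]
model = LogisticRegression().fit(normal + abnormal, [0] * 50 + [1] * 50)

new_frame = np.sin(2*np.pi*300*np.arange(fs)/fs) + 0.1 * rng.standard_normal(fs)
prob = model.predict_proba([features(new_frame, fs)])[0, 1]
print("abnormal biomarker probability:", round(float(prob), 2))
```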

In some embodiments, processor 130 may detect one or more abnormal/pathological biomarkers indicative of the health condition of the subject based on the detected subject’s speech. For example, processor 130 may analyze the detected subject’s speech using one or more AI methods and/or one or more machine learning methods.

In various embodiments, processor 130 may detect the abnormal/pathological sound pattern(s) and/or the abnormal/pathological biomarker(s) in the output signal(s) based on normal sound pattern(s) and/or normal biomarker(s), respectively. In various embodiments, the normal sound pattern(s) and/or the normal biomarker(s) may be subject specific. For example, the normal sound pattern(s) and/or the normal biomarker(s) may be defined based on accumulated sound data collected from that particular subject. In various embodiments, the normal sound pattern(s) and/or the normal biomarker(s) may be specific to a population or a subpopulation to which the subject being monitored belongs. For example, the normal sound pattern(s) and/or the normal biomarker(s) may be defined based on accumulated sound data collected from a plurality of individuals belonging to this particular population or subpopulation.
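
A minimal sketch of maintaining a subject-specific "normal" baseline from accumulated data and flagging later measurements against it is given below. The scalar biomarker used here (a heart-sound interval in seconds) is a hypothetical placeholder, and the running-statistics approach is an assumption; the source does not specify how the baseline is computed.

```python
# Illustrative sketch: subject-specific baseline via running mean/variance (Welford).
class SubjectBaseline:
    """Accumulates a scalar biomarker and flags large deviations from it."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def add(self, value):
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

    def is_abnormal(self, value, z_thresh=3.0):
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(value - self.mean) > z_thresh * (std + 1e-12)

baseline = SubjectBaseline()
for interval in [0.82, 0.80, 0.85, 0.79, 0.83, 0.81]:   # accumulated "normal" data (s)
    baseline.add(interval)
print(baseline.is_abnormal(0.81))   # False: close to this subject's normal
print(baseline.is_abnormal(0.45))   # True: large deviation from the baseline
```

A population- or subpopulation-specific baseline could be built the same way, only feeding the accumulator with data collected from many individuals instead of one subject.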

In some embodiments, acoustic sensor(s) 120 may be configured to continuously detect sounds from within the subject’s body. In some embodiments, processor 130 may control acoustic sensor(s) 120 to detect sounds from within the subject’s body during predetermined time intervals according to a predetermined time schedule. In some embodiments, processor 130 may update the time schedule based on the output signal(s). For example, processor 130 may update the time schedule based on the occurrence and/or duration of the abnormal/pathological sound pattern(s), the occurrence and/or duration of the abnormal/pathological biomarker(s) in the output signal(s), etc.
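
The following sketch illustrates one way such a schedule update could behave: recordings follow a fixed interval, but the wait shrinks when abnormal events have recently been detected. All interval values and the halving rule are arbitrary assumptions, not the claimed scheduling logic.

```python
# Illustrative sketch: shorten the recording interval after recent abnormal events.
def next_recording_interval(base_interval_s, recent_abnormal_events,
                            min_interval_s=60, shrink_per_event=0.5):
    """Halve the wait per recent abnormal event, but never go below a floor."""
    interval = base_interval_s * (shrink_per_event ** recent_abnormal_events)
    return max(min_interval_s, interval)

print(next_recording_interval(3600, 0))   # 3600 s: quiet period, hourly recordings
print(next_recording_interval(3600, 2))   # 900 s: two recent events, record more often
print(next_recording_interval(3600, 8))   # 60 s: clamped at the minimum interval
```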

Device 100 may include a power source 134. Power source 134 may be connected to support 110. Power source 134 may supply power to components of device 100. In some embodiments, power source 134 may include one or more batteries.

In some embodiments, device 100 may include a communication unit 136. Communication unit 136 may be connected to support 110. In some embodiments, communication unit 136 may be a wireless communication unit. Wireless communication unit 136 may be, for example, near-field communication (NFC)-based unit, Bluetooth-based unit, radiofrequency identification (RFID)-based unit, etc. Communication unit 136 may transmit data from digital storage unit 132 to a remote storage device or a remote computing device. In some embodiments, communication unit 136 may transmit the data on demand. For example, communication unit 136 may receive a transmission request signal and transmit the data upon receipt of the transmission request signal.

In some embodiments, the remote device may perform at least some of functions of processor 130 of device 100 as described herein.

In some embodiments, communication unit 136 may transmit to a remote computing device a notification indicative of the detection of the abnormal/pathological sound pattern(s) and/or the detection of abnormal/pathological biomarker(s) that require immediate attention. For example, the remote computing device may be a smartphone of the subject, appointed physician’s smartphone, healthcare center’s server, etc.

In some embodiments, communication unit 136 may be a wired communication unit. For example, communication unit 136 may be connected to a remote storage device or a remote computing device using a wire (e.g., universal serial bus (USB) cable, I2C, RS-232, Ethernet cable, etc.) to transmit the data from digital storage unit 132 to the remote storage device or the remote computing device.

In various embodiments, device 100 may include a remote storage unit or a remote computing unit to download and/or upload data from/to digital storage unit 132/processor 130 of device 100.

In some embodiments, device 100 may include a notification unit 138. Notification unit 138 may be connected to support 110. Notification unit 138 may generate one or more notifications indicative of, for example, the detection of the abnormal/pathological sound pattern(s) and/or the detection of abnormal/pathological biomarker(s) that require immediate attention. In some embodiments, notification unit 138 may generate one or more visual notifications. For example, notification unit 138 may include a light-emitting diode (LED) configured to generate, e.g., red light if immediate attention is required. In some embodiments, notification unit 138 may generate one or more audio notifications. For example, notification unit 138 may include a speaker configured to generate, e.g., a predefined sound if immediate attention is required. In some embodiments, notification unit 138 may include a vibrating member configured to generate vibrations if immediate attention is required. Other examples of notification units 138 are also possible.

In some embodiments, processor 130 may perform a sound detection test upon attachment of device 100 to the subject’s body and initiation thereof. For example, upon attachment of device 100 to the subject’s body and initiation thereof, processor 130 may analyze the output signal(s) from acoustic sensor(s) 120 to determine whether or not the sounds are being properly detected. In some embodiments, upon determination of improper detection of the sounds, communication unit 136 may transmit a respective notification to a remote computing device. For example, communication unit 136 may transmit such notification to a subject’s smartphone. The notification may, for example, include instructions concerning, e.g., how to change a location of device 100 on the subject’s body so as to cause device 100 to properly detect the sounds. In some embodiments, upon determination of improper detection of the sounds, notification unit 138 may generate respective one or more visual or sound notifications (e.g., as described hereinabove).
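
A hedged sketch of such a placement check is given below: after attachment, the sensor output is tested for a plausible body-sound component, approximated here by the SNR within a low-frequency band. The band, threshold and SNR criterion are illustrative assumptions; the actual test logic is not specified in this example.

```python
# Illustrative sketch: simple attachment/placement check on the sensor output.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def placement_ok(signal, fs, band=(20.0, 200.0), min_snr_db=6.0):
    """Return True if the in-band (body-sound) energy dominates the rest."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    in_band = sosfiltfilt(sos, signal)
    out_of_band = signal - in_band
    snr_db = 10 * np.log10(np.mean(in_band ** 2) / (np.mean(out_of_band ** 2) + 1e-12))
    return snr_db >= min_snr_db

fs = 2000
t = np.arange(0, 3.0, 1.0 / fs)
good = np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(t.size)   # clear heart-band component
bad = 0.3 * np.random.randn(t.size)                                   # sensor hears only noise
print("properly detected:", placement_ok(good, fs))   # expected True
print("properly detected:", placement_ok(bad, fs))    # expected False -> notify user to reposition
```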

In some embodiments, device 100 may include a sound transmitter 124. Sound transmitter 124 may transmit sounds into the subject’s body. Acoustic sensor 120 (e.g., of device 100 or of any other device similar to device 100 placed on the subject’s body) may be synchronized to receive the sound transmitted by sound transmitter 124 and/or the reflected sound and generate the output signal related thereto. Processor 130 may analyze the output signal or cross-correlate the output signal with the sound transmitted by sound transmitter 124 (e.g., to identify changes in the signal's phase, power, spectral features, or any other parameters) to determine the condition of, for example, a target tissue, organ or flow (e.g., blood, fluids, peristaltic). This may be done using, for example, a single device 100 or by two or more devices similar to device 100 and placed at different positions on or in a vicinity of the subject’s body (see the illustrative cross-correlation sketch below).

In some embodiments, device 100 may include one or more additional sensors 140. Additional sensor(s) 140 may be connected to support 110. Additional sensor(s) 140 may, for example, include an accelerometer, electrocardiography (ECG) sensor, photoplethysmogram (PPG) sensor, temperature sensor, moisture sensor, skin conductance sensor, etc. Additional sensor(s) 140 may generate additional output signal(s) that may be further analyzed to, for example, determine correlations between different indications related to the additional output signal(s).
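
As referenced above, the sketch below shows one way cross-correlating the received signal with the emitted probe of sound transmitter 124 could recover a propagation delay. The chirp probe, the 4 ms delay, the attenuation and the noise level are synthetic assumptions chosen only to demonstrate the estimation step.

```python
# Illustrative sketch: estimate the transmit-to-receive delay by cross-correlation.
import numpy as np
from scipy.signal import chirp

fs = 20000
t = np.arange(0, 0.05, 1.0 / fs)
probe = chirp(t, f0=500, f1=3000, t1=t[-1])           # transmitted probe signal

true_delay = 0.004                                     # assumed 4 ms propagation delay
delay_samples = int(true_delay * fs)
received = np.zeros(t.size + delay_samples)
received[delay_samples:] = 0.3 * probe                 # attenuated, delayed copy of the probe
received += 0.02 * np.random.randn(received.size)      # body/ambient noise

corr = np.correlate(received, probe, mode="valid")     # cross-correlation with the probe
estimated_delay = np.argmax(corr) / fs
print("estimated delay:", round(estimated_delay * 1000, 2), "ms")   # ~4 ms expected
```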

In some embodiments, device 100 may be disposable. In some embodiments, support 110 and optionally waveguide 122 of device 100 may be disposable while at least electronic components of device 100 may be reusable. The electronic components of device 100 may, for example, include acoustic sensor 120, processor 130, digital storage unit 132, communication unit 136, notification unit 138 and power source 134. For example, device 100 may include a frame 142 connected to the electronic components of device 100 and configured to removably connect the electronic components to support 110.

In some embodiments, device 100 may include a clip 144. Clip 144 may be connected to support 110. Clip 144 may be configured, when actuated, to push acoustic sensor 120 and acoustic waveguide 122 towards support 110 to provide a desired contact pressure between acoustic waveguide 122/acoustic sensor 120 and the subject’s body.

In some embodiments, device 100 may include a covering 150 configured to be connected (or removably connected) to support 110 and cover components of device 100 to thereby accommodate the components between support 110 and the covering (e.g., as shown in Fig. 1C).

Device 100 has several advantages over typical commercial electronic stethoscope devices. Device 100 may include waveguide 122 to guide sounds detected from within the subject’s body to acoustic sensor 120, in contrast to typical commercial electronic stethoscope devices that typically utilize an acoustic membrane to couple the detected sounds to the acoustic sensor. Waveguide 122 occupies significantly less space as compared to the acoustic membrane. Accordingly, device 100 may have significantly smaller dimensions and weight and/or may have more acoustic sensors 120 (or additional sensors such as accelerometers 140) connected to support 110 as compared to typical commercial electronic stethoscope devices. For example, a sub-assembly of acoustic sensor 120 and waveguide 122 of device 100 may have a diameter of 0.3-0.5 cm and a height of 0.1-0.3 cm, while a typical electronic stethoscope device may have a diameter of 2-4.5 cm and a height of 1-2 cm. Moreover, waveguide 122 requires significantly smaller contact pressure (or requires no contact pressure at all) to efficiently guide the sounds detected from within the subject’s body to the acoustic sensor, in contrast to the acoustic membrane that requires significant contact pressure to provide sufficient coupling of the detected sounds to the acoustic sensor. Accordingly, device 100 may be removably attached to the subject’s body by relatively simple means, for example, using sticky flat flexible support 110 as described hereinabove. Furthermore, device 100 may remain attached to the subject’s body for long periods of time (e.g., days, weeks, etc.) without causing (or substantially without causing) inconvenience to the subject.

Device 100 may be removably attached to the subject’s body at various locations. For example, device 100 may be attached to the subject’s chest, back, abdomen, joints, etc. The body locations for attaching device 100 may be selected based on, for example, an organ or a subgroup of organs to be sensed with device 100.

In some embodiments, device 100 may be configured to detect sounds from different portions of a specific organ of the subject’s body. For example, device 100 configured to detect sounds generated by a subject’s heart may include a first acoustic sensor (e.g., like acoustic sensor 120) to detect sounds generated by one or more valves of the subject’s heart and a second acoustic sensor (e.g., like acoustic sensor 120) to detect cardiac murmur.

In some embodiments, device 100 may be configured to detect sounds from a subgroup of organs of the subject’s body. For example, device 100 may include a first acoustic sensor (e.g., like acoustic sensor 120) to detect sounds generated by a subject’s heart and one or more second acoustic sensors (e.g., like acoustic sensor 120) to detect sounds generated by subject’s lungs, optionally at different locations along the lungs.

Device 100 may have different shapes. The shape of device 100 may be, for example, predefined based on an organ or a subgroup of organs to be sensed with device 100. For example, device 100 configured to detect sounds generated by a subject’s large intestine may have substantially the same shape as the large intestine and may include several acoustic sensors configured to detect sounds at different locations along the large intestine.

Device 100 may be used to monitor fetal parameters such as, e.g. fetal motion, heart beat, heart rate or any other suitable fetal parameters known in the art.

One or more devices 100 may be removably attached to the subject’s body for long periods of time (e.g., days, weeks, etc.) to continuously detect sounds from within the subject’s body. Continuous, long-term detection and analysis of sounds from within the subject’s body may provide information concerning the subject’s health condition. Moreover, simultaneous, continuous and long-term detection and analysis of sounds generated by different organs of the subject’s body may provide new information concerning correlation between functions of these organs to further enhance the information concerning the subject’s health condition.

Some embodiments of the present invention may provide a kit including two or more devices (e.g., each like device 100) for detecting sounds from within the subject’s body. For example, the kit may include a first device (e.g., like device 100) configured to be attached to a subject’s chest to detect sounds generated by a subject’s heart, and a second device (e.g., like device 100) configured to be attached to a subject’s back and to detect sounds generated by subject’s lungs, optionally at different locations in the lungs.

Device-based recordings, being recorded continuously over many hours and in diversified settings, may provide a much broader basis for the analysis of the recorded signals than typical sporadic or routine spot-checks. Constant use of the device may, for example, serve an essential role in dramatically increasing the accuracy of voice-based analysis, by enabling a personalized fine-tuning of the features, thresholds and overall models. A combined offering of the device (e.g., for an initial period or for short periods), and, for example, voice-based monitoring of the subject, may, for example, reach substantially better degrees of clinical accuracy and usability. Using the different sensing capabilities of the device (e.g., ECG, spoken voice, body sounds including heart sounds or any other sounds), and the intrinsic simultaneity of recorded data, a database of various points in the parameter space may be registered (e.g., different heart rates, breathing conditions, arrhythmias if they exist, or any other parameters), together with the corresponding points and areas in the feature space of the analyzed spoken voice. Various conditions may be correlated to their corresponding voice features, thus better defining the boundaries of “normal” and “abnormal” (e.g., AF or other pathologies) sub-spaces in the feature space of the existing model. Moreover, in some embodiments, a personalized model may be constructed for each individual subject based on this analysis and distinction. Some implementations may, for example, include (i) different heart rate conditions, (ii) different breathing rate and breathing depth conditions, (iii) sinus rhythm, AF, different arrhythmias, (iv) different motion patterns of the body, including vibrations (e.g., car ride, etc.), (v) different postures of the body. One example may include adaptive tuning of different arrhythmia states to other parameters, e.g., heart rate. In this example, a library mapping different heart rate values to corresponding parameter values may be created, in sinus rhythm and in an Afib condition (and may be extended to other arrhythmias): at 70 bpm / 100 bpm / 140 bpm, different "fingerprints" of voice features correspond to normal vs. the Afib condition. In another example, cohort-dependent characteristic voice feature "fingerprints" may be created to enhance clinical accuracy and resolution of detection. In this example, additional parameters (other than heart rate) will be considered for categorizing sub-populations, such as age group, CHA2DS2-VASc score, basic voice features and others.
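
The sketch below illustrates, under stated assumptions, the "library" idea described above: characteristic voice-feature fingerprints stored per (rhythm, heart-rate bin), with a new measurement classified by its nearest fingerprint. The feature values are invented placeholders, and the nearest-neighbour rule is only one possible realization; the actual voice features and rhythm classes would come from the trained models.

```python
# Illustrative sketch: look up the nearest voice-feature "fingerprint" for a heart-rate bin.
import numpy as np

# (rhythm, bpm bin) -> mean feature vector; all values are hypothetical placeholders
LIBRARY = {
    ("sinus", 70):  np.array([0.20, 1.10]),
    ("sinus", 100): np.array([0.25, 1.30]),
    ("sinus", 140): np.array([0.35, 1.60]),
    ("afib", 70):   np.array([0.55, 0.90]),
    ("afib", 100):  np.array([0.60, 1.05]),
    ("afib", 140):  np.array([0.75, 1.20]),
}

def classify(voice_features, heart_rate_bpm):
    """Pick the nearest fingerprint among entries matching the closest bpm bin."""
    bpm_bin = min({bpm for _, bpm in LIBRARY}, key=lambda b: abs(b - heart_rate_bpm))
    candidates = {k: v for k, v in LIBRARY.items() if k[1] == bpm_bin}
    return min(candidates, key=lambda k: np.linalg.norm(candidates[k] - voice_features))

print(classify(np.array([0.58, 1.0]), heart_rate_bpm=95))   # -> ('afib', 100)
print(classify(np.array([0.22, 1.12]), heart_rate_bpm=72))  # -> ('sinus', 70)
```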

According to some embodiments, device 100 has the form of a silicone pad with domes, with microphones inside the domes. For example, device 100 may have the form factor of a 2 x 2 array of four silicone hemispheres, with a microphone located in each internal cavity of the hemispheres. In some embodiments, the domes may not be perfectly spherical but may be a polygonal approximation of a hemisphere, for example a faceted 3D shape composed of 2D polygons such as triangles, squares, pentagons, hexagons and octagons, which may increase a contact surface area with the patient’s body.

Reference is now made to Fig. 2, which is a schematic illustration of a system 200 for detecting sounds from a subject’s body, according to some embodiments of the invention.

System 200 may include a device 210 for detecting sounds from the subject’s body. Device 210 may be similar to device 100 described hereinabove with respect to Figs. 1A, 1B and 1C. Device 210 may be removably attached to the subject’s body to detect sounds from one or more locations within the subject’s body (e.g., as described hereinabove with respect to Figs. 1A, 1B and 1C).

System 200 may include a swallowable capsule 220. Swallowable capsule 220 may include an acoustic transducer 222. Acoustic transducer 222 may generate a sound signal 223. For example, acoustic transducer 222 may generate sound signal 223 after swallowable capsule 220 has been swallowed by the subject. In some embodiments, acoustic transducer 222 may generate sound signals of different frequencies. In some embodiments, acoustic transducer 222 may generate a series of sound signals, wherein each of the sound signals in the series may have a different frequency as compared to frequencies of other sound signals in the series.

Device 210 may detect by its acoustic sensor(s) 212 (e.g., like acoustic sensor 120 described hereinabove with respect to Figs. 1A, 1B and 1C) sound signal 223 generated by acoustic transducer 222 of swallowable capsule 220 from within the subject’s body and generate the output signal further based on the detected acoustic transducer sound. The output signal may be used for further processing, e.g., as described above with respect to Figs. 1A, 1B and 1C. The output signal generated based on the sound signals transmitted by acoustic transducer 222 of swallowable capsule 220 from within the subject’s body may, for example, provide information concerning tissues through which these sound signals have passed.

For example, acoustic transducer 222 of swallowable capsule 220 may be configured to generate a series of sound signals that may pass through the lungs of the subject. In this example, device 210 attached to the subject’s body in a vicinity of the lungs may detect the sound signals generated by acoustic transducer 222 of swallowable capsule 220 and generate a respective output signal. The output signal may be analyzed to detect biomarkers indicative of, for example, obstructions that may be indicative of, for example, tumor, polypus or any other condition.

In another example, acoustic transducer 222 of swallowable capsule 220 may be configured to generate a series of sound signals that may pass through the large intestine of the subject. In this example, device 210 attached to the subject’s body in a vicinity of the lungs may detect the sound signals generated by acoustic transducer 222 of swallowable capsule 220 and generate a respective output signal. The output signal may be analyzed to detect biomarkers indicative of, for example, pulmonary edema, which in turn may be indicative of, for example, heart failure.

The output signals may be analyzed to, for example, determine changes within the output signals along different locations within the digestive system of the subject. The analysis may, for example, include comparison of the output signals to reference datasets. The reference datasets may, for example, include normal and/or abnormal sets of data values. The analysis may, for example, include utilization of artificial intelligence methods.
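
A minimal sketch of such a comparison against reference datasets is given below: the magnitude spectrum of a measured output signal is matched to the most similar reference spectrum. The reference spectra and the correlation-based similarity measure are synthetic assumptions; real references would be built from accumulated recordings.

```python
# Illustrative sketch: compare a measured signal's spectrum to labelled reference spectra.
import numpy as np

def spectrum(x):
    s = np.abs(np.fft.rfft(x))
    return s / (np.linalg.norm(s) + 1e-12)

def closest_reference(measured, references):
    """Return the label of the reference whose spectrum best matches the measurement."""
    m = spectrum(measured)
    return max(references, key=lambda label: float(np.dot(m, spectrum(references[label]))))

fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
references = {
    "normal tissue path": np.sin(2 * np.pi * 400 * t),     # placeholder reference dataset
    "obstructed path": np.sin(2 * np.pi * 150 * t),         # placeholder reference dataset
}
measured = np.sin(2 * np.pi * 150 * t) + 0.2 * np.random.randn(t.size)
print(closest_reference(measured, references))   # expected: "obstructed path"
```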

In some embodiments, swallowable capsule 220 may include a controller 224 configured to control acoustic transducer 222.

In some embodiments, swallowable capsule 220 may include a capsule acoustic sensor 226 configured to detect sounds from within the subject’s body (e.g., sounds generated by the digestive system) and generate the capsule output signal. In some embodiments, swallowable capsule 220 may include a transmitter 228 to transmit the capsule output signal. Device 210 may receive the capsule output signal via its communication unit 214 (e.g., like communication unit 136 described hereinabove with respect to Figs. 1A, 1B and 1C). Processor 218 of device 210 (e.g., like processor 130 described hereinabove with respect to Figs. 1A, 1B and 1C) may treat the capsule output signal similarly to the output signal(s) being generated by its acoustic sensor(s) 212 (e.g., as described hereinabove with respect to Figs. 1A, 1B and 1C).

In some embodiments, controller 224 may control acoustic transducer 222 to continuously transmit sound signals. In some embodiments, controller 224 may control transducer 222 to transmit sound signals at specified time intervals. The specified time intervals may be, for example, predefined or dynamically updated.

In some embodiments, controller 224 may control acoustic transducer 222 to transmit sound signals when swallowable capsule reaches a target organ. The time of arrival to the target organ may be, for example, predefined based on typical digestion of the subject. In another example, controller 224 may control acoustic transducer 222 to transmit a specified signal indicating that the swallowable capsule 220 has reached the target organ.

In some embodiments, controller 224 may control acoustic transducer 222 to transmit a sound signal, acoustic sensor 226 of swallowable capsule 220 may receive the reflected sound signal, and controller 224 may determine the location of swallowable capsule 220 within the digestive system of the subject based on at least one of the transmitted or reflected sound signal. In some embodiments, controller 224 may control acoustic transducer 222 to transmit a sound signal indicating that swallowable capsule 220 is about to leave the digestive system of the subject. In some embodiments, one or more devices 210 placed externally along the gastrointestinal tract may monitor the real-time position of swallowable capsule 220 in the gastrointestinal tract.
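
The sketch below shows, under stated assumptions, one elementary way the transmitted/reflected sound could be turned into a range estimate: a time-of-flight calculation. The speed of sound in soft tissue (~1540 m/s) is a standard textbook value; the delays and the assumption that ranging is done this way at all are illustrative, since the source does not specify the localization algorithm.

```python
# Illustrative sketch: time-of-flight range estimates from acoustic delays.
SPEED_OF_SOUND_TISSUE_M_S = 1540.0   # approximate speed of sound in soft tissue

def one_way_range_m(emit_time_s, receive_time_s):
    """Range from the capsule to an external device for a one-way signal."""
    return (receive_time_s - emit_time_s) * SPEED_OF_SOUND_TISSUE_M_S

def echo_range_m(round_trip_delay_s):
    """Range to a reflecting structure from a capsule-received echo (two-way path)."""
    return round_trip_delay_s * SPEED_OF_SOUND_TISSUE_M_S / 2.0

print(round(one_way_range_m(0.0, 130e-6), 3), "m")   # 130 us delay -> ~0.20 m
print(round(echo_range_m(65e-6), 3), "m")            # 65 us echo  -> ~0.05 m
```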

Reference is now made to Fig. 3, which is a schematic illustration of a device 100 for detecting sounds from a subject’s body and an array 300 of acoustic sensors 320 connectable to device 100, according to some embodiments of the invention.

Array 300 may include a support 310 and multiple acoustic sensors 320 connected to support 310. Array 300 may include multiple acoustic waveguides (not shown), each for one of multiple acoustic sensors 320 (e.g., as described above with respect to Figs. 1A, 1B, 1C and 1D). Support 310 may be, for example, similar to support 110 (e.g., described above with respect to Figs. 1A, 1B, 1C and 1D). In some embodiments, support 310 of array 300 may be configured to be connected to support 110 of device 100. In some embodiments, support 110 of device 100 may be configured to be connected to support 310 of array 300. Acoustic sensors 320 of array 300 may be configured to be connected to electronic components of device 100 using a wired and/or wireless connection.

In some embodiments, acoustic sensors 320 may detect sounds of the same frequency range (e.g., the same wide frequency range or the same narrow frequency range). In some embodiments, some of acoustic sensors 320 may detect sounds of a different frequency range as compared to other acoustic sensors of acoustic sensors 320. For example, the frequency range of each of acoustic sensors 320 may be selected based on a specific organ or a subgroup of organs of the subject’s body to be sensed with the respective acoustic sensor. For example, a first acoustic sensor may be capable of detecting sounds from the subject’s heart and operate in a first frequency range of 20-200 Hz, and a second acoustic sensor may be capable of detecting sounds from the subject’s lungs and operate in a second frequency range of 25-1500 Hz. In some embodiments, the frequency ranges of acoustic sensors 320 may partly overlap with each other. In some embodiments, acoustic sensors 320 may be configured to detect sounds arriving from the same direction from within the subject’s body. In some embodiments, some of acoustic sensors 320 may be configured to detect sounds arriving from a different direction from within the subject’s body as compared to other acoustic sensors of acoustic sensors 320. In some embodiments, some of acoustic sensors 320 may have a different shape as compared to other acoustic sensors of acoustic sensors 320. In some embodiments, some of acoustic sensors 320 may be of a different type as compared to other acoustic sensors of acoustic sensors 320.
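
As a non-limiting illustration, the sketch below band-limits a sensor channel to the organ-specific frequency range given in the example above (heart: 20-200 Hz; lungs: 25-1500 Hz); the use of a fourth-order Butterworth band-pass filter is an assumption made for this example.

```python
# Non-limiting sketch: band-limit each sensor channel to its organ-specific
# frequency range. Filter type and order are assumptions; the sampling rate fs
# must exceed twice the upper band edge.
from scipy.signal import butter, sosfiltfilt

ORGAN_BANDS_HZ = {"heart": (20.0, 200.0), "lungs": (25.0, 1500.0)}

def band_limit(channel, fs, organ):
    low, high = ORGAN_BANDS_HZ[organ]
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, channel)
```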

Reference is now made to Figs. 4A-4C, showing further implementations of the acoustic sensor within the casing. According to some embodiments, as shown in Fig. 4A, a piezoelectric element may be in the form of a piezoelectric plate 420 and held within housing 410 (waveguide not shown here). The piezoelectric plate 420 may be circular in shape. The piezoelectric plate may be annular. According to some embodiments, the piezoelectric element may be supported only in sections and not along the entirety of its perimeter.

Fig. 1E shows a non-limiting example of the piezoelectric element 195, which may be shaped as a star whose corners rest on a bracket in the shape of an outer ring 190.

According to some embodiments, the piezoelectric element may be at a tension that optimizes a sensitivity of the piezoelectric element.

According to some embodiments, as shown in Fig. 4B, the acoustic sensor may include a microphone 430, which may be implemented as a hydrophone. In some embodiments, acoustic sensor 120 may include a piezoelectric element 420 as explained above. For example, the piezoelectric element may include a piezoelectric film or crystal, such as polyvinylidene fluoride (PVDF). In some embodiments, the acoustic sensor may include a supportive case (e.g., a housing). In some embodiments, the acoustic sensor may be provided without a supportive case.

According to some embodiments, the piezoelectric element may be included inside the supportive case or housing of the acoustic sensor. The housing may be configured to hold or otherwise support the piezoelectric element, for example to hold the piezoelectric element at a predefined tension. The housing may comprise an internal cavity, which may allow the piezoelectric element to be displaced (e.g., to vibrate) within the cavity.

According to some embodiments, as shown in Fig. 4A, a gel having desired acoustic properties may be used to enhance the acoustic coupling of the acoustic sensor and the acoustic waveguide to the subject’s body. For example, the gel may have an acoustic impedance similar to that of human tissue. The gel may displace air between the subject’s body and the acoustic sensor and acoustic waveguide, thereby creating a vacuum effect to improve signal acquisition. According to some embodiments, a plurality of gel pads 440A-440D may be included in housing 410 to couple piezoelectric element 420 and possibly microphone 430 to housing 410.
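
As a non-limiting numerical illustration of why an impedance-matched gel improves acoustic coupling, the short calculation below compares the fraction of acoustic energy reflected at an air/tissue boundary with that at a gel/tissue boundary; the impedance values are typical textbook figures assumed for this example, not values taken from this disclosure.

```python
# Non-limiting numerical illustration: energy reflected at a boundary between two
# acoustic impedances. The impedance figures are typical textbook values
# (assumptions), not values taken from this disclosure.
def reflected_energy_fraction(z1, z2):
    return ((z2 - z1) / (z2 + z1)) ** 2

Z_AIR = 413.0       # approx. acoustic impedance of air, in rayl
Z_TISSUE = 1.63e6   # approx. acoustic impedance of soft tissue, in rayl
Z_GEL = 1.6e6       # assumed impedance of an impedance-matched coupling gel, in rayl

print(reflected_energy_fraction(Z_AIR, Z_TISSUE))  # ~0.999: nearly all energy reflected without gel
print(reflected_energy_fraction(Z_GEL, Z_TISSUE))  # ~1e-4: nearly all energy transmitted with gel
```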

Aspects of the present invention are described above with reference to flowchart illustrations and/or portion diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each portion of the flowchart illustrations and/or portion diagrams, and combinations of portions in the flowchart illustrations and/or portion diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or portion diagram or portions thereof.

These computer program instructions can also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or portion diagram portion or portions thereof. The computer program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or portion diagram portion or portions thereof.

The aforementioned flowchart and diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each portion in the flowchart or portion diagrams can represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the portion can occur out of the order noted in the figures. For example, two portions shown in succession can, in fact, be executed substantially concurrently, or the portions can sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each portion of the portion diagrams and/or flowchart illustration, and combinations of portions in the portion diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

In the above description, an embodiment is an example or implementation of the invention. The various appearances of "one embodiment”, "an embodiment", "certain embodiments" or "some embodiments" do not necessarily all refer to the same embodiments. Although various features of the invention can be described in the context of a single embodiment, the features can also be provided separately or in any suitable combination. Conversely, although the invention can be described herein in the context of separate embodiments for clarity, the invention can also be implemented in a single embodiment. Certain embodiments of the invention can include features from different embodiments disclosed above, and certain embodiments can incorporate elements from other embodiments disclosed above. The disclosure of elements of the invention in the context of a specific embodiment is not to be taken as limiting their use in the specific embodiment alone. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in certain embodiments other than the ones outlined in the description above.

The invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.