

Title:
CONTEXT RESPONSIVE SENSING APPARATUS, SYSTEM AND METHOD FOR AUTOMATED BUILDING MONITORING AND CONTROL
Document Type and Number:
WIPO Patent Application WO/2020/206506
Kind Code:
A1
Abstract:
A context responsive sensing apparatus, comprising: a plurality of sensors that are each configured to output sensor data; a local data storage for storing the output sensor data; a processor implementing a context sensing module operable to: determine a contextual state of a surrounding environment utilising the sensor data output by the plurality of sensors; adjust one or more configuration parameters for the apparatus in response to determining that an adjustment action needs to be taken based on the determined contextual state.

Inventors:
KEIGHTLEY DAVID (AU)
Application Number:
PCT/AU2020/050363
Publication Date:
October 15, 2020
Filing Date:
April 14, 2020
Assignee:
ECOSPECTRAL PTY LTD (AU)
International Classes:
G05B15/02; G05B13/02; H04W4/33; H04W4/38
Domestic Patent References:
WO2016010529A1 (2016-01-21)
Foreign References:
US20160123618A1 (2016-05-05)
US20160123617A1 (2016-05-05)
US20180127001A1 (2018-05-10)
US20160091471A1 (2016-03-31)
Attorney, Agent or Firm:
ADAMS PLUCK (AU)
Claims:
THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1. A context responsive sensing apparatus, comprising:

a plurality of sensors that are each configured to output sensor data;

a local data storage for storing the output sensor data;

a processor implementing a context sensing module operable to:

determine a contextual state of a surrounding environment utilising the sensor data output by the plurality of sensors;

adjust one or more configuration parameters for the apparatus in response to determining that an adjustment action needs to be taken based on the determined contextual state.

2. A context responsive sensing apparatus in accordance with claim 1, further comprising a communication module configured to selectively communicate sensor data and/or context-based control instructions to a centralised control system over a communications network.

3. A context responsive sensing apparatus according to claim 1 or 2, wherein the adjustment determination additionally evaluates a preceding contextual state determined by the context sensing module.

4. A context responsive sensing apparatus according to any one of the preceding claims, wherein the context sensing module determines that an adjustment action needs to be taken if there is a predefined state change between the current contextual state and the preceding contextual state.

5. A context responsive sensing apparatus according to claim 1, wherein the context sensing module determines that an adjustment action needs to be taken if the current contextual state differs from an expected or baseline contextual state.

6. A context responsive sensing apparatus according to any one of the preceding claims, wherein the step of determining whether an adjustment action needs to be taken comprises performing at least one of the following forms of analysis: statistical analysis; Bayesian analysis; Neural network analysis.

7. The context responsive sensing apparatus according to any one of the preceding claims, further comprising sending a signal to a selected one or more neighbouring context responsive sensing apparatus, the signal being sent over the communications network and causing the neighbouring sensing apparatus to adjust one or more configuration parameters.

8. The context responsive sensing apparatus according to any one of the preceding claims, further comprising receiving external contextual state data from a remote source via the communications network and wherein the external contextual state data is evaluated by the context sensing module to determine the current contextual state.

9. The context responsive sensing apparatus according to any one of the preceding claims, wherein the context sensing module adjusts one or more signal processing parameters for sensor output data evaluation.

10. The context responsive sensing apparatus according to claim 9, wherein the context sensing module adjusts a filter applied to the signal output data.

11. The context responsive sensing apparatus according to claim 9, wherein the context sensing module adjusts one or more trigger parameters for triggering capture of data by the sensors.

12. The context responsive sensing apparatus according to claim 11, wherein one of the sensors comprises an audio sensor and wherein the adjustable triggering parameters for the audio sensor comprise one or more of: an intensity level and a timing for data capture.

13. The context responsive sensing apparatus according to any one of the preceding claims, wherein the context sensing module adjusts a type and/or amount of sensor data and/or contextual state data communicated to the centralised control system.

Description:
Context Responsive Sensing Apparatus, System and Method for Automated Building

Monitoring and Control

Technical Field

The present invention relates generally to the field of automated control systems for built environments and more particularly to context aware sensing apparatus for such automated control systems.

Background

Automatic building control systems are known to control many aspects of a building environment, including operation of lighting systems, safety systems, HVAC and humidity control and ventilation systems. Passive sensors located throughout the environment sense conditions and are controlled to periodically communicate sensed data to a central controller that subsequently evaluates the data to determine whether a control adjustment is required. Such units comprise temperature sensors, lighting sensors, motion sensors and the like. It is common to combine multiple sensors into a single sensing unit.

Traditional automatic building control systems typically receive and act on sensor data to maintain a steady physical state. For example, traditional systems monitor the temperature in a room and strive to maintain the temperature at some desired level.

If the central controller determines that the temperature has exceeded the desired level (i.e. based on an evaluation of the temperature sensor data), the controller may adjust the controls of a HVAC system to lower the temperature.

In recent times, advancements in automatic building control systems have seen the central controller programmed to predict behaviour based on sensor data communicated by the sensing units and take proactive action, where required, to maintain a steady state for the environment. While such advancements undoubtedly improve building control, current sensor and sensing-control system design is limited in its ability to scale, largely due to the inability of the central controller to efficiently and responsively handle the large volume of raw data that is output by the network of sensors. Another downside of such systems is the large amount of bandwidth that is required to communicate raw data from the sensor level layer to the central controller layer, which can be costly in terms of both processing power and energy used.

It would be advantageous if there were provided a sensor control system that could be readily scaled; that was able to efficiently and intelligently act on sensed data and which provided a better use of bandwidth for communicating event data between layers of the system.

Summary of the Invention

In accordance with a first aspect there is provided a context responsive sensing apparatus, comprising: a plurality of sensors that are each configured to output sensor data; a local data storage for storing the output sensor data; a processor implementing a context sensing module operable to: determine a contextual state of a surrounding environment utilising the sensor data output by the plurality of sensors; adjust one or more configuration parameters for the apparatus in response to determining that an adjustment action needs to be taken based on the determined contextual state.

In an embodiment the apparatus further comprises a communication module configured to selectively communicate sensor data and/or context-based control instructions to a centralised control system over a communications network.

In an embodiment the adjustment determination additionally evaluates a preceding contextual state determined by the context sensing module. In an embodiment the context sensing module determines that an adjustment action needs to be taken if there is a predefined state change between the current contextual state and the preceding contextual state.

In an embodiment the context sensing module determines that an adjustment action needs to be taken if the current contextual state differs from an expected or baseline contextual state.

In an embodiment the step of determining whether an adjustment action needs to be taken comprises performing at least one of the following forms of analysis: statistical analysis; Bayesian analysis; Neural network analysis.

In an embodiment the apparatus further comprises sending a signal to a selected one or more neighbouring context responsive sensing apparatus, the signal being sent over the communications network and causing the neighbouring sensing apparatus to adjust one or more configuration parameters.

In an embodiment the apparatus further comprises receiving external contextual state data from a remote source via the communications network and wherein the external contextual state data is evaluated by the context sensing module to determine the current contextual state.

In an embodiment the context sensing module adjusts one or more signal processing parameters for sensor output data evaluation. In an embodiment the context sensing module adjusts a filter applied to the signal output data.

In an embodiment the context sensing module adjusts one or more trigger parameters for triggering capture of data by the sensors. In an embodiment one of the sensors comprises an audio sensor and wherein the adjustable triggering parameters for the audio sensor comprise one or more of: an intensity level and a timing for data capture.

In an embodiment the context sensing module adjusts a type and/or amount of sensor data and/or contextual state data communicated to the centralised control system.

In accordance with yet another aspect there is provided a computer program including at least one instruction which, when implemented by a processor of a computing apparatus, causes the computing apparatus to implement the apparatus as described above.

Brief Description of the Drawings

Features and advantages of the present invention will become apparent from the following description of embodiments thereof, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic illustration of a system for carrying out embodiments of the present invention;

Figure 2 is a schematic of the internal components of the sensing unit depicted in Figure 1; and

Figures 3 and 4 illustrate the steps performed for determining and acting on context data, in accordance with an embodiment of the invention.

Detailed Description of an Embodiment

With reference to Fig. 1, there is shown a schematic of a building control system 100 in accordance with an embodiment of the invention. The building control system 100 may be configured for use within any type of built environment where it is desirable to provide context-based control of various aspects of building operation. For example, the building control system 100 may be configured for use in controlling heating, ventilation and/or air conditioning ("HVAC") systems in a commercial high-rise building. Equally, the system 100 may be implemented in a small residential premise for controlling a camera security system. For ease of illustration, the following description assumes the system 100 is configured for a typical multi-floor office environment.

The building control system 100 comprises a network of intelligent context-responsive sensing apparatus (hereafter "sensing units 102") that are strategically located throughout the office space and which are each communicable with a central controller 104 via a communications network 106. It will be understood that the communications network 106 may comprise any suitable wired and/or wireless network that allows data to be communicated between the sensing units 102 and central controller 104. According to the illustrated embodiment, the communications network 106 comprises a LAN. Further, the sensing units 102 and central controller 104 may be configured as a mesh network using techniques well understood in the art. The central controller 104 is operable to control various building control systems (e.g. HVAC, access control systems, etc.) based on contextual data reported by the sensing units 102 and/or control instructions received therefrom. As will become evident from subsequent paragraphs, the sensing units 102 perform multi-modal adaptive sensing that allows a deep knowledge of the built environment (in this case a multi-floor office space) to be developed for determining context, including aspects of outdoor and indoor activity, occupancy and behaviour. Importantly, each of the sensing units 102 is adapted to adjust the way it senses, evaluates and/or reports on physical events responsive to determined changes in the built environment. Further, the sensing units 102 are communicable with each other over the communications network 106 for determining a jointly sensed context, which in turn allows contextual data to be acted on and passed on in an optimised manner.
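
By way of non-limiting illustration only, a unit-to-unit or unit-to-controller report of this kind might carry a determined contextual state rather than raw samples. The following Python sketch assumes illustrative field names (unit_id, state, confidence, timestamp) and a JSON encoding; none of these details are specified by the present description.

    from dataclasses import dataclass, asdict
    import json
    import time

    @dataclass
    class ContextReport:
        """Illustrative payload a sensing unit 102 might send to neighbours or
        the central controller 104: a determined contextual state, not raw data."""
        unit_id: str          # assumed identifier for the reporting unit
        state: str            # e.g. "occupied", "unoccupied", "anomaly"
        confidence: float     # 0.0 to 1.0, how certain the context module is
        timestamp: float      # epoch seconds when the state was determined

        def to_json(self) -> str:
            return json.dumps(asdict(self))

    # Example: a unit reporting that its space appears unoccupied.
    report = ContextReport(unit_id="floor3-east-07", state="unoccupied",
                           confidence=0.87, timestamp=time.time())
    print(report.to_json())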

Sensor Unit Configuration

In more detail, and with additional reference to Fig. 2, each sensing unit 102 comprises an intelligent controller 202 having a processor (in the illustrated embodiment, taking the form of a microcontroller) that implements a context sensing module 202a, in turn implementing one or more context sensitive sensing algorithms, based on program code and sensor data stored in a local data storage 204 (in this case SRAM). It will be understood that the context sensing module may be implemented as software, hardware or a combination of the two, depending on the desired implementation.

The sensing unit 102 comprises a plurality of sensors 206a to 206n that are each configured to output data representative of sensed physical events in a defined radius. According to the illustrated embodiment, the sensing unit 102 implements an audio sensor 206a and preferably a motion sensor 206b. This combination of sensors allows for real-time context sensitive processing to be carried out at a local level, while keeping manufacturing cost to a minimum. It also minimises the risk of a false positive or negative for the properties measured.

To further enhance context-based sensing, the sensing unit 102 may additionally incorporate one or more of an on-board temperature sensor, light sensor, pressure sensor and vibration sensor. The sensor unit 102 may additionally be communicable over the network with other dedicated sensors for sensing context, including, but not limited to, a chemical sensor, CO2 sensor, image sensor, motion imagery sensor, radioactivity sensor and EMR sensor.

In more detail, the audio sensor 206a comprises an omni-directional microphone that is configured to sense sound and has a sensitivity of -70 dBm (though this may be greater or less, depending on the application). The audio sensor 206a may be configured to detect audible sound, as well as ultrasonic sound. As shown in Figure 2, the output signal data is passed through a filter 210, amplifier 212 and analogue to digital converter 214 before reaching the processor 202 for evaluation. The motion sensor 206b takes the form of a passive infra-red (PIR) sensor configured to sense motion within a 4-12 metre range. It will be understood that this range could be extended, if desired, with radar (SLAR) and other suitable long-distance motion sensing technologies. The motion sensor output data is passed through an amplifier 212 and A/D converter 214 before entering the processor 202 for evaluation.

On-board temperature sensing may be achieved by either a thermocouple or resistance temperature detector (RTD) sensor. Light may be sensed by either a photosensor or light dependent resistor (LDR). All data captured by the sensors is recorded with a time reference (determined by a clock implemented by the processor (202)), which includes the time of day, day of week and time of year (season).
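
The time reference attached to each reading (time of day, day of week and season) could be represented in many ways; the following is one possible sketch using Python's standard library only. The record structure, field names and the southern-hemisphere season boundaries are assumptions for illustration and are not specified in this description.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    def season_of(month: int) -> str:
        # Assumed southern-hemisphere seasons; the description only refers to
        # "time of year (season)" without defining boundaries.
        return {12: "summer", 1: "summer", 2: "summer",
                3: "autumn", 4: "autumn", 5: "autumn",
                6: "winter", 7: "winter", 8: "winter",
                9: "spring", 10: "spring", 11: "spring"}[month]

    @dataclass
    class TimedReading:
        sensor: str        # e.g. "audio", "motion", "temperature"
        value: float
        time_of_day: str   # "HH:MM:SS"
        day_of_week: str   # "Monday" ... "Sunday"
        season: str

    def record(sensor: str, value: float, now: Optional[datetime] = None) -> TimedReading:
        now = now or datetime.now()
        return TimedReading(sensor, value,
                            now.strftime("%H:%M:%S"),
                            now.strftime("%A"),
                            season_of(now.month))

    print(record("temperature", 22.4))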

Context Sensitive Processing

As previously stated, an advantage of the sensor units 102 as described herein is that they are adapted to adjust the way they sense, evaluate and/or report on physical events responsive to predefined changes in the built environment. In order to determine predefined changes, each sensor unit 102 maintains information on noise patterns and motion patterns, as well as temperature and light patterns where applicable.

The information may be representative of short-term patterns and long-term patterns. Each unit 102 also implements algorithms for determining expected signal behaviour based on the patterns, average signal behaviours and actions to be taken for both expected and unexpected events. Thus, the system is not merely acting as a collection of different sensors reporting independently, but rather as a collective, where the result of one sensor may impact the behaviour of other onboard sensors (or neighbouring sensor unit's onboard sensors), as well as how the sensor data collected by those sensors is processed and subsequently reported on.

In more detail, the context sensing module 202a implements one or more algorithms that: determine triggers for signal capture; capture, pre-process (where applicable) and store data; evaluate the captured data for context; and control communication of evaluated data. Examples for each of the above functions will now be described. It will be understood that sound is a core sensing capability (and is both driven by and drives other sensing) and allows conclusions to be made from sensed events.

Accordingly, the following description focuses on sound as being the primary triggering signal and the signal that has the greatest influence on controller-based adjustments and reporting decisions.

Triggers for signal capture may, for example, be based on: audio parameters (for example, pressure and noise levels, sound quality, etc.); determining a predefined sensed event from non-audio sensor data (e.g. predefined motion detected, predefined light level detected, etc.); instructions or context data communicated by another sensor unit over the network; or a periodic schedule.
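
By way of non-limiting illustration, the trigger types listed above could be combined as in the following sketch. The thresholds, the 60-second period and the parameter names are assumptions made for illustration only.

    import time

    def should_capture(audio_level_db: float, motion_detected: bool,
                       network_instruction: bool, last_capture: float,
                       period_s: float = 60.0, audio_threshold_db: float = 45.0) -> bool:
        """Return True if any configured trigger for signal capture fires.

        Mirrors the trigger types described above: audio parameters, a predefined
        non-audio sensed event, an instruction/context message from another unit,
        or a periodic schedule. Thresholds are illustrative assumptions."""
        if audio_level_db >= audio_threshold_db:      # audio-parameter trigger
            return True
        if motion_detected:                           # predefined sensed event
            return True
        if network_instruction:                       # instruction from another unit
            return True
        if time.time() - last_capture >= period_s:    # periodic trigger
            return True
        return False

    # Example: motion alone is enough to trigger a capture.
    print(should_capture(30.0, True, False, last_capture=time.time()))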

Once captured, the sensor signal output is time stamped and stored in memory 204. One or more pre-processing steps may be performed on the sensor signal output by the sensor prior to context evaluation by the processor 202. For example, the signal may be filtered using an analogue or digital filter. The filter (as well as a specific sampling rate) may be chosen based on an evaluation of the trigger type. For example, if the trigger was a motion-based trigger, an initial evaluation of the data may be carried out to determine dominant noise frequencies and noise amplitude for selecting the most appropriate filter. If no subsequent noise was detected, the filter and sampling rate may then be set to examine the human voice band (based on the assumption that a human has entered the space and will likely start talking soon). On the other hand, if the trigger was based on a predefined timing schedule, then the filters may initially be set for a wideband periodogram (to search for any new background noise) and then, if noise is detected based on a subsequent evaluation of the sample by the processor 202, a narrowband filter may be applied for a more optimised evaluation of the newly detected noise. The filter may also be dynamically adjusted based on a real time evaluation of the audio data.

Subsequent signal processing of the captured sensor data by the context sensing module 202a is performed to determine if there have been changes in the environmental context that require on-board and/or external action. As mentioned above, an evaluation of short-term and long-term sensor output data stored by the sensor 102 may be referenced to determine if the current contextual state requires on-board or external action. More particularly, the sensing module 202a may implement an algorithm that keeps a running average or state of a specific sensor reading/variable, e.g. a time since the last motion detected, rate of change of light, temperature and sound, etc. The short term history may influence how the sensor unit 102 reacts from this indicator of context. For example, sounds in a burning building or a building with abnormal temperature should be measured differently to sounds in a healthy building. In this case, the algorithm may be programmed to continuously record and report audio to the central controller 104 when the temperature is changing very rapidly (based on temperature sensor output) compared to when the temperature is closer to steady state. Further, the algorithm can evaluate both short and long term history to determine when the sensor configuration and reporting schedule should be changed back to normal operation.
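
The running average and short-term rate of change described above could, for example, be maintained as in the following sketch, which also illustrates the policy of escalating audio reporting while the temperature is changing rapidly. The smoothing factor and the 2-degree rate threshold are assumptions made for illustration only.

    class RunningState:
        """Keeps a running average and rate of change for one sensor variable,
        as a sketch of the short-term history described above."""
        def __init__(self, alpha: float = 0.1):
            self.alpha = alpha          # smoothing factor (assumed value)
            self.avg = None
            self.last = None
            self.rate = 0.0

        def update(self, value: float) -> None:
            self.rate = 0.0 if self.last is None else value - self.last
            self.avg = value if self.avg is None else (
                self.alpha * value + (1 - self.alpha) * self.avg)
            self.last = value

    temp = RunningState()
    for reading in (21.0, 21.2, 24.5, 29.8):   # rapid rise in the last samples
        temp.update(reading)
        # Illustrative policy from the text: report audio continuously while the
        # temperature is changing rapidly; the 2-degree threshold is an assumption.
        mode = "continuous audio reporting" if abs(temp.rate) > 2.0 else "normal"
        print(f"avg={temp.avg:.2f} rate={temp.rate:+.2f} -> {mode}")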

As a further example, it will be understood that context can play an important role in setting audio parameters. For example, listening for machine noise may require recording at ultrasonic ranges, while monitoring human behaviour requires recording at the low end of the spectrum, from 85 Hz to 7500 Hz (human voice range plus higher harmonics). Thus, for example, signal processing may be used to determine or predict what type of noise source is being detected and an appropriate filter thereafter selected by the controller.

It will be understood that action taken by the sensor unit 102 can comprise reporting the current context state (or representative data) to the central controller 104 for taking some form of action. For example, if the change in environment context is that there are no longer any people in the space, then the central controller 104 may turn off the HVAC for the space. Action may also comprise adjusting the behaviour of one or more onboard sensors for more optimal sensing. For example, the sensor unit 102 may adjust a sensitivity setting for a particular sensor. Another action comprises applying a filter or adjusting a sample rate for recorded sensor data based on a determined context to yield more accurate information by which to determine contextual state. For example, one of the following types of filter/sample rate selections may be implemented (an illustrative sketch follows this list):

i. Analog low pass, band pass filter selection

ii. Sample rate selection according to bandwidth

iii. Discrete Wavelet Filter Denoising (optional, depending on signal quality)

iv. Hamming Window on audio sample (improves FFT), or Hanning or Blackman filter depending on the signal properties to be extracted/examined

v. Spectrogram (if specific audio (including voice) qualities pursued)

vi. Dominant Frequencies

vii. Noise level (in dB)

viii. Periodogram
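
The items above map onto standard signal-processing building blocks. The following sketch, assuming NumPy and SciPy are available on the processing hardware, illustrates items i, ii, iv, vi and vii: a band-pass filter chosen for an assumed human-voice band, a Hamming window before the FFT, extraction of the dominant frequency, and a noise level in dB. The sample rate, cut-off frequencies and reference level are assumptions for illustration only.

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 16000                      # assumed sample rate chosen for the voice band (item ii)
    t = np.arange(fs) / fs
    signal = 0.3 * np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(fs)  # test tone + noise

    # i. Band-pass filter selection: assumed human-voice band of roughly 85-7500 Hz.
    sos = butter(4, [85, 7500], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, signal)

    # iv. Hamming window before the FFT, vi. dominant frequency.
    windowed = filtered * np.hamming(len(filtered))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1 / fs)
    dominant = freqs[np.argmax(spectrum)]

    # vii. Noise level in dB relative to an assumed full-scale reference of 1.0.
    level_db = 20 * np.log10(np.sqrt(np.mean(filtered ** 2)) + 1e-12)

    print(f"dominant frequency ~{dominant:.0f} Hz, level ~{level_db:.1f} dB")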

With additional reference to Figure 3, there is shown a process flow implemented by the context sensing module 202a that can draw on both local context data (i.e. derived by the sensor) as well as context data provided by higher system levels (e.g. at controller level and remote source). More particularly, the context sensing module 202a implements a probability hypothesis and testing engine 202b that can use both local and higher-level context data for testing hypotheses such as, for example:

Is this room occupied?

Is there danger here?

Is the sensed audio spoken by a man or a woman?

What frequency range should the audio sensor be configured to sense?

Figure 4 illustrates various forms of analysis that can be carried out by the engine 202b for testing such hypotheses.
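
By way of non-limiting illustration, an engine of this kind could be organised as a registry of hypotheses, each evaluated against the currently available local and higher-level context data. The class below is a sketch only; the hypothesis names, the placeholder tests and the context field names are assumptions, and real tests would use the statistical, Bayesian or neural-network analysis illustrated in Figure 4.

    from typing import Callable, Dict

    class HypothesisEngine:
        """Sketch of a probability hypothesis and testing engine: each
        hypothesis maps to a test that turns context data into a probability."""
        def __init__(self):
            self.tests: Dict[str, Callable[[dict], float]] = {}

        def register(self, name: str, test: Callable[[dict], float]) -> None:
            self.tests[name] = test

        def evaluate(self, context: dict) -> Dict[str, float]:
            return {name: test(context) for name, test in self.tests.items()}

    engine = HypothesisEngine()
    # Placeholder tests for two of the example hypotheses listed above.
    engine.register("room occupied", lambda c: 0.9 if c["motion_events"] > 0 else 0.1)
    engine.register("danger here", lambda c: 0.8 if c["temp_rate_c_per_min"] > 2 else 0.05)

    print(engine.evaluate({"motion_events": 3, "temp_rate_c_per_min": 0.1}))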

Returning to Figure 3, the illustrated process flow is particularly configured for evaluating local time-based audio recorded by an audio sensor. It will be understood, however, that the process flow could be configured to evaluate local time-based data output from other sensors (e.g. light, temperature, motion, etc.), depending on the desired configuration. The first stage of the process is the same as that previously described. That is, one or more algorithms are implemented by the context sensing module 202a to determine a predefined state change in context based on local sensor data (in this case digitised audio data). If a predefined state change has been determined, then the appropriate action is taken.

However, if there is an anomaly or unrecognisable change in the contextual state, or if the current context is indeterminable, the module 202a may perform further analysis, including factoring in relevant statistics and/or higher-level system data for evaluation by the engine 202b using time-based signal processing (stage 2). In addition, in a further path of the second stage, the engine 202b may perform a frequency domain evaluation of the audio data if it is determined that such an evaluation is relevant. This may include, for example, performing gender analysis, frequency breakdown for machines, spectrogram representation, among others. At each of these stages, other prior known data/knowledge can be introduced. This prior known data can be historic data stored locally by the sensor, or data supplied from a remote source or by higher system levels. The prior known data may provide further context that can guide the conclusions made from the locally derived data and is critical to Bayesian and other statistical processing, as will now be described with regard to some example scenarios.

Health

In this scenario, the sensor system is installed in a psychiatric ward of a hospital. Each sensor 102 is configured with audio, light and motion sensors and maintains statistics on board. The individual sensors 102 have each been monitoring the ward for one month and each sensor has recorded and established patterns that are representative of expected patient behaviour. A doctor in the facility has recently prescribed a potentially violent patient with a lower dose rate of an anti-psychotic. Normal kinetic activity and noise levels have been established for the patient. Typical stage 1 analysis is performed by the module 202a of a particular sensor 102 and some aberration or manual request for closer monitoring leads to stage 2 analysis being invoked.

Depending on the output of stage 2, the sensor 102 may, for example, change its settings to capture the spectrogram of the noise to analyse stress levels in the room. Memory partitioning and audio frequency range with digital filtering may be altered to maximise the quality and value of the signal captured. The following are examples of the various evaluations that may be performed by the engine 202b in stage 2:

a. Statistical Analysis: (simple statistical difference)

i. Higher levels of kinetic energy (movement) are detected and/or are reported.

If a patient becomes more agitated the sensor is more likely to record a higher number of motion-events/time interval. In this case the sensor recognises the higher number of motion-events as a predefined state change and is programmed to change the time interval for motion evaluation to provide more detailed focus.

ii. Audio data may indicate an increased noise level in the room. The significance of noise level increase can be challenging to establish and can be marked for further analysis by the module 202a.

iii. Student's T-Test: the engine may utilise a t-test or similar to flag (i.e. for subsequent further evaluation) small but significant changes in a signal mean.

iv. Immediate application of digital filtering of the audio signal to focus on critical understanding of what the noises in the room are, balancing that with allowing motion sensing to continue to track physical activity. The sensor can update its settings to record longer or shorter periods of time to allow deeper analysis. Furthermore, a loud noise can be further examined by Bayesian analysis as discussed below.

b. Bayesian Analysis: (uses prior knowledge, for example, a new dosage rate date-time noted and sent to the sensor for input during stage 2). In this case, the engine 202b may use Bayesian probability analysis to determine if there is cause for alarm:

i. Using prior knowledge and hypothesis testing, compute the posterior probabilities and likelihoods given medication, behaviour, audio analysis and motion analysis.

ii. Test using separate and combined motion and audio input on the posterior P(psychotic event | data)

iii. The data (medication change) is critical prior knowledge provided by higher levels of the system and tested at the sensor level.

iv. Sensor level settings adapt to maximise quality of calculation and knowledge gained.

c. Neural Network: Acting as a perceptron, the engine 202b may take local stimulus as well as stimulus from nearby sensors into a summing and activation function, which serves two functions:

i. Helps determine the weighted response from local and nearby functions for local response and may trigger communication of results to higher system levels;

ii. Provides training data for the neural network system to recognise a psychotic episode more accurately in the future.
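
As a non-limiting illustration of item a.iii above, a Student's (here Welch's) t-test could be used to flag a small but significant shift in a signal mean, such as the room noise level, for further evaluation. The sketch below assumes SciPy is available; the baseline and recent sample values, window sizes and significance threshold are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    baseline = rng.normal(40.0, 2.0, size=200)   # assumed baseline noise levels (dB)
    recent = rng.normal(41.5, 2.0, size=50)      # slightly louder recent window

    # Welch's t-test (unequal variances); a small p-value flags the window
    # for the further evaluation described in item a.iii.
    t_stat, p_value = stats.ttest_ind(recent, baseline, equal_var=False)
    if p_value < 0.01:                           # significance threshold is an assumption
        print(f"flag for further analysis (p={p_value:.4f})")
    else:
        print(f"within normal variation (p={p_value:.4f})")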

Safety

In this example scenario, applying adaptive sensing and analysis at the local and sensor level is valuable for both improved knowledge at higher system levels as well as the ability to react locally to potential safety issues rather than waiting for a slower, less intelligent system to respond. The ability of a system to self-configure in response to stimulus could be the difference between life or death. By way of example, an individual sensor 102a may determine an increase in temperature that is outside of expectations. The following are examples of the various evaluations that may be performed by the engine 202b:

a. Statistical Analysis:

i. Using classical statistics, the engine 202b may determine that the change in temperature is abnormal and that the rate of change is also potentially an issue.

Based on this identified state change, the sensor may immediately change the rate of temperature measurements so a more detailed picture can be captured. If this is a fire, then the rate of temperature change is critical. If the cause is a HVAC or window fault, that can be tested along with light levels (blinds open). Immediate actions are obvious.

b. Bayesian Inference:

Can be used to determine the probability of a fire given the available data. This depends on prior information on fires that can be provided by the system to the sensors. A Bayesian inference can then be set up and the probability tested. Since the prior has already been supplied by higher levels, sensor levels can be adjusted to vary the input to the engine 202b in real time. For example, the sensor levels can be adjusted to concentrate on the human voice range to see if there are people in trouble. Furthermore, a spectral analysis of the audio data may be critical to understand what is burning and how it's burning (explosive, or slower), which data can be fed into the engine 202b.

c. Neural Networks:

Neural networks perform well in learning and then applying lessons in identifying the nature of an event. There is a training phase and a filtering phase. Both phases can be assisted by adapting sensor parameters to make training more effective as well as making the resulting event filters more powerful.
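
The statistical branch of this safety scenario (flagging an abnormal temperature and rate of change, then sampling more frequently) could look like the following sketch. The expected mean, standard deviation, abnormality thresholds and sampling intervals are assumptions made purely for illustration.

    def check_temperature(history, new_reading, expected_mean=22.0, expected_std=1.0,
                          interval_s=60.0):
        """Sketch of the safety scenario's statistical branch: flag an abnormal
        temperature and rate of change, then shorten the sampling interval so a
        more detailed picture can be captured."""
        rate = new_reading - history[-1] if history else 0.0
        z = (new_reading - expected_mean) / expected_std
        abnormal = abs(z) > 3 or abs(rate) > 2.0     # thresholds are assumptions
        if abnormal:
            interval_s = 5.0                          # sample much more frequently
        history.append(new_reading)
        return abnormal, rate, interval_s

    history = [22.1, 22.3]
    print(check_temperature(history, 27.9))   # abnormal jump -> shorter interval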

Energy

A powerful capability of the system as described herein is the ability to perform near real-time demand management that considers occupancy, grid demands, building needs and local PV-Battery states. Balancing these criteria and having a building that responds both to human requirements and energy constraints benefits significantly from smart sensing and adaptive controls. For example, it may be advantageous to turn off the power in a hotel room, but this can be challenging to positively determine. Take the scenario where two people have entered the bedroom and get under the blankets but one leaves while the other sleeps. As persons skilled in the art will appreciate, conventional sensor systems are unable to determine that there is still one person in the room and therefore will typically conclude that the room is in an unoccupied state, which is incorrect. In contrast, the present sensor system may advantageously scan for human voice range sounds and detect small changes in noise levels to establish that there is still one person in the room based on the following analysis:

a. Statistical Analysis:

Occupancy is a key driver of energy consumption, and occupancy patterns are defined by good input from the sensor network. If the person left in the room breathes heavily, or writes at a desk, types on a keyboard, etc., the noise mean value will change slightly, but the noise variance value will change significantly. In a transition state, this level of noise may need to be seen more than once before the state changes from undetermined to occupied. The comfort and safety of the remaining occupant is ensured. Conversely, the processor can enter stages 1 and 2 and, while there is one loud noise external to the apartment, no pattern is established. After some predefined timeout, the apartment shuts down and energy is saved. The sensor can then switch back to normal operating mode and move compute and storage resources to a more balanced operating mode, giving more time to temperature, light and motion measurements.

b. Bayesian Inference:

Using Bayesian inferencing, the occupancy algorithm can become more advanced by asking a straightforward posterior probability question: P(occupied | data), the probability that the room is occupied or not occupied given the data available. In this example, the likelihoods are P(data | occupied) and P(data | unoccupied). Other data may also be fed into the engine 202b, including: noise amplitude, frequency, spectrum and time; P(occupied)(t); P(unoccupied)(t); P(data).

c. Neural Networks/Markov Models: Once again, a NN model is a powerful one for interpreting stimulus or input into sensors. The resulting power control would be the same, but instead of using statistical or probability techniques, the NN approach may use a deep NN fed by sensor input. That input may be collected across many sensors 102 for higher level NN training and actuating, but would also involve the ability to take multiple sensor inputs to each "neuron" based in a sensor 102. The sensor input (audio, temperature, light and motion) would be weighted and feed the occupied/not occupied conclusion. The results of this may lead to local sensor tuning as well as high level NN conclusions based on hundreds or thousands of sensor readings.
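
The posterior question posed in section b above can be written directly with Bayes' rule. The sketch below is illustrative only; the prior and the two likelihood values are assumptions chosen for the hotel-room example, not values taken from this description.

    def p_occupied_given_data(prior_occupied, p_data_given_occupied, p_data_given_unoccupied):
        """P(occupied | data) via Bayes' rule, as posed in section b above."""
        evidence = (p_data_given_occupied * prior_occupied
                    + p_data_given_unoccupied * (1 - prior_occupied))
        return p_data_given_occupied * prior_occupied / evidence

    # Illustrative numbers: a late-evening prior of 0.5; the observed data is faint
    # voice-band noise with a small variance shift, which is much more likely if
    # someone is still in the room.
    posterior = p_occupied_given_data(prior_occupied=0.5,
                                      p_data_given_occupied=0.6,
                                      p_data_given_unoccupied=0.05)
    print(f"P(occupied | data) ~ {posterior:.2f}")   # ~0.92, so keep the room powered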

Embodiments as described herein are also advantageously operable to monitor and act on machine failures in an industrial/building control context. Machine failures typically follow patterns. Some are well known, but predicting machine failure is critical to minimising down-time, valuable for scheduling repair and getting the most out of equipment. Sensor units 102 and controllers 104 as described herein can make a building highly adaptive to local energy (battery and PV) as well as wider power GRID needs and the critical interaction between these two. Efficiency and maintenance of building motors and fans is critical to keeping costs down and efficiency up. In broad terms, the sensors 102 and controller 104 work together to jointly compensate for inefficiencies in building systems, such as HVAC, and help pinpoint those failures, weaknesses and other issues.

In a particular embodiment, the sensor units 102 are each configured to determine (over time) normal operating data, including temperature characteristics, noise characteristics and potentially other data (CO2 levels, ozone levels, audio spectrogram, etc.). If, during routine sampling, one of the sensor units 102 detects a significant difference to normal operation, it can perform deeper level analysis of the signal, including:

• Filtering audio (digital or analog) to examine specific areas of interest in the audio spectrum;

• Invoking higher level analysis, including Bayesian and NN pattern analysis, to examine the probability of failure given the new information.

• Informing higher levels (e.g. at central controller 104 level) of the system 100 of the potentially pending issue and altering sensor parameters and storage use for better spectrogram and frequency analysis.

In a particular embodiment, one sensor can be optionally designated to monitor that signal in detail and look for matching temperature changes and, if installed, other sensor changes (CO2, etc). The system 100 may be configured to track when all equipment it monitors and/or controls is installed and how much that equipment is used. All equipment has both MTBF estimates and lifetime expectancies (hrs). As this time approaches, the sensor unit 102 responsible for monitoring that device will be notified. If notified, the sensor 102 may be programmed to modify its analysis processing to include an increased probability of failure due to age and MTBF.

For example, the relevant sensor unit 102 may begin monitoring information around the equipment more frequently and in designated frequency ranges. If an anomaly is detected, the sensor unit 102 will inform neighbouring sensor units 102 as well as higher system levels.
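
The age and MTBF adjustment described above could, by way of illustration only, be folded into the failure prior used during analysis. The exponential wear-out model, the 50% cap on the age contribution and the example figures below are assumptions for illustration, not part of this description.

    import math

    def failure_prior(hours_in_service: float, mtbf_hours: float,
                      base_prior: float = 0.01) -> float:
        """Sketch: raise the prior probability of failure as the equipment
        approaches its MTBF, using a simple exponential wear-out assumption."""
        age_factor = 1 - math.exp(-hours_in_service / mtbf_hours)
        return min(1.0, base_prior + (1 - base_prior) * age_factor * 0.5)

    # A fan with a 40,000 hour MTBF, notified at 35,000 hours of use:
    prior = failure_prior(35_000, 40_000)
    print(f"adjusted failure prior ~ {prior:.2f}")   # well above the 0.01 baseline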

If required, the sensor units 102 can instruct a local interconnecting power device to immediately cut power to the device for safety. The units 102 can be configured to monitor key sensor data such as audio, temperature and light for fire and safety issues as well as report more frequently to higher system levels.

After a failure, the sensor units 102 may be programmed to monitor human activity and noise during repair, then start monitoring the space for normal operation. The monitoring will be more active to make certain the repair was successful and the new equipment is running within specification. By tracking equipment closely, the building will run more efficiently and safely.

Embodiments described herein provide a sensor system that is readily scalable, responsive and reliable. In contrast to conventional sensor systems and control architectures (i.e. where the analysis and control are centralised while sensors deliver raw data over communications networks to the central control), the present invention provides an intelligent sensor that can make onboard decisions for parameter setting and reporting. Only key (time stamped) information is passed up to higher levels of the system (e.g. controller 104 and above), thereby using much less bandwidth and providing higher level knowledge to the next layer up rather than raw data.

Embodiments thus provide for:

• better scalability,

• faster reaction time to sensed events,

• local autonomy and semi-autonomy for greater reliability,

• better use of bandwidth, leading to lower power needed for the comms network,

• more relevant and valuable knowledge and data to be passed up the system, reducing compute loads at higher levels and leading to cheaper, faster systems,

• less system storage needed, reducing system cost and lowering energy used by the system.

It will be understood from the above description that the network of sensor units 102 (which may be configured as a mesh network) allows for near sound properties to be evaluated. Further, motion events detected and reported by neighbouring sensors can be used to predict behaviours and control the operation of neighbouring units, e.g. to change audio recording settings for the sensor before a person enters the range of those neighbouring units. In addition, priorities can be set according to location and nearby meshed sensors. This enables sensors to work as a group rather than wholly isolated entities.

Across the network, it will be understood that the more relevant context data that is collected and reported to the central controller, the better the overall outcome in terms of effectively controlling building operations. As will be evident from the above description, each layer adds value and acts on data as it is collected, processed and packaged. System settings, prior knowledge and control flow downward through the system, originating at any level as appropriate, while information, data and some knowledge may flow upwards through the system. For example, at the building level, the system may determine that there is an unpredicted flow of people from one part of the building toward another. The sensing and control system can be made aware of the flow and switch from an intensive operation processing audio to supporting HVAC, traffic flow and monitoring short time noise and human voice range audio. According to such a scenario, control and information affecting sensing and control modalities flow downwards, whilst data and conclusions from data and signals processed in the sensor flow up. This repeats from sensor to cloud and back.

It will be understood that for a built environment, context can be extensive.

Embodiments break context down to logical layers, concentrically centred on each sensor unit and expanding to incorporate external world-wide events. In this regard, each sensor unit 102 is configured to receive context data from one or more external sources, with such data potentially impacting on the sensor unit's interpretation of local contextual events, depending on the algorithms.

While the invention has been described with reference to the present embodiment, it will be understood by those skilled in the art that alterations, changes and improvements may be made and equivalents may be substituted for the elements thereof and steps thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt the invention to a particular situation or material to the teachings of the invention without departing from the central scope thereof. Such alterations, changes, modifications and improvements, though not expressly described above, are nevertheless intended and implied to be within the scope and spirit of the invention. Therefore, it is intended that the invention not be limited to the particular embodiment described herein and will include all embodiments falling within the scope of the independent claims.

In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.