Title:
MULTI-DIRECTIONAL LOCALISATION AND MAPPING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2024/094954
Kind Code:
A1
Abstract:
A system comprises a plurality of 3D scanners to scan an unknown environment from a respective plurality of viewpoints. The scanners are configured to capture 3D information about their respective vicinities at intervals, and the method interprets the captured 3D data to create a model of the vicinity of each of the scanners. As the scanners move through the unknown environment, each respective local map is expanded or improved. Where features uniquely identified in one local map, created by one scanner, also appear in the local map created by another scanner, a global map can be created by overlapping the local maps such that the uniquely identified features are overlaid. The 3D scanners are suitably body-worn, ROV-mounted, or drone-mounted LiDAR scanners.

Inventors:
KEARNEY PAUL LEONARD (GB)
LATHAM DAVID (GB)
Application Number:
PCT/GB2022/052754
Publication Date:
May 10, 2024
Filing Date:
November 01, 2022
Assignee:
GUARDIAN AI LTD (GB)
International Classes:
G01S7/48; G01C21/00; G01C21/16; G01S17/42; G01S17/86; G01S17/89; G01S17/894; G01S17/931; G01S17/933
Domestic Patent References:
WO2016095050A1 (2016-06-23)
WO2016138567A1 (2016-09-09)
Foreign References:
US20220137223A1 (2022-05-05)
US20170261594A1 (2017-09-14)
US20200217666A1 (2020-07-09)
US20220036645A1 (2022-02-03)
EP3078935A1 (2016-10-12)
Other References:
KUMAR RATH PRABIN ET AL: "Real-time moving object detection and removal from 3D pointcloud data for humanoid navigation in dense GPS-denied environments", ENGINEERING REPORTS, vol. 2, no. 12, 12 October 2020 (2020-10-12), XP093049589, ISSN: 2577-8196, Retrieved from the Internet [retrieved on 20230612], DOI: 10.1002/eng2.12275
MURO SHOTARO ET AL: "Moving-object Tracking with Lidar Mounted on Two-wheeled Vehicle :", PROCEEDINGS OF THE 16TH INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 1 January 2019 (2019-01-01), pages 453 - 459, XP055966162, ISBN: 978-989-7583-80-3, Retrieved from the Internet [retrieved on 20230612], DOI: 10.5220/0007948304530459
SHIPENG ZHONG ET AL: "DCL-SLAM: A Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 21 October 2022 (2022-10-21), XP091350296
PENG HUANG ET AL: "Edge Robotics: Edge-Computing-Accelerated Multi-Robot Simultaneous Localization and Mapping", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 24 January 2022 (2022-01-24), XP091130514
PIERRE-YVES LAJOIE ET AL: "Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 11 January 2022 (2022-01-11), XP091128580
Attorney, Agent or Firm:
HUTCHINSON IP LTD (GB)
Claims:
CLAIMS

1. A method of mapping an unknown environment comprising the steps of:
a. designating an arbitrary primary origin of the unknown environment and using a plurality of scanning devices at locations other than the primary origin of the unknown environment to simultaneously scan parts of the unknown environment in a respective vicinity of each scanning device;
b. determining an azimuth and/or elevation of each of the scanning devices at the time each scanning device's vicinity was scanned;
c. based on the determined azimuth and/or elevation of each scanning device at the time each scanning device's vicinity was scanned, rotating and/or transposing the collected scan data for each so as to align it with an alignment of the primary origin;
d. parsing the reoriented scan data from each of the scanning devices to identify features in their respective vicinities;
e. moving one or more of the scanning devices; and
f. iterating steps b to e above;
the method further comprising the steps of:
g. adding newly identified features in each successive scan to a local map for each scanning device;
h. adding previously identified features in each scan to the local map for each scanning device and noting any difference in position or pose of those previously identified features from scan to scan, and estimating a most likely actual position and pose for those previously identified features based on the scan-to-scan differences;
i. searching the local maps for common identified features and overlaying the local maps such that the common identified features in two or more local maps are spatially aligned to derive the global map, which comprises the overlapping and aligned local maps.

2. The method of claim 1, further comprising determining a distance and direction of each scanning device from the primary origin, and using the said distance and direction to transpose the identified features in the respective vicinities, according to their respective distances and directions from the primary origin, into the global map.

3. The method of claim 1 or claim 2, wherein the arbitrary primary origin comprises a position and pose, the pose having an elevation of 0 degrees (horizontal) and an azimuth of 0 degrees (true north).

4. The method of any of claims 1, 2 or 3, wherein one or more of the plurality of scanning devices comprise body-worn scanning devices.

5. The method of any preceding claim, wherein one or more of the plurality of scanning devices comprise vehicle-mounted scanning devices.

6. The method of claim 5, wherein the vehicle comprises a land-based remotely-operated vehicle, or an airborne drone.

7. The method of any preceding claim, wherein the scanning device comprises a distance sensor arranged to determine a distance from the scanner to an object in a given direction, the given direction being adjustable so as to obtain distance readings of objects in a plurality of directions, the distance/direction data for each measurement forming the basis of the data collected by each scanning device.

8. The method of claim 7, wherein the scanning devices comprise any one or more of a LiDAR scanner, a sonar scanner, a time-of-flight sensor, or any other equivalent device or system adapted in use to determine the position and/or orientation of objects and surfaces within a nearby environment.

9. The method of any preceding claim, wherein the azimuth and/or elevation of each of the scanning devices at the time each scanning device's vicinity was scanned is determined using a gyroscope, magnetometer, accelerometer, orientation sensor, triangulation, or any other equivalent device or system adapted in use to determine the pose of the scanning device.

10. The method of any preceding claim, wherein the step of parsing involves identifying edges, vertices and surfaces from within the scan data corresponding to the vicinity of the scanning device via which that scan data was obtained.

11. The method of any preceding claim, wherein the most likely actual position and pose estimate for previously identified features within each local map is determined by regression.

12. The method of any preceding claim, further comprising the step of checking the correlation of overlapping map portions.

13. The method of claim 12, comprising applying a weighting to the correlation between pairs of overlapping map portions, and updating the global map to obtain a best fit.

14. The method of claim 12 or claim 13, comprising fixing the global map when the correlation exceeds a predetermined threshold.

15. The method of any preceding claim, wherein the global map comprises pre-known data about the environment.

16. The method of any preceding claim, wherein the initial position of each of the scanners relative to the primary origin is known, and wherein the local maps within the global map are initially located at positions within the global map that correlate to the known starting positions of each of the scanning devices relative to the primary origin.

17. The method of claim 16, wherein the initial positions of each of the scanners relative to the primary origin are known by dead reckoning, by triangulation and/or by using satellite positioning.

18. The method of any preceding claim, wherein the local maps and/or the global map comprise three-dimensional mapping data.

19. The method of any preceding claim, comprising applying data matching and loop closure algorithms to finalise the spatial occupancy of the local maps in the global map.
Description:
MULTI-DIRECTIONAL LOCALISATION AND MAPPING SYSTEM

This invention relates to a mapping and localisation system, and in particular, but without limitation, to a mapping and localisation system for use in a non-permissive environment to support Command and Control decision making in an unknown and non-permissive environment as may be faced by the Military, Police or Fire Service.

It is often necessary to be able to simultaneously map an environment and locate oneself within the mapped environment. Such technologies are widely used in the film and special effects industries, where filming cameras need to be able to locate themselves within the set or other environment for the purposes of CGI compositing. Known systems of this ilk are called SLAM (Simultaneous Localisation and Mapping) systems, and these known systems use cameras or other measurement devices to detect features within an environment, such as walls, ceilings, floors etc., and so create a three-dimensional map of the environment. Moving around the environment can be used to update the map, and by interpolating between the maps created at different points in time, it is possible to determine one's location within the map in near-real-time or real-time.

Known systems disclose mapping and localisation systems, which use multiple scanning devices and AI processing to ensure that a map can be created effectively regardless of the instantaneous conditions, and can deliver a Virtual Map of a previously unknown and unprepared indoor enclosed environment identifying the location of multiple operators. It is also known to use combinations of sensors, such as optical cameras, thermal imaging, LIDAR, etc., to create multiple simultaneous maps using the various sensors and to use Artificial Intelligence to select which of the sensors to give preference to. For example, an optical sensor may be most suitable outdoors, whereas a thermal sensor may be more suitable in a smoke-filled room, and a LIDAR sensor may be more suitable in other situations. Different measurement techniques can be used simultaneously to obtain the best possible map by selectively using or discarding map data as it is created. It is also known to use "breadcrumb" mapping and localisation, whereby a three-dimensional map of an environment can be generated as sensors progressively move through the environment. This enables people using the technology to re-trace a known route, as well as to break away from a previously-mapped area to map a new location, and so the virtual map is expanded over time.

These known, so-called "inside-out" mapping methodologies enable users to enter a building from a common reference point (Primary Origin or Final RDV), fan out within the building and create a two- or three-dimensional map as they go, which map "expands" as the various personnel begin to explore different parts of the environment. Such a mapping and localisation system is ideal for use in a firefighting situation, whereby the progression of personnel through the building is carefully controlled and whereby it is established practice to follow and re-trace "safe routes" as they are discovered.

Other known mapping devices are disclosed in, for example, WO2016/095050, which discloses creating a floor plan manually and navigating in non-real time using methods such as proximity, triangulation and trilateration, which means the building needs to be a known building. This does not address tactical real-time tracking (exploration) of unknown buildings, and it is not designed to operate in a tactical non-permissive environment to support Military, Police and Firefighter decision support.

WO2016/138567 describes a handheld device with multiple sensors to capture a 3D model, generate a point cloud, capture a 3D map, and determine the position of the object within the map, but focuses on capturing and generating the 3D model of a specific object. A map can be generated, but it is viewed as an object rather than used for navigational purposes.

US2018/356 determines the location of persons in a building by receiving telemetry data and by 3D modelling of the building interior. This works in a prepared building, with pre-installed cameras (or similar technology) at known positions. There is no real-time exploration and no attempt to support Military, Police or Fire commanders in tactical decision making.

A need, however, exists for a different type of mapping system which is not, or at least less, reliant on a single reference point as the "origin" of the map. An example of where such a system would be useful is where an environment needs to be mapped simultaneously from different starting points. This is an "outside-in", as opposed to an "inside-out", mapping methodology, for which no workable solution currently exists. This invention aims to provide such a solution and/or to provide an improved and/or alternative location and mapping system.

Aspects of the invention are set forth in the appended independent claim or claims. Preferred and/or optional features are set forth in the appended dependent claims.

An aspect of the invention provides a method of mapping an unknown environment comprising the steps of: a) designating an arbitrary primary origin of the unknown environment and using a plurality of scanning devices at locations other than the primary origin of the unknown environment to simultaneously scan parts of the unknown environment in a respective vicinity of each scanning device; b) determining an azimuth and/or elevation of each of the scanning devices at the time each scanning device's vicinity was scanned; c) based on the determined azimuth and/or elevation of each scanning device at the time each scanning device's vicinity was scanned, rotating and/or transposing the collected scan data for each so as to align it with an alignment of the primary origin; d) parsing the reoriented scan data from each of the scanning devices to identify features in their respective vicinities; e) moving one or more of the scanning devices; and f) iterating steps b) to e) above; the method further comprising the steps of: g) adding newly identified features in each successive scan to a local map for each scanning device; h) adding previously identified features in each scan to the local map for each scanning device and noting any difference in position or pose of those previously identified features from scan to scan, and estimating a most likely actual position and pose for those previously identified features based on the scan-to-scan differences; i) searching the local maps for common identified features and overlaying the local maps such that the common identified features in two or more local maps are spatially aligned to derive the global map, which comprises the overlapping and aligned local maps.

In essence, the invention subsists in an "outside-in" methodology for simultaneously mapping an unknown environment from a plurality of starting points. Each of the scanning devices is initialised and begins to scan its respective vicinity and provide, preferably 3-dimensional, scan data, which identifies features within the various respective vicinities. The features that are identified could be surfaces, corners, edges, etc. Using 3-dimensional modelling techniques, it is possible to recreate a 3-dimensional CAD model of the environment that has been scanned from the scan data, and that forms the basis of the local map corresponding to each scanning device.
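
By way of illustration only, the flow of steps a) to i) might be sketched in Python as follows; the names (align_to_origin, update_local_map, the dictionary-based scanners) are hypothetical placeholders, the scene is reduced to two dimensions, and the sketch is not the definitive implementation of the claimed method.

import numpy as np

def align_to_origin(points, azimuth_deg):
    # Step c: rotate scanner-frame points about the vertical axis so that the
    # scan shares the primary origin's alignment (azimuth 0 degrees = true north).
    a = np.radians(-azimuth_deg)
    rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return points @ rot.T

def update_local_map(local_map, features):
    # Steps g and h: add newly identified features; average repeated sightings
    # of previously identified ones towards a most likely position.
    for key, pos in features:
        if key in local_map:
            old, n = local_map[key]
            local_map[key] = ((old * n + pos) / (n + 1), n + 1)
        else:
            local_map[key] = (pos, 1)

def mapping_step(scanner, local_map):
    azimuth = scanner["azimuth"]                              # step b (orientation sensor)
    points = align_to_origin(scanner["scan"](), azimuth)      # step c
    features = [(tuple(np.round(p, 1)), p) for p in points]   # step d (toy "parser")
    update_local_map(local_map, features)                     # steps g and h

# Toy usage: two scanners observing the same wall from different headings (step a);
# steps e and f would move the scanners and repeat; step i (merging the local maps
# into the global map) is illustrated in later sketches.
wall = np.array([[x, 5.0] for x in np.linspace(0.0, 4.0, 9)])
scanners = [{"azimuth": 0.0,  "scan": lambda: wall.copy()},
            {"azimuth": 90.0, "scan": lambda: align_to_origin(wall, -90.0)}]
local_maps = [dict(), dict()]
for s, m in zip(scanners, local_maps):
    mapping_step(s, m)
print(len(local_maps[0]), len(local_maps[1]))   # both local maps now hold the 9 wall points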

Each of the local maps are generated independently of one another and as each scanner moves through the environment, the local maps are updated and/or expanded as movement through the unknown environment progresses.

In a wide-open or large unknown environment, it may be that the scanners never cross paths and so the local maps remain independent of one another. However, in the case that the local map of one scanner overlaps the local map of another, this can be identified by identifying similar features in one of the local maps to those in another local map. Preferably, the local maps are not immediately snapped onto one another because it is likely that certain features, such as 90-degree corners in walls, will be fairly ubiquitous. It is only when there is a sufficient correlation between identified features to uniquely identify the features in the local maps that the transposition of the local maps into the global map can reliably take place. Uniquely identifying features may involve grouping features into an arrangement of corners, edges, or surfaces with fixed interrelationships. An example of this could be a flat end surface (wall) at the end of a corridor, which intersects perpendicular flat surfaces on either side, with an arrangement of fixtures, fittings or other features that differentiate the corridor end in question from other, similar corridor ends.
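
By way of illustration only, one way of "uniquely identifying" such a feature group might be to compare the pattern of pairwise distances between its member points, which is invariant to the rotation and translation between two local maps; the helper names and the 0.05 m tolerance below are assumptions of the sketch, not features of the invention.

import numpy as np
from itertools import combinations

def group_signature(points):
    # Sorted pairwise distances of a small group of feature points; this signature
    # does not change when the group is rotated or translated between local maps.
    return np.sort([np.linalg.norm(a - b) for a, b in combinations(points, 2)])

def groups_match(group_a, group_b, tol=0.05):
    sig_a, sig_b = group_signature(group_a), group_signature(group_b)
    return sig_a.shape == sig_b.shape and np.allclose(sig_a, sig_b, atol=tol)

# e.g. the same corridor-end corner cluster seen by two scanners, shifted and rotated
corner_cluster = np.array([[0.0, 0.0, 0.0], [0.0, 2.4, 0.0], [1.2, 0.0, 0.0], [0.0, 0.0, 2.1]])
theta = np.radians(30.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
seen_by_other_scanner = corner_cluster @ rot.T + np.array([7.5, -3.0, 0.0])
assert groups_match(corner_cluster, seen_by_other_scanner)   # same physical feature group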

The method of the invention therefore proposes continuously monitoring the local maps and attempting to identify overlapping feature sets; and, if sufficient correlation (or overlap) is found between two or more local maps, to begin piecing the local maps together to form the larger overall global map. Suitably, the method further comprises determining a distance and direction of each scanning device from the primary origin, and using the said distance and direction to transpose the identified features in the respective vicinities, according to their respective distances and directions from the primary origin, into the global map.

To facilitate piecing together the local maps into the global map, it may be preferable for the initial/start positions of each of the scanning devices to be known relative to the primary origin. The primary origin is an arbitrary position, but may be, for example, the location of a command post. The scanning devices are deployed from the command post and their initial position/offset from the primary origin is preferably measured or otherwise determined. The simplest implementation involves starting each of the scanning devices at the primary origin, in which case the offset would be zero. However, for an outside-in scanning methodology, as in the present disclosure, it may be convenient to have an approximate distance and bearing from the command post/primary origin to each of the scanners at the start of the scanning procedure.

Suitably, the arbitrary primary origin comprises a position and pose, the pose having an elevation of 0 degrees (horizontal) and an azimuth of 0 degrees (true north). Setting the arbitrary primary origin to have a horizontal/north pose means that it is mathematically simpler to transpose the scan data onto cartography data, which is referenced to true north and elevations above mean sea level. The scanners, and in particular LiDAR scanners, measure distances and directions from the scanner origin based on a polar coordinate system. Using a polar reference for the primary origin as well computationally simplifies the aggregation of local map data into the global map.
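
By way of illustration only, and assuming a conventional scanner output of (range, azimuth, elevation) triples with azimuth measured clockwise from true north and elevation from the horizontal, the conversion of a polar return into Cartesian coordinates referenced to the primary origin might look as follows; the axis conventions (x = east, y = north, z = up) are assumptions of the sketch.

import numpy as np

def polar_to_cartesian(rng_m, azimuth_deg, elevation_deg):
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    horiz = rng_m * np.cos(el)                  # horizontal component of the range
    return np.array([horiz * np.sin(az),        # east
                     horiz * np.cos(az),        # north
                     rng_m * np.sin(el)])       # up

# a return 10 m away, due north, 5 degrees above the horizontal
print(polar_to_cartesian(10.0, 0.0, 5.0))       # approximately [0.00, 9.96, 0.87]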

In one embodiment of the invention, the scanning devices are body-worn scanning devices. Additionally or alternatively, the scanning devices may be vehicle-mounted scanning devices, in which the vehicle could be a land-based remotely operated vehicle, or an airborne drone. It will be appreciated from the foregoing that it is possible to deploy a number of scanning devices with different modalities (body-worn/ground-based/elevated) so as to more comprehensively scan, and hence map out, the unknown environment. The ability to capture overhead and ground-level/head level scans helps to remove "blind spots" in the local maps/global map.

Suitably, the scanning device comprises a distance sensor arranged to determine a distance from the scanner to an object in a given direction, the given direction being adjustable so as to obtain distance readings of objects in a plurality of directions, the distance/direction data for each measurement forming the basis of the data collected by each scanning device.

Suitably, the scanning devices comprise any one or more of a LiDAR scanner, a sonar scanner, a time-of-flight sensor, or any other equivalent device or system adapted in use to determine the position and/or orientation of objects and surfaces within a nearby environment.

By using LiDAR, sonar or time-of-flight sensors, it is possible to create a three-dimensional map of an environment simply by measuring the distances to points within the scanner's field of view. Such technology is relatively well-established and does not require detailed description herein.

The invention is predicated on ensuring that the scan data is properly referenced to a common co-ordinate system. Therefore, each of the scanning devices suitably has a device that determines the pose (azimuth and/or elevation) of the scanning device at the time the scan is taken. This can take the form of a gyroscope, a magnetometer, an accelerometer, an orientation sensor or any other equivalent device or system adapted in use to determine the pose of the scanning device. Alternative methods of determining the pose of the scanning device involve using the 3D scan data itself and calculating the pose of the scanning device from triangulation data, for example, off a flat floor surface. Provided the orientation/pose of the scanning device is known at the time each scan is taken, it is possible to rotate/orientate the captured scan data so as to align correctly with the primary origin. This greatly facilitates the creation of the global map.
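
By way of illustration only, the "use the scan data itself" alternative might be realised by fitting a plane to returns from a visible patch of flat floor and reading the scanner's tilt from the fitted normal; the function names, the least-squares (SVD) plane fit and the simulated data below are assumptions of the sketch.

import numpy as np

def fit_plane_normal(points):
    # Unit normal of the best-fit plane through an N x 3 array of points
    # (SVD on the centred data; the direction of least variance is the normal).
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)
    normal = vt[-1]
    return normal if normal[2] >= 0 else -normal

def tilt_from_floor(floor_points):
    # Angle (degrees) between the fitted floor normal and the vertical axis.
    n = fit_plane_normal(floor_points)
    return np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))

# floor patch as seen by a scanner pitched 10 degrees nose-down (simulated)
xy = np.stack(np.meshgrid(np.linspace(0, 2, 5), np.linspace(0, 2, 5)), -1).reshape(-1, 2)
flat = np.c_[xy, np.zeros(len(xy))]
pitch = np.radians(10.0)
rot_y = np.array([[np.cos(pitch), 0, np.sin(pitch)], [0, 1, 0], [-np.sin(pitch), 0, np.cos(pitch)]])
print(tilt_from_floor(flat @ rot_y.T))   # approximately 10.0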

As has previously been stated, the parsing step suitably involves identifying edges, vertices and surfaces from within the scan data. From this information, it is possible to work out floor plans, ceiling locations, door apertures, furniture, and objects in the scene, etc. It will be appreciated, however, that the scanning devices used in the invention are typically line-of-sight devices and cannot see the back side or "hidden side" of any object. Therefore, the scan data is "one-sided" in the sense that there is no way to determine, or infer, the topology of a surface or object which is not in direct line of sight of the scanner. The invention overcomes this, however, by taking successive scans as the scanning device moves through the environment. At each subsequent interval, the scanning device would be at a different position and/or pose, which yields information about objects which may not previously have been visible to the scanner. By this methodology, it is possible to build up a more complete picture of the unknown environment as each scanner moves through the unknown environment.
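
By way of illustration only, the way successive scans from different positions fill in previously hidden surfaces might be modelled by accumulating each scan into a common occupancy set of voxels; the 0.1 m voxel size and the set-based map below are assumptions of the sketch rather than features of the invention.

import numpy as np

def add_scan_to_map(voxel_map, points, voxel=0.1):
    # Quantise each point to the nearest voxel and record that voxel as occupied.
    for p in points:
        voxel_map.add(tuple((np.asarray(p) / voxel).round().astype(int)))
    return voxel_map

# scan 1 sees the near face of a box; scan 2 (taken after moving) sees the far face
near_face = [(x / 10.0, 1.0, z / 10.0) for x in range(10) for z in range(10)]
far_face  = [(x / 10.0, 2.0, z / 10.0) for x in range(10) for z in range(10)]
occupancy = set()
add_scan_to_map(occupancy, near_face)
print(len(occupancy))          # 100 voxels: only the visible face so far
add_scan_to_map(occupancy, far_face)
print(len(occupancy))          # 200 voxels: the previously hidden face is now mapped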

The advantage of iteration from different viewpoints is particularly apparent for previously identified features within each local map. It can be assumed, in the vast majority of cases, that the locations of walls, floors and door openings, etc. will be relatively static. Therefore, if the scanning device moves around the environment and identifies the same feature in the same position over and over again, the system can have greater confidence that that particular feature has been accurately mapped. By using a regression technique between successively captured scan data, it is possible to have increasing confidence in the accuracy of the scan data as it arrives. In other words, the scanners are adapted to capture "fresh" data about new/previously unseen objects within the scene, as well as providing "confirmation" or "re-confirmation" data to verify the position, orientation and configuration of previously identified features within the scene.
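
By way of illustration only, one simple estimator embodying the idea of increasing confidence with repeated confirmations is a running mean whose standard error shrinks with the number of sightings; the class name, the assumed 0.05 m per-scan noise and the simulated observations below are illustrative only.

import numpy as np

class FeatureEstimate:
    def __init__(self, first_obs, sigma=0.05):        # sigma: assumed per-scan noise (m)
        self.mean = np.asarray(first_obs, dtype=float)
        self.count = 1
        self.sigma = sigma

    def confirm(self, obs):
        # Fold in another sighting of the same feature (running-mean update).
        self.count += 1
        self.mean += (np.asarray(obs, dtype=float) - self.mean) / self.count

    @property
    def std_error(self):
        return self.sigma / np.sqrt(self.count)        # confidence improves with each confirmation

rng = np.random.default_rng(0)
true_corner = np.array([4.0, 2.5, 0.0])
est = FeatureEstimate(true_corner + rng.normal(0, 0.05, 3))
for _ in range(9):
    est.confirm(true_corner + rng.normal(0, 0.05, 3))
print(est.mean.round(3), est.std_error)                # mean near the true corner, error ~0.016 m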

This regression technique can also be used in the production of the global map by lacing together the local maps. For example, if one local map has a very high degree of certainty about the position of a particular feature and another local map has what appears to be the same feature or features, albeit less reliably/confidently mapped, then the creation of the global map can be based on certain assumptions regarding the degree of correlation between the overlapping map portions. By weighting the correlation in terms of the number of iterations or confirmations of the position and/or orientation of previously-identified features, it is possible to obtain a greater degree of certainty/confidence in the accuracy of the global map.
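
By way of illustration only, such a weighting might be applied by fusing the two local-map estimates of a shared feature as a confirmation-count-weighted average, so that the more reliably mapped local map dominates; the function name and weighting scheme below are assumptions of the sketch.

import numpy as np

def fuse_feature(pos_a, count_a, pos_b, count_b):
    # Weighted average of two local-map estimates of the same physical feature,
    # weighted by how many times each local map has confirmed it.
    w_a, w_b = float(count_a), float(count_b)
    return (w_a * np.asarray(pos_a) + w_b * np.asarray(pos_b)) / (w_a + w_b)

# local map A has seen this wall corner 25 times, local map B only twice
print(fuse_feature([4.02, 2.48, 0.0], 25, [4.20, 2.60, 0.0], 2))   # pulled towards A's estimate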

In order to facilitate the creation of the global map, the global map may, in certain embodiments, be pre-populated with known features. These could be obtained, for example, from external sources, such as cartography data, satellite imagery, street maps, and the like. Other items of useful information could include archived building plans or indeed previous scans/photographs of that particular environment. By including pre-known information in the global map, the lacing together of the local maps into the global map can be greatly facilitated. For example, if it is known from an archived floor plan of a building that the building layout includes a central atrium with corridors leading off it at given angles, and if one of the scanners identifies features which are likely to be the atrium and other scanners identify features which are likely to be the corridors, it is possible to make assumptions about the lacing together of the local maps into the global map by assuming a correlation between the data obtained by the scanners based on data contained in the archived floor plan. Obviously, it is important to remember that the invention is intended to map an unknown environment using an "outside-in" technique, rather than simply supplementing known information. The reason for this is that, in a non-permissive environment, barricades may have been constructed, obstructions such as furniture may have been placed in corridors to block the ingress of operatives, ceilings may have come down due to damage, and so on. It is therefore important that the global map of the invention reflects the instantaneous condition/state of the unknown environment, rather than simply making assumptions based on pre-known information.

Suitably, the initial position of each of the scanners relative to the primary origin is known, and the local maps within the global map are initially located at positions within the global map that correlate to the known starting positions of each of the scanning devices relative to the primary origin. The positions of each of the scanners relative to the primary origin may be known by dead reckoning, by triangulation and/or by using satellite positioning. By knowing the positions of the scanners relative to the primary origin, it is possible to take various shortcuts in producing the global map. For example, less reliance may need to be placed on finding corresponding features in a plurality of local maps to create the global map. The positions of each of the scanners, as previously described, could be obtained by dead reckoning - for example, by using an accelerometer to estimate the distance and direction of the scanning device from a known starting position; by triangulating known reference markers within the field of view of the scanner; and/or by using satellite positioning, for example GPS/GNSS.
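
By way of illustration only, the dead-reckoning option might, in its simplest form, double-integrate body-frame accelerations from the known starting offset; real systems would fuse this with heading data and correct for drift, and the function name, units and sample data below are assumptions of the sketch.

import numpy as np

def dead_reckon(accel_samples, dt, start=(0.0, 0.0, 0.0)):
    # Integrate acceleration twice to estimate position over time; "start" is the
    # known offset of the scanner from the primary origin at the start of scanning.
    velocity = np.cumsum(np.asarray(accel_samples) * dt, axis=0)
    position = np.asarray(start) + np.cumsum(velocity * dt, axis=0)
    return position

# 2 s of constant 1 m/s^2 acceleration due east from the primary origin
samples = np.tile([1.0, 0.0, 0.0], (200, 1))
track = dead_reckon(samples, dt=0.01)
print(track[-1].round(2))    # [2.01 0. 0.]: discrete approximation of s = a*t^2/2 = 2 m east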

Suitably, the local maps and/or the global map comprise three-dimensional mapping data.

For military applications, such as hostage rescue and Close Quarter Battle, the advantage of the invention comes from coordinating troops and avoiding friendly fire (Blue on Blue) incidents.

The key operational rationale of the invention is to support command-and-control decision making, i.e., to allow a tactical commander to visualise the location of all of his team operators in real time, as well as seeing a 'Virtual Map' of the environment they are working within. The invention suitably presents the Operational Commander with a composite map of the previously unknown operational environment that allows him to see where each operator is, where each operator has been, and the totality (composite) of what each operator has seen - stitched onto a single Common Operating View (Virtual Map).

The invention suitably uses a combination of sensors, such as optical cameras, thermal imaging, LIDAR, etc. to create multiple simultaneous 'views' and then processes them with Artificial Intelligence to deliver the most tactically relevant Common Operating View to support the Commander. For example, an optical sensor may be most suitable where visibility is good, whereas a thermal sensor may be more suitable in a smoke-filled room, whereas a LIDAR sensor may be more suitable in other situations.

It will be appreciated from the foregoing that the present invention differs from the prior art insofar as it uses multiple scanners mounted on multiple operators to map an unknown and/or non-permissive environment from different positions and/or directions. This is an "outside in" compositing scanning methodology whereby each of the scanning devices dynamically creates its own two or three-dimensional map as it progresses through the environment. Each individual map is suitably iteratively increased in size and/or complexity as the scanning progresses over time.

Each of the individual maps created by each of the respective scanners is integrated into the Global map, which is then 'distilled' down to the Commander's (simplified) optimum Common Operating View.

Because each of the individual scanners has an offset vector from the primary origin to its own starting point, it is possible to offset the three-dimensional models within the global map by a distance/direction corresponding to that offset vector. It is therefore possible to build up a global map from various directions/points simultaneously.

The invention is intrinsically self-confirming, self-correcting and self-improving, as the multiple sensor maps are constantly re-stitched into the Command Station Global map and any discrepancies/errors are corrected as further information becomes available.

In a preferred embodiment of the invention, the method further comprises determining when two or more of the three-dimensional models overlap. Where one scanning device moves into an area that has been previously scanned by another scanning device, it is possible to re-map the same area using a different scanning device. This has several benefits insofar as the resolution of the global map can be increased or updated depending on the level of correlation or discrepancy between the same region as mapped by one scanning device versus that as mapped by a second scanning device. Ideally, the overlapping map portions will exactly correspond, which indicates that there are no errors or inaccuracies in the scanning/mapping process. Inevitably, however, there will be drift, offsets and/or errors, and by using multiple scans of the same area, it is possible to check the correlation/veracity of the global map as more data is added to it.

The degree of correlation between the mapping of a single area by two or more devices could be used to determine the accuracy of the global map and/or it could be used to determine an error in one or more of the scanning devices. Suitably, if an error is detected, a correction can be applied to the data from an affected scanning device, so as to increase the overall accuracy of the global map.
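
By way of illustration only, where the discrepancy between two versions of an overlapping region is a consistent offset, such a correction might be estimated as the mean translation between matched features and applied to the affected scanner's data; the matching of corresponding features is assumed to have been done already, and the names below are illustrative.

import numpy as np

def estimate_correction(reference_pts, suspect_pts):
    # Mean translation needed to bring the suspect scanner's points onto the reference.
    return np.mean(np.asarray(reference_pts) - np.asarray(suspect_pts), axis=0)

reference = np.array([[1.0, 5.0, 0.0], [3.0, 5.0, 0.0], [3.0, 7.0, 0.0]])
suspect = reference + np.array([0.15, -0.10, 0.0])        # drifted by a constant offset
correction = estimate_correction(reference, suspect)
corrected = suspect + correction                          # apply correction to the affected device
print(np.allclose(corrected, reference))                  # True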

An overlap between two or more three-dimensional models could be determined by detecting the same or similar features in two or more maps. For example, a door-wall-floor intersection could have certain distinguishing features and thus be uniquely identifiable within an environment. If another scanner detects the same unique feature, then it can be determined that this is, in fact, the same feature that has been mapped twice.

From a practical point of view, by knowing the offset and movement of the scanning devices within the environment, it is possible to use a "dead reckoning" type methodology to determine when the maps overlap, or are likely to overlap.

Preferably, the method involves checking the correlation of overlapping map portions and/or applying a weighting to the correlation between pairs of overlapping map portions, and updating the global map to obtain a best fit.

Most preferably, if the degree of correlation between individually-captured three-dimensional maps exceeds a predetermined threshold, then the global map could be effectively fixed after that point and deemed determinative of the environment. This means that processing power can be shifted to other areas of the global map, and this avoids unnecessary computation where the global map reaches the desired degree of accuracy in certain regions.
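
By way of illustration only, such fixing might be implemented by tagging each region of the global map with a correlation score and excluding regions above a chosen threshold from further processing; the 0.95 threshold and the region names below are assumptions of the sketch.

FIX_THRESHOLD = 0.95

def regions_to_keep_processing(region_scores, threshold=FIX_THRESHOLD):
    # Return only the regions whose correlation is still below the fixing threshold,
    # so that processing power is spent on the less certain parts of the global map.
    return {region: score for region, score in region_scores.items() if score < threshold}

scores = {"atrium": 0.98, "north_corridor": 0.81, "stairwell": 0.99}
print(regions_to_keep_processing(scores))    # {'north_corridor': 0.81}: the others are deemed fixed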

In terms of physical implementations of the invention, there will typically be a "command post" or central node with which all of the scanning devices are in communication, either directly or indirectly. The command post or central node could be located near to the unknown environment to be mapped, or it could be at a remote location, e.g., in a different country to the unknown environment to be mapped. In order to enable the system to work, each of the scanning devices suitably has a radio transceiver, which can transmit data to the command post or central node, and/or to other nearby scanning devices, and receive data from the command post or central node and/or nearby scanning devices. The use of daisy-chained RF communications means that the effective range of the system can be significantly improved - especially in locations where radio transmission and/or reception is difficult (e.g., indoors).

It is also envisaged that the method could be implemented by using a remote command post or central node, which communicates, for example using a satellite data connection, with a transceiver of a local command post or command node. Each of the scanning devices could connect (directly, or via a daisy chain/mesh network) with the local command post, which relays data to/from the remote command post or central node.

An embodiment of the invention shall now be described, by way of example only, with reference to the accompanying drawings in which:

Figures 1 to 5 are a sequence showing how an environment mapping method in accordance with the invention works;

Figure 6 is a schematic representation of a global map created using the method described herein;

Figure 7 is a photograph of an actual computer-generated global map made up of several local maps; and

Figure 8 is a schematic illustration of an embodiment of the invention.

Figures 1 to 5 of the drawings show a sequence of operation for the method of the invention. Figures 1 to 5 are each divided into a left-hand portion, which represents, schematically, the data captured by the different scanning devices; and a right-hand portion, which shows, schematically, a global map created from the collected data.

In Figure 1 of the drawings, a primary origin point 10 is determined and two scanners are moved to their respective start points 12, 14, which are secondary origins offset by vectors 16 and 18, respectively. The primary origin 10 and secondary origins 12, 14 can be inserted into the global map 100 according to the offset vectors 16, 18. In Figure 2 of the drawings, the scanners located at secondary origin points 12, 14 are activated and begin to scan their respective vicinities. The scanner located at secondary origin 12 detects wall features 20 and these are transposed into the global map 100. Similarly, a scanner located at secondary origin 14 also scans its environment and detects different walls 22, which are also transposed into the global map 100.

Referring now to Figure 3 of the drawings, it can be seen that the scanners have moved and so the secondary origins 12, 14 have been displaced by vectors 24, 26, respectively. These vectors can be transposed into the global map 100 and the secondary origins 12, 14 transposed accordingly. The scan then recommences, and it can be seen that the first scanner can now additionally "see" the wall features 20 that it previously saw, as well as new wall features 30, which were previously not visible. Likewise, the second scanning device has moved 26 to its new secondary origin 14 and can now see wall 32 in addition to the walls 22 that were previously visible. As can be seen from the right-hand side of Figure 3, the global map 100 is updated to show the new information (indicated in solid lines) as well as the previously-captured features (indicated in dashed or chain-link lines).

Referring now to Figure 4 of the drawings, the scanning devices have moved 40, 42 once again and can now start to fill in the gaps with additional features 44, which were not previously visible, but which now are by virtue of the movement 40, 42. The global map 100 can be updated once again with the new data overlaid on top of the previously-captured data (indicated in dashed or chain-link lines).

As can be seen from Figure 4 of the drawings, there appears to be an error in the measurement of one of the lines 32. Here, that feature is offset slightly from a previous measurement. The method therefore takes this into account and waits to see whether the feature 32 was erroneous, i.e., caused by a measurement error, or whether it reflects a change (update) in the environment, such as a recently opened door. The remaining mapped features, however, appear to coincide sufficiently, and so the global map 100 is able to distinguish between "fixed" features, i.e., those which have been mapped several times and shown to be coincident on each instance, and those (such as feature 32) which are less certain. In Figure 5 of the drawings, it can be seen that the scanning devices have moved once again 50, 52 and are now able to scan their respective environments. It can be seen that the previous "dubious" feature 32 was, in fact, a door 54, and this has been verified by one of the scanners mapping it correctly. This resolves the ambiguity about feature 32, that is to say, it is a moveable object within the scene as opposed to a fixture.

Referring to Figure 6 of the drawings, it can be seen that the global map 100 contains a number of "fixed" or "locked-in" features 60, that is to say features that have either been scanned more than once by the same scanner or scanned multiple times by different scanners and shown to be correct.

Moveable features 62, such as the door previously described, are also indicated, as are other features 64, which have only been mapped once.

It will be appreciated that the accuracy and/or veracity of the global map 100 can be updated and that an accurate global map can be created/built up using an "outside in" compositing methodology.

Figure 7 of the drawings shows actual captured LiDAR data obtained from three body-worn scanners moving through an unknown environment. It will be seen that each scanner creates its own local map, and that overlaps in the local maps can be used to piece together the local maps into the global map.

Figure 8 illustrates a possible schematic of the hardware 200 for implementing the invention, which includes a primary command post 202 located in a first country, which connects, via satellite linkups 204/206, with a local command unit 208. Body-worn scanners (not visible) are fitted to the clothing of three operatives 210, 212, 214, who have been tasked with entering an unknown environment 216. A support drone 216 provides an aerial view of the unknown environment 216 and relays data 218 to/from the operatives, thus extending the range of the local command unit 208. Daisy-chain connections between the operatives 210, 212 are also possible.

The unknown environment 216 is first breached using a remote-controlled land-based vehicle 220, also fitted with a scanning device, such that the land-based vehicle 220 can begin to create a local map of the unknown environment 216 within its own vicinity. This enables the first operative 210 to enter the unknown environment 216 and also begin scanning. Likewise, operatives 212, 214 can enter the unknown environment 216 from different entry points, and can map the unknown environment 216 from different viewpoints/start points. The command centre 202 can relay instructions to the operatives 210, 212, 214 via the satellite link-up 204, 206, as well as optionally controlling the drone 216 and ROV 220. This is facilitated by the command centre 202 having knowledge of the layout of the unknown environment 216, as well as of the locations of the operatives 210, 212, 214, and is assisted by the command centre having access to recent floorplans, satellite photography and cartography data 222. It will be appreciated that the invention provides a convenient and improved manner of mapping an unknown environment.

The invention is not necessarily restricted to the details of the foregoing embodiments, which are exemplary of the invention.