

Title:
METHOD AND SYSTEM FOR DETERMINING VISIBILITY REGION OF DIFFERENT OBJECT TYPES FOR AN AUTONOMOUS VEHICLE
Document Type and Number:
WIPO Patent Application WO/2021/176031
Kind Code:
A1
Abstract:
Embodiments of the present disclosure relate to a method and system for determining visibility regions of different object types for an autonomous vehicle. The system receives sensor inputs of dimensions of a visibility region from sensors of various sensor types associated with the autonomous vehicle. The system generates customized visibility regions for each sensor at various time frames, for obstacles of at least one object type, using the sensor input of corresponding sensor. Further, the system determines a unified visibility region of each sensor type, for the obstacle of each object type, using the customized visibility regions of the sensors of corresponding sensor type for corresponding object type. The system identifies an intersecting visibility region for each object type using the unified visibility region of various sensor types and determines a likelihood of non-existence of obstacles of specific object type in each intersecting visibility region.

Inventors:
SCHWINDT OLIVER (US)
NUSS DOMINIK (US)
SCHIER MANUEL (US)
ULMER BENJAMIN (US)
Application Number:
PCT/EP2021/055543
Publication Date:
September 10, 2021
Filing Date:
March 05, 2021
Assignee:
BOSCH GMBH ROBERT (DE)
International Classes:
G01S13/931; G01S7/40; G01S7/41; G01S7/48; G01S7/497; G01S13/86; G01S13/87; G01S17/87; G01S17/931; G06K9/00
Domestic Patent References:
WO2019089015A12019-05-09
Foreign References:
US20190384302A12019-12-19
Other References:
PHILIPP LINDNER ET AL: "Multi level fusion for an automotive pre-crash safety system", MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS, 2008. MFI 2008. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 20 August 2008 (2008-08-20), pages 143 - 146, XP031346330, ISBN: 978-1-4244-2143-5
Attorney, Agent or Firm:
HOFSTETTER, SCHURACK & PARTNER PATENT- UND RECHTSANWALTSKANZLEI, PARTG MBB (DE)
Claims:

We Claim:

1. A method of determining a visibility region of different object types for an autonomous vehicle, the method comprising: receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type; and determining a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.

2. The method as claimed in claim 1, further comprising: identifying an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determining a likelihood of non-existence of one or more obstacles of specific object type from a current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

3. The method as claimed in claim 1, wherein generating each of the one or more customized visibility region comprises step of: determining the object type associated with the obstacle using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics.

4. A method of determining a visibility region of different object types for an autonomous vehicle, the method comprising: receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics; determining a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type; identifying an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determining a likelihood of non-existence of one or more obstacles of specific object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

5. A system for determining a visibility region of different object types for an autonomous vehicle, the system comprising: a processor; a memory, communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to: receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generate one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region with an obstacle of at least one object type observed by the at least one sensor of the plurality of sensors; and determine a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.

6. The system as claimed in claim 5, wherein the processor is further configured to: identify an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determine a likelihood of non-existence of one or more obstacles of specific object type from a current time frame of the plurality of time frames to one of plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

7. The system as claimed in claim 5, wherein the processor is configured to generate each of the one or more customized visibility region by determining the object type associated with the obstacle using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics.

8. A system for determining a visibility region of different object types for an autonomous vehicle, the system comprising: a processor; a memory, communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to: receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of a sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types, wherein the customized measurements are specific measurements associated with the sensor type of corresponding sensor; generate one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region with an obstacle of at least one object type observed by the at least one sensor of the plurality of sensors, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics; determine a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type; identify an intersecting visibility region for each object type using the unified visibility region of one or more sensor types; and determine a likelihood of non-existence of one or more obstacles of specific object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

Description:
TITLE: “METHOD AND SYSTEM FOR DETERMINING VISIBILITY REGION OF DIFFERENT OBJECT TYPES FOR AN AUTONOMOUS VEHICLE”

[001] PREAMBLE TO THE DESCRIPTION:

[002] The following specification particularly describes the invention and the manner in which it is to be performed:

[003] DESCRIPTION OF THE INVENTION:

[004] Technical field

[005] The present subject matter is related, in general, to autonomous driving technology, and more particularly, but not exclusively, to a system and method for determining a visibility region of different object types for an autonomous vehicle.

[006] Definitions

[007] Visibility region: a viewable area of a vehicle’s external environment captured/recorded by plurality of sensors on an autonomous vehicle.

[008] Obstacle: any object that is detected by a sensor in the path of autonomous vehicle.

[009] Customized visibility region: a visibility region observed by a specific sensor of autonomous vehicle with respect to a specific obstacle type.

[0010] Intersecting visibility region: a common visibility region observed by a plurality of sensors of different types with respect to a specific obstacle type.

[0011] General visibility region: the visibility region observed by a plurality of sensors of different types mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles where obstacle type differentiation does not occur.

[0012] Unified visibility region: the visibility region observed by a plurality of sensors of specific sensor type mounted on a vehicle, where the plurality of sensors observe a plurality of obstacles where obstacle type differentiation occurs.

[0013] BACKGROUND OF THE DISCLOSURE

Autonomous vehicles rely on a series of sensors that help the vehicles understand the external environment in real time to avoid collisions, navigate autonomously, spot signs of danger and drive safely. Sensors not only help to determine the actual environment and present dangers, but also help the vehicle to provide appropriate responses that range from accelerating/decelerating to turning, emergency stopping and evasive maneuvers. These responses could be determined by detecting the obstacles using information provided by various sensors integrated within the autonomous vehicle. The visibility region of an autonomous vehicle can be understood as a complement to the detection of existing objects or obstacles by the sensor. Object tracking or detection techniques generally estimate the existence of an object only if the object has been detected at least once. In general, it is not possible to reason about the absence of potential objects given the absence of measurements from the sensors, thus objects that are excluded from the visibility region remain unknown. Current techniques combine all object types detected by sensors and determine only one visibility region for the autonomous vehicle. These techniques do not provide different visibility regions for different object types and thereby do not facilitate reasoning on excluded objects. Techniques providing a single generalized visibility region are either too conservative or too aggressive, rendering them less effective. Consequently, a need exists for a method and a system that determines a visibility region of different object types for an autonomous vehicle and overcomes the existing limitations.

[0014] SUMMARY OF THE DISCLOSURE

[0015] One or more shortcomings of the prior art are overcome and additional advantages are provided through the present disclosure. Additional features and advantages are realized through the techniques of the present disclosure. Other embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed disclosure.

[0016] In one non-limiting embodiment of the present disclosure, a method of determining a visibility region of different object types for an autonomous vehicle has been disclosed. The method comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of a corresponding sensor. The method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensors. Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type. Using the one or more customized visibility region of the plurality of sensors of a corresponding sensor type for a corresponding object type, the method determines a unified visibility region of each sensor type, for the obstacle of each object type.

[0017] In another non-limiting embodiment of the present disclosure, a method of determining a visibility region of different object types for an autonomous vehicle has been disclosed. The method comprises the step of receiving a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of a corresponding sensor. The method further comprises generating one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensors. Each of the one or more customized visibility region is the visibility region observed by the at least one sensor of the plurality of sensors with respect to obstacle of specific object type, wherein the at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and predetermined object type characteristics. The method further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of a corresponding sensor type for a corresponding object type. Further, the method identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types. Subsequently, the method determines a likelihood of non-existence of one or more obstacles of specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle, using detection capabilities of the sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

[0018] In yet another non-limiting embodiment of the disclosure, a system for determining the visibility region of different object types for an autonomous vehicle has been disclosed. The system comprises a processor communicatively coupled to the system and a memory, which is communicatively coupled to the processor. The memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of corresponding sensor. The processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames using the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by the sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type. The processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type.

[0019] In still another non-limiting embodiment of the disclosure, a system for determining visibility region of different object types for an autonomous vehicle has been disclosed. The system comprises a processor communicatively coupled to the system and a memory communicatively coupled to the processor. The memory stores processor-executable instructions, which, on execution, cause the processor to receive a sensor input of dimensional parameters of a visibility region comprising location coordinates, angular dimensions of the sensor with respect to the visibility region, reflection measurement of the sensor and customized measurements from each of a plurality of sensors associated with the autonomous vehicle, wherein the plurality of sensors is associated with one or more sensor types. The customized measurements are specific measurements associated with the sensor type of corresponding sensor. The processor generates one or more customized visibility region for each of the plurality of sensors at a plurality of time frames based on the sensor input from corresponding sensor, wherein each of the one or more customized visibility region is the visibility region observed by at least one sensor of the plurality of sensors with respect to obstacle of specific object type, wherein at least one object type associated with the obstacle is determined using the customized measurements at a current time frame of the plurality of time frames and a predetermined object type characteristics. The processor further determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type. The processor further identifies an intersecting visibility region for each object type using the unified visibility region of one or more sensor types and determines a likelihood of non-existence of one or more obstacles of a specific object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region of the autonomous vehicle using detection capabilities of sensor type, range dependencies of the sensor, weather conditions and possible occlusion.

[0020] The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.

[0021] BRIEF DESCRIPTION OF THE DRAWINGS

[0022] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed embodiments. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of system and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:

[0023] Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for autonomous vehicle in accordance with an embodiment of the present disclosure;

[0024] Figure 2 is an exemplary block diagram illustrating various components of a visibility region determination system of Figure 1 in accordance with an embodiment of the present disclosure;

[0025] Figure 3a depicts a flowchart of an exemplary method of describing visibility region of different object types in accordance with an embodiment of the present disclosure;

[0026] Figure 3b depicts exemplary representation of customized visibility regions of LIDAR sensor type in accordance with an embodiment of the present disclosure;

[0027] Figure 3c depicts exemplary representation of customized visibility regions of RADAR sensor type in accordance with an embodiment of the present disclosure;

[0028] Figure 3d depicts exemplary representation of unified visibility region of LIDAR sensor type in accordance with an embodiment of the present disclosure;

[0029] Figure 3e depicts exemplary representation of unified visibility region of RADAR sensor type in accordance with an embodiment of the present disclosure; and

[0030] Figure 3f depicts exemplary representation of intersecting visibility region of an object type for LIDAR and RADAR sensor types in accordance with an embodiment of the present disclosure.

[0031] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether or not such computer or processor is explicitly shown.

[0032] DETAILED DESCRIPTION

[0033] In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

[0034] While the disclosure is susceptible to various modifications and alternative forms, specific embodiment thereof has been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the particular forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and the scope of the disclosure.

[0035] The terms “comprises”, “comprising”, “include(s)”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises... a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or apparatus.

[0036] Embodiments of the present disclosure relate to a method and a system for determining visibility region of different object types for an autonomous vehicle. In one embodiment, the system receives a sensor input from each of a plurality of sensors associated with the autonomous vehicle. The sensor input includes dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitation such as range, environment conditions etc. and customized measurements associated with the visibility region for each sensor. Upon receiving the sensor input, the system generates one or more customized visibility region for each sensor at a plurality of time frames using the sensor input from corresponding sensor. The customized visibility region is the visibility region observed by a specific sensor of the plurality of sensors of the vehicle with respect to a specific obstacle type at a current time frame of the plurality of time frames. Further, the system determines a unified visibility region of each sensor type, for the obstacle of each object type, using the one or more customized visibility region of the plurality of sensors of corresponding sensor type for corresponding object type. In one example, for the obstacle of one object type, the unified visibility region for the sensor type is obtained by determining union of one or more customized visibility regions of the plurality of sensors of corresponding sensor type for corresponding object type. The system further identifies an intersecting visibility region for each object type using the unified visibility regions of one or more sensor types. In each intersecting visibility region of the autonomous vehicle, the system determines a likelihood of non-existence of one or more obstacles of certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame. The estimated likelihood may be fed to an autonomous driving system to make appropriate decisions while driving.

[0037] In the following detailed description of the embodiments of the disclosure, reference is made to the accompanying drawings that form a part hereof, and in which are shown by way of illustration specific embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the present disclosure. The following description is, therefore, not to be taken in a limiting sense.

[0038] Figure 1 depicts an exemplary architecture of a system for determining visibility region of different object types for autonomous vehicle in accordance with an embodiment of the present disclosure. In an embodiment, the object type may be defined by size of object, material of object, velocity of object, semantic of object etc. As shown in Figure 1, the exemplary system 100 comprises one or more components configured for determining visibility region for autonomous vehicle. The system 100 may be implemented using a single computer or a network of computers including cloud-based computer implementations. In one embodiment, the exemplary system 100 comprises a visibility region determination system (hereinafter referred to as VRDS) 102, one or more sensors 109 associated with an autonomous vehicle 103, a data repository 104 and an autonomous driving system 106 connected via a communication network (alternatively referred to as the network) 105.

[0039] The data repository 104 may be a cloud-implemented repository capable of storing sensor related information 110 including sensor type, capabilities of sensor types and so on. The data repository 104 also stores object type characteristics 111 of different possible obstacles on road. In one embodiment, the object type characteristics 111 may be predefined and stored in the data repository 104. In one example, the object type characteristics 111 for obstacles of different types may be defined as at least one from a set not limited to: (a) any obstacle larger than 10x10x10 centimeter, (b) any obstacle larger than 1 meter height, 30 centimeter width, 30 centimeter length such as an upright pedestrian or bike, (c) any motorized obstacle larger than 1.20 meter height, 40 centimeter width, 1.5 meter length such as motorbike, (d) any obstacle moving faster than 2 meter per second, and (e) any motorized obstacle moving faster than 2 meters per second.
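Editor's note: as a minimal illustration only, the sketch below encodes the example characteristics (a) to (e) of paragraph [0039] as simple threshold checks against a detected obstacle. The field names, the Obstacle record and the string labels are assumptions for illustration and are not part of the disclosed implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    """Hypothetical obstacle attributes derived from customized measurements."""
    length_m: float
    width_m: float
    height_m: float
    speed_mps: float
    motorized: bool

def matching_characteristics(o: Obstacle) -> List[str]:
    """Return which of the example characteristics (a)-(e) from [0039] the obstacle satisfies."""
    matches = []
    if o.length_m > 0.10 and o.width_m > 0.10 and o.height_m > 0.10:
        matches.append("(a) larger than 10x10x10 cm")
    if o.height_m > 1.0 and o.width_m > 0.30 and o.length_m > 0.30:
        matches.append("(b) upright pedestrian or bike size")
    if o.motorized and o.height_m > 1.20 and o.width_m > 0.40 and o.length_m > 1.5:
        matches.append("(c) motorized, motorbike size or larger")
    if o.speed_mps > 2.0:
        matches.append("(d) moving faster than 2 m/s")
    if o.motorized and o.speed_mps > 2.0:
        matches.append("(e) motorized and moving faster than 2 m/s")
    return matches
```

For example, an upright pedestrian walking at 1.4 m/s would satisfy characteristics (a) and (b) under this sketch, while a passing motorbike would satisfy (a) through (e).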

[0040] The autonomous vehicle 103 comprises a plurality of sensors 109-1, 109-2, ..., 109-N (collectively referred to as sensors 109) capable of detecting or recording visibility region dimensions of external environment of the autonomous vehicle 103. In one embodiment, the plurality of sensors 109 may be associated with one or more sensor types including Radio Detection and Ranging (RADAR) sensor type, Light Detection and Ranging (LIDAR) sensor type, Ultrasonic sensor, camera sensor, speed sensor and so on. The plurality of sensors 109 is configured to identify a visibility region for the autonomous vehicle 103 and record or detect dimensional parameters of a visibility region comprising location coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitation such as range, environment conditions etc. and customized measurements based on sensor type of the sensor. In one embodiment, the plurality of sensors 109 may also detect speed information, brake pressure details, any obstructions like pothole, bump, debris or abnormal level of roughness on the road surface.
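Editor's note: to make the per-sensor input concrete, the following sketch shows one possible record layout for the sensor input described in [0040]. All field names are assumptions; the customized-measurement payload simply varies with the sensor type, as the text describes (for example RCS and Doppler velocity for a RADAR sensor).

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class SensorInput:
    """Hypothetical per-sensor input record for one time frame (names assumed)."""
    sensor_id: str
    sensor_type: str                          # e.g. "RADAR", "LIDAR", "ULTRASONIC", "CAMERA"
    location: Tuple[float, float]             # location/GPS coordinates of the sensor
    angular_extent_deg: Tuple[float, float]   # angular dimensions with respect to the visibility region
    reflection: float                         # reflection measurement of the sensor
    max_range_m: float                        # sensor limitation such as range
    environment: Dict[str, str] = field(default_factory=dict)   # e.g. weather conditions
    customized: Dict[str, float] = field(default_factory=dict)  # sensor-type-specific, e.g. {"rcs": ..., "doppler_mps": ...}
```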

[0041] The autonomous driving system 106 is coupled with the VRDS 102 and is configured to make appropriate decisions while driving based on information provided by the VRDS 102. In one example, the autonomous driving system 106 may be integrated within the autonomous vehicle 103. In one embodiment, the autonomous driving system 106 is configured to act based on information received from the VRDS 102 by accelerating/decelerating, turning, emergency stopping and so on. In the context of self-driving and collision avoidance, the functionality of autonomous driving system 106 is based on the information provided by the VRDS 102.

[0042] The VRDS 102 is configured to determine visibility region of different object types based on sensor input provided by the sensors 109 associated with various sensor types. In one example, the VRDS 102 may be configured as a standalone system. In another example, the VRDS 102 may be configured in cloud environment. In yet another example, the VRDS 102 may include any Wireless Application Protocol (WAP) enabled device or any other computing device capable of interfacing directly or indirectly to a network connection. The VRDS 102 also includes a graphical user interface (GUI) provided therein for interacting with the data repository 104 and autonomous driving system 106. The VRDS 102 comprises at least a processor 150 and a memory 152 coupled with the processor 150. The VRDS 102 further comprises a visibility region generation module 156, a unified region determination module 158, an intersecting region determination module 159 and a reasoning module 160. In one embodiment, the VRDS 102 may be a typical visibility region determination system as illustrated in Figure 2. The VRDS 102 comprises the processor 150, the memory 152, and an I/O interface 202. The I/O interface 202 is coupled with the processor 150 and an I/O device. The I/O device is configured to receive inputs via the I/O interface 202 from sensors 109 and transmit outputs for displaying in the I/O device via the I/O interface 202.

[0043] The VRDS 102 further includes data 204 and modules 206. In one implementation, the data 204 may be stored within the memory 152. In one example, the data 204 may include sensor input 210, customized visibility region 212, unified visibility region 214, intersecting visibility region 215, likelihood score 216 and other data 218. The sensor input 210 indicates data recorded or identified by the sensors 109. The sensor input 210, in one example, is dimensions of a visibility region including, but not limited to, location coordinates, angular dimensions of sensor with respect to the visibility region and other dimensional parameters associated with the autonomous vehicle 103. In another example, the sensor input may also include speed information related to autonomous vehicle 103, brake pressure details, any obstructions like pothole, bump, debris or abnormal level of roughness on the road surface. The customized visibility region 212 may be defined as the visibility region observed by a specific sensor of the plurality of sensors of the autonomous vehicle 103 with respect to a specific obstacle type at a current time frame. The unified visibility region 214 is defined as total visibility region of one sensor type, for the obstacle of one object type. The unified visibility region 214, in one example, may be defined as the visibility region obtained by union of the customized visibility regions of the plurality of sensors 109 of one sensor type for the obstacle of one object type. The intersecting visibility region 215 is defined as a common visibility region observed by a plurality of sensors 109 of various sensor types with respect to the obstacle of one object type. The likelihood score 216 may be defined as a probabilistic estimation of non-existence of obstacles of at least one object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in the intersecting visibility region 215 of corresponding object type. In one embodiment, the data 204 may be stored in the memory 152 in form of various data structures. Additionally, the aforementioned data can be organized using data models, such as relational or hierarchical data models. The other data 218 may store data, including temporary data, temporary files and data associated with visibility region, and co-ordinate databases generated by the modules 206 for performing the various functions of the VRDS 102.

[0044] The modules 206 may include, for example, the visibility region generation module 156, the unified region determination module 158, the intersecting region determination module 159 and the reasoning module 160. The modules 206 may also comprise other modules 224 to perform various miscellaneous functionalities of the VRDS 102. It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules. The modules 206 may be implemented in the form of software, hardware and/or firmware. As used herein, the term modules refer to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.

[0045] In operation, the VRDS 102 is configured to receive the sensor input 210 from each of the plurality of sensors 109 and determine visibility region for each sensor for different object types based on the sensor input 210. In one embodiment, the plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records sensor input as dimensions of visibility region i.e., viewable area of external environment of the autonomous vehicle 103. The customized measurements of the sensor associated with one of the one or more sensor types comprise sensor-measurement parameters associated with corresponding sensor type. For example, the customized measurements for the sensor of RADAR sensor type comprise RADAR Cross Sections (RCS) of identified object type, Doppler measurements including velocities and other related measurements based on capabilities of RADAR sensor type. The dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitation such as range, environment conditions etc. and other dimensional parameters associated with the visibility region. The plurality of sensors 109 sends the sensor input 210 to the visibility region generation module 156 and the visibility region generation module 156 generates one or more customized visibility region 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from corresponding sensor using one or more known techniques for visibility region construction. For example, each of the one or more customized visibility region 212 is the visibility region with obstacle of specific object type observed by at least one sensor of the plurality of sensors 109 at a current time frame of the plurality of time frames. The object type associated with the obstacle observed by at least one sensor is determined based on the predefined object type characteristics 111 of different object types and the customized measurements of the corresponding sensor at the current time frame. In one embodiment, the visibility region generation module 156 generates for each sensor, one or more customized visibility region 212 for at least one obstacle of at least one object type detected by the corresponding sensor using the visibility region dimensions received from the corresponding sensor at the plurality of time frames. In another embodiment, the plurality of sensors 109 is configured to directly generate customized visibility region 212 for different object types by detecting obstacles associated with different object types and determining dimensions of the visibility region with the detected obstacle of at least one object type.
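Editor's note: the disclosure leaves the construction technique open ("one or more known techniques for visibility region construction"). As one possible sketch, the code below rasterizes a simple sector-shaped field of view (sensor position, heading, angular extent and range) onto occupancy-grid cells; the grid-cell representation, the function name and the parameters are assumptions, and the per-object-type occlusion handling described in the text is omitted.

```python
import math
from typing import Set, Tuple

Cell = Tuple[int, int]  # one occupancy-grid cell index (row, col)

def customized_visibility_region(sensor_x: float, sensor_y: float,
                                 heading_deg: float, fov_deg: float,
                                 max_range_m: float,
                                 cell_size_m: float = 0.5) -> Set[Cell]:
    """Rasterize a sector-shaped field of view onto grid cells.
    A real customized region would additionally remove cells occluded for the
    given object type; that step is not shown here."""
    cells: Set[Cell] = set()
    n = int(max_range_m / cell_size_m)
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dx = i * cell_size_m              # offset from the sensor, in meters
            dy = j * cell_size_m
            r = math.hypot(dx, dy)
            if r == 0.0 or r > max_range_m:   # outside the sensor's range limitation
                continue
            bearing = math.degrees(math.atan2(dy, dx)) - heading_deg
            bearing = (bearing + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
            if abs(bearing) <= fov_deg / 2.0:             # inside the angular extent
                cells.add((int(round((sensor_x + dx) / cell_size_m)),
                           int(round((sensor_y + dy) / cell_size_m))))
    return cells
```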

[0046] Upon generating customized visibility region 212 for each sensor, the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type. In one embodiment, the unified region determination module 158 receives, for each object type, the one or more customized visibility region 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, union of the one or more customized visibility region 212 of the plurality of sensors 109 of corresponding sensor type for corresponding object type and generates the unified visibility region 214. Based on the unified visibility region 214 generated for each sensor type, the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type. In one embodiment, the intersecting region determination module 159 determines intersection of the unified visibility region 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type.
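Editor's note: with visibility regions kept as sets of grid cells (the representation assumed in the sketch above), the union and intersection operations of paragraph [0046] reduce to plain set operations. The function and variable names below are illustrative only.

```python
from typing import Dict, Set, Tuple

Cell = Tuple[int, int]
Region = Set[Cell]

def unified_visibility_region(customized_by_sensor: Dict[str, Region]) -> Region:
    """Union of the customized regions of all sensors of one sensor type,
    for the obstacle of one object type."""
    unified: Region = set()
    for region in customized_by_sensor.values():
        unified |= region
    return unified

def intersecting_visibility_region(unified_by_sensor_type: Dict[str, Region]) -> Region:
    """Intersection of the unified regions across sensor types, for one object type."""
    regions = list(unified_by_sensor_type.values())
    if not regions:
        return set()
    intersecting = set(regions[0])
    for region in regions[1:]:
        intersecting &= region
    return intersecting
```

For instance, the unified LIDAR region and the unified RADAR region for one object type would each be computed with the first function, and the second function would then yield the region that both sensor types can observe for that object type.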

[0047] The reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. In one embodiment, the reasoning module 160 identifies a plurality of visibility regions in the intersecting visibility region 215 using occupancy grid mapping and estimates the probability of non-existence of one or more obstacles of corresponding object type in each of the plurality of visibility regions. The probability of non-existence of one or more obstacles in each visibility region, in one example, is calculated using the true positive probability and the false positive probability, as given in equation (1) below.

P(NX) = 1 - [P(Z|X) / (P(Z|X) + P(Z|NX))]    ... (1)

where:

P(NX) is the probability of non-existence of obstacles (event NX) in the visibility region;

P(Z|X) is the true positive probability, i.e., the probability that detection occurs (event Z) if the object exists (event X); and

P(Z|NX) is the false positive probability, i.e., the probability that detection occurs (event Z) if the object does not exist (event NX).
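Editor's note: equation (1) translates directly into code, as sketched below. The disclosure does not specify how the per-region probabilities are aggregated into the likelihood score 216, so taking the minimum over cells is only an illustrative assumption, as are the function names and inputs.

```python
from typing import Dict, Tuple

Cell = Tuple[int, int]

def p_non_existence(p_detect_given_exists: float,
                    p_detect_given_not_exists: float) -> float:
    """Equation (1): P(NX) = 1 - P(Z|X) / (P(Z|X) + P(Z|NX))."""
    denom = p_detect_given_exists + p_detect_given_not_exists
    return 1.0 - (p_detect_given_exists / denom) if denom > 0.0 else 1.0

def likelihood_score(cell_probs: Dict[Cell, Tuple[float, float]]) -> float:
    """Aggregate per-cell non-existence probabilities over an intersecting
    visibility region; cell_probs maps each cell to (P(Z|X), P(Z|NX)).
    Using the minimum (the weakest cell) is an assumption, not the disclosed method."""
    if not cell_probs:
        return 0.0
    return min(p_non_existence(tp, fp) for tp, fp in cell_probs.values())
```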

The reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of a certain object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of the corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, the detection capabilities of the sensor type, the range dependencies of the sensor and the weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type from the current time frame to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving.

[0048] Figure 3a depicts a flowchart of an exemplary method of determining visibility region of different object types for an autonomous vehicle in accordance with an embodiment of the present disclosure.

[0049] As illustrated in Figure 3a, the method 300 comprises one or more blocks implemented by the processor 150 for determining visibility region of different object types for autonomous vehicle. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions or implement particular abstract data types.

[0050] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300. Additionally, individual blocks may be deleted from the method 300 without departing from the scope of the subject matter described herein. Furthermore, the method 300 can be implemented in any suitable hardware, software, firmware, or combination thereof.

[0051] At block 302, sensor input 210 from plurality of sensors 109 of autonomous vehicle 103 is received. In one embodiment, the visibility region generation module 156 of VRDS 102 receives the sensor input 210 comprising visibility region dimensions and customized measurements from the plurality of sensors 109. The plurality of sensors 109 of one or more sensor types associated with the autonomous vehicle 103 records dimensions of visibility region i.e., viewable area of external environment of the autonomous vehicle 103. The customized measurements of the sensor associated with one of the one or more sensor types are specific measurement parameters associated with corresponding sensor type. For example, the customized measurements for the sensor of RADAR sensor type comprise RADAR Cross Sections (RCS) of identified object type, Doppler measurements including velocities and other related measurements based on capabilities of sensor type. The dimensions of the visibility region may comprise location coordinates or Global Positioning System (GPS) coordinates, reflection measurement of a sensor, angular dimensions of the sensor with respect to the visibility region, sensor limitation such as range, environment conditions etc. and other dimensional parameters associated with the visibility region. The plurality of sensors 109 sends the sensor input 210 as visibility region dimensions to the visibility region generation module 156.

[0052] At block 304, one or more customized visibility region 212 for each sensor is generated. In one embodiment, the visibility region generation module 156 generates one or more customized visibility region 212 for each of the plurality of sensors 109 (interchangeably referred to as each sensor) at a plurality of time frames based on the sensor input 210 received from corresponding sensor. For example, each of the one or more customized visibility region 212 is the visibility region observed by each sensor of the plurality of sensors 109 with respect to obstacle of specific object type at a current time frame. The at least one object type of the detected obstacle is determined based on the predefined object type characteristics 111 of different obstacle types and customized measurements of the sensor. In an embodiment, the visibility region generation module 156 generates for each sensor, one or more customized visibility region 212 i.e., an object-specific visibility region for at least one obstacle of at least one object type detected by the corresponding sensor using the visibility region dimensions received from the corresponding sensor. In another embodiment, the plurality of sensors 109 is configured to directly generate customized visibility region 212 for obstacles of different object types by determining dimensions of the visibility region. In one example, the one or more customized visibility region 212 for plurality of sensors of sensor type LIDAR is illustrated in Figure 3b. Figure 3b indicates customized visibility region 212 of four LIDAR sensors for one object type. In another example, the one or more customized visibility region for plurality of sensors of sensor type RADAR is illustrated in Figure 3c. Figure 3c indicates customized visibility region 212 of eight RADAR sensors for the same object type.

[0053] At block 306, unified visibility region for each sensor type is determined. In one embodiment, the unified region determination module 158 determines the unified visibility region 214 of each sensor type, for the obstacle of each object type. The unified region determination module 158 receives, for each object type, the one or more customized visibility region 212 generated for the plurality of sensors 109 of each sensor type. Further, the unified region determination module 158 determines, for the obstacle of each object type, union of the one or more customized visibility region 212 of the plurality of sensors 109 of corresponding sensor type for corresponding object type and generates the unified visibility region 214. In one example, Figure 3d illustrates the unified visibility region 214 for the sensor type LIDAR obtained by determining union of customized visibility region 212 of four LIDAR sensors shown in Figure 3b. In another example, Figure 3e illustrates the unified visibility region 214 for the sensor type RADAR obtained by determining union of customized visibility region 212 of eight RADAR sensors shown in Figure 3c.

[0054] At block 308, intersecting visibility region 215 for each object type is determined. In one embodiment, the intersecting region determination module 159 identifies the intersecting visibility region 215 for each object type based on the unified visibility region 214 generated for each sensor type. In one embodiment, the intersecting region determination module 159 determines intersection of the unified visibility region 214 of one or more sensor types and generates the intersecting visibility region 215 for each object type. In another embodiment, the intersecting region determination module 159 determines the intersecting visibility region 215 as intersection of visibility regions of the plurality of sensors of the same sensor type. In one example, Figure 3f illustrates the intersecting visibility region 215 obtained for one object type using the unified visibility regions of the LIDAR and RADAR sensor types depicted in Figure 3d and Figure 3e.

[0055] At block 310, likelihood of non-existence of obstacles of certain object type is determined. In one embodiment, the reasoning module 160 determines the likelihood score 216 of non-existence of one or more obstacles of a certain object type from the current time frame of the plurality of time frames to one of the plurality of time frames subsequent to the current time frame in each intersecting visibility region 215 of the autonomous vehicle 103. In one embodiment, the reasoning module 160 identifies a plurality of visibility regions in the intersecting visibility region 215 and estimates the probability of non-existence of one or more obstacles of corresponding object type in each of the plurality of visibility regions. The reasoning module 160 determines the likelihood score 216 of non-existence of obstacles of corresponding object type in the intersecting visibility region 215 using the probability of non-existence of one or more obstacles of corresponding object type in each of the plurality of visibility regions. Based on the determined likelihood score 216, the detection capabilities of the sensor type, the range dependencies of the sensor and the weather conditions, the reasoning module 160 determines the likelihood of non-existence of one or more obstacles of a certain object type in each intersecting visibility region 215 of the autonomous vehicle 103. Further, the reasoning module 160 sends the estimated likelihood to the autonomous driving system 106 to enable the autonomous driving system 106 to take appropriate decisions while driving. Thus, the system facilitates reasoning on non-existence of obstacles for the autonomous vehicle 103 by determining discrete visibility regions for different object types.

[0056] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words "comprising," "having," "containing," and "including," and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.

[0057] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., to be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD-ROMs, DVDs, flash drives, disks, and any other known physical storage media.