Title:
MANEUVER POINT GENERATION FOR HEAD-UP DISPLAYS IN VEHICLES
Document Type and Number:
WIPO Patent Application WO/2024/114881
Kind Code:
A1
Abstract:
The present disclosure relates to a method and system for generating a maneuver point. The maneuver point is configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by a head-up display (HUD). The method comprises obtaining a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle, segmenting the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle, and determining the maneuver point based on maneuver segments of a same maneuver type at approximately a same location. The system comprises a memory and at least one processing unit to perform the method.

Inventors:
BORSHCH IEVGENII (DE)
PRYAKHIN ALEXEY (DE)
SDOBNIKOV VIKTOR (UA)
Application Number:
PCT/EP2022/083507
Publication Date:
June 06, 2024
Filing Date:
November 28, 2022
Assignee:
HARMAN BECKER AUTOMOTIVE SYSTEMS GMBH (DE)
International Classes:
G01C21/36; G01C21/32; G06N3/0464
Foreign References:
EP2443418B1 (2018-12-05)
US20210300410A1 (2021-09-30)
Attorney, Agent or Firm:
BERTSCH, Florian (DE)
Claims:
CLAIMS

1. A method for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by a head-up display (HUD), comprising: obtaining a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle; segmenting the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle; and determining the maneuver point based on maneuver segments of a same maneuver type at approximately a same location.

2. The method of claim 1, wherein the maneuver type is one of a left turn, a right turn, a U-turn, a ramp entry, a ramp exit, a roundabout use, a lane merge and a lane separation.

3. The method of any one of claims 1 to 2, wherein the segmenting the plurality of vehicle trajectories further includes identifying, within each vehicle trajectory, a maneuver angle change of the identified maneuver type.

4. The method of claim 3, wherein the maneuver angle change indicates a change of a heading within each vehicle trajectory.

5. The method of any one of the preceding claims, wherein the segmenting is performed using a convolutional neural network with an input vector space comprising a heading, a speed and a time stamp of the vehicle trajectories of the plurality of vehicle trajectories.

6. The method of any one of the preceding claims, wherein the determining the maneuver point includes: clustering the maneuver segments of the same maneuver type at approximately the same location to generate a maneuver segment cluster; calculating a center of mass of the maneuver segment cluster; selecting a maneuver segment of the maneuver segment cluster corresponding to the center of mass; and estimating the maneuver point based on the selected maneuver segment.

7. The method of claim 6, wherein the clustering the maneuver segments of the same maneuver type at approximately the same location further includes clustering maneuver segments based on a maneuver angle change.

8. The method of any one of claims 6 to 7, wherein the clustering: further includes calculating extrema of headings of the maneuver segments of the same maneuver type at approximately the same location, and is based on unsupervised spatial clustering and the calculated extrema.

9. The method of any one of claims 6 to 8, wherein the clustering further includes excluding maneuver segments of the same maneuver type at approximately the same location from the maneuver segment cluster deviating in at least one of length, heading, maneuver angle change and location from a median length, a median heading, a median maneuver angle change and a median location of the maneuver segment cluster by more than two standard deviations.

10. The method of any one of claims 6 to 9, wherein the selecting the maneuver segment of the maneuver segment cluster corresponding to the center of mass includes selecting the maneuver segment based on at least one of length, heading, maneuver angle and location of the maneuver segment compared to the center of mass.

11. The method of any one of claims 6 to 10, wherein the estimating the maneuver point includes identifying a maneuver point in the selected maneuver segment.

12. The method of any one of the preceding claims, further comprising: obtaining a plurality of road geometries, each road geometry approximating a road, wherein the determining the maneuver point is further based on a road geometry at approximately the same location.

13. The method of claim 12, further comprising: comparing the estimated maneuver point with the road geometry at approximately the same location.

14. A system for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by a head-up display (HUD), comprising a memory and at least one processing unit, the memory comprising instructions, which, if performed by the at least one processing unit, cause the processing unit to: obtain a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle; segment the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle; and determine the maneuver point based on maneuver segments of a same maneuver type at approximately a same location.

15. The system of claim 14, wherein the memory further comprises instructions, which, if performed by the at least one processing unit, cause the at least one processing unit to perform the method of any one of claims 1 to 13.

Description:
Maneuver Point Generation for Head-Up Displays in Vehicles

TECHNICAL FIELD

[0001] The invention generally relates to the generation of maneuver points at which to display maneuver indications using a head-up display (HUD) and, more precisely, to the generation of the maneuver points based on real-world driving data.

BACKGROUND

[0002] An advanced driver assist system (ADAS) is typically aware of the layout of roads based on navigational databases. These navigational databases provide the road layout as lines approximating the geometry of the roads. Intersections are thus indicated as two intersecting lines, and turn lanes and ramps are indicated as lines connecting one line with another line. The lines thus do not indicate the real-world turn angles a vehicle actually follows when e.g. turning right or left at an intersection. Further, this line-based road layout does not indicate the width of the roads and thus cannot be used to determine at which point a turn should be initiated. Therefore, an ADAS cannot use the line-based road layout to determine turn instructions which reflect real-world turning behavior.

[0003] Therefore, it is an objective of the present invention to enable an ADAS to provide real-world turn instructions.

SUMMARY OF THE INVENTION

[0004] To achieve this objective, the present invention provides a method for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by an HUD, comprising obtaining a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle, segmenting the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle, and determining the maneuver point based on maneuver segments of a same maneuver type at approximately a same location and a road geometry at approximately the same location.

[0005] The present invention further provides a system for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding AR maneuver indication displayed by an HUD, comprising a memory and at least one processing unit, the memory comprising instructions, which, if performed by the at least one processing unit, cause the processing unit to obtain a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle, segment the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle, and determine the maneuver point based on maneuver segments of a same maneuver type at approximately a same location.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Embodiments of the present invention will be described with reference to the following appended drawings, in which like reference signs refer to like elements.

[0007] FIG. 1 illustrates a system overview according to embodiments of the present invention.

[0008] FIG. 2 shows a top-down view of an intersection according to embodiments of the present invention.

[0009] FIG. 3A and FIG. 3B provide a flowchart of a method for generating a maneuver point according to embodiments of the present invention.

[0010] FIG. 4A and FIG. 4B illustrate exemplary vehicle trajectory segmentations according to embodiments of the present invention.

[0011] FIG. 5A and FIG. 5B illustrate steps of the method of FIG. 3A and FIG. 3B based on the top-down view of the intersection of FIG. 2 according to embodiments of the present invention.

[0012] FIG. 6 illustrates a system configured for displaying turn indications according to embodiments of the present invention.

[0013] It should be understood that the above-identified drawings are in no way meant to limit the disclosure of the present invention. Rather, these drawings are provided to assist in understanding the invention. The person skilled in the art will readily understand that aspects of the present invention shown in one drawing may be combined with aspects in another drawing or may be omitted without departing from the scope of the present invention.

DETAILED DESCRIPTION

[0014] The present disclosure generally provides a method and a system for generating a maneuver point. The general concept of the method and the system will be explained in the following with reference to Fig. 1 and Fig. 2.

[0015] Fig. 1 shows a system overview 100, in which a windshield 101, a steering wheel 102, an AR maneuver indication 103, a maneuver point 104 and an HUD generation device 615 are shown. In other words, Fig. 1 provides the view of a driver looking out of windshield 101 onto a road. HUD generation device 615 is located below windshield 101 and projects AR maneuver indication 103 onto windshield 101 such that it appears to the driver to be located at maneuver point 104. Maneuver point 104 thus indicates the location for displaying AR maneuver indication 103. Maneuver point 104 further corresponds to the location at which the driver should commence the maneuver to be performed at maneuver point 104. Maneuver point 104 and corresponding AR maneuver indication 103 thus visually indicate to the driver where to start performing a maneuver, reflecting real-world driving behavior. In the example of Fig. 1, AR maneuver indication 103 includes three right-pointing arrows indicating that the driver should initiate a right turn at maneuver point 104.

[0016] To determine maneuver point 104, the method and the system according to the present disclosure rely on a plurality of vehicle trajectories and may further rely on a plurality of road geometries, of which examples are illustrated in Fig. 2.

[0017] Fig. 2 shows a top-down view 200 of a T-intersection. The road, which does not terminate at the T-intersection, includes a divider between the lanes of opposing directions indicated by the striped area between the lanes of opposing directions. The road terminating at the T-intersection likewise includes lanes of opposing direction but only includes a triangular shaped divider at its end. Further, a right-turn lane from the road not terminating at the T-intersection onto the road terminating at the T-intersection is provided, which is also separated from other lanes by a divider. Both additional dividers are also indicated as striped areas.

[0018] From the point of view of an ADAS, the T-intersection of Fig. 2 is approximated by road geometries 210, which form an exemplary plurality of road geometries. The road not terminating at the T-intersection is approximated in road geometries 210 by two lines due to the divider separating the lanes of opposing direction. Accordingly, the road terminating at the T-intersection is approximated in road geometries 210 by a single line since this road is undivided save for its very end. This single line terminates at a right angle at the line approximating the far lanes of the road not terminating at the T-intersection. The separated right-turn lane of Fig. 2 is illustrated in road geometries 210 by a line connecting the line approximating the lanes of the road not terminating at the T-intersection that are closer to the terminating road with the line approximating the road terminating at the T-intersection.

[0019] Fig. 2 further illustrates vehicle trajectories 220, 230 and 240, which form corresponding pluralities of vehicle trajectories. Each vehicle trajectory indicates a driving trajectory recorded by a vehicle. For example, the vehicle trajectories may have been recorded by a dedicated vehicle fleet of a vehicle manufacturer or an ADAS manufacturer to obtain a database of vehicle trajectories. The vehicle trajectories may also be recorded by vehicle owners voluntarily providing recordings of their vehicle usage to e.g. the vehicle manufacturer or to the ADAS manufacturer. In other words, vehicle trajectories 220, 230 and 240 correspond to paths taken by turning vehicles at the T-intersection shown in Fig. 2. Accordingly, vehicle trajectories 220, 230 and 240 may at least indicate a heading and a speed recorded at specific times, i.e. the heading and the speed may be indicated with an associated time stamp. For example, vehicle trajectories 220, 230 and 240 may include a recording of the heading and the speed of the vehicle every second, every five seconds or every ten seconds. It will be understood that these time intervals are provided as an example and may be chosen as short or as long as required to determine maneuver points 104 based on vehicle trajectories 220, 230 and 240. Vehicle trajectories 220, 230 and 240 may further include geolocation information.
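
For illustration only, a trajectory sample of this kind could be represented as in the following minimal Python sketch; all field names, units and the sampling scheme are assumptions rather than part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TrajectorySample:
    """One recorded sample of a vehicle trajectory (names are illustrative)."""
    timestamp: float   # seconds since start of recording
    heading: float     # degrees, 0 = north, clockwise positive
    speed: float       # meters per second
    lat: float         # optional geolocation, e.g. WGS84
    lon: float

# A vehicle trajectory is then simply an ordered list of samples,
# e.g. one sample per second as suggested in the description.
Trajectory = List[TrajectorySample]
```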

[0020] To avoid overcrowding Fig. 2, vehicle trajectories 220, 230 and 240 are limited to their respective turn segments, i.e. maneuver segments. However, it will be understood that vehicle trajectories 220, 230 and 240 in reality include straight segments preceding and following the turn segments shown in Fig. 2.

[0021] For the exemplary T-intersection of Fig. 2, the method and the system according to the present disclosure determine maneuver points 104a to 104c shown in Fig. 5B based on vehicle trajectories 220, 230 and 240 and may verify maneuver points 104a to 104c based on road geometries 210 as will be discussed in the following with reference to the flowchart of Fig. 3A and Fig. 3B.

[0022] FIG. 3A and FIG. 3B provide a flowchart of a method 300 for generating maneuver point 104. Optional steps of method 300 are indicated as dashed boxes. Sub-steps of method 300 are indicated as boxes inside the boxes of their corresponding main steps. As stated above, maneuver point 104 is configured to indicate a location for displaying corresponding AR maneuver indication 103 displayed by HUD generation device 615. Maneuver point 104 thus indicates where a driver should initiate the maneuver indicated by AR maneuver indication 103. It will be understood that the maneuver points generated by the following steps of method 300 may also be used by an ADAS configured for autonomous driving as the location where the ADAS initiates the corresponding maneuver.

[0023] In step 310, method 300 obtains a plurality of vehicle trajectories, such as the pluralities of vehicle trajectories respectively formed by vehicle trajectories 220, 230 and 240, which each indicate a driving trajectory recorded by a vehicle. Method 300 may thus obtain in step 310 e.g. a database of all vehicle trajectories recorded by a provider of the vehicle trajectory data. In some examples of the present disclosure, method 300 may also query, as part of step 310, a centralized database via an air interface to obtain pluralities of vehicle trajectories based on a current location of a vehicle.

[0024] In step 320, method 300 segments the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types. Each maneuver type indicates a maneuver performed by the recording vehicle and may be one of a left turn, a right turn, a U-turn, a ramp entry, a ramp exit, a roundabout use, a lane merge and a lane separation. More generally, it will be understood that a maneuver may be any deviation of a driving vehicle from continuing along one road in order to enter another road, either via a simple turn onto the other road or via dedicated turn infrastructure, such as a ramp, a turn lane or a roundabout. In step 320, method 300 thus segments vehicle trajectories into maneuver segments and continuation segments, i.e. segments of the vehicle trajectories continuing along a road, and identifies for each maneuver segment the corresponding maneuver type.

[0025] In some examples of the present disclosure, step 320 may include a step 321, in which method 300 identifies, within each vehicle trajectory, a maneuver angle change of the identified maneuver type. A maneuver angle change may indicate a change of a heading within each vehicle trajectory. The identification of the maneuver angle change may for example improve the differentiation between maneuvers of the same type at the same location but with different angles. For example, an intersection may include multiple left turn options, such as a sharp left turn and a slight left turn. Accordingly, vehicle trajectories at such an intersection may include two left turn segments, which only differ in terms of their respective maneuver angle changes. Based on step 321, method 300 may be able to differentiate between these two left turns.
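
A maneuver angle change of this kind could, purely as an illustration, be computed from the recorded headings of a maneuver segment as in the following sketch; the wrap-around handling and the use of degrees are assumptions.

```python
import numpy as np

def maneuver_angle_change(headings_deg: np.ndarray) -> float:
    """Total signed heading change over a maneuver segment (illustrative).

    Per-sample heading differences are wrapped into [-180, 180) so that a
    crossing of the 0/360 boundary does not produce a spurious jump; summing
    the wrapped steps yields e.g. roughly +90 for a typical right turn and
    roughly +-180 for a U-turn.
    """
    steps = np.diff(headings_deg)
    steps = (steps + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
    return float(steps.sum())

# Example: a right turn recorded as headings drifting from 0 to 90 degrees.
print(maneuver_angle_change(np.array([0.0, 10.0, 35.0, 60.0, 85.0, 90.0])))  # 90.0
```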

[0026] Method 300 may perform step 320 and step 321 by analyzing the heading, the speed and the associated time stamps of vehicle trajectories 220, 230 and 240. For example, method 300 may define thresholds for heading changes over time in order to differentiate e.g. between a right turn and a ramp entry. The speed may e.g. be taken into account to determine the probability of a maneuver being performed at such a speed. In some examples of the present disclosure, steps 320 and 321 may be performed using a convolutional neural network (CNN) with an input vector space comprising the headings, the speeds and the associated time stamps of vehicle trajectories 220, 230 and 240. The output of such a CNN may then be the respective maneuver segments and the corresponding maneuver type of the different maneuver types. In other words, the output of the CNN may indicate both the type and the geometrical extent of the maneuver. For example, method 300 may use a U-Net to perform the segmentation of vehicle trajectories 220, 230 and 240.
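
The disclosure does not specify the network beyond naming a CNN and U-Net, so the following PyTorch sketch is only a much simplified stand-in showing the general shape of a per-sample segmentation over (heading, speed, time stamp) inputs; the class count, layer sizes and tensor layout are assumptions.

```python
import torch
import torch.nn as nn

N_CLASSES = 9  # assumed: continuation + the eight maneuver types of claim 2

class TrajectorySegmenter(nn.Module):
    """Per-sample maneuver classification over (heading, speed, timestamp).

    A deliberately small stand-in for the U-Net mentioned in the text:
    1D convolutions over the time axis label every trajectory sample with
    a maneuver type, which implicitly yields the maneuver segments.
    """
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=4, dilation=2), nn.ReLU(),
            nn.Conv1d(32, N_CLASSES, kernel_size=1),  # per-sample class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, n_samples) -> logits: (batch, N_CLASSES, n_samples)
        return self.net(x)

# One trajectory of 120 samples: rows are heading, speed and timestamp.
traj = torch.randn(1, 3, 120)
labels = TrajectorySegmenter()(traj).argmax(dim=1)  # (1, 120): type per sample
```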

[0027] Figs. 4A and 4B provide an example of step 320. Figs. 4A and 4B respectively show a vehicle trajectory 410 and a vehicle trajectory 420. The heading of vehicle trajectories 410 and 420 is indicated by an arrow at the end of vehicle trajectories 410 and 420. It will be understood that this is a simplification and that vehicle trajectories 410 and 420 may also be illustrated as a chain of vectors indicating segments of vehicle trajectories 410 and 420 having constant headings with the length of the vectors corresponding to the length of the segments of constant heading, as e.g. derived based on the time stamps and speeds included in vehicle trajectories 410 and 420.

[0028] Within vehicle trajectories 410 and 420, method 300 identifies in step 320 maneuver segments 411 to 414 and maneuver segments 421 to 424, respectively. Accordingly, method 300 identifies a left turn, a U-turn, a right turn and a ramp entry within vehicle trajectory 410. Regarding vehicle trajectory 420, method 300 identifies a left turn, a right turn, a left turn and again a right turn in the direction of travel.

[0029] In step 330, method 300 determines maneuver point 104 based on maneuver segments of a same maneuver type at approximately a same location. As stated above, for reasons of simplicity Fig. 2 only shows the maneuver segments of exemplary vehicle trajectories 220 to 240. That is, Fig. 2 can be considered the output of step 320 for the T-intersection shown in Fig. 2 and can accordingly be considered the input of step 330. Thus, taking the right turn from the road terminating at the T-intersection onto the road not terminating at the T-intersection as an example, method 300 determines in step 330 maneuver point 104a shown in Figs. 5A and 5B based on the five right-turn segments of vehicle trajectories 220 shown in Fig. 2. Step 330 may e.g. determine maneuver point 104a by determining a median maneuver segment of vehicle trajectories 220, as discussed in the following.

[0030] It will be understood that the determination of maneuver point 104 takes into account maneuver segments at approximately a same location to account for slight variations of the location of the respective maneuver segments of the same maneuver type due to e.g. a road width, a presence of more than one turn lane, and the real-world behavior of drivers, who typically do not all take an identical turn at a given intersection.

[0031] Step 330 may include a step 331, in which method 300 may cluster the maneuver segments of the same maneuver type at approximately the same location to generate a maneuver segment cluster. In other words, method 300 may in step 331 identify and group together maneuver segments of the same maneuver type at approximately the same location. In the context of Fig. 2, method 300 may in step 331 respectively group together all maneuver segments at the T-intersection corresponding to one of the three turns indicated in Fig. 2 to form one of the three maneuver segment clusters possible at this intersection. In examples of the present disclosure, in which method 300 also identifies the maneuver angle change to differentiate between e.g. a slight right turn and a sharp right turn at approximately the same location, method 300 may in step 331 also cluster the maneuver segments based on a maneuver angle change.

[0032] Step 331 may further include a step 331a, in which method 300 may calculate extrema of headings of the maneuver segments of the same maneuver type at approximately the same location. In other words, method 300 may in step 331a determine an upper and a lower end of a heading range of maneuver segments of the same maneuver type at approximately the same location. Method 300 may use these extrema to cluster maneuver segments of the same maneuver type at approximately the same location based on unsupervised spatial clustering, e.g. based on density-based spatial clustering of applications with noise (DBSCAN).
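
As an illustration of steps 331 and 331a, the sketch below clusters per-segment feature vectors with scikit-learn's DBSCAN; the exact feature set and its scaling are assumptions, since the disclosure only states that the heading extrema feed an unsupervised spatial clustering.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# One row per maneuver segment of the same maneuver type: location of the
# segment (assumed here as x/y offsets in km from a tile origin) plus the
# heading extrema of step 331a (assumed scaled by 1/100 to comparable units).
features = np.array([
    [0.010, 0.020, 0.10, 0.95],
    [0.011, 0.021, 0.12, 0.97],
    [0.009, 0.019, 0.09, 0.94],
    [0.560, 0.810, 0.11, 0.96],  # same turn type at a different intersection
])

# Segments close in location and heading range form one maneuver segment
# cluster; the distant segment is left out as noise (label -1).
labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(features)
print(labels)  # [0 0 0 -1]
```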

[0033] Step 331 may further include a step 331b, in which method 300 excludes maneuver segments of the same maneuver type at approximately the same location from the maneuver segment cluster, which deviate in at least one of length, heading, maneuver angle change, speed and location from a median length, a median heading, a median maneuver angle change, a median speed and a median location of the maneuver segment cluster by more than two standard deviations. By excluding such maneuver segments, method 300 may be able to ensure a more robust maneuver point determination.
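
Step 331b could, for example, be sketched as the following NumPy filter; treating every feature column uniformly and measuring the deviation around the per-column median with the per-column standard deviation are assumptions.

```python
import numpy as np

def filter_cluster(features: np.ndarray) -> np.ndarray:
    """Drop segments more than two standard deviations from the cluster median.

    `features` holds one row per maneuver segment with columns such as
    length, heading, maneuver angle change, speed and location, as in the
    description; a segment survives only if every column stays within the
    two-sigma band around the per-column median.
    """
    median = np.median(features, axis=0)
    std = features.std(axis=0)
    keep = np.all(np.abs(features - median) <= 2.0 * std, axis=1)
    return features[keep]
```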

[0034] Step 330 may further include a step 332, in which method 300 may calculate a center of mass of the maneuver segment cluster. The center of mass may be understood as a median maneuver segment of the maneuver segment cluster. That is, the center of mass indicates a median of the length, the heading, the maneuver angle change, the speed and the location of the maneuver segments included in the maneuver segment cluster.

[0035] Step 330 may further include a step 333, in which method 300 may select a maneuver segment of the maneuver segment cluster corresponding to the center of mass calculated in step 332. In other words, method 300 compares all maneuver segments of the maneuver segment cluster against the center of mass and selects the maneuver segment closest to the center of mass. That is, the selection may be based on at least one of length, heading, maneuver angle and location of the maneuver segment compared to the center of mass.
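
Steps 332 and 333 could be sketched together as follows, with the center of mass taken as the per-column median of the cluster's feature vectors, as suggested above; the Euclidean distance used for the comparison is an assumption.

```python
import numpy as np

def select_representative(features: np.ndarray) -> int:
    """Pick the index of the segment nearest the cluster's 'center of mass'.

    The center of mass is taken, as in the description, to be the per-column
    median of the cluster's feature vectors (length, heading, maneuver angle
    change, speed, location). Returning a real, recorded segment rather than
    the synthetic median itself keeps the result tied to actual driving.
    """
    center = np.median(features, axis=0)                  # step 332
    distances = np.linalg.norm(features - center, axis=1)
    return int(distances.argmin())                        # step 333
```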

[0036] Step 330 may further include a step 334, in which method 300 may estimate maneuver point 104 based on the maneuver segment selected in step 333. To estimate maneuver point 104, method 300 may in step 334 e.g. analyze the heading data included in the maneuver segment in order to identify changes in the heading indicative of the maneuver point. To this end, step 334 may include a step 334a, in which method 300 may identify maneuver point 104 in the selected maneuver segment. This identification may for example include identifying when the derivative of the heading over time becomes non-zero.
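
A possible reading of step 334a is sketched below: the maneuver point is the first sample at which the heading's time derivative leaves a small tolerance band around zero. The tolerance value and the use of a finite-difference gradient are assumptions.

```python
import numpy as np

def estimate_maneuver_point(timestamps: np.ndarray,
                            headings_deg: np.ndarray,
                            tol_deg_per_s: float = 1.0) -> int:
    """Index of the first sample where the heading starts to change.

    The heading's time derivative is approximately zero while the vehicle
    drives straight and becomes non-zero once the turn begins;
    `tol_deg_per_s` absorbs sensor noise and is an assumed parameter.
    """
    rate = np.gradient(headings_deg, timestamps)  # degrees per second
    moving = np.abs(rate) > tol_deg_per_s
    return int(np.argmax(moving)) if moving.any() else len(headings_deg) - 1

t = np.arange(10.0)
h = np.array([0, 0, 0, 0, 2, 10, 30, 60, 85, 90], dtype=float)
print(estimate_maneuver_point(t, h))  # 4: where the right turn sets in
```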

[0037] Selecting a maneuver segment in step 333 and estimating maneuver point 104 based on this selection in step 334 ensure that maneuver point 104, and thereby AR maneuver indication 103, reflect real-world driving behavior, as opposed to merely determining maneuver point 104 based on the center of mass of the maneuver segment cluster.

[0038] In step 340, method 300 may obtain a plurality of road geometries, such as road geometries 210, which approximate roads, as discussed with reference to Fig. 2. In examples of the present disclosure, in which method 300 obtains the plurality of road geometries, method 300 determines maneuver point 104 further based on road geometry 210 at approximately the same location. Basing the determination of maneuver point 104 on road geometries 210 may serve as a verification of maneuver point 104. In other words, basing the determination of maneuver point 104 on road geometries 210 may ensure that the determined maneuver point is in fact on a road and not next to the road. To this end, method 300 may further include step 350, which compares estimated maneuver point 104 with road geometry 210 at approximately the same location. The comparison may e.g. be based on an average width of a road, which determines a location range around the straight lines defined by road geometries 210. If maneuver point 104 is located within the location range, maneuver point 104 can be considered a plausible maneuver point. The location range may also take into account at least one of the type of road, e.g. an inner city road or an interstate, and a number of lanes of the road. In some examples, road geometries 210 may define the actual width of a road at a given location and may thereby define the location range.
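
The plausibility check of step 350 could be sketched as a point-to-polyline distance test against the location range, as below; representing road geometries 210 as planar x/y polylines and the half-width value are assumptions.

```python
import numpy as np

def on_road(point: np.ndarray, polyline: np.ndarray, half_width: float) -> bool:
    """Is the maneuver point within half a road width of the road geometry?

    `polyline` is a road geometry 210 as an (n, 2) array of x/y coordinates;
    `half_width` is the location range discussed above (half of an average or
    actual road width, both treated here as given). Names are illustrative.
    """
    best = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        # Projection parameter of the point onto segment a-b, clamped to [0, 1].
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.linalg.norm(point - (a + t * ab)))
    return best <= half_width

road = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0]])  # L-shaped geometry
print(on_road(np.array([98.0, 2.0]), road, half_width=3.5))  # True: plausible
```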

[0039] Figs. 5A and 5B provide an example of step 330 of method 300 and its sub-steps.

Figs. 5A and 5B correspond to Fig. 2 at various intermediate steps of step 330.

[0040] In Fig. 5A, the respective maneuver segment clusters at the T-intersection as determined by step 331 based on vehicle trajectories 220, 230 and 240 are shown. Further, Fig. 5A indicates the respective centers of mass (CoM) as calculated by step 332. As can be seen, the CoM in the example of Fig. 5A can be considered a median maneuver segment.

[0041] In Fig. 5B, only the maneuver segments selected by step 333 from the three maneuver segment clusters are shown. As can be seen in comparison with Fig. 5A, these maneuver segments are the maneuver segments closest to the respective centers of mass. Based on the maneuver segments selected by step 333, step 334 has estimated maneuver points 104a to 104c, which correspond approximately to the locations where the headings of these segments start to change. Fig. 5B thus shows the end result of method 300 for the T-intersection of Fig. 2. These maneuver points may thus be used by an HUD to display the corresponding AR maneuver indications 103 or may be used by an autonomous ADAS to initiate a turn.

[0042] Fig. 6 illustrates a system 600, which comprises an AR maneuver indication generation section 610 and a maneuver point generation section 620. AR maneuver indication generation section 610 generates AR maneuver indications, as illustrated by way of example by AR maneuver indication 103 in Fig. 1. Maneuver point generation section 620 generates maneuver points in accordance with method 300 described above. Both sections may be located in a vehicle. Alternatively, AR maneuver indication generation section 610 may be located in a vehicle while maneuver point generation section 620 may e.g. be located in a datacenter. Further, AR maneuver indication generation section 610 may also be replaced with or provided in addition to an autonomous ADAS, i.e. a self-driving section, which uses the maneuver points generated by maneuver point generation section 620 to initiate maneuvers.

[0043] AR maneuver indication generation section 610 may include a gyroscope 611, an accelerometer 612 and a global navigation satellite system (GNSS) interface 613. Gyroscope 611 measures at least an orientation and an angular velocity of the vehicle. Gyroscope 611 may thus generally be used to determine a heading of the vehicle and may e.g. be used to record a heading of the vehicle when recording vehicle trajectories. Accelerometer 612 measures the acceleration of the vehicle and may accordingly be used to determine the speed of the vehicle and to record the speed of the vehicle when recording vehicle trajectories. GNSS interface 613 provides connectivity with a satellite constellation which provides positioning, navigation, and timing services and may thus be used to determine at least the location and may further be used to determine the heading and the speed of the vehicle.

[0044] It will be understood that AR maneuver indication generation section 610 may include additional sensors or may rely solely on GNSS interface 613 or on gyroscope 611 and accelerometer 612 in order to obtain the speed, heading and location of the vehicle.

[0045] AR maneuver indication generation section 610 may further include a system on chip (SoC) 614 and HUD generation device 615. As discussed with reference to Fig. 1, HUD generation device 615 is configured to generate an HUD on windshield 101, which may e.g. display AR maneuver indication 103. SoC 614 generally provides the functionality to control HUD generation device 615. SoC 614 may thus be coupled to maneuver point generation section 620 to receive maneuver points generated by maneuver point generation section 620 in accordance with method 300. SoC 614 may further be coupled to gyroscope 611, accelerometer 612 and GNSS interface 613, e.g. via a bus 610B1. Bus 610B1 may be any kind of bus system suitable for use in a vehicle, such as a controller area network (CAN) bus. SoC 614 may use these inputs to determine the correct projection of AR maneuver indication 103 onto windshield 101 so that it appears, from the point of view of the driver, to be located at maneuver point 104.

[0046] To this end, SoC 614 may include a navigation unit 614a, a sensor interface 614b and an HUD controller 614c. Navigation unit 614a may include navigation data, such as map data and point of interest (POI) data, and may determine and provide navigational guidance to be used by SoC 614 to guide a driver e.g. using AR maneuver indication 103. Sensor interface 614b provides an interface with gyroscope 611, accelerometer 612 and GNSS interface 613, either directly or via bus 610B1, in order to obtain the respective sensor data for processing by SoC 614 and for forwarding e.g. to maneuver point generation section 620. HUD controller 614c generates control signals for HUD generation device 615 to display AR maneuver indication 103 based on the navigational guidance from navigation unit 614a and the maneuver points generated by maneuver point generation section 620.

[0047] Maneuver point generation section 620 is configured to perform method 300. Maneuver point generation section 620 may include a processor 626, a graphics processing unit (GPU) 622, a memory 623, a bus 620B, a storage 621, a removable storage 624, and a communications interface 625.

[0048] Processor 626 may be any kind of single-core or multi-core processing unit employing a reduced instruction set (RISC) or a complex instruction set (CISC). Exemplary RISC processing units include ARM-based cores or RISC-V-based cores. Exemplary CISC processing units include x86-based cores or x86-64-based cores. Processor 626 may further be an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) specially tailored or programmed, respectively, to perform method 300. Processor 626 may perform instructions causing maneuver point generation section 620 to perform method 300. Processor 626 may be directly coupled to any of the components of maneuver point generation section 620 or may be directly coupled to memory 623, GPU 622 and bus 620B.

[0049] GPU 622 may be any kind of processing unit optimized for processing graphics-related instructions or, more generally, for parallel processing of instructions. As such, GPU 622 may perform part or all of method 300 to enable fast parallel processing of instructions relating to method 300. It should be noted that in some embodiments, processor 626 may determine that GPU 622 need not perform instructions relating to method 300. GPU 622 may be directly coupled to any of the components of maneuver point generation section 620 or may be directly coupled to processor 626 and memory 623. GPU 622 may also be coupled to a display via connection 622C. In some embodiments, GPU 622 may also be coupled to bus 620B. Given that maneuver point generation section 620 may be part of a datacenter or may be integrated into a processing unit of a vehicle, at least connection 622C may be omitted. In some examples of the present disclosure, GPU 622 may also be omitted, if e.g. the parallel processing capability of GPU 622 is deemed not required to perform method 300.

[0050] Memory 623 may be any kind of fast storage enabling processor 626 and GPU 622 to store instructions for fast retrieval during processing of the instructions as well as to cache and buffer data. Memory 623 may be a unified memory coupled to both processor 626 and GPU 622, enabling allocation of memory 623 to processor 626 and GPU 622 as needed. Alternatively, processor 626 and GPU 622 may be coupled to a separate processor memory 623b and GPU memory 623a, respectively.

[0051] Storage 621 may be a storage device enabling storage of program instructions and other data. For example, storage 621 may be a hard disk drive (HDD), a solid state disk (SSD) or some other type of non-volatile memory. Storage 621 may for example store the instructions of method 300.

[0052] Removable storage 624 may be a storage device which can be removably coupled with maneuver point generation section 620. Examples include a digital versatile disc (DVD), a compact disc (CD), a Universal Serial Bus (USB) storage device, such as an external SSD, or a magnetic tape. Removable storage 624 may for example be used to provide road geometries 210 to maneuver point generation section 620 and thereby to method 300 or to store generated maneuver points. It should be noted that removable storage 624 may also store other data, such as instructions of method 300, or may be omitted.

[0053] Storage 621 and removable storage 624 may be coupled to processor 626 via bus 620B. Bus 620B may be any kind of bus system enabling processor 626 and optionally GPU 622 to communicate with storage 621 and removable storage 624. Bus 620B may for example be a Peripheral Component Interconnect express (PCIe) bus or a Serial AT Attachment (SATA) bus.

[0054] Communications interface 625 may enable maneuver point generation section 620 to interface with AR maneuver indication generation section 610 as well as external devices, either directly or via a network, via connection 625C. Communications interface 625 may for example enable maneuver point generation section 620 to couple to bus 610B2, which may be a CAN bus or any bus system appropriate in vehicles, as well as to a wired or wireless network, such as Ethernet or Wi-Fi. As shown, maneuver point generation section 620 may be coupled with AR maneuver indication generation section 610 via connection 625C and bus 610B2 in order to provide the generated maneuver points to SoC 614. In this context, it will be understood that in examples of the present disclosure, in which maneuver point generation section 620 is located in a vehicle, buses 610B1 and 610B2 may be a single bus, via which sensor data may be provided directly to communications interface 625. Communications interface 625 may also provide connectivity via a USB port or a serial port to enable direct wired communication with an external device.

[0055] It will be understood that the system of Fig. 6 may include further elements as required to provide the processing, sensor data and connectivity needed to implement method 300 and to enable the processing of the maneuver points generated by method 300, e.g. in an HUD or in an autonomous driving function.

[0056] The invention may further be illustrated by the following examples.

[0057] In an example, a method for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by a head-up display (HUD), comprising obtaining a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle, segmenting the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle; and determining the maneuver point based on maneuver segments of a same maneuver type at approximately a same location.

[0058] In an example, the maneuver type may be one of a left turn, a right turn, a U-turn, a ramp entry, a ramp exit, a roundabout use, a lane merge and a lane separation.

[0059] In an example, the segmenting the plurality of vehicle trajectories may further include identifying, within each vehicle trajectory, a maneuver angle change of the identified maneuver type.

[0060] In an example, the maneuver angle change may indicate a change of a heading within each vehicle trajectory.

[0061] In an example, the segmenting may be performed using a convolutional neural network with an input vector space comprising a heading, a speed and a time stamp of the vehicle trajectories of the plurality of vehicle trajectories.

[0062] In an example, the determining the maneuver point may include clustering the maneuver segments of the same maneuver type at approximately the same location to generate a maneuver segment cluster, calculating a center of mass of the maneuver segment cluster, selecting a maneuver segment of the maneuver segment cluster corresponding to the center of mass, and estimating the maneuver point based on the selected maneuver segment.

[0063] In an example, the clustering the maneuver segments of the same maneuver type at approximately the same location may further include clustering maneuver segments based on a maneuver angle change.

[0064] In an example, the clustering may further include calculating extrema of headings of the maneuver segments of the same maneuver type at approximately the same location, and may be based on unsupervised spatial clustering and the calculated extrema.

[0065] In an example, the clustering may further include excluding maneuver segments of the same maneuver type at approximately the same location from the maneuver segment cluster deviating in at least one of length, heading, maneuver angle change and location from a median length, a median heading, a median maneuver angle change and a median location of the maneuver segment cluster by more than two standard deviations.

[0066] In an example, the selecting the maneuver segment of the maneuver segment cluster corresponding to the center of mass may include selecting the maneuver segment based on at least one of length, heading, maneuver angle and location of the maneuver segment compared to the center of mass.

[0067] In an example, the estimating the maneuver point may include identifying a maneuver point in the selected maneuver segment.

[0068] In an example, the method may further comprise obtaining a plurality of road geometries, each road geometry approximating a road, wherein the determining the maneuver point may further be based on a road geometry at approximately the same location.

[0069] In an example, the method may further comprise comparing the estimated maneuver point with the road geometry at approximately the same location.

[0070] In an example, a system for generating a maneuver point, the maneuver point being configured to indicate a location for displaying a corresponding augmented reality (AR) maneuver indication displayed by a head-up display (HUD), comprising a memory and at least one processing unit, the memory comprising instructions, which, if performed by the at least one processing unit, cause the processing unit to obtain a plurality of vehicle trajectories, each vehicle trajectory indicating a driving trajectory recorded by a vehicle, segment the plurality of vehicle trajectories to identify, within each vehicle trajectory, a maneuver segment based on one or more maneuver types, each maneuver type indicating a maneuver performed by the recording vehicle, and determine the maneuver point based on maneuver segments of a same maneuver type at approximately a same location.

[0071] In an example, the memory may further comprise instructions, which, if performed by the at least one processing unit, may cause the at least one processing unit to perform the method of any of the preceding examples.

[0072] The preceding description has been provided to illustrate the method and the system for generating a maneuver point. It should be understood that the description is in no way meant to limit the scope of the invention to the precise embodiments discussed throughout the description. Rather, the person skilled in the art will be aware that these embodiments may be combined, modified or condensed without departing from the scope of the invention as defined by the following claims.