Title:
PROVIDING INFORMATION IN A DIGITAL MAP
Document Type and Number:
WIPO Patent Application WO/2024/099563
Kind Code:
A1
Abstract:
Various example embodiments relate to providing information to a user via digital map data. According to an aspect, a method for navigating to a destination may comprise using a first navigation mode for navigating to a destination using a digital map; determining independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, automatically switching from the first navigation mode to a second navigation mode; and rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map.

Inventors:
MEI JIANHAN (SE)
FANG CHUN (SE)
WANG TINGHUAI (SE)
WANG GUANGMING (SE)
Application Number:
PCT/EP2022/081397
Publication Date:
May 16, 2024
Filing Date:
November 10, 2022
Assignee:
HUAWEI TECH CO LTD (CN)
MEI JIANHAN (SE)
International Classes:
G01C21/36
Domestic Patent References:
WO2013184249A12013-12-12
Other References:
NAVIGON AG: "Navigon 70 Plus/Premium/Premium Live - User's manual", 1 August 2010 (2010-08-01), pages 1 - 136, XP055279997, Retrieved from the Internet [retrieved on 20160613]
Attorney, Agent or Firm:
HUAWEI EUROPEAN IPR (DE)
Claims:
CLAIMS

1. A method for navigating to a destination, the method comprising: using a first navigation mode for navigating to a destination using a digital map; determining independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, automatically switching from the first navigation mode to a second navigation mode; and rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map.

2. The method according to claim 1, further comprising: switching from the first navigation mode to the second navigation mode to guide the user to the destination, wherein the first navigation mode comprises a vehicle navigation mode and the second navigation mode comprises a walk navigation mode.

3. The method according to claim 1 or 2, further comprising: determining independently from user actions a second trigger to activate a navigation mode switch; and in response to determining the second trigger to activate the navigation mode switch, automatically switching from the second navigation mode back to the first navigation mode.

4. The method according to any of claims 1 - 3, wherein rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map comprises: providing a virtual reality view comprising the additional information to highlight the at least one entity.

5. The method according to any of claims 1 - 3, wherein rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map comprises: providing an augmented reality view using data from a camera of a user device, the augmented reality view comprising the additional information to highlight the at least one entity.

6. The method according to any of claims 1 - 5, wherein the at least one entity comprises a point of interest.

7. The method according to any of claims 1 - 6, wherein the first trigger and/or the second trigger comprises at least one of the following: a navigation status change from driving to walking; a user defined trigger; a speed associated with a user; a distance from the destination; a location environment change of the user; a user exercise status change; a status associated with a car; a status associated with the user; or routing complexity between a current position and the destination.

8. A method for enhancing digital map data, the method comprising: obtaining image data from a camera of a user device; extracting at least one information section from the image data; receiving a first selection of an information section of the at least one information section; obtaining first additional information associated with the selected information section; and displaying the first additional information in the digital map data.

9. The method according to claim 8, further comprising: obtaining location information of the user device; and using the location information in extracting the at least one information section from the image data.

10. The method according to claim 8 or 9, further comprising: receiving a second selection of an information section in the first additional information; obtaining second additional information associated with the selected information section; and displaying the second additional information in the digital map data.

11. The method according to any of claims 8 - 10, wherein the image data comprises still image data or video image data.

12. The method according to any of claims 8 - 11, wherein displaying the additional information in digital map data comprises: providing a virtual reality view comprising the additional information.

13. The method according to any of claims 8 - 11, wherein displaying the additional information in digital map data comprises: providing an augmented reality view using image data from the camera, the augmented reality view comprising the additional information.

14. A device for navigating to a destination, the device being configured to: use a first navigation mode for navigating to a destination using a digital map; determine independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, automatically switch from the first navigation mode to a second navigation mode; and render in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map.

15. A device for enhancing digital map data, the device being configured to: obtain image data from a camera of a user device; extract at least one information section from the image data; receive a first selection of an information section of the at least one information section; obtain first additional information associated with the selected information section; and display the first additional information in the digital map data.

16. A computer program comprising program code configured to cause performance of the method according to any of claims 1 - 7, when the computer program is executed on a computer.

17. A computer program comprising program code configured to cause performance of the method according to any of claims 8 - 13, when the computer program is executed on a computer.

Description:
PROVIDING INFORMATION IN A DIGITAL MAP

TECHNICAL FIELD

Various example embodiments generally relate to the field of digital maps and providing information via the digital maps. In particular, some example embodiments relate to enhancing digital map data.

BACKGROUND

Digital maps may be used to provide information, for example, for navigation purposes or when searching for a specific geographical location. The digital map may also include additional information about some locations. These locations may be called points of interest (POIs). A POI may refer, for example, to a restaurant, a shopping centre, a tourist attraction, a company, etc. POIs are widely used in map drawings and automotive navigation systems. When navigation is used, the digital map may display a selected route in a two-dimensional map. Another possibility for navigation is to utilize augmented reality. For example, a user may be walking to a destination in a city environment using a mobile device, for example, a smart phone. When the camera of the mobile device is pointed in the walking direction of the user, additional information, for example, turning directions, can be augmented on the camera view provided on the display of the mobile device.

SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

It is an objective of the present disclosure to enable provision of additional information in digital map data. Further implementation forms are apparent from the dependent claims, the description, and the drawings.

According to a first aspect, a method for navigating to a destination is provided. The method comprises using a first navigation mode for navigating to a destination using a digital map; determining independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, automatically switching from the first navigation mode to a second navigation mode; and rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map. The solution may enable, for example, a better user experience and make it easier for users to find destinations.

According to an implementation form of the first aspect, the method further comprises switching from the first navigation mode to the second navigation mode to guide the user to the destination, wherein the first navigation mode comprises a vehicle navigation mode and the second navigation mode comprises a walk navigation mode. The solution may, for example, ease the navigation as the user may be guided to the destination with more accurate guidance.

According to an implementation form of the first aspect, the method further comprises determining independently from user actions a second trigger to activate a navigation mode switch; and in response to determining the second trigger to activate the navigation mode switch, automatically switching from the second navigation mode back to the first navigation mode. The solution may enable, for example, a better user experience as the user does not have to manually switch between the navigation modes.

According to an implementation form of the first aspect, rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map comprises providing a virtual reality view comprising the additional information to highlight the at least one entity. The solution may enable, for example, a better digital map function.

According to an implementation form of the first aspect, rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map comprises providing an augmented reality view using data from a camera of a user device, the augmented reality view comprising the additional information to highlight the at least one entity. The solution may enable, for example, a better digital map function.

According to an implementation form of the first aspect, the at least one entity comprises a point of interest. The solution may enable, for example, a better navigation experience as information relating to the point of interest is easily identifiable.

According to an implementation form of the first aspect, the first trigger and/or the second trigger comprises at least one of the following: a navigation status change from driving to walking, a user defined trigger, a speed associated with a user, a distance from the destination, a location environment change of the user, a user exercise status change, a status associated with a car, a status associated with the user, or routing complexity between a current position and the destination. The solution may enable, for example, the use of one or more automatic triggers for activating the navigation mode switch.

According to a second aspect, a method for enhancing digital map data is provided. The method comprises obtaining image data from a camera of a user device; extracting at least one information section from the image data; receiving a first selection of an information section of the at least one information section; obtaining first additional information associated with the selected information section; and displaying the first additional information in the digital map data. The solution may enable, for example, a better digital map function.

According to an implementation form of the second aspect, the method further comprises obtaining location information of the user device; and using the location information in extracting the at least one information section from the image data. The solution may enable, for example, providing more accurate information to the user.

According to an implementation form of the second aspect, the method further comprises receiving a second selection of an information section in the first additional information; obtaining second additional information associated with the selected information section; and displaying the second additional information in the digital map data. The solution may enable, for example, providing more accurate information to the user. The second additional information may enable a user, for example, to make a phone call, access a website and use virtual reality navigation or indoor navigation.

According to an implementation form of the second aspect, the image data comprises still image data or video image data. The solution may enable, for example, using different data types for providing more accurate information to the user.

According to an implementation form of the second aspect, displaying the additional information in the digital map data comprises providing a virtual reality view comprising the additional information. The solution may enable, for example, a better digital map function.

According to an implementation form of the second aspect, displaying the additional information in the digital map data comprises providing an augmented reality view using image data from the camera, the augmented reality view comprising the additional information. The solution may enable, for example, a better digital map function.

According to a third aspect, a device for navigating to a destination is provided. The device is configured to use a first navigation mode for navigating to a destination using a digital map; determine independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, automatically switch from the first navigation mode to a second navigation mode; and render in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map. The solution may enable, for example, a better user experience.

According to an implementation form of the third aspect, the device is configured to switch from the first navigation mode to the second navigation mode to guide the user to the destination, wherein the first navigation mode comprises a vehicle navigation mode and the second navigation mode comprises a walk navigation mode. The solution may, for example, ease the navigation as the user may be guided to the destination with more accurate guidance.

According to an implementation form of the third aspect, the device is configured to determine independently from user actions a second trigger to activate a navigation mode switch; and in response to determining the second trigger to activate the navigation mode switch, automatically switch from the second navigation mode back to the first navigation mode. The solution may enable, for example, a better user experience as the user does not have to manually switch between the navigation modes.

According to an implementation form of the third aspect, the device is configured to, when rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map, provide a virtual reality view comprising the additional information to highlight the at least one entity. The solution may enable, for example, a better digital map function.

According to an implementation form of the third aspect, the device is configured to, when rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map, provide an augmented reality view using data from a camera of a user device, the augmented reality view comprising the additional information to highlight the at least one entity. The solution may enable, for example, a better digital map function.

According to an implementation form of the third aspect, the at least one entity comprises a point of interest. The solution may enable, for example, a better navigation experience as information relating to the point of interest is easily identifiable.

According to an implementation form of the third aspect, the first trigger and/or the second trigger comprises at least one of the following: a navigation status change from driving to walking, a user defined trigger, a speed associated with a user, a distance from the destination, a location environment change of the user, a user exercise status change, a status associated with a car, a status associated with the user, or routing complexity between a current position and the destination. The solution may enable, for example, the use of one or more automatic triggers for activating the navigation mode switch.

According to a fourth aspect, a device for enhancing digital map data is provided. The device is configured to obtain image data from a camera of a user device; extract at least one information section from the image data; receive a first selection of an information section of the at least one information section; obtain first additional information associated with the selected information section; and display the first additional information in the digital map data. The solution may enable, for example, a better digital map function.

According to an implementation form of the fourth aspect, the device is further configured to obtain location information of the user device; and use the location information in extracting the at least one information section from the image data. The solution may enable, for example, providing more accurate information to the user.

According to an implementation form of the fourth aspect, the device is further configured to receive a second selection of an information section in the first additional information; obtain second additional information associated with the selected information section; and display the second additional information in the digital map data. The solution may enable, for example, providing more accurate information to the user.

According to an implementation form of the fourth aspect, the image data comprises still image data or video image data. The solution may enable, for example, using different data types for providing more accurate information to the user.

According to an implementation form of the fourth aspect, displaying the additional information in the digital map data comprises providing a virtual reality view comprising the additional information. The solution may enable, for example, a better digital map function.

According to an implementation form of the fourth aspect, displaying the additional information in the digital map data comprises providing an augmented reality view using image data from the camera, the augmented reality view comprising the additional information. The solution may enable, for example, a better digital map function.

According to a fifth aspect, a computer program is provided. The computer program comprises program code configured to cause performance of the method according to the first aspect, when the computer program is executed on a computer.

According to a sixth aspect, a computer program is provided. The computer program comprises program code configured to cause performance of the method according to the second aspect, when the computer program is executed on a computer.

According to a seventh aspect, a device for navigating to a destination is provided. The device comprises means for using a first navigation mode for navigating to a destination using a digital map; means for determining independently from user actions a first trigger to activate a navigation mode switch; in response to determining the first trigger to activate the navigation mode switch, means for automatically switching from the first navigation mode to a second navigation mode; and means for rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map.

According to an eighth aspect, a device for enhancing digital map data is provided. The device comprises means for obtaining image data from a camera of a user device; means for extracting at least one information section from the image data; means for receiving a first selection of an information section of the at least one information section; means for obtaining first additional information associated with the selected information section; and means for displaying the first additional information in the digital map data.

DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding of the example embodiments and constitute a part of this specification, illustrate example embodiments and, together with the description, help to explain the example embodiments. In the drawings:

FIG. 1 illustrates an example of a device configured to practice one or more embodiments.

FIG. 2A illustrates a solution for navigating to a destination according to an example embodiment.

FIG. 2B illustrates a solution for navigating to a destination according to an example embodiment.

FIG. 2C illustrates a solution for navigating to a destination according to an example embodiment.

FIG. 2D illustrates a solution for navigating to a destination according to an example embodiment.

FIG. 2E illustrates a solution for navigating to a destination according to an example embodiment.

FIG. 3A illustrates a solution for enhancing digital map data according to an example embodiment.

FIG. 3B illustrates a solution for enhancing digital map data according to an example embodiment.

FIG. 4 illustrates an example of a method for navigating to a destination.

FIG. 5 illustrates an example of a method for enhancing digital map data.

Like references are used to designate like parts in the accompanying drawings.

DETAILED DESCRIPTION

Reference will now be made in detail to example embodiments, examples of which are illustrated in the accompanying drawings. The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.

The embodiments discussed below in more detail may provide a solution for navigating to a destination. A first navigation mode may be used for navigating to a destination using a digital map, and a first trigger to activate a navigation mode switch may be determined independently from user actions. In response to determining the first trigger to activate the navigation mode switch, an automatic switch may be performed from the first navigation mode to a second navigation mode, and additional information may be rendered in the second navigation mode to the digital map to highlight at least one entity in the digital map. Thus, a solution may be provided that enables automatic switching between different navigation modes without any user action activating the switch.

The embodiments discussed below in more detail may also provide a solution for enhancing digital map data. Image data may be obtained from a camera of a user device and at least one information section, for example, a brand sign of a company on the wall of a building, may be extracted from the image data. A first selection of an information section of the at least one information section may be received, first additional information associated with the selected information section may be obtained, and the first additional information may be displayed in the digital map data. For example, the first additional information may highlight the floor of the building where a target company is located. In another example embodiment, the first additional information may provide further details of the target company to the user, for example, the floor, telephone number, or website address, or provide an indoor navigation button. When a selection of the indoor navigation button is received, virtual reality navigation may be started for the user.

The embodiments discussed below in more detail may make use of image information relating to various points of interest (POI). In general, a POI may be a map position label that people may be interested in. The POI may identify, for example, a building, a company, a tourist attraction, etc. Various POIs are typically labelled as position marks on digital maps.

Images containing associated satellite position information and originating from various sources, for example, crowdsourced images, can be used as input to an incremental reconstruction system for camera pose estimation. A six-dimensional (6D) pose comprises a position in a three-dimensional coordinate system and a three-dimensional orientation (i.e. angles). The 6D pose makes it possible to determine the orientation of a camera in a three-dimensional world when an image was taken. A six degrees of freedom (6DOF) pose, or 6D pose, is typically used to describe the pose of a rigid object, and it comprises three degrees for object rotation and three degrees for translation. Thus, the 6D pose may be used to describe the pose of a POI under the camera and world coordinate systems. After obtaining the six-dimensional pose, streets can be extracted from the source image data. Then, POIs from different views may be matched according to their epipolar lines. This makes it possible to create a system that is able to identify that multiple images may have been taken of the same POI from different angles.
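To make the 6D pose concrete, the following Python sketch, included here purely as an illustration, interprets a pose as a rotation matrix R (three rotational degrees of freedom) and a translation vector t (three translational degrees of freedom), and projects a POI's world coordinates into pixel coordinates with a pinhole camera model. The intrinsic matrix K, the pose values and the function names are hypothetical and do not come from the application.

```python
import numpy as np

def rotation_z(theta: float) -> np.ndarray:
    """Rotation about the z-axis by theta radians (one of the three rotation DOFs)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def project_poi(p_world, R, t, K) -> np.ndarray:
    """Project a 3D POI position into pixel coordinates given a 6D pose (R, t)."""
    p_cam = R @ p_world + t          # world coordinates -> camera coordinates
    u, v, w = K @ p_cam              # pinhole projection onto the image plane
    return np.array([u / w, v / w])  # normalize by depth to get pixels

# Hypothetical intrinsics and pose, for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = rotation_z(np.deg2rad(10.0))     # camera rotated 10 degrees about z
t = np.array([0.0, 0.0, 5.0])        # camera translated 5 units along z

poi_world = np.array([1.0, 0.5, 2.0])
print(project_poi(poi_world, R, t, K))  # pixel coordinates of the POI
```

Under such a model, matching POIs across views along epipolar lines amounts to checking that a detection in one image lies on the line induced by the relative pose between the two cameras.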

FIG. 1 illustrates an example of a device 100 configured to practice one or more embodiments. The device 100 may comprise, for example, a mobile device or a device installed in a vehicle, or in general any device configured to implement any functionality described herein. The device 100 may comprise at least one processor 102. The at least one processor 102 may comprise, for example, one or more of various processing devices, such as, for example, a co-processor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. The device 100 may further comprise at least one memory 104. The memory 104 may be configured to store, for example, computer program code or the like, for example, operating system software and application software. The memory 104 may comprise one or more volatile memory devices, one or more non-volatile memory devices, and/or a combination thereof. For example, the memory may be embodied as magnetic storage devices (such as hard disk drives, magnetic tapes, etc.), optical magnetic storage devices, or semiconductor memories (such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, RAM (random access memory), etc.).

The device 100 may further comprise a communication interface 108 configured to enable the device 100 to transmit and/or receive information. The communication interface may be further configured to provide a wireless radio connection, such as for example a third generation partnership project (3GPP) mobile broadband connection (e.g. 3G, 4G, 5G, or future generations), a wireless local area network (WLAN) connection such as for example standardized by IEEE 802.11 series or Wi-Fi alliance, or a short range wireless network connection such as for example a Bluetooth connection. The communication interface 108 may hence comprise one or more antennas to enable transmission and/or reception of radio frequency signals over the air.

The device 100 may further comprise a display 110 configured to display information, for example digital map data. The device 100 may further comprise a camera configured to provide digital image data.

When the device 100 is configured to implement some functionality, some component and/or components of the device, such as for example the at least one processor 102 and/or the at least one memory 104, may be configured to implement this functionality. Furthermore, when the at least one processor 102 is configured to implement some functionality, this functionality may be implemented using program code 106 comprised, for example, in the at least one memory 104.

The functionality described herein may be performed, at least in part, by one or more computer program product components such as software components. According to an embodiment, the device 100 comprises a processor or processor circuitry, such as for example a microcontroller, configured by the program code 106, when executed, to execute the embodiments of the operations and functionality described herein. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), graphics processing units (GPUs), or the like.

The device 100 may be configured to perform method(s) described herein or comprise means for performing method(s) described herein. In one example, the means comprises the at least one processor 102, the at least one memory 104 including program code 106 configured to, when executed by the at least one processor 102, cause the device 100 to perform the method(s).

Although the device 100 is illustrated as a single device, it is appreciated that, wherever applicable, functions of the device 100 may be distributed to a plurality of devices, for example, between components of a transmitter, a receiver, or a transceiver.

FIGS. 2A, 2B, 2C, 2D and 2E illustrate a solution for navigating to a destination according to an example embodiment. The solution may be implemented, for example, by a navigation application executed by a device, for example, a mobile device.

In FIG. 2A, a view 216 provides a navigation view in accordance with a first navigation mode. A user has chosen to navigate from a source 200 to a destination 202, and a route 214 has been determined for the user. The route 214 between the source 200 and the destination 202 may be, for example, a walking route or a driving route. In an example embodiment, the first navigation mode may display two-dimensional map data. In an example embodiment, information about one or more key points of interest 204, 206, 208, 210, 212 may be displayed on the route 214. The information about a key point of interest may comprise, for example, an image associated with the key point of interest. When images of the key points of interest 204, 206, 208, 210, 212 on the route 214 are displayed to the user, the user is able to verify that he/she is following the correct route. The images of the key points of interest 204, 206, 208, 210, 212 may comprise images of buildings, tourist attractions, restaurants, etc.
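As a minimal sketch of how key points of interest might be selected for display along a route, the following Python fragment keeps only the POIs that lie within a threshold distance of some route point. The data structures, the 50 m threshold, and the simplification of measuring distance to route points rather than route segments are assumptions made for this example only.

```python
import math

def haversine_m(a, b) -> float:
    """Approximate great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def key_pois(route, pois, max_dist_m=50.0):
    """Select POIs lying within max_dist_m of any route point (a simplification)."""
    return [p for p in pois
            if min(haversine_m(p["pos"], r) for r in route) <= max_dist_m]

# Hypothetical route polyline and POI records with associated images.
route = [(59.334, 18.063), (59.335, 18.066), (59.336, 18.070)]
pois = [
    {"name": "Museum", "image": "museum.jpg", "pos": (59.3352, 18.0662)},
    {"name": "Cafe", "image": "cafe.jpg", "pos": (59.340, 18.080)},
]
for poi in key_pois(route, pois):
    print(poi["name"], poi["image"])  # e.g. render the thumbnail along the route
```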

At some point of the route 214, a first trigger to activate a navigation mode switch may be determined independently from user actions. The term “independently from user actions” may mean that the user does not provide any manual or explicit trigger or activation, for example, a selection of a button on the view 216 or a voice command, for changing to a second navigation mode. The trigger may comprise, for example, one or more of the following (a trigger-evaluation sketch in code follows this list):

- A navigation status change from driving to walking. When the navigation status changes from driving to walking, it may be interpreted that the user is close to the destination.

- A user defined trigger. The user may define a special trigger beforehand that may then cause activation of the navigation mode switch. For example, the user may determine the navigation routing first and then choose the navigation mode to be used for each part. In another example embodiment, the user may define different regions, such as urban and country or highways and business centers, and choose the navigation mode to be used for each pre-defined region. When the navigation reaches a different region, the navigation mode can be changed.

- A speed associated with a user. There may be, for example, a predetermined speed limit associated with the route 214, and when the speed of the user or of the user’s vehicle drops below the speed limit, this may cause activation of the navigation mode switch.

- A distance from the destination 202. A predetermined distance from the destination 202, for example, “X meters”, may be used as the trigger. When the distance of the user or of the user’s vehicle to the destination 202 is less than the predetermined distance, this may cause activation of the navigation mode switch.

- A location environment change of the user. For example, it may be detected that the location of the user changes from outdoor to indoor, and this may cause activation of the navigation mode switch.

- A user exercise status change. A mobile device of the user may detect that the user’s exercise status changes, for example, based on the heartbeat of the user or based on detecting a change from running to walking, and this may cause activation of the navigation mode switch.

- A status associated with a car. For example, the user’s mobile device may detect that the user got out of the car and locked the car door, or that a wireless connection between the user’s mobile device and the car was disconnected. This may cause activation of the navigation mode switch.

- A status associated with the user. For example, the user may be out of the planned travelling time during the navigation, and this may indicate that the user is lost in the routing. A suggestion may then be provided asking whether the user wants to check another navigation mode.

- Routing complexity between a current position and the destination 202. The route 214 between the source 200 and the destination 202 may comprise one or more sections or points at which the navigation may be difficult or complex. This may cause activation of the navigation mode switch.
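The following Python sketch, offered only as an illustration under stated assumptions, shows how a few of the listed triggers might be evaluated independently of user actions. The NavContext fields, the threshold values and the function names are invented for this example and are not taken from the application.

```python
from dataclasses import dataclass

@dataclass
class NavContext:
    """Signals observed by the device, independent of explicit user actions."""
    speed_kmh: float
    distance_to_dest_m: float
    walking: bool
    car_connected: bool

def should_switch_to_walk_mode(ctx: NavContext,
                               speed_limit_kmh: float = 8.0,
                               near_dest_m: float = 300.0) -> bool:
    """Evaluate a few of the listed triggers; any one of them activates the switch."""
    if ctx.walking:                           # navigation status change: driving -> walking
        return True
    if not ctx.car_connected:                 # car status: wireless link to the car dropped
        return True
    if ctx.speed_kmh < speed_limit_kmh:       # speed fell under a predetermined limit
        return True
    if ctx.distance_to_dest_m < near_dest_m:  # within a predetermined distance of the destination
        return True
    return False

ctx = NavContext(speed_kmh=4.5, distance_to_dest_m=250.0, walking=True, car_connected=False)
if should_switch_to_walk_mode(ctx):
    print("Switching from vehicle navigation mode to walk navigation mode")
```

Any single satisfied condition activates the switch, which matches the “one or more of the following” wording of the list above.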

In response to determining the first trigger to activate the navigation mode switch, switching from the first navigation mode to a second navigation mode may be automatically performed. In the second navigation mode additional information may be rendered to the digital map to highlight at least one entity in the digital map. For example, the rendering may comprise highlighting an entrance or entrance door of the destination, or giving indoor routing instructions in a building at the destination. An entity may refer, for example, to any section, area or item in the digital map. The highlighting may refer, for example, to any implementation for somehow distinguishing the entity so that the entity can be easily perceived by the user. The highlighting may comprise, for example, using one or more colors, one or more forms or shapes, and/or one or more additional image and/or text sections, etc.

In an example embodiment, after switching to the second navigation mode, the user may be asked whether he/she wishes to cancel the second navigation mode. If a confirmation is then received from the user, the second navigation mode may be switched back to the first navigation mode.

In an example embodiment, an adaptive navigation mode switch may be performed. When the first trigger is detected, a dialog may be presented to the user asking whether to switch to the second navigation mode. If a confirmation is then received from the user not to switch to the second navigation mode, the switch to the second navigation mode may be canceled.

FIG. 2B illustrates an example of an augmented reality view 218 that may be provided in the second navigation mode. The user may be using a camera of the user’s mobile device and pointing the camera towards the walking direction when navigating to the destination. The augmented reality view 218 may provide additional information 230-238 relating to the destination if the destination is, for example, a street address. In the example illustrated in FIG. 2B, there may be multiple possible targets 220-228 at the destination, and identification information, for example, company signs associated with the targets 220-228, may be identified based on the location of the mobile device and possibly also based on a viewing orientation of the camera. In response to the identification, the additional information 230-238 may be augmented into the video image. For example, the additional information 230 relating to the sign 220 may identify a company and its floor number in the building, a web address of the company, opening hours and/or a phone number of the company.

FIG. 2C illustrates another example of an augmented reality view 240 that may be provided in the second navigation mode. Again, the user may be using a camera of the user’s mobile device and pointing the camera towards the walking direction when navigating to the destination. The user may have originally input the destination as “company name, street address” into a navigation application. The augmented reality view 240 may then provide additional information 244 relating to the company at the destination. In the example illustrated in FIG. 2C, the company sign 242 may be identified based on the location of the mobile device and possibly also based on a viewing orientation of the camera. In response to the identification, the additional information 244 may be augmented into the video image. For example, the additional information 244 relating to the company sign 242 may identify a company and its floor number in the building, a web address of the company, opening hours and/or a phone number of the company.

FIG. 2D illustrates another example of an augmented reality view 246 that may be provided in the second navigation mode. Again, the user may be using a camera of the user’s mobile device and pointing the camera towards the walking direction when navigating to the destination. The user may have originally input the destination as “company name, street address” into a navigation application. The augmented reality view 246 may then provide additional information, for example, rendered three-dimensional information, to guide the user at the destination. In the example illustrated in FIG. 2D, an entrance 248 has been highlighted or identified in the video image based on the location of the mobile device and possibly also based on a viewing direction of the camera. This provides the user with a hint about where to enter the building.

FIG. 2E illustrates an example of a virtual reality view that may be provided in the second navigation mode. The user may be walking in a direction 268 and may have reached the destination. The virtual reality view may provide additional information 252-266 relating to the destination if the destination is, for example, a street address. In the example illustrated in FIG. 2E, there may be multiple possible targets 250 at the destination, and identification information, for example, company signs associated with the targets 250, may be identified based on the location of the mobile device and possibly also based on a viewing direction of the camera. In response to the identification, additional information 252-266 associated with the destination may be provided in the virtual reality view. For example, the additional information 252 may identify a company and its floor number in the building, a web address of the company, opening hours and/or a phone number of the company.

It is evident that FIGS. 2A-2E provide only some examples of possible additional information rendered in the second navigation mode to the digital map to highlight at least one entity in the digital map, and other ways to provide the additional information may also be used, for example, via audio when the user selects a highlighted entity. In an example embodiment, long-distance navigation may use the first navigation mode, and a switch to the second navigation mode may be performed when switching to short-distance navigation.

In an example embodiment of any of FIGS. 2A-2E, the view provided by a device may be switched from the first navigation mode to the second navigation mode to guide the user to the destination, wherein the first navigation mode comprises a vehicle navigation mode and the second navigation mode comprises a walk navigation mode. In an example embodiment, the vehicle navigation mode may be used when the user is travelling in a car, on a bicycle or on an electric skateboard. For example, the vehicle navigation mode may provide a two-dimensional map for the user, and the walk navigation mode may provide an augmented reality view or a virtual reality view for the user.

In an example embodiment of any of FIGS. 2A-2E, a second trigger to activate a navigation mode switch may be determined independently from user actions. In response to determining the second trigger to activate the navigation mode switch, the view provided to the user may be automatically switched from the second navigation mode back to the first navigation mode. The second trigger may be a trigger similar to those discussed in relation to FIG. 2A. For example, a route between a source and a destination may have one or more intermediate route points at which the navigation mode is first changed from the first navigation mode to the second navigation mode. When the intermediate route point has been passed, the navigation mode may be changed back to the first navigation mode. This may be useful, for example, when the navigation is determined to be difficult at the intermediate route point and more accurate information needs to be provided to the user at the intermediate route point.

FIGS. 3A and 3B illustrate a solution for enhancing digital map data according to an example embodiment. The solution may be implemented, for example, by an application executed by a device, for example, a mobile device. A view 300 represents image data that may be obtained from a camera of a user device. The image data may be still image data or real-time video image data provided by a camera of the user device. At least one information section may be extracted from the image data. An information section may comprise some distinguishable information, for example, a brand sign of a company on the wall of a building. In an example embodiment, it may be possible to identify the position of the user device based only on the image data. In another example embodiment, location information of the user device may be obtained and used in extracting the at least one information section from the image data. The location information of the user device may comprise, for example, a satellite positioning location or location information obtained via a mobile communication network to which the user device is connected. Thus, based on the image data, and possibly also using the location information of the user device, it is possible to identify the building or other point of interest. For example, it may be known based on the location information which companies are located in a particular building, and this information may be utilized when extracting the at least one information section from the image data.
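As a sketch of how such a location prior might be used, the following hypothetical Python fragment keeps only the recognized signs that match tenants known, from the device's location, to occupy the identified building. The recognize_text() placeholder stands in for a real text detector, and the tenant table and building identifier are invented for this example.

```python
# Hypothetical sketch: use the device location to constrain which signs
# (information sections) are extracted from the camera image.

TENANTS_BY_BUILDING = {
    "building_42": {"Acme Corp", "Beta Cafe", "Gamma Dental"},  # invented data
}

def recognize_text(image):
    """Placeholder OCR step; a real system would run a text detector here."""
    return [{"text": "Acme Corp", "bbox": (120, 40, 260, 80)},
            {"text": "EXIT", "bbox": (300, 200, 340, 220)}]

def extract_information_sections(image, building_id):
    """Keep only recognized text matching a known tenant of the building
    identified from the device's location."""
    tenants = TENANTS_BY_BUILDING.get(building_id, set())
    return [d for d in recognize_text(image) if d["text"] in tenants]

sections = extract_information_sections(image=None, building_id="building_42")
print(sections)  # [{'text': 'Acme Corp', 'bbox': (120, 40, 260, 80)}]
```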

When the at least one information section has been extracted from the image data, a first selection 304 of an information section 302 of the at least one information section may be received. For example, if the display providing the view 300 is a touch-sensitive display, the first selection may be obtained via touch input from the user. Based on the selection, first additional information associated with the selected information section may be obtained. In order to obtain the first additional information, the user device may send a request, for example, to a network server or a cloud entity. The first additional information may be prestored in the cloud entity.

When the first additional information has been obtained, the user device may be configured to display a view 306 in which the first additional information 308 is included in the digital map data. The views 300, 306 provided by the user device may be a virtual reality view or an augmented reality view. In the augmented reality view, the first additional information may be augmented with a view obtained from a camera of the user device. Thus, the first additional information can be shown in a three-dimensional rendered image. The first additional information may comprise, for example, more accurate information about the selected information section, for example, a company (for example, a floor location of the company in the building), a restaurant, a tourist attraction etc. In an example embodiment, a second selection of an information section in the first additional information may be received from the user. The second selection may indicate that the user still wants to have more information after learning the first additional information. Second additional information associated with the selected information section may again be obtained, for example, by sending a new request to the network server or the cloud entity. After receiving the second additional information, it may be displayed in the digital map data. For example, the first additional information may identify that a company uses three floors in the building. The second selection from the user may comprise a selection of a specific floor used by the company, and the second additional information then provides additional information about the specific floor.
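A minimal sketch of the two-step lookup described above, assuming a hypothetical cloud endpoint and invented payload shapes; a real implementation would use whatever API the map service actually exposes.

```python
# Hypothetical sketch of the two-step information lookup: a first selection
# fetches first additional information from a server, and a second selection
# inside that information fetches more detail (e.g. one specific floor).
import json
import urllib.request

API = "https://example.com/map-info"  # hypothetical cloud endpoint

def display_in_map(payload: dict) -> None:
    print("rendering:", payload)       # stand-in for the AR/VR map renderer

def fetch_info(section_id: str) -> dict:
    """Request additional information for a selected information section."""
    with urllib.request.urlopen(f"{API}?section={section_id}") as resp:
        return json.load(resp)

def on_first_selection(section_id: str) -> dict:
    info = fetch_info(section_id)      # e.g. {"id": ..., "company": ..., "floors": [...]}
    display_in_map(info)               # show first additional information in the map
    return info

def on_second_selection(first_info: dict, floor: str) -> None:
    detail = fetch_info(f"{first_info['id']}/floor/{floor}")
    display_in_map(detail)             # show second additional information
```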

FIG. 4 illustrates an example of a method for navigating to a destination. The method may be implemented, for example, by the device 100 or an application executed by the device 100.

At 400 the method may comprise using a first navigation mode for navigating to a destination using a digital map.

At 402 the method may comprise determining independently from user actions a first trigger to activate a navigation mode switch.

At 404 the method may comprise, in response to determining the first trigger to activate the navigation mode switch, automatically switching from the first navigation mode to a second navigation mode.

At 406 the method may comprise rendering in the second navigation mode additional information to the digital map to highlight at least one entity in the digital map.

FIG. 5 illustrates an example of a method for enhancing digital map data. The method may be implemented, for example, by the device 100 or an application executed by the device 100.

At 500 the method may comprise obtaining image data from a camera of a user device.

At 502 the method may comprise extracting at least one information section from the image data.

At 504 the method may comprise receiving a first selection of an information section of the at least one information section.

At 506 the method may comprise obtaining first additional information associated with the selected information section.

At 508 the method may comprise displaying the first additional information in the digital map data.

A device may be configured to perform or cause performance of any aspect of the method(s) described herein. Further, a computer program or a computer program product may comprise instructions for causing, when executed, a device to perform any aspect of the method(s) described herein. Further, a device may comprise means for performing any aspect of the method(s) described herein. According to an example embodiment, the means may comprise at least one processor and at least one memory including program code, the program code configured to, when executed by the at least one processor, cause the device to perform any aspect of the method(s).

Any range or device value given herein may be extended or altered without losing the effect sought. Also, any embodiment may be combined with another embodiment unless explicitly disallowed.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.

It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.

The steps or operations of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the scope of the subject matter described herein. Aspects of any of the example embodiments described above may be combined with aspects of any of the other example embodiments described to form further example embodiments without losing the effect sought.

The term 'comprising' is used herein to mean including the method, blocks, or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.

Although subjects may be referred to as ‘first’ or ‘second’ subjects, this does not necessarily indicate any order or importance of the subjects. Instead, such attributes may be used solely for the purpose of distinguishing between subjects.

It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this specification.