Title:
METHOD OF PROVIDING A FRAME-BASED OBJECT REDIRECTION OVERLAY FOR A VIDEO STREAM
Document Type and Number:
WIPO Patent Application WO/2010/049442
Kind Code:
A1
Abstract:
A method of providing a frame-based object redirection overlay for a live video stream from a camera on a device or a video stream provided by a third-party content provider and hosted by a host provider that may or may not be the same as the third-party content provider. One or more objects depicted within a frame of a video stream are selected. A user is presented with a tag corresponding to the selected object. Upon detection of the user's selection of the tag, the user is presented with options corresponding to the selected tag. The options may be determined by the location of the user, the user's preferences, and enforcement of localization rules. Upon detection of the user's selection of an option, an action is taken that corresponds to the user's selection of the option.

Inventors:
SIGAL FREDERIC (FR)
Application Number:
PCT/EP2009/064193
Publication Date:
May 06, 2010
Filing Date:
October 28, 2009
Assignee:
SIGAL FREDERIC (FR)
International Classes:
H04N7/173; G01C21/36; G06F17/30; G06Q30/00; G09B5/06; H04W4/02
Domestic Patent References:
WO2007021996A2 (2007-02-22)
Foreign References:
US20070162942A1 (2007-07-12)
US20030009281A1 (2003-01-09)
Attorney, Agent or Firm:
GIOVANNINI, Francesca et al. (32 avenue de l'Opéra, Paris, FR)
Claims:
CLAIMS

What is claimed is:

1. A method of providing a frame-based object redirection overlay for a video stream displayed on a device, the method comprising: selecting an object displayed in a frame of the video stream; presenting a user a tag corresponding to the selected object within the frame; detecting the user's selection of the tag; presenting the user an option corresponding to the selected tag; detecting the user's selection of the option; and performing an action in accordance with the user's selection of the option.

2. The method of claim 1, further comprising: determining a location of the device by a network connection.

3. The method of claim 2, wherein presenting the user the option is based, in part, on the location of the device.

4. The method of claim 2, wherein the location of the device restricts the action performed.

5. The method of claim 2, wherein the action comprises identification of one or more objects depicted in the video stream.

6. The method of claim 5, wherein the one or more objects are identified, in part, by the location of the device.

7. The method of claim 2, wherein the action comprises a presentation of a location of a nearby street, building, or landmark.

8. The method of claim 1, wherein the detecting the user's selection of the tag and the detecting the user's selection of the option are by a spoken command that the device reproducing the video stream is configured to receive.

9. The method of claim 1, wherein the presenting the user the tag and the presenting the user an option are by a text-to-speech interface that converts textual messages to audible speech.

10. The method of claim 1, wherein the action comprises linking the user to a website related to the selected object.

11. The method of claim 2, wherein the action comprises locating a nearby store where the user can purchase the selected object.

12. A device suitable for displaying a video stream comprising: a processor; a storage device; a display device; a network device; wherein the processor executes software instructions which perform: presenting a user a tag corresponding to an object within a frame of a video stream; detecting the user's selection of the tag; presenting the user an option corresponding to the selected tag; detecting the user's selection of the option; and performing an action in accordance with the user's selection of the option.

13. The device of claim 12, wherein the processor executes additional instructions which perform: determining a location of the user by a network connection.

14. The device of claim 13, wherein presenting the user the option is based, in part, on the location of the device.

15. The device of claim 13, wherein the location of the device restricts the action performed.

16. The device of claim 13, wherein the action comprises identification of one or more objects depicted in the video stream.

17. The device of claim 16, wherein the one or more objects are identified, in part, by the location of the device.

18. The device of claim 13, wherein the action comprises a presentation of a location of a nearby street, building, or landmark.

19. The device of claim 12, wherein the detecting the user's selection of the tag and the detecting the user's selection of the option are by a spoken command that the device reproducing the video stream is configured to receive.

20. The device of claim 12, wherein the presenting the user the tag and the presenting the user an option are by a text-to-speech interface that converts textual messages to audible speech.

21. The device of claim 12, further comprising a camera device.

Description:
METHOD OF PROVIDING A FRAME-BASED OBJECT REDIRECTION OVERLAY FOR A VIDEO STREAM

BACKGROUND OF INVENTION

[0001] The conventional provision of a video stream includes the transmission of an encoded video stream by a content provider to an end-user device through a network connection. The end-user device decodes and displays the video stream upon commands executed by a user. The user navigates the decoded video stream through an interface that may provide the ability to start, stop, pause, advance, or reverse the video stream.

[0002] The content provider encodes the video stream using one of the well-known encoding techniques to produce an encoded video stream suitable for transmission through the network connection. Software on the end-user's device decodes the encoded video stream and displays the decoded video stream on a display device.

SUMMARY OF INVENTION

[0003] According to one aspect of one or more embodiments of the present invention, a method of providing a frame-based object redirection overlay for a video stream displayed on a device includes: selecting an object displayed in a frame of the video stream, presenting a user a tag corresponding to the selected object within the frame, detecting the user's selection of the tag, presenting the user an option corresponding to the selected tag, detecting the user's selection of the option, and performing an action in accordance with the user's selection of the option.

[0004] According to one aspect of one or more embodiments of the present invention, a device suitable for displaying a video stream includes: a processor, a storage device, a display device, and a network device. The processor executes software instructions which perform: presenting a user a tag corresponding to an object within a frame of a video stream, detecting the user's selection of the tag, presenting the user an option corresponding to the selected tag, detecting the user's selection of the option, and performing an action in accordance with the user's selection of the option.

[0005] Other aspects of the present invention will be apparent from the following description and the appended claims.

BRIEF DESCRIPTION OF DRAWINGS

[0006] Figure 1 shows a method of providing a frame-based object redirection overlay for a video stream in accordance with one or more embodiments of the present invention.

[0007] Figure 2 shows a structure of an overlay file in accordance with one or more embodiments of the present invention.

[0008] Figure 3 shows a polygon approach to describe the location of an object within a displayed frame in accordance with one or more embodiments of the present invention.

[0009] Figure 4 shows a quadrant approach to describe the location of an object within a displayed frame in accordance with one or more embodiments of the present invention.

[0010] Figure 5 shows a device in accordance with one or more embodiments of the present invention.

DETAILED DESCRIPTION

[0011] Specific embodiments of the present invention will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency. Further, in the following detailed description of embodiments of the present invention, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid obscuring the description of embodiments of the present invention.

[0012] Figure 1 shows a method of providing a frame-based object redirection overlay for a video stream displayed on a device. In one or more embodiments of the present invention the device is a mobile computing device such as a smart phone, a personal digital assistant, a handheld computer, a netbook, or a laptop computer. In one or more embodiments of the present invention the device is a computing device such as a desktop or a server computer. In one or more embodiments of the present invention the device is a consumer electronics device such as a media player, a monitor, or a television.

[0013] In S1, an object displayed within a frame of a video stream is selected. In one or more embodiments of the present invention, the video stream is provided by a third-party content provider and is hosted by a host provider that may or may not be the same as the third-party content provider. In these embodiments, the user selects the object. In one or more embodiments of the present invention, the video stream is a live video stream provided by a camera on the device. In these embodiments, the device selects the object. The video stream is analyzed to determine information relating to the video stream itself and objects depicted in the video stream. In one or more embodiments of the present invention, the device, a remote server, or both working together analyze the video stream to determine information including the duration of the video stream, the number of chapters contained in the video stream, start and finish times of the chapters contained in the video stream, one or more objects depicted in the video stream, and/or times when objects are depicted in the video stream.

[0014] In one or more embodiments of the present invention, the device determines geolocalization information relating to the past and/or present location of the device. The device, remote server, or both working together analyze the video stream to determine information including data relating to one or more objects depicted in the video stream and identified, in part, by a location of the one or more objects. An overlay file is created to store the information determined from the analysis. In addition, the overlay file stores information relating to the tagging of objects depicted in the video stream, presenting of options corresponding to the tagged objects, and performance of actions corresponding to the options. In one or more embodiments of the present invention, the overlay file is stored on the device. In one or more embodiments of the present invention, the overlay file is stored on the remote server.

[0015] In one or more embodiments of the present invention, the overlay file is provided in the device in advance of embarkation of the device. In one or more embodiments of the present invention, the overlay file is provided to the device from the remote server in advance of receiving the video stream from the host provider. In one or more embodiments of the present invention, the overlay file is created on the device. The device transmits its past and/or present location to the remote server and the remote server transmits to the device information relating to the past and/or present location of the device.

[0016] In S2, a tag corresponding to the selected object within the displayed frame is presented to the user. In one or more embodiments of the present invention, the tag is a visual depiction of an indication that further actions related to the object depicted and tagged are available. In one or more embodiments of the present invention, the tag is an aural sound of an indication that further actions related to the object depicted and tagged are available. The aural sound may be a pre-recorded audio clip or a textual message presented through a text-to-speech interface that converts textual messages to audible speech. The tag may include further description of the tagged object. The tag may include private fields that are not displayed and serve to identify the tagged object as a unique identifier and reference to merchant sites or different URLs. In one or more embodiments of the present invention, the user is presented with a dialog box or an alternate frame instead of a tag. In one or more embodiments of the present invention, the user is presented with one or more tags corresponding to one or more selected objects within the displayed frame when the video stream is paused on the device. One of ordinary skill in the art will recognize that the user could be presented with one or more tags corresponding to one or more selected objects within the displayed frame while the video stream is being played on the device in accordance with one or more embodiments of the present invention.
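
For illustration only, a minimal sketch of what a tag record of the kind described in paragraph [0016] might contain follows. The field names (labels, object_id, merchant_urls, allowed_countries) are hypothetical assumptions and are not prescribed by the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class Tag:
    # Visible label(s) for the tag, keyed by language code so presentation can be localized.
    labels: Dict[str, str]
    # Private fields that are not displayed: a unique identifier for the tagged object
    # and references to merchant sites or other URLs.
    object_id: str = ""
    merchant_urls: List[str] = field(default_factory=list)
    # Countries where the tag may be presented; None means no restriction applies.
    allowed_countries: Optional[Set[str]] = None
```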

[0017] In one or more embodiments of the present invention, a location of the device is determined by a network connection. The network connection is a cellular data network, a short-range wireless network, a long-range wireless network, a wired network, or a global positioning satellite system. The determination of the location of the device provides for customization of tags presented to the user, enforcement of restrictions on the tags presented to the user, and localization of the tags presented to the user. For example, custom tags relating to the location of the device may be presented, tags that would otherwise be presented may not be presented based on the location of the device, and tags may be presented in a manner consistent with the location of the device. In one or more embodiments of the present invention, the determination of the location of the user and device provides for the presentation of tags to the user relating to objects depicted in the video stream. In one or more embodiments of the present invention, upon a determination of the location of the user and device, a tag may be presented in the user's native language, the user's native language being different than the native language spoken at the location of the user and the device. The user's native language is determined by a preference selected by the user in advance or by the device.
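
Building on the hypothetical Tag record sketched above, the following illustrates one way the customization, restriction, and localization of tags described in paragraph [0017] could be applied. The function and its parameters are illustrative assumptions, not the claimed method.

```python
def select_tags(tags, device_country, user_language):
    """Filter and localize tags for presentation based on device location and user preference."""
    presented = []
    for tag in tags:
        # Enforcement of localization rules: skip tags restricted away from this location.
        if tag.allowed_countries is not None and device_country not in tag.allowed_countries:
            continue
        # Localization: prefer the user's native language, falling back to any available label.
        label = tag.labels.get(user_language) or next(iter(tag.labels.values()))
        presented.append((label, tag))
    return presented
```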

[0018] In S3, the device reproducing the video stream detects the user's selection of one of the one or more tags corresponding to the one or more objects displayed within a frame of the video stream. In one or more embodiments of the present invention, the detection is based on a touch-screen interface of the device reproducing the video stream. The touch-screen interface spans the display area of the device. In one or more embodiments of the present invention, the detection is based on a spoken command the device reproducing the video stream is configured to receive. One of ordinary skill in the art will recognize that a cursor, a mouse, a keyboard, or other conventional interface means could be utilized in accordance with one or more embodiments of the present invention.

[0019] In S4, upon the detection of a user's selection of a tag corresponding to an object displayed within a frame of the video stream, the user is presented with one or more options corresponding to the selected tag. The options include choices that are dependent on the location of the device at the time the tag is selected. In one or more embodiments of the present invention, an option includes a location of one or more physical merchant stores near the location of the device so that the user may browse or purchase the physical object corresponding to the tagged object. Another option provides a link to a web-based merchant store that sells the tagged object. In one or more embodiments of the present invention, an option includes linking to a web-based merchant store in the country in which the user and device are located or a home country of the user. Another option provides textual, graphical, audio, video, web-based, text-to-speech, or other ad-hoc content. One of ordinary skill in the art will recognize that the options presented could vary in accordance with one or more embodiments of the present invention. The determination of the location of the device provides for customization of options presented to the user, enforcement of restrictions on the options presented to the user, and localization of the options presented to the user.
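
As an illustrative sketch of the location-dependent option list described in paragraph [0019], the following assembles options for a selected tag. The names build_options, find_nearby_stores, and web_stores_by_country are hypothetical and stand in for whatever store directories an implementer has available; they are not defined by the patent.

```python
def build_options(tag, device_location, device_country, find_nearby_stores, web_stores_by_country):
    """Assemble location-dependent options for a selected tag."""
    options = []
    # Nearby physical merchant stores where the user could browse or purchase the tagged object.
    for store in find_nearby_stores(tag.object_id, device_location):
        options.append(("visit_store", store))
    # A web-based merchant store localized to the country where the user and device are located.
    if device_country in web_stores_by_country:
        options.append(("open_url", web_stores_by_country[device_country]))
    # Any merchant URLs carried in the tag's private fields.
    for url in tag.merchant_urls:
        options.append(("open_url", url))
    return options
```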

[0020] In S5, the user's selection of an option is detected by the device. In one or more embodiments of the present invention, the detection is based on the touch-screen interface of the device reproducing the video stream. The touch-screen interface spans the display area of the device. In one or more embodiments of the present invention, the detection is based on a spoken command the device reproducing the video stream is configured to receive. One of ordinary skill in the art will recognize that a cursor, a mouse, a keyboard, or other conventional interface means could be utilized in accordance with one or more embodiments of the present invention.

[0021] In S6, the device is directed to perform an action in accordance with the user's selection of the option. In one or more embodiments of the present invention, one or more frames of the video stream may be transmitted to the remote server. The transmitted frame or frames may be used for analysis and comparison of the frame or frames with a database of information as part of the creation of one or more tags.

[0022] For the purposes of illustration only, a user browsing a museum may view a video stream depicting the works of art located within the museum, i.e., a video tour. The user may select a tag corresponding to a specific piece of art depicted in the video stream that is of interest to the user. Upon selection of the tag, the user is presented with one or more options that correspond to actions that may be taken. One option provides the location of nearby physical merchant stores that sell prints of the piece of art. Another option provides links to web-based merchant stores that sell prints of the piece of art. Another option provides one or more of a textual description of the piece of art, a graphical depiction of the artist, an audio commentary by a critic, a related video clip of interest, a website of interest, or other ad-hoc content. Another option directs the user to the specific piece of art depicted as determined by the user's current location and a map of the museum. Another option directs the user to nearby pieces of art that might be of interest to those that appreciate the specific piece of art selected.

[0023] Alternatively, for the purposes of illustration only, a vision-impaired user may use the device as a virtual eye. For example, overlays are added to the live video stream and a text-to-speech interface communicates important information to the user. If, for instance, the user is walking down a street, one or more objects that lie ahead, to the left, to the right, behind, above, or below the current location of the device are identified. The objects are identified using, in part, geolocalization of the device and information provided by the remote server relating to the location of the device and objects depicted in the video stream. The device communicates the location of the objects to the user through a text-to-speech interface. Common objects, such as upcoming streets and intersections, building names, and other landmarks, are communicated to the user through the text-to-speech interface. One of ordinary skill in the art will recognize that while the method has been described with reference to a video stream, the method could be applied to a downloaded video in accordance with one or more embodiments of the present invention.
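
The "virtual eye" scenario of paragraph [0023] amounts to computing where identified objects lie relative to the device and speaking the result. A minimal sketch follows, assuming the remote server supplies object names and coordinates and that some text-to-speech routine (here the placeholder speak) is available; the direction is derived from the standard great-circle initial-bearing formula.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from point 1 to point 2 (standard great-circle formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def describe_objects(device_lat, device_lon, device_heading, nearby_objects, speak):
    """Announce where each nearby object lies relative to the direction the device is facing.

    nearby_objects is a list of (name, lat, lon) tuples assumed to come from the remote
    server; speak stands in for whatever text-to-speech interface the device exposes.
    """
    for name, lat, lon in nearby_objects:
        relative = (initial_bearing(device_lat, device_lon, lat, lon) - device_heading) % 360.0
        if relative < 45 or relative >= 315:
            direction = "ahead"
        elif relative < 135:
            direction = "to the right"
        elif relative < 225:
            direction = "behind you"
        else:
            direction = "to the left"
        speak(f"{name} is {direction}.")
```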

[0024] Figure 2 shows an exemplary structure of an overlay file in accordance with one or more embodiments of the present invention. In one or more embodiments of the present invention, the overlay file 200 may be structured according to a geographical heading or geolocalization data. In one or more embodiments of the present invention, the overlay file 200 may be structured by the start and finish times of one or more chapters. For purposes of illustration only, consider overlay file 200 that corresponds to a thirty-second video stream that includes one chapter. In overlay file 200, chapter one 210 may be defined to start at time {0} and finish at time {10}. One of ordinary skill in the art will recognize that the granularity of the time scale could be changed in accordance with one or more embodiments of the present invention. In addition, one of ordinary skill in the art will recognize that there may be a plurality of chapters that vary in length in accordance with one or more embodiments of the present invention.

[0025] In the overlay file 200, chapter one 210, identified by the start and finish times 220, includes a list of one or more objects 230. Each of the objects 230 includes a unique ID 240 for the object, a timeframe 250 when the object is depicted in the video stream, a name 260 of the object, a description 270 of the object, and actions authorized 290 for the object. The actions authorized 290 for the object may include one or more tags based on geolocalization data. The description 270 includes a location 280 of the object 230 displayed within one or more frames identified by the timeframe 250. A given chapter may contain no objects, one object, or a plurality of objects. One of ordinary skill in the art will recognize that the overlay file 200 could be structured differently in accordance with one or more embodiments of the present invention. For example, in an alternate embodiment, the overlay file may be structured based on geolocalization data relating to a past or present location of the user and device and information transmitted from a remote server to the device based on the location of the device.
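
A rough sketch of a data structure mirroring the overlay file 200 of Figure 2 is given below, with chapters keyed by start and finish times 220 and objects carrying the fields 240 through 290. The class and field names are illustrative assumptions; the actual file format is not specified here.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class OverlayObject:
    object_id: str                    # unique ID (240)
    timeframe: Tuple[float, float]    # times when the object is depicted (250)
    name: str                         # name of the object (260)
    description: str                  # description of the object (270)
    location: Dict                    # location within the frame (280), e.g. a polygon or zone
    actions_authorized: List[str] = field(default_factory=list)   # actions authorized (290)

@dataclass
class Chapter:
    start: float                      # chapter start time (220)
    finish: float                     # chapter finish time (220)
    objects: List[OverlayObject] = field(default_factory=list)    # objects (230)

@dataclass
class OverlayFile:
    chapters: List[Chapter] = field(default_factory=list)

    def objects_at(self, t):
        """Return the objects depicted at playback time t, searching chapter by chapter."""
        for chapter in self.chapters:
            if chapter.start <= t <= chapter.finish:
                return [o for o in chapter.objects
                        if o.timeframe[0] <= t <= o.timeframe[1]]
        return []
```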

[0026] As noted above, the location 280 of the object 230 is specified in the overlay file 200 as part of the description 270. In one or more embodiments of the present invention, the location 280 is represented by a polygon that describes the location of an object within the displayed frame. In one or more embodiments of the present invention, the location is represented by a division of the displayed frame into zones. One of ordinary skill in the art will recognize that the location may be represented in a variety of manners in accordance with one or more embodiments of the present invention.

[0027] Figure 3 shows a polygon approach to describe the location of an object within a displayed frame of a video stream in accordance with one or more embodiments of the present invention. An object 310 is depicted in displayed frame 300. A polygon 320 is utilized to describe the location of the object 310 by its exact position within the displayed frame 300. One of ordinary skill in the art will recognize that polygon 320 can be located anywhere within the span of the displayed frame 300 in accordance with one or more embodiments of the present invention. In addition, one of ordinary skill in the art will recognize that polygon 320 can take the shape of any object 310 within the span of the displayed frame in accordance with one or more embodiments of the present invention. Polygon 320 information, which corresponds to the location of object 310 within the displayed frame 300, is stored in the overlay file for object 310. Based on the overlay file, a tag 330 is presented as an overlay of the displayed frame 300. The tag 330 is operable by the user selecting the tag 330 anywhere within polygon 320. In one or more embodiments of the present invention, the user selects the tag 330 by uttering a spoken command to the device indicating a selection of the tag 330. In one or more embodiments of the present invention, there may be relatively few objects of interest within a displayed frame. Accordingly, the displayed frame may be divided into zones of roughly equal proportions or zones identified relating to specific objects.
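
Determining whether a touch or cursor position falls inside polygon 320 is a standard point-in-polygon test. A minimal ray-casting sketch follows; it is one possible way to make tag 330 operable anywhere within the polygon and is not mandated by the patent.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as a list of (px, py) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Toggle for every edge that a rightward horizontal ray from (x, y) crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

For example, point_in_polygon(120, 80, [(100, 50), (200, 50), (200, 150), (100, 150)]) returns True, so a touch at (120, 80) would operate a tag whose region is that rectangle.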

[0028] Figure 4 shows a quadrant approach to describe the location of an object within a displayed frame in accordance with one or more embodiments of the present invention. An object 450 is depicted in displayed frame 400. Displayed frame 400 is divided into quadrants 410, 420, 430, and 440 of roughly equal proportions. Object 450 is depicted in quadrant 410 of displayed frame 400. Accordingly, the entire span of quadrant 410 is defined to correspond to object 450. Quadrant 410 information, which corresponds to the location of object 450 within the displayed frame 400, is stored in the overlay file for object 450. Based on the overlay file, a tag 460 is presented as an overlay of the displayed frame 400. The tag 460 is operable by the user selecting the tag 460 anywhere within quadrant 410. In one or more embodiments of the present invention, the user selects the tag 460 by uttering a spoken command to the device indicating a selection of the tag 460. One of ordinary skill in the art will recognize that there are a variety of ways in which to partition the displayed frame into zones in accordance with one or more embodiments of the present invention.
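
For the quadrant approach of Figure 4, hit-testing reduces to comparing the touch point against the frame midlines. A short sketch follows; the quadrant numbering is arbitrary and used only for illustration.

```python
def quadrant_of(x, y, frame_width, frame_height):
    """Map a touch point to one of four equal quadrants of the displayed frame.

    Returns 0 for top-left, 1 for top-right, 2 for bottom-left, 3 for bottom-right.
    """
    col = 1 if x >= frame_width / 2 else 0
    row = 1 if y >= frame_height / 2 else 0
    return row * 2 + col
```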

[0029] Figure 5 shows a device in accordance with one or more embodiments of the present invention. A device 500 comprises a processor 510, a storage device 520, a display device 530, a network device 540, a camera 545, and an interface 550. In one or more embodiments of the present invention, the interface 550 is a touch-screen interface that receives input by touching the display device 530. The touch-screen interface spans the display area of the device 500. In one or more embodiments of the present invention, the interface is a combination of a spoken command interface that allows the user to provide spoken word commands to the device 500 and a text-to-speech interface that allows the device to provide textual information in an audio format to the user. One of ordinary skill in the art will recognize that the interface 550 could be a cursor, a mouse, a keyboard, or other conventional interface in accordance with one or more embodiments of the present invention.

[0030] For purposes of illustration only, a user seeks to view a video stream provided by a third-party content provider and hosted by a host provider that may or may not be the same as the third-party content provider on the device 500. Alternatively, the user may utilize a video stream corresponding to a live video stream from the camera 545 of the device 500. In one or more embodiments of the present invention, upon the user's selection of a video stream, the user is provided with an overlay file specific to the selected video stream by a network connection. In one or more embodiments of the present invention, the overlay file may be previously stored on the device 500. In one or more embodiments of the present invention, the overlay file is provided to the device 500 from a remote server (not shown). In one or more embodiments of the present invention, the overlay file is created on the device 500. The video stream is then provided to the user by the host provider, third-party content provider, or the camera 545 and displayed on the display device 530. In one or more embodiments of the present invention, the user may pause the video stream. In one or more embodiments of the present invention, the video stream is displayed without pausing when, for example, the video stream is provided by the camera 545 of the device 500. The user is presented with one or more tags corresponding to an object depicted within a displayed frame on the display device 530. The overlay file and the presenting of the tags are separate from and independent of the video stream to the extent the video stream is hosted, provided, and displayed on the device 500 at the same time as the one or more tags are presented. In one or more embodiments of the present invention, the presentation of the tag via the overlay file is invoked upon the pausing of the video stream. In one or more embodiments of the present invention, the tag is presented while the video stream is being displayed. The device 500 takes appropriate action upon receipt of input from the user.
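
One plausible way to tie the overlay file to playback, assuming the OverlayFile sketch given earlier and a hypothetical display.draw_tag drawing call, is to present a tag for whichever objects the overlay file lists at the paused playback time:

```python
def on_pause(overlay_file, playback_time, display):
    """Present a tag for each object the overlay file lists at the paused playback time."""
    for obj in overlay_file.objects_at(playback_time):
        # display.draw_tag stands in for whatever drawing facility the device exposes.
        display.draw_tag(label=obj.name, region=obj.location)
```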

[0031] The interface 550 detects whether the user has selected a tag. Upon detection of the selection of a tag, the user is presented with one or more options corresponding to the selected tag. The interface 550 detects whether the user has selected an option. Upon detection of the selection of an option, the device 500 performs an action in accordance with the user's selected option. As noted above, that option may be a location of a nearby physical merchant store, a link to a web-based merchant store, textual content, graphical content, audio-based content, video-based content, web-based content, text-to-speech content, or other ad-hoc content.
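
A simple dispatcher for the option kinds listed above might look as follows; device.open_url, device.show_map, and device.play_content are placeholders for platform facilities and are not defined by the patent.

```python
def perform_action(option, device):
    """Dispatch the action corresponding to the option the user selected."""
    kind, payload = option
    if kind == "open_url":        # link to a web-based merchant store or other website
        device.open_url(payload)
    elif kind == "visit_store":   # location of a nearby physical merchant store
        device.show_map(payload)
    elif kind == "show_content":  # textual, graphical, audio, video, or other ad-hoc content
        device.play_content(payload)
    else:
        raise ValueError("unknown option kind: %s" % kind)
```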

[0032] Advantages of one or more embodiments of the present invention may include one or more of the following.

[0033] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for objects depicted in a video stream that is independent of the video stream.

[0034] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a video stream of content provided by a third-party content provider.

[0035] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a video stream of content provided by a third-party content provider and hosted by a host provider that may or may not be the same as the third-party content provider.

[0036] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a live video stream provided by a camera of a device.

[0037] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a video stream provided by a third-party content provider that does not burden a user's viewing experience.

[0038] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a video stream provided by a third-party content provider that is not visible to the third-party content provider or a host provider.

[0039] In one or more embodiments of the present invention, the method provides a frame-based object redirection overlay for a video stream provided by a third-party content provider that does not burden the provision of content by the third-party content provider or the hosting of content by a host provider.

[0040] In one or more embodiments of the present invention, the method may be performed without the participation of a third-party content provider or a host provider.

[0041] In one or more embodiments of the present invention, the method may be separate and independent from the production of a video stream of content by a third-party content provider.

[0042] In one or more embodiments of the present invention, the method may be separate and independent from the provision of content by a third-party content provider.

[0043] In one or more embodiments of the present invention, the method may be separate and independent from the hosting of content by a host provider.

[0044] In one or more embodiments of the present invention, the method generates traffic for physical merchant stores, web-based merchant stores, and websites.

[0045] While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as disclosed herein. Accordingly, the scope of the invention should be limited only by the attached claims.