

Title:
SYSTEM AND METHOD FOR GENERATING AND PROCESSING AN INTERACTIVE PRESENTATION ALLOWING FOR LIVE AUDIENCE FEEDBACK
Document Type and Number:
WIPO Patent Application WO/2015/193640
Kind Code:
A1
Abstract:
The invention comprises a system for processing presentation data, comprising: a storage means operable to store presentation data; a conversion tool, operable to receive from the database at least a subset of the presentation data, and to generate from the presentation data a target set of still images and associated target display parameters, for rendering the presentation data at a target image display means. The invention also comprises a method of processing presentation data comprising the steps of: storing presentation data in a storage means, receiving from the database at least a subset of the presentation data, and generating from the presentation data, a target set of still images and associated target display parameters, for rendering the presentation data at a target image display means. The invention also comprises an electronic presentation aid comprising a processing device programmed: to receive presentation data comprising at least a set of sequenced pages and associated display parameters, the pages being one of: still images, video, presentation elements, and some combination thereof, and the associated display parameters providing some instruction to the electronic presentation aid as to how the pages should be displayed, and to cause display of said pages, wherein said processing device is further programmed, in an editing mode, to edit the presentation data by one or more of: modifying one or more of the existing display parameters associated with the set of sequenced pages, and inserting one or more new display parameters into the set of existing display parameters, inserting one or more new pages into the set of sequenced pages, such that secondary data entered on, or retrieved by, the electronic presentation aid is associated with one or more pages within the set of sequenced pages, and to save the edited presentation data for use in the presenting mode.

Application Number:
PCT/GB2015/051666
Publication Date:
December 23, 2015
Filing Date:
June 08, 2015
Assignee:
GLISSER LTD (GB)
International Classes:
G06Q10/10; G06F3/0484; G06Q50/00; G06Q50/20; G09B5/08; G09B7/08
Foreign References:
US20140165087A12014-06-12
US20070266325A12007-11-15
US6760749B12004-07-06
US20070282948A12007-12-06
US20100257449A12010-10-07
US6369835B12002-04-09
US20130038674A12013-02-14
JP2002358064A2002-12-13
Attorney, Agent or Firm:
IP21 LIMITED (Lakeside 300, Old Chapel Way, Broadland Business Park, Norwich, Norfolk NR7 0WG, GB)
Claims:
CLAIMS

1. A system for processing presentation data, comprising:

a storage means operable to store presentation data;

a conversion tool, operable

to receive from the database at least a subset of the presentation data, and

to generate from the presentation data a target set of still images and associated target display parameters, for rendering the presentation data at a target image display means.

2. A system according to claim 1 wherein the conversion tool additionally generates a set of target sequenced pages.

3. A system as claimed in claim 2, wherein the set of target sequenced pages comprises one or more video files.

4. A system as claimed in any preceding claim wherein the presentation data comprises a "slide show".

5. A system as claimed in any preceding claim, wherein the associated display parameters include data defining the appearance of a plurality of presentation elements.

6. A system as claimed in claim 5, wherein the data further defines one or more of: the positions, the sequence, the timing, and the order, of said presentation elements such that they may be displayed in a presentation layout on a display screen at the target image display means.

7. A system as claimed in claim 5 or 6, wherein the data further defines a data type of each presentation element.

8. A system as claimed in claim 7, wherein the data further defines a start and end point for any presentation element identified as a media clip.

9. A system as claimed in claim 7 or 8, wherein the data further defines a data type of a media clip or object associated with a presentation element.

10. A system as claimed in claim 9, wherein the data further defines a start and end point for any media clip associated with a presentation element.

11. A system as claimed in any of claims 7 to 10, wherein the data type is selected from a group comprising shape, text, audio, video, image, animation and transition.

12. A system as claimed in any preceding claim which further comprises an editor tool whereby one or more functional elements is prepended or appended to, or inserted into the target set of still images, and the associated target display parameters are correspondingly updated to include, reference or otherwise identify appropriate functional requirement logic such that the rendering of the presentation data at the target image display means is at some point made contingent on the receipt of user input at or on said target image display means.

13. A method of processing presentation data comprising the steps of:

storing presentation data in a storage means,

receiving from the database at least a subset of the presentation data, and generating from the presentation data, a target set of still images and associated target display parameters, for rendering the presentation data at a target image display means.

14. A method according to claim 13 further including the step of generating a set of target sequenced pages.

15. An electronic presentation aid comprising a processing device programmed:

to receive presentation data comprising at least a set of sequenced pages and associated display parameters, the pages being one of: still images, video, presentation elements, and some combination thereof, and the associated display parameters providing some instruction to the electronic presentation aid as to how the pages should be displayed, and to cause display of said pages,

wherein said processing device is further programmed, in an editing mode, to edit the presentation data by one or more of:

- modifying one or more of the existing display parameters associated with the set of sequenced pages, and

- inserting one or more new display parameters into the set of existing display parameters,

- inserting one or more new pages into the set of sequenced pages, such that secondary data entered on, or retrieved by, the electronic presentation aid is associated with one or more pages within the set of sequenced pages, and

and to save the edited presentation data for use in the presenting mode.

16. An electronic presentation aid according to claim 15 wherein the secondary data is social networking template data defining the appearance of a plurality of social networking template elements and further defining positions of said elements such that they may be displayed in conjunction with the one or more pages within the set of sequenced pages with which they are associated.

17. A presentation aid according to claim 16, wherein editing the presentation data by associating the social networking template data with one or more pages in the set of sequenced pages comprises activating one or more social features for one or more pages in the set of sequenced pages.

18. An electronic presentation aid according to claim 16 or 17 comprising a processing device programmed, in a presentation attending mode:

- to cause display of said plurality of social networking features in the presentation layout, and

- to receive user input signals from time to time indicating individual social networking features within said presentation layout and in response to the user input signals to send a notification to a target processing device regarding the indicated social networking feature.

19. A presentation aid according to claim 18, wherein said processing device and/or other processing device is further programmed:

to receive a response from the target processing device regarding the indicated social networking feature; and

to display the response as graphical data or text data on a display screen with respect to the presentation data.

20. A presentation aid according to claim 15 wherein the secondary data is one of:

- annotation, comment or note data,

- reporting data,

- user input data mandatorily demanded by one or more pages within the set of pages which constitute the presentation.

21. A presentation aid as claimed in any of claims 15 to 20, further comprising a user input device for generating said user input signals.

22. A presentation aid as claimed in claim 21, wherein said user input device has the function of a touchscreen.

23. A computer program, which when executed by a computer, causes a processing device to implement the system of any of claims 1 to 12 or the presentation aid of any of claims 15 to 22.

24. A storage medium storing the program of claim 23.

25. A system, method or presentation aid substantially as described herein with reference to the accompanying drawings.

Description:
SYSTEM AND METHOD FOR GENERATING AND PROCESSING AN INTERACTIVE PRESENTATION ALLOWING FOR LIVE AUDIENCE FEEDBACK

Field of the invention

The present invention relates to a system and method for processing presentation data, and to electronic presentation aids that use display screens and user input to display and collect information, for example in a meeting.

Background

Presentation aids of various types are widely used to convey information. Such aids are used widely in teaching, sales, business management and other applications. The most common example of an electronic presentation aid is a computer running presentation software, such as Microsoft PowerPoint or Keynote. Essentially the software generates a "slide show" comprising a sequence of images. The design effort and organisation of the presentation is done in advance, perhaps using one template for many presenters. This allows sophisticated images to be presented rapidly using standard computer hardware. Within the slide show format, however, the presentation is generally constrained to follow a predetermined sequence. The content is delivered passively in one direction from presenter to audience, and there is no easy way for audience members to interact with each other. This can result in a presentation that is unengaging and not effective in meeting the presenter's objective. It is not easy for the presenter to quickly assess whether the audience was engaged, and whether the messages have been conveyed effectively and understood. There is no simple mechanism to follow up with interested attendees.

Meanwhile, salespeople, lecturers, marketing managers and event organisers want to deliver a better interactive experience, something that provides for effective communication for both presenter and audience. It is desirable to provide technology to deliver this interactive experience in a cost effective manner, and in a way that does not require a significant amount of retraining of presenters with new presentation systems.

Summary of the Invention

The inventors have sought to provide a novel presentation aid that overcomes at least some of the drawbacks of the known tools and techniques, combining multiple features into a single solution.

In an embodiment of the invention, there is provided a system for processing presentation data, comprising: a database, operable to store, based on presentation data comprising a sequence of pages, a set of still images representative of the appearance of the presentation data and associated display parameters;

a conversion tool, operable to receive from the database at least a subset of the still images and the associated display parameters, and to generate from the extracted still images and the associated display parameters a set of target sequenced pages for rendering the presentation data at a target image display means.

The sequence of pages may be a sequence of slides, images or video files.

The presentation data may be a "slide show" comprising the sequence of pages.

The associated display parameters may include data defining the appearance of a plurality of presentation elements.

The data may further define positions of said presentation elements such that they may be displayed in a presentation layout on a display screen at the target image display means.

The data may further define a data type of each presentation element.

The data may further define a start and end point for any presentation element identified as a media clip.

The data may further define a data type of a media clip or object associated with a presentation element.

The data may further define a start and end point for any media clip associated with a presentation element.

The data type may be selected from a group comprising shape, text, audio, video, image, animation and transition.

In another aspect of the present invention, there is provided a method of processing presentation data comprising the steps of: storing, in a database, based on presentation data comprising a sequence of pages, a set of still images representative of the appearance of the presentation data and associated display parameters; receiving from the database at least a subset of the still images and the associated display parameters, and generating from the extracted still images and the associated display parameters a set of target sequenced pages for rendering the presentation data at a target image display means.

In a further aspect of the present invention, there is provided an electronic presentation aid comprising a processing device programmed: to receive presentation data comprising a set of sequenced pages and associated display parameters, the associated display parameters defining the appearance of a plurality of presentation elements referenced in the set of sequenced pages, the associated display parameters further defining positions of said presentation elements such that they may be displayed in a presentation layout on a display screen; in a presenting mode to cause display of said plurality of presentation elements in the presentation layout; wherein said processing device is further programmed in a presenting editing mode to: receive social networking template data defining the appearance of a plurality of social networking template elements, the social networking template data further defining positions of said elements such that they may be displayed in the presentation layout; receive a user input signal indicating one of said social networking template elements and an edit command; in accordance with the edit command edit the presentation data by associating the social networking template data with one or more pages in the set of sequenced pages; and to save the edited presentation data for use in the presenting mode.

Editing the presentation data by associating the social networking template data with one or more pages in the set of sequenced pages may comprise activating one or more social features for one or more pages in the set of sequenced pages.

Editing the presentation data by associating the social networking template data with one or more pages in the set of sequenced pages may comprise inserting a page from the social networking template presentation data between two pages in the set of sequenced pages.

The processing device may be further programmed in said presenting editing mode to: in accordance with said edit command edit the social networking template data to modify the appearance and/or position of a social networking template presentation element in the presentation when displayed; and to save the modified template data as presentation data for use in the presenting mode.

In a further embodiment of the present invention, there is provided an electronic presentation aid comprising a processing device programmed: to receive presentation data, the presentation data defining the appearance of a plurality of social networking features, the presentation data further defining positions of said social networking features such that they may be displayed in a presentation layout on a display screen; in a presentation attending mode to cause display of said plurality of social networking features in the presentation layout; in said presentation attending mode to receive user input signals from time to time indicating individual social networking features within said presentation layout and in response to the user input signals to send a notification to a target processing device regarding the indicated social networking feature. The processing device and/or other processing device may be further programmed: to receive a response from the target processing device regarding the indicated social networking feature; and to display the response as graphical data or text data on a display screen with respect to the presentation data.

The processing device may be further programmed in an annotation mode to:

receive a user input signal indicating one of said elements and an annotation command; in response to said annotation command, take from the user annotation data associated with the indicated element; and store the annotation data in association with the presentation data.

The processing device or another processing device may be further programmed in a reporting mode to: receive presentation data relating to a delivered presentation and produce a report including the annotation data taken from the user.

The presentation aid may further comprise a user input device for generating said user input signals.

The user input device may have the function of a touchscreen.

In a further embodiment of the invention, there is provided a computer program, which when executed by a computer, causes a processing device to implement the system or the presentation aid as described above.

In a further embodiment of the invention, there is provided a storage medium storing the program.

List of Figures

Figure 1 illustrates schematically the main components of a presentation aid and supporting system,

Figures 2 and 3 schematically illustrate example architectures according to different embodiments of the invention, and Figures 4-23 provide various screenshots taken from particular devices (e.g. smartphone or tablet devices), in particular:

Figure 4 - 'Screenshot 1'

Figure 5 - 'Screenshot 2'

Figure 6 - 'Screenshot 3'

Figure 7 - 'Screenshot 4'

Figure 8 - 'Screenshot 5'

Figure 9 - 'Screenshot 6'

Figure 10 - 'Screenshot 7'

Figure 11A - 'Screenshot 8'

Figures 11B-11F - 'Screenshot 21' - 'Screenshot 25'

Figure 12 - 'Screenshot 9'

Figure 13 - 'Screenshot 10'

Figure 14 - 'Screenshot 11'

Figure 15 - 'Screenshot 12'

Figure 16 - 'Screenshot 13'

Figure 17 - 'Screenshot 14'

Figure 18 - 'Screenshot 15'

Figure 19 - 'Screenshot 16'

Figure 20 - 'Screenshot 17'

Figure 21 - 'Screenshot 18'

Figure 22 - 'Screenshot 19'

Figure 23 - 'Screenshot 20'

Description of Exemplary Embodiments

Figure 1 illustrates schematically the main components of a presentation aid and supporting system. A presentation terminal 100 is provided as the primary presentation aid. Terminal 100 comprises computer hardware including a CPU, a display device DIS and user input devices UIP. A network interface NIF connects the CPU to other sources and repositories of information, including in particular a server 102 that forms part of the overall system. The server is particularly relevant in institutional and enterprise-type applications, where a team of presenters are to be enabled to deliver effective presentations using these aids in a consistent but flexible manner. These presenters (users of the presentation aid) may for example be sales representatives, teachers or anyone with a need to communicate complex messages in an interactive manner.

For the purposes of the present disclosure, the presentation terminal is controlled by a presentation application (app) 104. As with any computing hardware, some firmware and an operating system will be provided for the app to interact with the hardware, including input/output devices, memory and the like. The app may be downloaded to generic hardware from an "app store". The operating system may for example be a version of Apple iOS, Android, BlackBerry OS or Microsoft Windows. A "native wrapper" module NWR is provided in this example, which interfaces between the operating system that is specific to a particular hardware design or range of designs, and the core app 104. The application program may for example be written in JavaScript®, the well-known object-oriented programming language. The wrapper can be written in native code or using a commercially available library such as PhoneGap. Functions of the presentation app 104 may be implemented using commercially available libraries.

Within presentation app 104 there are various functional modules such as an editor 106, presenter 112, attendee 113 and report module 114. Numerous other modules for administration of permissions, housekeeping etc. will be present, but do not need to be described here. Various types of information are also stored within the app or in associated databases. These include a social networking template store 110, a presentation store 122 and a report store 124. Within the template store, various presentation templates TMPA, B etc. are stored. These social networking templates incorporate (either within the template data or by reference) various presentation elements which can be displayed via display DIS in a manner to be described further below. These elements include for example icons, and a separate icon bank is shown within the template store. Also shown is a media bank which stores items M etc. of media such as documents, video or audio clips and the like. Icons and media may also be accessed through the network interface NIF, though it is preferred in the present embodiment to have a self-contained set of data for autonomous and reliable operation.

The Editor module 106 allows a user to generate a bespoke presentation for a particular meeting or group of meetings. For example, it might be desired to deliver versions of a presentation A to two different clients 1 and 2. Based on social networking template TMPA, the user can create presentation PA1 for client 1 and presentation PA2 for client 2. These presentations are shown within the presentation store 122. A presentation PB1 is also present, based on social networking template TMPB. Within the presentation store, the icons may be present again, or the presentations may use references to the icons and media that exist within the social networking template store. When the user uses the presenter module 112 to deliver a presentation, or the attendee module 113 to attend one, notes can be created. In addition to the elements inherited from the social networking template, therefore, the presentation store includes space for notes to be associated with a presentation, and/or with elements within the presentation. Report module 114 can be used to generate reports incorporating a view of the presentation and the notes taken, as will be illustrated further below.

Server app 124 incorporates, in this example implementation, modules and databases having all the same functions and contents as the presentation app (editor, play, report). The server app also manages template and presentation data for many different users, and perhaps different organisations or departments. The storage of these templates and presentations is therefore segregated so that a particular group of users will see only their own material, and other users' materials are kept confidential and hidden (represented by broken outlines within the server app of Figure 1). Each user's templates and presentations can be synchronised between the presentation terminal 100 and server 102. The invention is by no means limited to multi-user or multi-enterprise implementations, however, and could be implemented on a single, stand-alone device.
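The per-group segregation of stored material described above can be sketched as a simple ownership filter. This is an illustrative sketch only; the group names and record fields are assumptions, not details from the specification:

```python
def visible_presentations(store, user_group):
    """Return only the presentations owned by the user's own group.

    Other groups' material stays hidden, as described for the server app
    (broken outlines in Figure 1).
    """
    return [p for p in store if p["group"] == user_group]

# Hypothetical store holding material for two separate groups.
store = [
    {"name": "PA1", "group": "sales"},
    {"name": "PB1", "group": "teaching"},
]

# A user in the "sales" group sees only that group's presentations.
sales_view = visible_presentations(store, "sales")
```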

While certain combinations of programming tools and operating systems have been mentioned as suitable for use in implementing the server application and presentation application, the invention is by no means limited to these examples, and can be implemented with any programmable, interactive display tool. In particular, a presentation may be edited, delivered and reported directly using the server display and user input devices. A presentation may also be edited, delivered and reported using a web-based client. In that case, the functions of the field terminal illustrated in Figure 1 are divided between the presentation app in the form of a web server, and the display and user input devices in the form of a web browser. These are connected via the network interface NIF and internet or local area network, so that the user and colleagues need not be at the same location as the player application and presentation data, although geo-fencing can be applied to restrict use of the application to a limited geographic area.

Referring to Figure 2, an example architecture according to an embodiment is schematically illustrated. In Figure 2, a conversion server 201 and a target display means 202 (for example, a desktop, tablet or mobile device) are provided. The conversion server 201 comprises a database 203 for storing, based on presentation data comprising a sequence of pages, a set of still images representative of the appearance of the presentation data and associated display parameters, and a conversion tool 204 for receiving from the database 203 at least a subset of the still images and the associated display parameters, and for generating from the extracted still images and the associated display parameters a set of target sequenced pages/images/slides for rendering the presentation at the target display means 202. The conversion tool 204 may comprise a lookup table 205 which can be used to map stored display parameters of the presentation data to corresponding display parameters which can be used to render the presentation data at the target display means 202. In operation, the target display means 202 issues to the conversion server 201 a request 206 for the conversion server 201 to provide presentation data suitable for rendering at the target display means 202. The request can include an indication of one or both of the identity of the user (user ID) and the identity of the presentation data (e.g. by uploading a copy of the presentation data or by identifying the location of the presentation data (URL or folder/filename data)). The request may also include an indication of the type of the target display means 202 (Device ID).
In response to the request 206, the conversion tool 204 is operable to receive from the database 203 at least a subset of the still images and the associated display parameters for an identified user/presentation data - the subset being those still images and display parameters (determined from the Device ID) required to generate the target sequenced pages/images/slides for the requesting target display means 202. In some embodiments the type of the target display means may instead be inferred from the origin of the request 206. The database 203 provides to the conversion tool 204 the requested subset of still images and display parameters in a message 209. The conversion tool 204 then uses the lookup table 205 to convert the still images and display parameters provided by the database 203 into the corresponding target sequenced images required by the target display means 202. The conversion tool 204 then formats the generated target sequenced pages into a structure appropriate for the target display means 202, and then sends the resulting formatted target sequenced pages to the target display means 202 via the message 207. The target display means 202 is then able to display the presentation data in a presentation layout on a display screen using the display parameters.
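The lookup-table mapping performed by the conversion tool 204 can be sketched along the following lines. The device identifiers, parameter names and target values here are illustrative assumptions for the purposes of this example, not values taken from the specification:

```python
# Illustrative sketch of the conversion tool's lookup table 205 (Figure 2).
# Maps (stored display parameter, device type) -> device-specific target value.
# All keys and values below are hypothetical.
LOOKUP_TABLE = {
    ("resolution", "tablet"): {"width": 1024, "height": 768},
    ("resolution", "phone"): {"width": 640, "height": 480},
    ("transition", "tablet"): "fade",
    ("transition", "phone"): "none",
}

def convert_for_device(display_params, device_id):
    """Map stored display parameters to parameters usable at the target
    display means, falling back to the stored value when no device-specific
    mapping exists."""
    target = {}
    for name, value in display_params.items():
        target[name] = LOOKUP_TABLE.get((name, device_id), value)
    return target

# A request carrying a Device ID of "phone" yields phone-specific parameters.
target_params = convert_for_device({"resolution": None, "transition": "wipe"}, "phone")
```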

Referring to Figure 3, an example architecture according to an alternative embodiment is schematically illustrated. In Figure 3, the conversion tool 204 resides at the client-side (rather than at the server-side as illustrated in Figure 2). Of course, it is to be understood that some aspects of the conversion tool can reside server-side, whereas other aspects of the conversion tool can reside client-side. Other than the highlighted difference, the architecture illustrated by Figure 3 will operate in a similar manner to that described with respect to Figure 2. In an example, some of the following steps may be conducted in various combinations:

Uploading a presentation

Presenters register and login via the glisser.com website, which logs them into a secure environment (Screenshot 1).

They click the 'New Presentation Event' button (Screenshot 1, Button A), whereby they are taken to the Presentation Upload Screen (Screenshot 2); they then follow steps such as the following three:

1. They select the presentation file via the upload button (Screenshot 2, Button B) (PPT, PPTX or Keynote, for example).

2. They add information about the presentation and event (Screenshot 2, Area C).

3. They click the button which appears (Screenshot 2, Area D) to process the presentation file.

This will use up a Presentation Credit, signified by the number in the Credits Available field (Screenshot 2, Area E), and sends the presenter back to the main screen (Screenshot 3) where they see the presentation added to the list with a Glisser logo visual (Screenshot 3, Area F), and marked as being 'In Queue' and then 'Processing' (Screenshot 3, Text G).

Presenters can also register and login via the Glisser client application ('App') on their mobile or tablet device ('Device').

They tap the 'New Presentation Event', and the App displays the files available to them on their device. They select the appropriate file (in the same formats as outlined above) from those files stored on their Device, or sent, or transferred, to the App.

They can use a third party application on their Device, that creates presentation documents, to send, or transfer, the document (in the same formats as outlined above) to the App.

They then tap on the presentation file they wish to upload to begin the upload process, which repeats the above steps 1 to 3.

A presentation document can also be created from within the App on the presenter's Device, or Website, to be used in an upcoming presentation event, by the presenter, before enabling the social features. The act of creating a presentation document may reduce the presenter's 'presentation credits' by one (Screenshot 4, Area J), thus the presenter pays a credit for each presentation.

Processing a presentation

For certain presentation document types, or formats, that are uploaded, added or created, it may be necessary to convert the document to video. For each different document type, or format, the software and hardware used to perform this conversion may vary.

For PPTX or PPT files, for example, functionality available within Microsoft Office PowerPoint libraries can be used to convert the presentation into a 'Pack', in order to have fully compatible rendering. In effect, each slide can be a discrete chapter of video, so that the application can allow social features to be added on a slide-by-slide basis, and social slides to be inserted between specific slides in the original deck.

Software support libraries (such as those distributed with Microsoft Office PowerPoint or Apple Keynote) can be used to broadly identify every object (whether it is shape, text, audio, video, image, animation, transition) within the slide. Mime type detection can then be used to identify the media format type from within this list.

A media handler can be used to identify the length of the media for known and supported formats, and a custom slide timing detection function is implemented which, in accordance with the configuration settings (the server-set slide duration) and the data obtained while parsing the slides, defines the slide timings used within the client application.

For PPTX or PPT files, for example, the media handler can use Microsoft Windows' native API to detect the codec/format and/or media length of each media type in the document. The slide timing detection function can use the Microsoft Office PowerPoint libraries to enumerate each vector object (text, drawings, etc.) from the presentation file, gathering their time, location, size and other parameters, and interpreting those to create appropriate timings for when, and how, these objects appear and disappear in the video.
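The timing detection described above may be sketched as follows. This is a simplified illustration only, not the actual implementation; the input structure and the name `appears_at` are assumptions for the purpose of the example, with each slide represented as a list of parsed objects carrying an appearance offset.

```python
# Sketch of the custom slide-timing detection: each slide's duration in the
# rendered video is derived from the latest object appearance plus a
# server-set dwell time, or the dwell time alone for slides with no timed
# objects. Field names are illustrative.

DEFAULT_SLIDE_DURATION = 5.0  # server-set fallback, in seconds

def compute_slide_timings(slides, default_duration=DEFAULT_SLIDE_DURATION):
    """Return (start, end) offsets for each slide within the rendered video."""
    timings = []
    cursor = 0.0
    for objects in slides:
        # latest appearance among this slide's objects (0.0 if none)
        last_event = max((o["appears_at"] for o in objects), default=0.0)
        duration = last_event + default_duration
        timings.append((cursor, cursor + duration))
        cursor += duration
    return timings
```

The resulting offsets are of the kind a client application could use to step the video chapter-by-chapter in sync with the original slides.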

The time taken to process varies depending upon the size and the complexity of the presentation file in addition to the processing power available to the conversion server/cluster, but broadly most presentations take a few minutes or so to process.

The Pack produced by this process can consist of a video displaying the original presentation, screenshots of the presentation slides, and a set of timings and other related parameters that allow the App to accurately mimic the original presentation when the video is played.

The system can then store the Pack, and the timing information within a set of database tables, and/or files, to be retrieved later.

The system can then report (e.g. by email) to the presenter to let them know processing is complete. On the main screen (Screenshot 4) the Glisser logo is replaced with the first slide of the Pack (Screenshot 4, Area H) and the 'status' of the Pack swaps from 'processing' to 'ready' (Screenshot 4, Text I).

Presentations can be processed into Packs in a FIFO (First In, First Out) manner, as they are submitted to the queue from the management site. It will be appreciated that presentations can be processed in any order, for example, in order of priority.
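The queue behaviour just described, FIFO by default with optional priority ordering, can be sketched as follows. This is a minimal illustration; class and field names are assumptions, not taken from the actual system.

```python
import heapq
import itertools

# Sketch of the conversion queue: jobs are processed first-in, first-out
# unless a lower priority value is supplied, in which case the job jumps
# the queue. A monotonic counter breaks ties so equal-priority jobs stay FIFO.
class ConversionQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()

    def submit(self, presentation_id, priority=0):
        heapq.heappush(self._heap, (priority, next(self._order), presentation_id))

    def next_job(self):
        _, _, presentation_id = heapq.heappop(self._heap)
        return presentation_id
```

With the default priority of 0 for every submission, this reduces to the plain FIFO behaviour described in the text.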

The act of processing may also reduce the presenter's 'presentation credits' by one

(Screenshot 4, Area J), thus the presenter pays a credit for each presentation processed. Additional presentation credits can be bought via the website, by clicking the 'purchase' button (Screenshot 4, Button K). A third party payments company can handle this, and credit card processing.

Website Embedded Presentation/Movable data capture 'Gates'

Any presentation processed as described above can also be embedded within any standard webpage or website by presenters using an 'Embed Code'. This is a simple string of computer code that displays the presentation within a frame within a website, and enables a user of the webpage/website to interact with the presentation in exactly the same manner that a presenter would interact with the App running on his mobile, tablet or other device. Indeed, such 'embedding' may be visualised as a simple embedded or embeddable version of the App directly in the webpage/website. Viewers of the website can thus interact with the presentation and vote in polls, as if they were watching it live.

It is worth noting that it is possible to embed the entire functionality of the invention within a standard webpage in the manner described, and to this end it is thus possible to create and modify presentations (including slide re-arrangement, insertion and deletion) through an embedded webpage application; it is of course equally possible to receive a completed presentation in exactly the same manner, and to comment on and interact with that presentation as a mobile smartphone or tablet device user would when using a dedicated or so-called 'native' application executing directly on that device.

In a particular embodiment, the embedded App provides the ability for presenters to permit insertion of data capture slides, and to control specifically where these are inserted, e.g. before, within and after a presentation. These data capture slides include user interface elements which allow entry of, or automatically populate, the viewer's email address, other contact details, and other user-specific information required or desired (see below for slide insertion, in particular with reference to Figures 10 and 11).
The data capture slides may function slightly differently to the other (image-based) slides in that a user cannot move on to further slides in the presentation until he or she has completed the data request (entered an email address, answered a question, shared on social media, etc.) The presenter uploading the presentation can therefore quickly and easily adjust the exact point at which data requests are made, and thus improve the ratio of data collection episodes to user exits or abandonments.
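The 'gate' behaviour described above can be sketched as a simple state machine: advancing past a data capture slide is blocked until the requested data has been supplied. All class, slide and field names below are illustrative assumptions, not part of the actual product.

```python
# Sketch of the data-capture 'gate': a viewer cannot move past a gate slide
# until the data it requests (e.g. an email address) has been submitted.
class GatedPresentation:
    def __init__(self, slides, gates):
        self.slides = slides        # ordered slide identifiers
        self.gates = dict(gates)    # slide id -> required field name
        self.index = 0
        self.captured = {}

    def submit(self, field, value):
        self.captured[field] = value

    def advance(self):
        current = self.slides[self.index]
        required = self.gates.get(current)
        if required and required not in self.captured:
            return False            # gate not satisfied; stay on this slide
        if self.index < len(self.slides) - 1:
            self.index += 1
        return True
```

Moving a gate earlier or later in the slide list is then simply a matter of changing which slide identifier carries the gate entry, which mirrors the presenter's ability to adjust the exact point at which data requests are made.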

A particular advantage of this embodiment is that it combines an existing content format (PowerPoint or Keynote slides converted into MP4 and JPEG) with a 'gate' function capable of capturing information and preventing further progress (to additional slide content) if the data is not provided by the user.

Adding Social Features

Before the presentation event, the presenter logs into the App on their Device, e.g. using the same username and password as the website, via the login screen. Once logged in, they arrive at the Home Screen (Screenshot 5), showing separate buttons to toggle between Packs they are presenting (Screenshot 5, Button L) and those they are attending (Screenshot 5, Button M). Only one app is needed per person: they can both present and attend as an audience member using it. Functionality can also be implemented for adding these social features via the website itself.

The App displays the available Packs as buttons, or icons, ('Presentation Button') that contain a placeholder for the first slide screenshot and/or other logos, and additional Editing and Presenting buttons for editing and presenting. These buttons are displayed in sections for either 'Presenting' or 'Attending'.

The presenter taps the 'Presenting' button to see their new event and 'Pack' (Screenshot 5, Area N).

The presenter taps on the Presentation Button to download the Pack to their Device. The app requests the relevant Pack from the server, and saves all these items to the device.

When downloaded, the presenter can see the first slide of the Pack (Screenshot 6, Area O), within the Presentation Button within the placeholder, as well as an 'Edit' button (Screenshot 6, Button P) and a 'Present' button (Screenshot 6, Button Q).

Simultaneously, the app requests a unique Invitation Code, which is created for that Pack by the server (Screenshot 6, Text R). This is the code the presenter will send to their audience so they can also download the Pack to their device during the event. Only those audience members using the App that have used the Invitation Code to download the Pack can view and attend the Pack.

Before presenting, the presenter can add social features and social polling slides, by tapping the 'Edit' button.

This displays the Edit and Preview-Practice Views (Screenshot 7). The Preview-Practice and Edit views can be accessed either by tapping the relevant Edit or Preview-Practice buttons to display those sections, or the presenter can rotate their device between landscape Device orientation for the Preview-Practice View and portrait Device orientation for the Edit View.

In this example, there are four 'presentation slide social features' and three polling/question 'social slides'. The presentation slide social features are:

1. Up/down voting (Screenshot 7, Button S) - where audience members can 'like' or 'dislike' a slide

2. Questions (Screenshot 7, Button T) - where audience members can ask presenters a question about a slide, and other audience members can 'up vote' questions they like (which a presenter can prioritise when answering)

3. Comments (Screenshot 7, Button U) - where audience members can make general comments that they're not necessarily expecting a response to

4. Twitter (Screenshot 7, Button V) - where audience members can view any Tweets using a hashtag the presenter has set via the Twitter Hashtag Edit View (accessed by pressing the Twitter HashTag button) (Screenshot 7, Button W), or Tweet themselves if they have linked their Twitter account

The polling/question social slides are:

1. Multiple choice (Screenshot 7, Area X) - where presenters can ask a question with a set number of answers for the audience to vote on, and display their collective response

2. Rating (Screenshot 7, Area Y) - where presenters can ask the audience to rate something (out of five or ten stars) and display their collective response

3. Free text (Screenshot 7, Area Z) - where presenters can invite their audience to write a general response to any question

Polling/Question Social Slides can be added in one of two ways:

1. The presenter can tap to hold, and drag to move, a social slide from its placeholder outside the Slide Browser View to a position between two normal slides in the Slide Browser View, and then drop it into that position.

2. Alternatively, the presenter can tap on a social slide type button, and the Slide Browser will display placeholder buttons between all the normal slides, signifying the position into which a social slide can be placed. By tapping one of these placeholder buttons, the social slide is placed in that position between two normal slides.
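Either method of placement reduces, in data terms, to inserting a social slide at a chosen position in the ordered slide list, which can be sketched as follows (a purely illustrative model; slides are represented here as simple identifiers):

```python
# Sketch of placing a polling/question social slide between two normal
# slides: the Pack's slide order is modelled as a plain list.
def insert_social_slide(slides, position, social_slide):
    """Return a new slide order with social_slide placed so that it
    appears immediately after slides[position - 1]."""
    return slides[:position] + [social_slide] + slides[position:]
```

For example, dropping a poll between the first and second normal slides corresponds to `position=1`.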

When this is done, the app displays a new question slide (Screenshot 8) and the keyboard, where the presenter can write their question (Screenshot 8, Section AB), and set parameters related to the question (Screenshot 8, Section AC), such as: multiple choice answers (and move/delete them), or select whether a rating is out of five or ten stars. They then tap 'Save' or 'Exit' to save the edits (Screenshot 8, Button AD).

They then tap the orange button in the top left (Screenshot 8, Button AE) to exit and save the Pack social parameters to the server, or the Pack social parameters are saved as changes are made during the presenter's editing process.

The presenter can alternatively edit the Pack to add social features from within the website. The features and functionality are the same as that for the App, as above.

Leading on from the above addition of slides, there is also the possibility of enhancing the presentations by inserting 'decision' elements within the standard presentation.

In one embodiment, 'decision tree' presentations are possible in which presenters are able to create presentations which follow different 'paths' (i.e. a series of different slides containing different content), depending upon how audience members vote on one or more specialized or 'decision' polling slides.

This works by presenters uploading all of their 'linear' PowerPoint (or KeyNote) slides into the Glisser system, as if it were one long presentation. When a decision polling slide is inserted into the line of PowerPoint slides, it creates a 'split' whereby the particular slides which are displayed in the presentation following the audience vote demanded by said decision polling slide are determined by the result of that vote. For example, one set of slides may be subsequently displayed after a 'yes' vote, and another, entirely different set of slides may be displayed after the audience votes 'no'.

The number of paths after a split can be dependent upon different factors, for example:

a) The number of options in a multiple choice poll

b) A (presenter-determined) set of 'grouped scores' in a rating poll - for example, a rating of 1 to 4 out of 10 sends the presentation down Slide Path 1, a rating of 5 to 7 down Slide Path 2, and a rating of 8 to 10 down Slide Path 3
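The 'grouped scores' rule in example (b) can be sketched as a simple mapping from a rating to a slide path. The band boundaries below mirror the example in the text; the function name and band structure are illustrative assumptions.

```python
# Sketch of a presenter-determined 'grouped scores' rule for a rating-poll
# decision slide: each inclusive band of ratings (out of 10) selects a path.
def choose_path(rating, bands=((1, 4, "Slide Path 1"),
                               (5, 7, "Slide Path 2"),
                               (8, 10, "Slide Path 3"))):
    for low, high, path in bands:
        if low <= rating <= high:
            return path
    raise ValueError(f"rating {rating} falls outside all configured bands")
```

A multiple-choice decision slide (example (a)) would be the degenerate case in which each option maps directly to one path.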

Multiple decision polling slides can be inserted into a presentation, to allow a series of splits and new paths, creating a bespoke journey through the entire slide deck (the entirety of which must of course be originally uploaded), determined by the most prominent audience vote.

Presentation paths may split and then join again. The structure is determined by the presenter using a unique Presentation Path Tool within the Glisser Edit screen.

Each of the above possibilities and features is illustrated in Figures 11B, 11C, 11D, 11E and 11F (screenshots 21-25). Specifically, screenshot 21 demonstrates how a user-selectable decision slide Z1 may be, for example, dragged and dropped into position within an uploaded presentation about to be edited, in much the same way as shown in Figure 10 (screenshot 7) above. Delete symbols 'X', referenced Z2 in Fig. 11B, allow particular slides within the presentation to be deleted. After insertion of the slide Z1, the presentation layout is altered as shown in Fig. 11C, in which it can be seen that, initially at least, slides are repeated down both trees subsequent to the decision slide. Figure 11D shows a presentation in which multiple decision slides have been introduced, and Figure 11E shows that a decision slide may have multiple (i.e. greater than 2) subsequent trees. Finally, Fig. 11F shows that decision trees may re-join one another.

Presenting and attending events process

The presenter sends out their unique Invitation Code to their audience. An audience member choosing to attend the event can redeem the Pack by using this code. Only the audience members who have downloaded the App and have the Invitation Code can redeem the Pack. Furthermore, only audience members who have redeemed the Invitation Code and downloaded the Pack can participate in the presentation process. The audience member taps on the 'Download/Redeem' Button (Screenshot 9, Button AF), which brings up the code entry screen and keyboard (Screenshot 10) where the audience member can type in the code.

When the code is sent to the server, it gives that audience member permission to download and view the Pack. They download the Pack in the same way that a presenter downloads the Pack - the download button appears (Screenshot 9, Button AG), which when tapped downloads the full Pack and displays the first slide as a button (Screenshot 11, Button AH).

Alternatively, the audience members can connect their Devices directly with the presenter's Device, via a mesh, or multi-peer, network of Devices which have the App installed, at the time of the presentation, and by entering the Invitation Code, as before, they can then download the Pack directly from the presenter's Device.

Geo-fencing

The presenter has the option, via the website or via the App, to restrict audience members' viewing and attendance of the presentation event according to the physical location of their device. The presenter can set the physical location for the presentation event in advance of the event, either via the App or the Website. When this feature is activated, only those audience members who are in the vicinity of the presenter's presentation event will be able to view the presentation.

When this feature is deactivated, any audience member who has redeemed the Invitation Code can view the presentation during the presentation event remotely, at any distance from the physical location of the event.
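The geo-fence check itself can be sketched as a great-circle distance test between the device and the presenter-set venue. The radius value is an illustrative assumption; the patent text does not specify how "the nearby region" is defined.

```python
import math

# Sketch of the geo-fence check: an attendee may view the event only when
# their device lies within a configured radius of the venue location.
def within_geofence(device, venue, radius_m=500.0):
    """Return True when the haversine distance between the two (lat, lon)
    points, given in degrees, is at most radius_m metres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*device, *venue))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    distance_m = 6371000 * 2 * math.asin(math.sqrt(a))  # mean Earth radius
    return distance_m <= radius_m
```

Deactivating the feature corresponds simply to skipping this check before serving the presentation.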

Projection

During the presentation event, the presenter can connect their Device to a video projector, computer monitor or TV, via a wired connection or a 3rd party wireless technology (such as Apple's Apple TV, or Google Chromecast) to display a Projected View of the presentation.

Then, from within the App, the presenter can choose to display or hide, on the Projected View and on a per-slide basis, the content of the presentation, including the presentation video, the polling/question social slides, and the results of the audience's social feature activity.

The App displays all the controls to the presenter on their Device, yet it only displays to the audience those features above that the presenter chooses.

Presenting

To begin presenting, the presenter taps the 'Present' Button, and the audience member taps on the Presentation button within the App, on their respective Devices. This displays the Live Views (Screenshots 12 and 13).

In the Audience Live View (Screenshot 12), the audience member will only be able to see the first slide of the presentation. In the Presenter Live View (Screenshot 13) the presenter will be able to access all their slides along the Slide Browser View (Screenshot 13, Area AI) at the bottom of their screen.

When the presenter advances the main presentation, either by pressing the Next Button (Screenshot 13, Button AJ) or by tapping on a slide in the Slide Browser, the audience devices are notified via the server. Audience members can then view the new slide by tapping the next button, or by tapping on the new slide(s) which appear in the Slide Browser.

Alternatively, the presenter and audience can move from one slide to the next by swiping left or right on the Device screen, which moves the presentation right or left respectively.

Live View Feature Layout

To allow the presenter and audience users to view the Pack, the polling/question social slides and presentation slide social features, in the Live View of the App unimpeded, they have the option to show or hide aspects of the interface, including:

1. the Main Controls Bar View ('Main Controls') (displayed along one of the edges of the Device screen), displaying the Exit button, Slide Presentation Position label/text, Video Time label/text, Voting Buttons and voting results label/text(s), and buttons to view the Questions, Comments and Tweets Table Views.

2. the Notes and Drawing Controls Bar View ('Annotation Controls') (displayed along one of the edges of the Device screen), displaying the drawing and notes tools, and colour options.

3. the Slide Browser View (displayed along one of the edges of the Device screen), displaying smaller screenshot images of the presentation slides, and polling/question social slides.

4. the Questions, Comments and Tweets Table Views showing audience questions, comments and tweets (either filtered separately or all together).

How the social features work

Depending on what the presenter has selected when inserting social features, presentation slides may have up/down voting buttons, questions, comment or Tweet features enabled, or interactive polling and question slides.

Voting

When audience members tap the up or down vote buttons (Screenshot 14, Buttons AK), the votes are collected by the server, the presenter's device retrieves the results, and they are displayed as percentages of up and down votes at the bottom of the presenter's screen (Screenshot 15, Text AL).
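The server-side tally implied here can be sketched as a reduction of the collected votes for one slide into the displayed percentages. This is an illustrative sketch; the real system's vote representation is not described in the text.

```python
# Sketch of the per-slide vote tally: up/down votes are reduced to the
# percentage figures shown at the bottom of the presenter's screen.
def vote_percentages(votes):
    """votes is an iterable of 'up' / 'down' strings for one slide."""
    votes = list(votes)
    if not votes:
        return {"up": 0, "down": 0}
    up = sum(1 for v in votes if v == "up")
    return {"up": round(100 * up / len(votes)),
            "down": round(100 * (len(votes) - up) / len(votes))}
```

The presenter's device would retrieve and display figures of this form as further votes arrive.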

Questions, Comments and Tweets

When audience members comment, question or Tweet, these responses are sent to the server, and then all audience members and the presenter Apps automatically retrieve them. They can then see them displayed either by revealing the Social View (Screenshot 14, Area AM) by tapping the Social Button (Screenshot 14, Button AN), or on tablets these responses are displayed to the right of the presentation (Screenshot 15, Area AO):

For the questions that appear, the audience have the ability to up-vote a question with the up-vote button (Screenshot 14, Button AP). Questions can be listed with the most up-voted at the top, or chronologically, as the presenter or audience member chooses by pressing the appropriate Order button.

Comments can appear chronologically. Comments and questions may also appear together in the same list.

The presenter has a Block Button (Screenshot 15, Button AQ) displayed with each question and comment, which when pressed removes the question or comment from the list, by having it deleted from the server. Tweets are retrieved from Twitter's servers by the Glisser server, filtered by the hashtag that has been set by the presenter during editing of the Pack, and then retrieved by all the user's Apps that are attending the live presentation.

In one embodiment, one or more standard Twitter actions may be integrated within the user interface of the App, whether this be in presenter mode or attendee mode. That is, a Twitter-enabled user of the App can 'retweet', 'reply' or 'star' any Twitter posts which are present or can be caused to be displayed in the App user interface. For example, the Glisser server can detect whether a slide that has been live-shared on Twitter by a presenter has been replied to, and count this as a 'question' in the presenter view. As a further example, the Glisser server can detect whether a Glisser slide has been "starred", and count this as a 'like' in the presenter view.

In another embodiment, the App provides the functionality for presenters to pre-write the text of a Tweet associated with each slide, which the App will instruct to be posted at the appropriate time (and optionally displayed, either before or after having been tweeted, or as a result of the Tweet having been received) alongside the slide image when they present. This provides context to accompany the image.

Polling/Question Slides

When a presenter and audience get to a polling/question social slide, the App (and the main projector, if used) will display the Question View (and list of options if a multiple choice slide, or a 5/10 star rating) by hiding, or overlaying, the Presentation Video View. The Question View(s) appear in the same shape and size as the Presentation Video View, thereby appearing as if a seamless part of the presentation.

The audience can see the question and options in their App (Screenshot 16). They can then answer by either tapping on one of the options or writing text as their answer, then tapping the 'Done' button (Screenshot 16, Button AR) to send their answer to the server. The server gathers all the answers together, collating them (if in data form), and they are then retrieved by the Presenter's Device.

The presenter can then tap the 'Done' button to show the collective results as a graph (Screenshot 17). As more results are regularly retrieved from the server, the graphs will adjust, and the presenter can switch between various graph formats by tapping the icons at the bottom right (Screenshot 17, Button AS).

Notes and Drawings

The audience members can also enable the Annotation View (Screenshot 18) while a presentation slide is displayed. This can be done either by tapping the 'Annotation' button on a tablet, or by turning their mobile Device to landscape orientation, which the app will detect, displaying the annotation features appropriately.

They can then tap the 'Add Note' button (Screenshot 18, Button AT) to display a new note overlaid over the top of the presentation slide, or tap the 'Draw' button (Screenshot 18, Button AU) to sketch on the presentation slide in a selection of colours. For drawing, they have pen, eraser, undo and text buttons, and various colour options.

Once the audience member has finished writing or drawing, or moves to the next presentation slide, the App generates a transparent image overlaid with the notes and drawings they just made on the relevant presentation slide.

Then each image is sent to the server and associated with the audience member's user account and the presentation they attended. Upon presentation completion, the server combines the received images into a copy of the original presentation document at the precise location, by overlaying each image over the top of the content of its respective slide. As the images are transparent aside from the drawing and notes content, the content of the respective slide is still visible beneath. The audience member is then sent an email with a link which, when pressed, allows the user to download that individual presentation document to their computer or Device.
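The compositing step relies on the transparency of the annotation layer: wherever the overlay is transparent, the slide content shows through. A pure-Python sketch of that principle, with pixels modelled as RGBA tuples (a deliberately simplified stand-in for real image compositing, which would alpha-blend partial transparency):

```python
# Sketch of combining a transparent annotation layer with the original
# slide: fully transparent overlay pixels let the slide show through,
# opaque ones replace it.
def composite(slide_pixels, overlay_pixels):
    """Both arguments are equal-length lists of (r, g, b, a) tuples."""
    merged = []
    for base, over in zip(slide_pixels, overlay_pixels):
        merged.append(base if over[3] == 0 else over)
    return merged
```

In practice an imaging library would perform this per-pixel alpha compositing over the full slide image.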

Real-time polling and pushing during live presentation

By downloading the Pack in advance, users use only the minimal bandwidth required of their Device's Internet connection to maintain the real-time state of the live presentation event within the App on their Device.

During the live presentation event, the presenter and audience Devices are receiving and updating the live and social content within the App in real-time.

The live and social content includes:

a. the current slide the presenter is showing, and parameters associated with that slide,

b. the questions and comments related to that slide, and their respective parameters,

c. tweets associated with the assigned Twitter Tag the presenter chooses,

d. votes, and associated parameters, by the audience on presentation slides,

e. uploading of annotation images by audience apps.

The real-time state of a live presentation event can be achieved in one of two ways:

1. The App on the user's device sends requests to the server for updated information and parameters at regular intervals (usually 5 seconds), which can be managed remotely, or live.

2. The server pushes updated information and parameters to users' Devices, to the App, as the information changes. The push is achieved by uniquely identifying the users' devices registered with the server and utilising 3rd-party, or proprietary, push technology to manage the sending of information out to only these registered devices.

The presenter and audience members can also connect their Devices directly to each other, via a mesh, or multi-peer, ad-hoc local network of Devices which have the App installed, at the time of the presentation; this does not require the use of external 3rd-party wireless connections such as WiFi or mobile networks. The above live and social content can be sent and received between the presenter's Device and App and the audience's Devices and Apps by continuously syncing, either by sending requests or by pushing the above content in real-time from Device to Device via this ad-hoc local network.
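The first strategy, regular-interval polling, can be sketched as a loop that fetches the server state and reacts only when it has changed. `fetch_state` and `on_change` are illustrative stand-ins for the real network call and UI update; the 5-second interval follows the text.

```python
import time

# Sketch of the polling strategy: the App requests the live presentation
# state at a regular interval and applies updates only when the state
# differs from the last one seen.
def poll_for_updates(fetch_state, on_change, interval=5.0, max_polls=3):
    last = None
    for _ in range(max_polls):
        state = fetch_state()
        if state != last:
            on_change(state)   # e.g. show the new slide, refresh comments
            last = state
        time.sleep(interval)
```

The second strategy inverts this: instead of the App asking on a timer, the server pushes `on_change`-style notifications to registered devices as the state changes.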

Completion

When the presenter and audience reach the end of the presentation, the server is notified, and the completed state is saved to the Pack on the user's device so that they can view it at a later time.

For audience members that attended, the server collates the annotation images and, using software libraries such as those available in Microsoft Office, the processing server combines the images for each user with the original file. It then instructs the Management server to send out a report to the audience member (e.g. an email with a link), which when clicked downloads the original file with the annotations included. This way the audience member retains their notes and drawings on the relevant slides, and can print or edit this presentation document.

Post Event Presenter Question Response

At any point following the completion of the presentation event, the presenter can use the App and Website to respond to any outstanding questions raised by audience members during the event.

This can be accomplished in one of two ways: 1. Via the App. As previously, from the Home View the user taps the 'Presenting' button to filter for their presentation events. They select the relevant presentation by pressing on its Presentation Button to open it in the Live View. They then either tap on the Questions Button, or rotate their device to the appropriate orientation, to display the Response View, which is displayed part or full screen over the presentation. To display the Response Text View they tap on a question. Once a suitable response is entered, using the device's on-screen keyboard, the presenter taps the Done button, which hides the Response Text View and sends the response to the server. The App then requests an update to the Question-List/Response View. Once a response is received, the question list is updated, and the response just added is displayed in-line beneath the question selected.

2. Via the Website. From the Home View, the presenter is shown the available presentations in a table. The completed presentations show the Results and Respond buttons enabled. Pressing the Respond Button displays the Response Section View, displaying the list of slides on the left, and the question list on the right. The presenter then clicks a question to bring up the Response Text View pop-up. They enter their response and click on the Done Button. The response is sent to the server, and the response section is refreshed to display the question list with the response displayed in-line beneath the question selected.

Post Event Processes

Once the presenter has reached the last slide, the presentation is considered 'Complete' and the audience interaction data can be sent to the glisser.com website for presenters to view and download. The website main view (Screenshot 19) changes the presentation status to 'complete' (Screenshot 19, Text AV):

As soon as the presentation is complete, the server collates all the relevant participation data, including slide votes cast, comments made, questions asked, and the answers of the polling questions for the presentation. When the presenter clicks on the Results Button (Screenshot 19, Button AW) for a given presentation on the glisser.com website, the website displays the data as an infographic (Screenshot 20) showing the results of the presentation, in the Results section.

The infographic shows interactions over the course of the presentation (Screenshot 20, Area AX), total interactions from the audience and average by audience member (Screenshot 20, Area AY). It also shows the most popular and least popular slides by up and down votes (Screenshot 20, Area AZ), and the results of any social slide poll (Screenshot 20, Area BA).

The presenter can also select the 'Export Data' option to extract the data from their audience interactions in Excel format. Functionality to extract the infographic as a PDF or image file can also be added.

The solution can also include the following features:

• Option to generate PDF and image file of infographics for the presenter to export and links to allow integration with existing social media platforms

• Emailing PDF of slides (with any annotations) to audience after the presentation is complete

• Create a view of all questions/comments/Tweets on the logged-in website by slide (the 'Respond' tab)

o These are presented similar to how free-text question answers are shown - with slide, then image, then text

o An option to respond to these attendees via email - via a click button to bring up either a field below or a pop up window o An option to send message to all attendees simultaneously (again via email or push message to their device)

• 'Presenter options' section in the App and website including:

o Ability to switch 'send audience PDF' after the event on or off

o Ability to remove presentation from audience devices after the event

o Ability to switch automatic push of slides on and off

• The ability for a presenter to 'copy' a Pack with added social features and slides, but with interactions reset to zero

• Add existing social media information to profiles

In a preferred embodiment, the invention may provide facility for decision tree presentations