
Title:
ELECTRONIC DEVICE AND METHOD WITH IMPROVED LOCK MANAGEMENT AND USER INTERACTION
Document Type and Number:
WIPO Patent Application WO/2012/098360
Kind Code:
A2
Abstract:
Provided are a method, an apparatus, and software- or hardware-implemented logic for responding to user interaction with GUI objects by changing the visual appearance of a boundary that a GUI object is close to. This provides feedback to the user as the GUI object approaches and crosses through the boundary, which is especially advantageous on touchscreen devices and on electronic devices such as mobile telephones that have small screen areas. Operations can be invoked by gestures that move a GUI object from a first to a second screen area, and the visual feedback of a flexing boundary reduces the likelihood of unintentional invocations.

Inventors:
VITOLO GAETANO (GB)
RUEGA DIEGO (GB)
BUCK JUSTIN (GB)
Application Number:
PCT/GB2012/000053
Publication Date:
July 26, 2012
Filing Date:
January 20, 2012
Assignee:
INQ ENTPR LTD (BS)
VITOLO GAETANO (GB)
RUEGA DIEGO (GB)
BUCK JUSTIN (GB)
International Classes:
G06F3/0488
Domestic Patent References:
WO2007142256A1 2007-12-13
Foreign References:
EP1860535A2 2007-11-28
US20100306718A1 2010-12-02
EP2256610A1 2010-12-01
US20070150842A1 2007-06-28
US20100257490A1 2010-10-07
US6545669B1 2003-04-08
US20090058821A1 2009-03-05
EP2148497A1 2010-01-27
Other References:
None
Attorney, Agent or Firm:
JENNINGS, Michael, J. et al. (235 High Holborn, London WC1V 7LE, GB)
Claims:
CLAIMS

1. An electronic device comprising:

a processing unit;

a display screen;

a user input mechanism for user interaction with graphical user interface objects (GUI objects) displayed within a graphical user interface (GUI) on the display screen, including for detecting selection and user-controlled movement of GUI objects;

wherein the processing unit is adapted to:

change the state of one or more applications in response to selection of a GUI object and movement of the GUI object across a boundary from a first screen area to a second screen area; wherein at least one of a plurality of applications is invoked in response to movement of an associated GUI object across the boundary; and

wherein at least one of the plurality of applications is unlocked without invocation by movement of a GUI object across the boundary.

2. An electronic device according to claim 1, wherein each of a first set of applications has an associated GUI object that is user-selectable from an unlock screen of the device, and wherein each of a second set of applications has an associated GUI object that is not user-selectable from the unlock screen and becomes user-selectable following an unlock operation performed by movement across the boundary of a GUI object representing the unlock operation.

3. An electronic device according to claim 1 or claim 2, wherein the display screen and user input mechanism are integrated as a touch-sensitive display screen.

4. An electronic device according to any of claims 1 to 3, wherein a visual representation of the boundary is changed as the GUI object is moved across the boundary.

5. An electronic device according to any one of claims 1 to 3, wherein said invoking or unlocking of applications is only performed in response to completion of movement of a respective GUI object across the boundary, wherein said completion comprises release of the GUI object when the movement of said respective GUI object satisfies a predefined boundary-crossing threshold condition.

6. An electronic device comprising:

a processing unit;

a display screen;

a user input mechanism for user interaction with graphical user interface objects (GUI objects) displayed within a graphical user interface (GUI) on the display screen, including for detecting selection and user-controlled movement of GUI objects;

wherein the processing unit is adapted to:

generate a visual representation of a boundary between a first screen area and a second screen area; and

change the visual representation of the boundary, in response to a selected GUI object being moved from the first screen area towards or into the second screen area.

7. An electronic device according to claim 6, wherein the processing unit is adapted to change the visual representation of the boundary in response to the movement of the selected GUI object satisfying a predefined threshold condition.

8. An electronic device according to claim 6 or claim 7, wherein the processing unit is adapted to invoke an operation that is associated with the selected GUI object in response to the movement of the selected GUI object satisfying a predefined threshold condition.

9. An electronic device according to any one of claims 6-8, comprising an unlock mechanism, wherein a GUI object representing an unlock function is movable from the first screen area to the second screen area, and the processing unit is adapted to perform an unlock operation in response to movement of the GUI object that represents the unlock function into the second screen area.

10. A method for controlling the state of a function of an electronic device by movement of a GUI object, which represents that function, from a first screen area to a second screen area of the electronic device, comprising:

generating a visual representation on a display of the electronic device of a boundary between a first screen area and a second screen area;

detecting user selection of a GUI object; detecting user interaction with a selected GUI object, and moving the GUI object on the electronic device screen in response to the detected user interaction;

comparing the detected user interaction or the movement of the GUI object with a predefined boundary-crossing condition, to determine when the GUI object passes from the first screen area to the second screen area; and

changing the visual representation of the boundary, in response to a selected GUI object of a graphical user interface (GUI) being moved from the first screen area towards or into the second screen area.

11. A method according to claim 10, further comprising:

changing the visual representation of the boundary in response to the movement of the selected GUI object satisfying a predefined threshold condition.

12. A method according to claim 11, wherein the predefined threshold condition comprises the GUI object being moved through the visual representation of the boundary into the second screen area.

13. A method according to claim 11, wherein the predefined threshold condition comprises a distance that the GUI object is moved relative to an original location of the visual representation of the boundary.

14. A method according to claim 11, wherein the predefined threshold condition comprises a speed with which the GUI object is moved.

15. A method according to any of claims 10 to 14, further comprising:

invoking an operation that is associated with the selected GUI object in response to the movement of the selected GUI object satisfying a predefined threshold condition.

16. A method for controlling an operation on an electronic device comprising:

generating a visual representation on a display of the electronic device of a boundary between a first screen area and a second screen area;

changing the visual representation of the boundary, in response to a selected GUI object of a graphical user interface (GUI) being moved from the first screen area towards or into the second screen area;

invoking an operation that is associated with the selected GUI object in response to the movement of the selected GUI object satisfying a predefined threshold condition.

17. A method according to claim 16, wherein a first GUI object represents an unlock operation that can be performed to enable a device user to use one or more functions of the device, and wherein the unlock operation represented by the first GUI object is invoked in response to releasing the first GUI object when it has been moved to a position in the second screen area.

Description:
ELECTRONIC DEVICE AND METHOD WITH IMPROVED LOCK MANAGEMENT

AND USER INTERACTION

Field of invention

The present invention relates to apparatus, methods and computer programs providing improved user interaction with electronic devices and efficient activation of device operations.

Background

Many mobile telephones, PDAs and other portable electronic devices feature touch-sensitive display screens ("touchscreens") that provide ease of use by enabling intuitive user interaction with the device and avoid the need to reserve a large portion of the device for a separate keyboard. Many touchscreens are implemented to allow users to select an item by finger touch, and to initiate an operation by finger gestures, avoiding the need for separate data input apparatus. Various technologies have been used to implement touch-sensitivity, including capacitive touch sensors which measure a change in capacitance resulting from the effect of touching the screen, resistive touchscreens that measure a change in electrical current resulting from pressure on the touchscreen reducing the gap between conductive layers, and other technologies.

A problem with touchscreens is the increased likelihood of accidentally invoking an operation when pressure is applied to the screen, such as accidentally making calls or switching the handset off. This problem has been solved by 'locking' devices after a period of non-use or in response to an explicit lock request by the user, and then requiring users to carry out a predefined user-interaction as an 'unlock' operation before they use any device functions. The unlock operation may involve a predefined touchscreen interaction, such as an identifiable gesture or using a drag operation to move an element of an interactive GUI object along a fixed path, although a dedicated hardware key for lock/unlock operations is provided on some mobile telephones. However, very simple unlock gestures retain a relatively high likelihood of unintended invocations, whereas complex interaction sequences result in a delay before the user can carry out required operations.

Even for devices that do not have touchscreens, intuitive graphical user interfaces are now considered by many users to be essential features of mobile telephones and any other data processing device. Typical device users are very familiar with standard user-interaction operations to unlock a device, to navigate between display screens, and to select required applications from an on-screen menu by a sequence of key presses or by interacting with a displayed icon representing the application. Typical device users work with multiple different applications on each device and can use complex interaction-sequences, when required, to unlock and activate required applications and to navigate back to a previous application or home screen. However, the fact that users are capable of learning complex interaction and navigation sequences does not mean that this complexity is ideal.

Another problem is that, despite the increased screen size of recent mobile telephones compared with earlier generations of mobile phones, the screen size of many handheld electronic devices constrains the user's interaction with the device.

The inventors of the present invention have identified a number of constraints on device users who wish to perform desired operations on an electronic device, such as a touch-enabled mobile telephone, and invented a solution that mitigates the problems.

Summary

An aspect of the invention provides an electronic device comprising:

a processing unit;

a display screen;

a user input mechanism for user interaction with graphical user interface objects (GUI objects) displayed within a graphical user interface (GUI) on the display screen, including for detecting selection and user-controlled movement of GUI objects;

wherein the processing unit is adapted to:

change the state of one or more applications in response to selection of a GUI object and movement of the GUI object across a boundary from a first screen area to a second screen area; wherein at least one of a plurality of applications is invoked in response to movement of an associated GUI object across the boundary; and wherein at least one of the plurality of applications is unlocked without invocation by movement of a GUI object across the boundary.

In one embodiment, each of a first set of applications has an associated GUI object that is user-selectable from an unlock screen of the device, and wherein each of a second set of applications has an associated GUI object that is not user-selectable from the unlock screen and becomes user-selectable following an unlock operation performed by movement across the boundary of a GUI object representing the unlock operation. The invention according to this aspect provides differentiated lock management for different functions of the device. For example, a camera, calling or typing application can be made easily accessible, whereas other applications such as emails and contact list management can be protected by the lock screen.

In one embodiment, the display screen and user input mechanism are integrated as a touch-sensitive display screen. In another embodiment, a visual representation of the boundary is changed as the GUI object is moved across the boundary. The invoking or unlocking of applications is preferably only performed in response to completion of movement of a respective GUI object across the boundary, wherein said completion comprises release of the GUI object when the movement of said respective GUI object satisfies a predefined boundary-crossing threshold condition.

An electronic device according to another aspect of the invention comprises:

a processing unit;

a display screen;

a user input mechanism for user interaction with graphical user interface objects (GUI objects) displayed within a graphical user interface (GUI) on the display screen, including for detecting selection and user-controlled movement of GUI objects;

wherein the processing unit is adapted to:

generate a visual representation of a boundary between a first screen area and a second screen area;

change the visual representation of the boundary, in response to a selected GUI object being moved from the first screen area towards or into the second screen area.

In one embodiment, the processing unit is adapted to change the visual representation of the boundary in response to the movement of the selected GUI object satisfying a predefined threshold condition so as to move the GUI object into the second screen area. The predefined threshold condition may be a threshold position of the GUI object relative to the visual representation of the boundary, or a threshold movement taking account of the speed and direction of the movement.

In another embodiment, the processing unit is adapted to invoke an operation that is associated with the selected GUI object in response to the movement of the selected GUI object satisfying a predefined threshold condition. The GUI object may be an icon representing an unlock function, and the invoked operation may be an unlock operation.

In an embodiment of the invention, the boundary is controlled to behave as a flexible membrane that stretches in response to movement of the icon. In one such embodiment, a portion of the boundary in the vicinity of a selected icon is constrained to move with the selected icon, to give the appearance of stretching. When the icon reaches a predefined point beyond the original position of the boundary, the icon moves through the boundary, to give the appearance of the stretched membrane breaking. This provides highly intuitive feedback to the user of the progress of movement of the selected icon relative to the boundary.
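The stretch-then-break membrane behaviour described above can be sketched in a few lines of code. This is an illustrative approximation only, not the specification's actual implementation; the function name `boundary_offset` and the pixel values for the flex region and break distance are assumptions chosen for the example:

```python
# Illustrative sketch of a "flexible membrane" boundary (assumed names/values).
FLEX_REGION = 40.0   # px: an icon within this distance of the boundary flexes it
BREAK_DIST = 60.0    # px past the original boundary at which the membrane "breaks"

def boundary_offset(icon_y: float, boundary_y: float) -> tuple[float, bool]:
    """Return (displacement of the local boundary segment, membrane_broken)."""
    overshoot = icon_y - boundary_y          # positive once the icon passes the line
    if overshoot >= BREAK_DIST:
        return 0.0, True                     # membrane breaks: boundary snaps back
    if overshoot > -FLEX_REGION:
        # The local boundary segment tracks the icon, so the icon never visibly
        # crosses the drawn boundary while it remains inside the flex region.
        return max(0.0, overshoot), False
    return 0.0, False                        # icon far away: boundary at rest
```

Under this sketch, the drawn boundary stays ahead of the icon while the icon is inside the flex region, giving the stretching appearance, and snaps back once the break distance is exceeded.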

Another aspect of the invention provides a method for controlling invocation of an operation on an electronic device, comprising: changing the visual representation of a boundary between first and second screen areas in response to a selected GUI object being moved by a user from the first screen area towards or into the second screen area; and invoking an operation in response to the movement of the selected GUI object satisfying a predefined threshold condition.

Another aspect of the invention provides a method for controlling the state of a function of an electronic device by movement of a GUI object, which represents that function, from a first screen area of the electronic device to a second screen area, comprising:

generating a visual representation on a display of the electronic device of a boundary between a first screen area and a second screen area;

detecting user selection of a GUI object; detecting user interaction with a selected GUI object, and moving the GUI object on the electronic device screen in response to the detected user interaction;

comparing the detected user interaction or the movement of the GUI object with a predefined boundary-crossing condition, to determine when the GUI object passes from the first screen area to the second screen area; and

changing the visual representation of the boundary, in response to a selected GUI object of a graphical user interface (GUI) being moved from the first screen area towards or into the second screen area.

A further aspect of the invention provides a method for controlling an operation on an electronic device comprising:

generating a visual representation on a display of the electronic device of a boundary between a first screen area and a second screen area;

changing the visual representation of the boundary, in response to a selected GUI object of a graphical user interface (GUI) being moved from the first screen area towards or into the second screen area;

invoking an operation that is associated with the selected GUI object in response to the movement of the selected GUI object satisfying a predefined threshold condition.

The invention according to various of the above-described aspects provides visual feedback to the device user as a GUI object is moved into close proximity with the boundary or through the boundary, indicating how close the GUI object is to a boundary-crossing condition and/or showing when the GUI object has passed through the boundary. This is achieved by the visual representation of the boundary changing in response to the movement of an icon relative to that boundary. This can be implemented to give a clear and intuitive visual indication of progress towards completion of a movement to the second screen area, and potentially also movement of the GUI object back into the first screen area, even when the distances moved by the GUI object are small. In view of the constraints on screen space in many handheld electronic devices, such as mobile telephones, the ability to provide clear and intuitive visual feedback in response to small movements of icons on a screen is very helpful to end users. This feedback can help the user to avoid unintended invocations, and give more control in the performance of desired operations. The solution is especially advantageous when used as a consistent mechanism for invoking multiple different operations (e.g. for unlocking or otherwise activating device functions, in response to a user interacting with any of a set of icons displayed on a screen), because it scales much better than known alternatives such as progress bars.

In one embodiment, the invention provides an error-tolerant activation mechanism, by only invoking an operation when movement of a selected GUI object satisfies a predefined threshold condition and the GUI object is then released. The user may move an icon towards the boundary or partially through the boundary, and they will receive a visual indication that they have done so without any operation being invoked. In some embodiments, the user may move an icon completely through the boundary and yet the operation is only invoked if the icon is then released. In each of these examples, an operation associated with a GUI object is only invoked if the GUI object is moved and then released when the movement satisfies a predefined condition. This may be a predefined distance, such as requiring an icon to completely cross through the boundary between first and second screen areas, or requiring the outer extremity of an icon to extend a threshold distance past the original position of the boundary. Alternatively, the predefined condition may take account of the speed of movement of an icon when a user interacts with the icon by a flick gesture.

In one embodiment, the processing unit comprises a data processor and computer program code implementing instructions for controlling the performance of operations by the processor, the computer program code adapting the processing unit to provide a changed visual representation of the boundary in response to the GUI object being moved. In various embodiments, the processing unit comprises hardware logic circuits, software-implemented logic or a combination of hardware and software-implemented logic for changing the visual representation of the boundary and invoking an operation.
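The error-tolerant, release-gated invocation described above can be sketched as a single predicate: nothing fires mid-drag, and on release the operation is invoked only if a distance or flick-speed threshold was met. This is a hedged illustration; the function name `should_invoke` and the threshold values are assumptions, not values taken from the specification:

```python
# Illustrative release-gated invocation check (assumed names and thresholds).
DIST_THRESHOLD = 60.0     # px the icon must extend past the original boundary
SPEED_THRESHOLD = 800.0   # px/s, an alternative condition for flick gestures

def should_invoke(released: bool, dist_past_boundary: float, speed: float) -> bool:
    """Invoke the associated operation only on release, with a condition met."""
    if not released:
        return False      # mid-drag: show visual feedback, invoke nothing
    return dist_past_boundary >= DIST_THRESHOLD or speed >= SPEED_THRESHOLD
```

A drag that crosses the boundary but is dragged back before release therefore invokes nothing, which is the error tolerance the passage describes.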

In one embodiment, logic for changing the visual representation of the boundary comprises an animation controller that, in response to the selected GUI object being moved within a first defined region proximate the visual representation of the boundary, moves a part of the visual representation of the boundary that is proximate the moved GUI object. This may appear as a flexing of the boundary when a selected icon approaches within a set number of pixels of the boundary. This movement of a part of the boundary ensures that the visual representation of the boundary is not crossed by the GUI object, while the GUI object remains within the first defined region, and the user is provided with a visual feedback of the GUI object's position relative to the boundary. The animation controller logic is responsive to the selected GUI object being moved across the first defined region to a threshold position within the second screen area (e.g. a predefined distance from the original position of the boundary), to change the visual representation of the boundary in a way that shows that the GUI object has moved through the boundary. In one embodiment, a selected operation is performed when the GUI object is released after this movement to a threshold position. The operation typically depends on the GUI object that was moved. Each GUI object represents a particular device function and so the operation performed in response to movement of the GUI object through the boundary is typically an activation operation for the represented device function.

In a first embodiment, this threshold position is a first threshold distance from the original position of the visual representation of the boundary. However, another threshold condition of the movement of the GUI object may be the speed of user-controlled movement of an icon across the device's screen (e.g. by a flick gesture applied to an icon) or a combination of position and speed of movement.

In another embodiment, the behaviour of the boundary varies according to the different functions represented by GUI objects. This can take account of the level of risk or inconvenience to the user associated with an accidental invocation of an operation. For example, a camera icon may pass easily through the membrane as activation of the camera is low risk, whereas activation of an operation to change a phone's settings or to delete data may be given a different response. If the threshold distance that a settings icon must be moved is larger than the distance for a camera, the boundary will appear to stretch further before breaking when a settings icon is moved.
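The risk-weighted, per-function break distances just described can be sketched as a simple lookup: low-risk functions break through the membrane after a short drag, while higher-risk ones require a longer one. The function names and every distance value here are illustrative assumptions, not figures from the specification:

```python
# Illustrative per-function break distances (assumed names and values).
BREAK_DISTANCE = {
    "camera": 40.0,     # low risk: the membrane breaks easily
    "unlock": 60.0,
    "settings": 120.0,  # higher risk: boundary stretches further before breaking
    "delete": 160.0,
}
DEFAULT_BREAK = 60.0    # fallback for functions without a tuned distance

def breaks_through(function: str, drag_distance: float) -> bool:
    """True when the drag exceeds this function's break distance."""
    return drag_distance >= BREAK_DISTANCE.get(function, DEFAULT_BREAK)
```

With such a table, the same membrane animation serves every icon, and only the distance at which it visually "breaks" varies with the risk of the represented operation.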

Brief Description of Drawings

Embodiments of the invention are described below in more detail, by way of example, with reference to the accompanying drawings in which:

Figure 1 is a schematic representation of a mobile telephone, as a first example of an electronic device in which the invention may be implemented;

Figure 2 is a representation of the software architecture of an electronic device running the Android operating system;

Figure 3 is a representation of points in a spatial grid, as used to compute the displacement of the boundary at each sampling instant;

Figure 4 is an unlock screen of a mobile telephone, in accordance with an embodiment of the present invention;

Figure 5 represents the unlock screen of Figure 4 reacting to a user-implemented gesture;

Figure 6 is a schematic flow diagram representing a first method according to the invention;

Figure 7 is a visual representation of the unlock screen when a user interaction is taking place, in accordance with an embodiment of the present invention;

Figure 8 is a visual representation of the unlock screen just after the user has successfully completed an unlock gesture, according to an embodiment of the present invention;

Figure 9 is a schematic diagram illustrating a ballistic unlock motion in accordance with an embodiment of the present invention;

Figure 10 is a visual representation of a multi-tiered unlock system of an embodiment of the present invention;

Figures 11A and 11B are schematic representations of device screens for an embodiment of the invention for controlling an alarm function; and

Figures 12A and 12B are schematic representations of device screens for an embodiment of the invention for controlling call handling on a mobile telephone.

Description of Embodiments

Mobile telephones have evolved significantly over recent years to include more advanced computing ability and a great deal of additional functionality beyond standard telephony functions. Such advanced phones are commonly known as "smartphones". In particular, many phones are used for text messaging, Internet browsing and/or email as well as gaming.

Touchscreen technology is useful in phones since screen size is limited, and touchscreen input provides direct manipulation of the items on the display screen, so that the area normally required by a separate keyboard or numerical keypad is instead available to the touchscreen. Embodiments of the invention will now be described in relation to handheld smartphones, but various features of the invention could be implemented within or adapted for use in other electronic devices such as handheld computers without telephony capability or touchscreens, for example in e-reader devices, tablet PCs and PDAs.

Figure 1 shows an exemplary electronic device. The device in this example is a mobile telephone handset, comprising a wireless communication unit having an antenna 101 and a radio signal transceiver 102 for two-way communications, such as for GSM and UMTS telephony, and a wireless module 103 for other wireless communications such as WiFi. An input unit includes a microphone 104 and a touchscreen 105. An output unit includes a speaker 106 and a display 107 for presenting iconic and/or textual representations of the device's functions. Electronic control circuitry includes amplifiers 108 and a number of dedicated chips providing ADC/DAC signal conversion 109, compression/decompression 110, encoding and modulation functions 111, and circuitry providing connections between these various components, and a microprocessor 112 for handling command and control signalling. Associated with the specific processors is memory generally shown as memory unit 113. Random access memory (in some cases SDRAM) is provided for storing data to be processed, and ROM and Flash memory for storing the phone's operating system and other instructions to be executed by each processor. A power supply 114 in the form of a rechargeable battery provides power to the phone's functions. The touchscreen 105 provides both an input mechanism and a display for presenting iconic or textual representations of the phone's functions, and is coupled to the microprocessor 112 to enable input via the touchscreen to be interpreted by the processor. There may be a number of separate microprocessors for performing different operations on the electronic device. These features are well known in the art and will not be described in more detail herein.

In addition to integral RAM and ROM, a typical mobile telephone using UMTS telephony also has significant storage capacity within the Universal Integrated Circuit Card (UICC) on which the Universal Subscriber Identity Module (USIM) runs. For GSM phones, a smaller amount of storage capacity is provided by the telephone handset's UICC (commonly referred to as the SIM card), which stores the user's service-subscriber key (IMSI) that is needed by GSM telephony service providers for authentication. The UICC providing the USIM or SIM typically stores the user's phone contacts and can store additional data specified by the user, as well as an identification of the user's permitted services and network information.

Many older phone types include a physical keyboard and a non-touch-sensitive display screen, but the example of a touchscreen phone is used here as some embodiments of the invention are particularly suitable for touchscreen devices, as explained below. Compared with older phones, the latest "smartphones" tend to have improved battery life, increased RAM and higher speed processors, whereas mobile phones have until very recently been characterised by limited RAM and persistent data storage. During recent years, typical mobile telephones have included many additional features beyond basic telephony, such as text messaging, Bluetooth connectivity with nearby devices, PDA functions such as calendars and email, music playback, built-in cameras, alarm clock, etc, and the previous differentiation between different types of portable electronic device is no longer clearly defined. Various aspects of the present invention are applicable to a number of different types of handheld electronic devices and other data processing apparatus, such as netbooks, e-reader devices, tablet PCs and PDAs, as well as mobile telephones.

As with most other electronic devices, the functions of a mobile telephone are implemented using a combination of hardware and software. In many cases, the decision on whether to implement a particular functionality using electronic hardware or software is a commercial one relating to the ease with which new product versions can be made commercially available and updates can be provided (e.g. via software downloads) balanced against the speed and reliability of execution (which can be faster using dedicated hardware), rather than because of a fundamental technical distinction. The term 'logic' is used herein to refer to hardware and/or software implementing functions of an electronic device. Where either software or hardware is referred to explicitly in the context of a particular embodiment of the invention, the reader will recognize that alternative software and hardware implementations are also possible to achieve the desired technical effects, and this specification should be interpreted accordingly.

The latest mobile telephones and other portable electronic devices are able to run a great many different application programs, which each run on top of the telephone's operating system. The number of available applications is increasing very quickly, increasing the need for high speed processors, increased RAM and improved task management. Users typically choose required applications from the set of applications installed on their device by selecting from a list menu or interacting with an icon representing the application. It is also possible for new software applications to be downloaded to an electronic device from a remote server computer, subject to acceptance of license terms, or to be installed from media that is insertable into the device. Example operating systems used on mobile telephones are Google's Android mobile device operating system, Nokia's Symbian OS, Microsoft's Windows Mobile and the Blackberry OS from Research In Motion, Inc.

As shown in Figure 2, the software architecture on a mobile telephone using the Android operating system, for example, comprises object oriented (Java and some C and C++) applications 200 running on a Java-based application framework 210 and supported by a set of libraries 220 (including Java core libraries 230) and the register-based Dalvik virtual machine 240. The Dalvik Virtual Machine is optimized for resource-constrained devices - i.e. battery-powered devices with limited memory and processor speed. Java class files are converted into the compact Dalvik Executable (.dex) format before execution by an instance of the virtual machine. The Dalvik VM relies on the Linux operating system kernel for underlying functionality, such as threading and low-level memory management. The Android operating system provides support for touchscreens, GPS navigation, cameras (still and video) and other hardware, as well as including an integral Web browser, graphics support and support for media playback in various formats.

Android supports various connectivity technologies (CDMA, WiFi, UMTS, Bluetooth, WiMax, etc) and SMS text messaging and MMS messaging, as well as the Android Cloud to Device Messaging (C2DM) framework. Support for media streaming is provided by various plug-ins, and a lightweight relational database (SQLite) provides structured storage management. With a software development kit including various development tools, many new applications are being developed for the Android OS. Currently available Android phones include a wide variety of screen sizes, processor types and memory provision, from a large number of manufacturers. Which features of the operating system are exploited depends on the particular mobile device hardware.

The Android operating system provides a screen lock feature to prevent accidental initiation of communications and other operations, and applications are typically inaccessible until the user has performed the required unlock operation. This is additional to the pattern of user interactions that can be set up as a security code. As mentioned above, screen lock features have been considered desirable to prevent unintended activation of device functions (starting a call or another application, or switching off the device).

A first embodiment of the present invention that is described in detail below provides logic that is responsive to user-controlled movement of GUI objects to change the visual representation of a boundary between a first screen area and a second screen area. The boundary is represented as a flexible membrane. A portion of the 'membrane' that is proximate a GUI object in the first screen area flexes in response to that GUI object being moved towards the second screen area, such that the boundary remains between the GUI object and the second screen area for a range of movement of the GUI object. This provides visual feedback to the user, showing movement of the GUI object relative to the boundary, and a clear indication of which side of the boundary the GUI object is deemed to be on (i.e. clearly showing when the GUI object is currently in the first screen area). Digital hand-held electronic devices have small display screens, and relatively small icons or other GUI objects tend to be used to represent selectable functions. In view of the small screen area, it is necessary to take account of small movements of each GUI object. It can be difficult for users to visualise how close a GUI object is to crossing a boundary, resulting in accidental invocations and delays before desired results are achieved. The present invention uses movement of a GUI object through a boundary to invoke certain operations. GUI objects are movable within a first screen area and operations are invoked by moving the GUI object to a second screen area and then releasing the GUI object while in the second screen area. By providing the user with visual feedback on the progress of their movement of the GUI object, it is possible to reduce the likelihood of unintentional invocation of operations.

With conventional fixed boundaries, if a number of icons representing selectable functions of the device are displayed in a small first screen area, it may be difficult for a device user to move any icons without part of the icon crossing the boundary (i.e. part of their visual representation may cross the boundary). If this partial crossing of the boundary triggers an operation, there will be many unintentional invocations. However, even if an icon must fully cross a fixed boundary to invoke an operation, the lack of intuitive feedback in conventional devices as an icon approaches the boundary-crossing point and then crosses the boundary tends to result in unintentional invocations and a poor user experience. Visual changes to an icon that only occur after the icon has moved completely into a new screen area cannot provide any indication of how close the movement of the icon is to the boundary-crossing condition. In the case of a touchscreen, the icon itself may be obscured by the very same user's finger that is controlling the icon's movement. This is especially true in small touch screen devices that rely on finger gestures as an input mechanism, if each icon is smaller than the user's finger-tip. In contrast, changes to the visual representation of the boundary in response to movement of the icon towards and/or through the boundary can indicate when an icon is close to or has reached the correct position to be released.

In one embodiment of the invention, as mentioned above, the boundary is represented visually as a flexible membrane, which stretches in response to movement of an icon, until the movement satisfies the boundary-crossing condition. One or more selectable icons (each representing a device function, such as screen unlock, camera, media viewing or communications functions, etc) are placed on one side of the membrane. In order to unlock the device screen or to unlock a specific device function, the user must drag one of the icons through the membrane. The membrane is stretched as the selected icon moves part way through the membrane, until a threshold boundary-crossing condition is achieved. When the breaking point is reached, the visual representation of the boundary changes such that the boundary moves to lie on the other side of the icon. This provides a clearer indication of progress towards completion of a boundary crossing operation than conventional display solutions, and provides a clearer indication of when to release the icon. The behaviour of the icons in response to user-controlled movements can be modelled using equations of classical inertial movement with friction. The movement vector is determined by the speed and direction of the user's swiping finger (or a separate input device) while the finger (or input device) is in contact with the screen, and the icon's subsequent movement is determined by equations representing frictional inertial movement, with an initial velocity equal to the velocity at the moment the finger loses contact with the screen. If the movement of the icon continues to the edge of the display screen, the inertial movement of the icon can be set to continue such that the icon bounces at the edge of the screen - modelled by equations representing elastic collisions. This provides an intuitive user experience because it corresponds to the user's experience of the movement of real world objects.
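The frictional inertial model described above can be sketched in a few lines of code. The following Python fragment is illustrative only (it is not taken from the patent): the friction coefficient and restitution value are assumptions chosen for illustration.

```python
# Illustrative sketch of the inertial icon movement described above.
# Linear friction decays the velocity imparted by the user's swipe,
# and the icon rebounds elastically at the screen edges.
# The friction and restitution values are assumptions, not from the patent.

def step_icon(x, vx, dt, screen_width, icon_width,
              friction=3.0, restitution=0.9):
    """Advance the icon's horizontal position by one display frame."""
    vx *= max(0.0, 1.0 - friction * dt)      # frictional decay of velocity
    x += vx * dt                             # inertial movement
    right_limit = screen_width - icon_width
    if x < 0.0:                              # elastic bounce at left edge
        x, vx = -x, -vx * restitution
    elif x > right_limit:                    # elastic bounce at right edge
        x, vx = 2.0 * right_limit - x, -vx * restitution
    return x, vx
```

Calling such a step function once per display frame after the finger leaves the screen reproduces the gradual slow-down and edge bounce described above.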

Device users will have seen, in their normal lives, the effects of surface tension on the surface of a viscous liquid and stretching of other types of membrane until a break point is reached. Therefore, a visual representation of a boundary which changes in a very similar way to the behaviour of a real-world liquid surface membrane can provide a highly intuitive visual feedback to the user in response to small movements of GUI objects on an electronic device's display screen.

The inventors of the present invention have developed a solution that mitigates the problem of unintentional invocation while providing clear and intuitive feedback to the user regarding how close an icon is to crossing a boundary between the first and second screen areas. The intuitive visual feedback of the flexible membrane guides the user when intentionally moving an icon across the boundary from the first screen area to the second screen area, and when touching icons without wishing to cross the boundary, giving the user a greater sense of control. The solution is particularly useful for touch screen devices and the embodiment described in detail herein is implemented on a touch screen device.

In this embodiment, the membrane is modelled as a unidimensional vibrating string with losses. In particular, the algorithm that controls the movement of the membrane uses a wave equation with a damping factor. The membrane is constrained by the icon; whenever the icon is dragged and put in contact with a part of the membrane, that part of the membrane will be constrained to follow the icon until the breaking point is reached. When the breaking point is passed, the icon will continue its movement according to the physics engine described above and the membrane position will evolve according to the wave equation with damping.

The wave equation with damping can be modelled using the Finite Difference Time Domain (FDTD) method. FDTD is a numerical method for solving time-dependent differential equations, which can be efficiently implemented in software. Since it is a time-domain method, it allows visual feedback to be provided to the device user as the membrane moves over a period of time. The wave equation with damping can be written as:

∂²f/∂t² + μ ∂f/∂t = c² ∂²f/∂x²     (EQ1)

where f is the displacement of the membrane perpendicular to the membrane's initial position, c is the wave propagation speed and μ is the damping coefficient. Space is discretized by a grid of points x_i (1 pixel wide). Time is also discretized by sampling instants t_j, whose separation in time is the inverse of the frame rate.

Second derivatives are approximated using a central difference approximation. The second derivative for space is determined by the equation:

∂²f/∂x² ≈ ( f(x_{i+1}, t_j) - 2 f(x_i, t_j) + f(x_{i-1}, t_j) ) / Δx²     (EQ2)

Similarly for time:

∂²f/∂t² ≈ ( f(x_i, t_{j+1}) - 2 f(x_i, t_j) + f(x_i, t_{j-1}) ) / Δt²     (EQ3)

and the first time derivative in the damping term is approximated by:

∂f/∂t ≈ ( f(x_i, t_{j+1}) - f(x_i, t_{j-1}) ) / (2Δt)     (EQ4)

By using the approximations described in EQ2, EQ3 and EQ4 in EQ1, the following equation is obtained:

g_i(t_{j+1}) = [ 2 g_i(t_j) - (1 - μΔt/2) g_i(t_{j-1}) + (c²Δt²/Δx²) ( g_{i+1}(t_j) - 2 g_i(t_j) + g_{i-1}(t_j) ) ] / (1 + μΔt/2)     (EQ5)

where g_i(t_j) = f(x_i, t_j).

To compute the displacement of the membrane at the next sampling instant, g_i(t_{j+1}), at any given horizontal grid position, one needs only the position of the membrane at that point and its two adjacent points at the previous sampling instant, and at that point at the antecedent sampling instant. This is depicted in the diagram of Figure 3.

Note also that the value of the displacement at t_{j-1} is only needed for the displacement at the current point in the spatial grid, and therefore it does not need to be stored in memory once the current point has been computed. This allows highly efficient processing, which can be implemented using program code instructions, since only two vectors need to be defined: one containing the next positions of the membrane (Y[i] = g_i(t_{j+1})) and one containing the previous positions of the membrane (Z[i] = g_i(t_j)). The current membrane position can be computed as:

Y[i] = F(Y[i], Z[i-1], Z[i], Z[i+1])

where Y[i] on the right-hand side still holds the value g_i(t_{j-1}) and is overwritten in place with g_i(t_{j+1}).

Mobile electronic devices such as mobile telephones usually implement some form of user interface lock mechanism that differentiates between a locked and unlocked condition, in order to prevent the user from inadvertently pressing buttons on their device when it is not in use. When the device is in its locked state, an unlock screen is commonly presented to the user before other operations can be performed. The unlock screen is typically presented to the user after awakening the device from a sleep state. In the sleep state, most user interface functionality of the device is disabled, involving for example the screen being turned off and being non-responsive to tactile input. Many portable electronic devices enter a sleep state for battery saving purposes, either due to a prior instruction from the user or after a period of non-use. The unlock screen facilitates the 'unlocking' of the device through a predefined user interaction, such that upon performing this action (and possibly a subsequent security action) the user regains full control of all functionality of the device. In some instances a dedicated hardware key for lock/unlock operation is provided.
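The two-vector implementation of the membrane update (EQ5) described above can be illustrated with the following Python sketch. This is an illustration rather than the patent's implementation: the wave speed, damping coefficient, grid spacing, time step and pinned endpoints are all assumptions chosen for the example.

```python
# Illustrative sketch of the damped-wave FDTD membrane update (EQ5).
# Z holds the membrane displacement at the current instant t_j; Y holds
# t_{j-1} on entry and is overwritten in place with t_{j+1}, so only two
# vectors are stored, as described in the text. Endpoints are assumed
# pinned. The coefficients c (wave speed), mu (damping), dx and dt are
# assumed values for illustration.

def membrane_step(Y, Z, c=1.0, mu=0.5, dx=1.0, dt=0.1):
    r = (c * dt / dx) ** 2                   # Courant number squared
    a = mu * dt / 2.0                        # damping term coefficient
    for i in range(1, len(Z) - 1):           # interior grid points only
        Y[i] = (2.0 * Z[i] - (1.0 - a) * Y[i]
                + r * (Z[i + 1] - 2.0 * Z[i] + Z[i - 1])) / (1.0 + a)

def simulate(initial, steps, **params):
    Z = list(initial)                        # displacement at t_j
    Y = list(initial)                        # t_{j-1}: membrane released from rest
    for _ in range(steps):
        membrane_step(Y, Z, **params)        # Y now holds t_{j+1}
        Y, Z = Z, Y                          # swap roles for the next frame
    return Z                                 # most recent displacement
```

With damping present, an initial deformation of the membrane decays smoothly towards the rest position, which is the spring-back behaviour described for the boundary.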

However, there is a trade-off between the speed with which users can access desired operations and the avoidance of accidental invocation of operations. The inventors of the present invention have determined that current solutions for avoidance of accidental invocation of operations on touch-sensitive devices have typically reduced usability.

Locks may be used to prevent various functionalities of the device from being accidentally activated or, when used in conjunction with a security mechanism such as a passcode entry, to prevent unauthorized access to certain functions. When in the locked state, the device is powered on and operational but is non-responsive to most user input.

In the first embodiment of the invention, as shown in Figure 4, this unlock screen is divided into two portions: a first screen area 10 and a second screen area 20, divided by a boundary 30 which may comprise a straight horizontal line across the width of the screen. The first screen area contains one or more icons 40 representing functions that the user can work with. In the present embodiment, the device ignores all user input when in a locked state except unlock requests and interactions with the selectable icon or icons in the first screen area of the device. In some embodiments, unlock requests can include pressing of dedicated hardware function keys or sequences of key presses, as well as the unlock operation described in detail below.

In the present embodiment, user input is directed to the first screen area to activate a chosen function, examples of which include but are not limited to an unlock function, a camera function activating the device's built-in camera, and a data capture function that presents a data entry field for displaying user typed or spoken information.

The activation of any functions represented by icons in the first screen area involves the user executing a gesture. The gesture may involve movement of a user's finger or another input device while in contact with the touch screen of the device. An example of such a gesture is a user tapping the screen with a finger in the location of an icon. Gestures may incorporate, but are not limited to, discrete touches of the touch screen, continuous motion along the touch screen (e.g. touch-and-drag operations that move an icon) or a combination thereof. In the present embodiment, as per Figure 5, the user executes an 'activation gesture' 42 to activate one of the selectable functions represented by icons in the first screen area. This gesture involves touching the screen with an input device or the user's finger in the location of the desired icon 40; while remaining in contact with the screen, moving the icon by a drag operation from the first screen area 10 towards the second screen area 20, and subsequently removing the input device or finger from the screen after the icon has crossed through the boundary 30 and reached the second screen area 20. Selection of an icon causes visual cues designed to assist the user in successfully executing the activation gesture 42 to be presented in the second screen region 20. In the present embodiment text 50 is displayed in the second screen region 20, informing the user of the motion required to activate the gesture. In addition, the boundary deforms 44 in response to the presence of the desired icon 40, such that the user is able to gauge the degree to which the activation gesture has been completed. It will be appreciated that such cues are merely indicative of the motion the user should execute to complete an activation gesture and do not constrain the path of the gesture. 
Other types of cue may also be employed, such as providing the user with auditory or tactile feedback indicating the transition through the boundary 30 and/or success or failure of an unlock operation.

In the present embodiment, when an icon is selected by the user, the icon is moved automatically to position its center directly below the point at which the user contacts the screen. The user's view of the icon is at least partially obscured while the icon is being moved but, unlike in prior art solutions, visual feedback is still available to the user via the flexing of the boundary. The logic underlying the gesture recognition and unlock process is shown in Figure 6. The user input gesture 310 is detected 320 and compared to a list of predefined actions 330 stored within the device. If this comparison operation 340 results in the finding that the user input gesture 310 corresponds to the predefined general unlock gesture then the screen is unlocked 350, restoring full functionality to the device. If alternatively comparison operation 340 results in the finding that the user input gesture does not correspond to the predefined unlock gesture, a second comparison operation 360 is invoked in order to determine if the user input gesture 310 matches one of a plurality of predefined function specific unlock gestures. If such a match is found, the device is unlocked and the function to which the identified gesture corresponds is invoked 380. Finally if no match is found by either comparison operation 340 or 360, an unlock reminder 370 is displayed on the screen of the device.
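The gesture-comparison flow of Figure 6 can be sketched as follows. The gesture encodings and function names here are hypothetical, introduced only to illustrate the two-stage matching (comparison operations 340 and 360) and the fallback unlock reminder 370.

```python
# Illustrative sketch of the gesture-matching flow of Figure 6.
# Gesture encodings and function names are hypothetical.

GENERAL_UNLOCK = "drag_unlock_icon_through_boundary"
FUNCTION_UNLOCK_GESTURES = {
    "drag_camera_icon_through_boundary": "camera",
    "drag_message_icon_through_boundary": "messaging",
}

def handle_gesture(gesture):
    """Return the action the device takes for a detected gesture."""
    if gesture == GENERAL_UNLOCK:                     # comparison 340
        return ("unlock", None)                       # unlock screen 350
    if gesture in FUNCTION_UNLOCK_GESTURES:           # comparison 360
        return ("unlock_and_invoke",
                FUNCTION_UNLOCK_GESTURES[gesture])    # invoke function 380
    return ("show_unlock_reminder", None)             # reminder 370
```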

During the course of an activation gesture, it is desirable to indicate to the user which icon they have selected. Figure 7 illustrates an example of the preferred method of indication, using the example of an unlock icon. That is, the icon represents a general unlock function to enable the user to work with various screen functions. In this example, there are other icons presented in the first screen area, including a camera icon. These other available icons represent functions that do not require a separate unlock operation before they are activated. In the example of Figure 7, a selected icon 60 is highlighted on the screen by displaying a coloured bubble 70 containing the icon 60. This creates a larger GUI object than the original icon alone and so makes subsequent user-interaction with the icon easier to visualize. The larger GUI object causes the boundary between the first and second screen areas to be distorted in the region proximate the selected icon 60. Further differentiation from the unselected remaining icons is provided by redrawing the larger GUI object in colour, as opposed to the black and white representation that is used for the GUI objects corresponding to unselected icons.

An activation gesture will only activate the corresponding device functionality if completed. The user interface state just after the completion point of an activation gesture is shown in Figure 8. Note that the boundary has sprung back from its stretched condition, to provide behaviour that more closely approximates that of a flexible fluid surface membrane. An activation gesture is defined as being completed if it results in an icon fully penetrating the interface 30, such that the extents of the icon are fully within the second screen region 20. However, the implementation can involve identifying when an outer extremity of an icon reaches a threshold point that is defined to be a specific number of pixels beyond the visual representation of the boundary, such that the icon must distort the boundary by a specific distance before the movement of the icon is recognized as completion of an activation event. This does not necessarily require the full extent of the icon to have moved across to the second screen area, but comprises a sufficient distance to involve a substantial movement of the icon that is deemed to be an intentional activation operation. The choice between different implementations may vary according to the device type, to ensure suitable visual feedback to users regardless of the device type. In the present embodiment, if the user aborts an activation gesture before completion, the icon that the user attempted to activate returns to the first screen area. This is implemented as a smooth movement taking a short but finite period of time (typically a fraction of a second). The icon's highlighting (e.g. coloured bubble) is also removed and it is no longer represented as a larger, coloured icon. The icon and boundary thus return to their original states 40, 30, (i.e. their states before user selection) over a short time period. The user may then select a different icon, or reselect the same icon.
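The completion test described above can be sketched in one dimension along the drag direction. The 20-pixel threshold is an assumed value, and the two variants (full crossing versus a threshold distance past the boundary) correspond to the alternative implementations discussed in the preceding paragraph.

```python
# Illustrative one-dimensional sketch of the activation-completion test.
# Positions increase in the direction of the drag, towards the second
# screen area. The 20-pixel threshold is an assumed value.

def activation_complete(leading_edge, trailing_edge, boundary_pos,
                        threshold_px=20, require_full_crossing=False):
    if require_full_crossing:
        # The entire icon extent must lie within the second screen area.
        return trailing_edge > boundary_pos
    # Otherwise the leading edge must distort the boundary by a set
    # number of pixels beyond its visual representation.
    return leading_edge > boundary_pos + threshold_px
```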

In the present embodiment, the response of the interface to the motion of an icon during an activation gesture is to flexibly deform around the icon. The extent of deformation is dependent on the extent to which the icon occupies the second screen area 20 (where increased occupation of the second screen area 20 corresponds to increased deformation), in addition to the speed at which the user executes the activation gesture. The user is alerted to the point during an activation gesture at which the selected icon has moved fully into the second screen area by the appearance of the interface, which is redrawn at this stage 46 to appear as a bounding line excluding the selected icon from the first screen area 10. Upon this condition being satisfied, subsequent removal of the user's finger or input device from the touch screen will cause the device functionality corresponding to the selected icon to become activated. In other embodiments, the progress towards completion of an activation is presented visually by changing the visual representation of the boundary in other ways. In another embodiment, completion of the activation operation results in immediate display of a screen of the activated function, without redrawing the boundary to show a completed state.

Another embodiment of the invention involves a setup as depicted in the first embodiment but employs the alternative ('ballistic') activation gesture of Figure 9, comprised of: touching the screen with an appendage or input device 80 in the location of the desired icon 90 and subsequently making a rapid, short flicking motion of the appendage or input device towards the second screen area 20, ending in the appendage or object losing contact with the touch screen of the device. This gesture causes the selected icon to behave as if momentum were imparted to it, causing it to move in a short time period through the boundary and into the second screen area.

The motion of the icon is along the trajectory 100, which is defined by the user's motion. If the trajectory is such that the icon contacts the edges of the screen 48, it rebounds elastically and follows an accordingly calculated trajectory 52. In the preferred embodiment, the amount of momentum imparted to the icon is a function of the gesture input from the user, with a threshold momentum value being set, above which the icon is able to cross the boundary and move from the first to second screen area. An icon with a momentum of less than this threshold value will not cross the interface and hence remains in the first screen area, such that the icon's corresponding functionality is not activated.

The threshold value is chosen such that it is unlikely to be reached without intentional user input, hence allowing the activation gesture to serve as a means for differentiating between accidental and intended user input. As in the first embodiment, an icon with its full extent occupying the second screen area will cause the device functionality corresponding to said icon to become active.
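The momentum-threshold test for the ballistic gesture can be sketched as follows. The nominal icon 'mass' and the threshold value are assumptions chosen for illustration.

```python
# Illustrative sketch of the ballistic gesture's momentum threshold.
# The nominal icon mass and the threshold value are assumed for
# illustration; an accidental brush produces a low release speed and
# therefore insufficient momentum to cross the boundary.

MOMENTUM_THRESHOLD = 400.0                   # assumed value

def flick_crosses_boundary(release_speed, icon_mass=1.0):
    """True if the flick imparts enough momentum to cross the boundary."""
    return icon_mass * release_speed >= MOMENTUM_THRESHOLD
```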

This form of user input does not depend on setting a predefined path for the user to follow, as it can take account of gesture recognition including direction, speed and the length of the arc defined between a first touch and cessation of touch of a finger or input device with the touchscreen. In addition, the screen area in which the gesture needs to be enacted is not predefined by a separate gesture-responsive progress bar, allowing the user to carry out a simple and intuitive gesture to activate a required device function.

In a further embodiment of the present invention, the user can invoke any of a first set of functions using the method of dragging a respective icon through a boundary from a first screen area to a second screen area, as described in the embodiments above. A first set of selectable icons corresponding to these selectable functions is placed in the first screen region, and the user can activate the respective functions even when the device is in a locked state. This provides the user with a fast route to activating certain functions, circumventing the more usual activation sequence of performing a predefined unlock sequence, locating the application or widget of interest and separately activating the application or widget. In addition, access to functionalities that are unnecessarily protected via passcode entry in conventional devices (such as camera functions, for example) can be provided to users without knowledge of device security codes via the circumvention of the device unlock sequence.

Meanwhile, a second set of functions require an unlock icon to be moved through the boundary as a first step before the user can work with the second set of functions. This provides a multilevel unlock capability.

In a further embodiment of the present invention, a tiered unlock functionality is provided such that a plurality of unlock levels offering different user access rights may be assigned to device functions by the device user. In the preferred embodiment, two such unlock levels are provided: a limited unlock mode offering limited device functionality but requiring no passcode to complete an unlock operation, and a full unlock mode offering full functionality but requiring passcode entry to complete an unlock operation and gain access to said functionality. The logic of such a structure is shown in Figure 10. Secured items 110 require passcode entry before allowing the user to access their functionality, whereas unsecured items 120 are accessible directly from the unlock screen 230 without passcode entry.

In the embodiment described in detail above and represented in Figure 7, a first set of functions is provided via icons in the first screen area, which can be activated by moving an icon into the second screen area. It should be noted, however, that the invention provides a more general selection and feedback mechanism that is not limited to the lock screen example given above. Figures 11A and 11B show the example of an alarm screen on a mobile telephone, with an icon representing an operation to switch off the alarm sound and a second icon representing a "snooze" function (i.e. silence the alarm but repeat in a set period of time). The mechanism of moving an icon through the boundary can be used to switch off the alarm's sound and to activate "snooze", or an alternative simple touch-to-activate mechanism may be used for the "snooze" function. Another example is shown in Figures 11A and 11B, where a call silence function is selectable via an icon, which is movable from a first to a second screen area and can be released in the second screen area to silence the call. In Figures 12A and 12B, two additional icons are represented. These correspond to a call accept function and a call reject function, both of which can be activated by dragging the relevant icon through the boundary. This shows that the present invention can be used as a very general activation mechanism, and/or in combination with other activation mechanisms.

The invention can also be used as a general switch. For example, a settings page on an electronic device can be provided with icons that are switchable between 'On' and 'Off' states by moving the icons between screen areas through a boundary. A single boundary can thus be used to visually separate two groups of GUI objects, indicating a different status or state between the two groups. The invention is not limited to a single boundary, and there may be multiple boundaries between multiple screen areas into which GUI objects can be moved. The multiple screen areas may be associated with multiple different levels of protection from accidental invocation. The boundaries can be located at any required position on the screen, either fixed or movable, without being limited to horizontal lines. Functions can thus be unlocked on an electronic device by moving GUI objects associated with selectable functions to the current screen location of a target unlock screen area. The boundary crossing condition is evaluated with reference to the current position of the boundary, and the crossing of the boundary is indicated visually to the user.