
Title:
SYSTEM AND METHOD FOR ESTIMATING TOW BALL POSITION
Document Type and Number:
WIPO Patent Application WO/2023/129858
Kind Code:
A1
Abstract:
A method and system for estimating the position of a tow ball of a vehicle are disclosed. The method includes receiving camera extrinsic values for a rear camera disposed on a rear vehicle portion. An image captured by the rear camera is received having a representation of a tow ball of the vehicle. A first distance associated with the tow ball is received, being one of a longitudinal distance of the tow ball to a center of a rear axle of the vehicle, or a height of the tow ball from a ground surface. A point on a representation of the tow ball is selected based on the image. A camera ray is determined from the camera to the selected tow ball point in the image. Based on the received camera extrinsic values, the first distance, and the camera ray, a second distance associated with the tow ball is estimated.

Inventors:
YU XIN (US)
RAMIREZ LLANOS EDUARDO JOSE (US)
VERMA DHIREN (US)
Application Number:
PCT/US2022/082204
Publication Date:
July 06, 2023
Filing Date:
December 22, 2022
Assignee:
CONTINENTAL AUTONOMOUS MOBILITY US LLC (US)
International Classes:
G06T7/73
Foreign References:
US20160121885A1 (2016-05-05)
US20200079165A1 (2020-03-12)
US20200039582A1 (2020-02-06)
US20180081370A1 (2018-03-22)
Attorney, Agent or Firm:
ESSER, William F. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for estimating the position of a tow ball of a vehicle, the method comprising:
receiving, at data processing hardware, one or more camera extrinsic values for a rear camera disposed on a rear portion of the vehicle;
receiving, at the data processing hardware, at least one image captured by the rear camera having a representation of a tow ball therein, the tow ball being connected to the vehicle;
receiving, at the data processing hardware, a first distance associated with the tow ball, the first distance being one of a longitudinal distance of the tow ball to a center of a rear axle of the vehicle, or a height of the tow ball from a ground surface;
selecting a point on a representation of the tow ball based on the at least one image;
determining, at the data processing hardware, a camera ray from the camera to the selected tow ball point in the image; and
estimating, by the data processing hardware based on the received camera extrinsic values, the first distance, and the camera ray, a second distance associated with the tow ball.

2. The method of claim 1, wherein the first distance comprises the longitudinal distance of the tow ball to the center of the rear axle of the vehicle, and the second distance comprises the height of the tow ball from the ground surface.

3. The method of claim 2, further comprising identifying, by the data processing hardware, a vertical line passing through the selected tow ball point, and determining an intersection of the camera ray and the vertical line, wherein estimating the second distance is based upon the intersection of the camera ray and the vertical line.

4. The method of claim 3, wherein determining the intersection comprises performing a least squares estimation to identify a point that is closest to the vertical line and the camera ray.

5. The method of claim 2, wherein the longitudinal distance of the tow ball to the center of the rear axle of the vehicle comprises a first longitudinal distance value from the center of the rear axle to a center of a pinhole of the tow ball, and a second longitudinal distance value from the pinhole center of the tow ball to a center of the tow ball.

6. The method of claim 1, wherein the first distance comprises the height of the tow ball from the ground surface and the second distance comprises the longitudinal distance of the tow ball to the center of the rear axle of the vehicle.

7. The method of claim 6, further comprising identifying, by the data processing hardware, a horizontal line passing through the selected tow ball point, and determining an intersection of the camera ray and the horizontal line, wherein estimating the second distance is based upon the intersection of the camera ray and the horizontal line.

8. The method of claim 7, wherein determining the intersection comprises performing a least squares estimation to identify a point that is closest to the horizontal line and the camera ray.

9. The method of claim 1, wherein the first distance is received from one of non-transitory memory to which the data processing hardware is communicatively coupled or from a user interface of the vehicle as user input.

10. The method of claim 1, wherein selecting a point on a representation of the tow ball based on the at least one image comprises one of receiving a user selection via a user interface of the vehicle or determining, by the data processing hardware, the point using an object detection algorithm.

11. A system for estimating the position of a tow ball of a vehicle, the system comprising:
data processing hardware; and
non-transitory memory having program code instructions stored thereon which, when executed by the data processing hardware, cause the data processing hardware to perform operations comprising:
receiving one or more camera extrinsic values for a rear camera disposed on a rear portion of the vehicle;
receiving at least one image captured by the rear camera having a representation of a tow ball therein, the tow ball being connected to the vehicle;
receiving a first distance associated with the tow ball, the first distance being one of a longitudinal distance of the tow ball to a center of a rear axle of the vehicle, or a height of the tow ball from a ground surface;
selecting a point on a representation of the tow ball based on the at least one image;
determining a camera ray from the camera to the selected tow ball point; and
estimating, based on the received camera extrinsic values, the first distance, and the camera ray, a second distance associated with the tow ball.

12. The system of claim 11, wherein the first distance comprises the longitudinal distance of the tow ball to the center of the rear axle of the vehicle, and the second distance comprises the height of the tow ball from the ground surface.

13. The system of claim 12, further comprising identifying, by the data processing hardware, a vertical line passing through the selected tow ball point, and determining an intersection of the camera ray and the vertical line, wherein estimating the second distance is based upon the intersection of the camera ray and the vertical line.

14. The system of claim 13, wherein determining the intersection comprises performing a least squares operation to identify a point that is closest to the vertical line and the camera ray.

15. The system of claim 12, wherein the longitudinal distance of the tow ball to the center of the rear axle of the vehicle comprises a first longitudinal distance value from the center of the rear axle to a center of a pinhole of the tow ball, and a second longitudinal distance value from the pinhole center of the tow ball to a center of the tow ball.

16. The system of claim 11, wherein the first distance comprises the height of the tow ball from the ground surface and the second distance comprises the longitudinal distance of the tow ball to the center of the rear axle of the vehicle.

17. The system of claim 16, further comprising identifying, by the data processing hardware, a horizontal line passing through the selected tow ball point, and determining an intersection of the camera ray and the horizontal line, wherein estimating the second distance is based upon the intersection of the camera ray and the horizontal line.

18. The system of claim 17, wherein determining the intersection comprises performing a least squares operation to identify a point that is closest to the horizontal line and the camera ray.

19. The system of claim 11, wherein the first distance is received from one of non-transitory memory to which the data processing hardware is communicatively coupled or from a user interface of the vehicle as user input.

20. The system of claim 11, wherein selecting a point on a representation of the tow ball based on the at least one image comprises one of receiving a user selection via a user interface of the vehicle or determining, by the data processing hardware, the point using an object detection algorithm.

Description:
SYSTEM AND METHOD FOR ESTIMATING TOW BALL POSITION

CROSS REFERENCE TO RELATED APPLICATION

The present application claims priority to U.S. provisional application 63/266,321, filed December 31, 2021, titled “System and Method for Estimating Tow Ball Position,” the content of which is incorporated by reference herein.

TECHNICAL FIELD

[0001] This disclosure relates to a trailer hitch assist system and method, and particularly to such a system and method which estimates the position of the tow vehicle’s tow ball.

BACKGROUND

[0002] Tow ball position information in 3-D world coordinates is very useful for a trailer hitch assist system. The tow ball's height information may be used for truck height adjustment, and the position information can be used for collision avoidance between the trailer coupler and the vehicle's tow ball. An existing approach to estimating tow ball location uses the lateral distance of the camera to estimate the position of the tow ball.

SUMMARY

[0003] There is disclosed a method for estimating the position of a tow ball of a vehicle. The method includes receiving, at data processing hardware, one or more camera extrinsic values for a rear camera disposed on a rear portion of the vehicle. The data processing hardware receives at least one image, captured by the rear camera, having a representation of a tow ball therein, the tow ball being connected to the vehicle. The data processing hardware also receives a first distance associated with the tow ball, the first distance being one of a longitudinal distance of the tow ball to a center of a rear axle of the vehicle, or a height of the tow ball from a ground surface. A point on a representation of the tow ball is selected based on the at least one image. A camera ray from the camera to the selected tow ball point is determined by the data processing hardware. The data processing hardware estimates, based on the received camera extrinsic values, the first distance, and the camera ray, a second distance associated with the tow ball.

[0004] In one aspect, the first distance may include the longitudinal distance of the tow ball to the center of the rear axle of the vehicle, and the second distance comprises the height of the tow ball from the ground surface.

[0005] The method may further include identifying, by the data processing hardware, a vertical line passing through the selected tow ball point, and determining an intersection of the camera ray and the vertical line, wherein estimating the second distance is based upon the intersection of the camera ray and the vertical line.

[0006] Determining the intersection may include performing a least squares estimation to identify a point that is closest to the vertical line and the camera ray.

[0007] The longitudinal distance of the tow ball to the center of the rear axle of the vehicle may include a first longitudinal distance value from the center of the rear axle to a center of a pinhole of the tow ball, and a second longitudinal distance value from the pinhole center of the tow ball to a center of the tow ball.

[0008] In another aspect, the first distance comprises the height of the tow ball from the ground surface and the second distance comprises the longitudinal distance of the tow ball to the center of the rear axle of the vehicle.

[0009] The method may further include identifying, by the data processing hardware, a horizontal line passing through the selected tow ball point, and determining an intersection of the camera ray and the horizontal line, wherein estimating the second distance is based upon the intersection of the camera ray and the horizontal line.

[0010] Determining the intersection may include performing a least squares estimation to identify a point that is closest to the horizontal line and the camera ray.

[0011] The first distance may be received from one of non-transitory memory to which the data processing hardware is communicatively coupled or from a user interface of the vehicle as user input.

[0012] Selecting a point on a representation of the tow ball based on the at least one image may include one of receiving a user selection via a user interface of the vehicle or determining, by the data processing hardware, the point using an object detection algorithm.

[0013] In another example embodiment, a system for estimating the position of a tow ball of a vehicle is disclosed. The system includes data processing hardware and non-transitory memory having program code instructions stored thereon which, when executed by the data processing hardware, cause the data processing hardware to perform the method described above.

DESCRIPTION OF DRAWINGS

[0014] FIG. 1 is a top view of a vehicle having a vehicle system according to an example embodiment.

[0015] FIG. 2 is a schematic view of the vehicle system of FIG. 1 according to one or more example embodiments.

[0016] FIG. 3 is a view of a tow ball and relative dimensions thereof.

[0017] FIG. 4 is a simplified side elevational view of a vehicle rear camera and the tow ball illustrating a tow ball estimation operation according to an example embodiment.

[0018] FIG. 5 is a simplified side elevational view of a vehicle rear camera and the tow ball illustrating a tow ball estimation operation according to another example embodiment.

[0019] FIG. 6 is a perspective view of a tow ball.

[0020] FIG. 7 is a flowchart illustrating a tow ball location estimation operation according to an example embodiment.

[0021] FIG. 8 is a flowchart illustrating a tow ball location estimation operation according to another example embodiment.

[0022] Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0023] Referring to FIGS. 1 and 2, in some implementations, a vehicle system 100 includes a vehicle 102. The vehicle 102 includes a vehicle tow ball 104 supported by a vehicle hitch bar 105. The vehicle 102 may include a drive system 110 that maneuvers the vehicle 102 across a road surface based on drive commands having x, y, and z components, for example. As shown, the drive system 110 includes a front right wheel 112, 112a, a front left wheel 112, 112b, a rear right wheel 112, 112c, and a rear left wheel 112, 112d. The drive system 110 may include other wheel configurations as well. The drive system 110 may also include a brake system (not shown) that includes brakes associated with each wheel 112, 112a-d, a steering system (not shown) for use in controlling a direction of travel of the vehicle 102, and an acceleration system (not shown) that is configured to adjust a speed and direction of the vehicle 102. In addition, the drive system 110 may include a suspension system (not shown) that includes tires associated with each wheel 112, 112a-d, tire air, springs, shock absorbers, and linkages that connect the vehicle 102 to its wheels 112, 112a-d and allow relative motion between the vehicle 102 and the wheels 112, 112a-d. The vehicle 102 is a vehicle having a bed 113, such as a pickup truck or flatbed truck having a truck bed 113. However, it is understood that the vehicle 102 may be other types of vehicles.

[0024] The vehicle 102 may move across the road surface by various combinations of movements relative to three mutually perpendicular axes defined by the vehicle 102: a transverse axis Xv, a fore-aft axis Yv, and a central vertical axis Zv. The transverse axis Xv extends between a right side and a left side of the vehicle 102. A forward drive direction along the fore-aft axis Yv is designated as Fv, also referred to as a forward motion. In addition, an aft or rearward drive direction along the fore-aft direction Yv is designated as Rv, also referred to as rearward motion. In some examples, the vehicle 102 includes a suspension system (not shown), which when adjusted causes the vehicle 102 to tilt about the Xv axis and/or the Yv axis, or move along the central vertical axis Zv.

[0025] The vehicle 102 may include a user interface 120 (FIG. 2). The user interface 120 may include a display 122, a knob, and a button, which are used as input mechanisms. In some examples, the display 122 may show the knob and the button, while in other examples the knob and the button are a knob-button combination. In some examples, the user interface 120 receives one or more driver commands from the driver via one or more input mechanisms or a touch screen display 122 and/or displays one or more notifications to the driver. The user interface 120 is in communication with a controller 140. In some examples, the display 122 is configured to display an image 133 of an environment of the vehicle 102. The user interface may include a vibrator mechanism associated with the steering wheel and/or the driver seat of the vehicle 102, for imparting a haptic message by vibrating the steering wheel and/or the driver seat.

[0026] The vehicle 102 may include a sensor system 130 (FIG. 2) to provide reliable and robust driving. The sensor system 130 may include different types of sensors that may be used separately or with one another to create a perception of the environment of the vehicle 102 that is used for the vehicle 102 to drive and aid the driver in making intelligent decisions based on objects and obstacles detected by the sensor system 130. The sensor system 130 may include one or more cameras 132, 132a-132d supported by the vehicle system 100. In some implementations, the vehicle 102 includes a rear vehicle camera 132a (i.e., a first camera) that is mounted to a tailgate or along the bumper of the vehicle 102 to provide a view of a rear-driving path for the vehicle 102; in other words, the rear vehicle camera 132a captures images 133a of a rear environment of the vehicle 102.
Additionally, the sensor system 130 includes a front-facing camera 132b (i.e., a second camera) that is mounted at the front of the vehicle 102 to provide a forward view of the vehicle 102. In some examples, the sensor system 130 also includes side view vehicle cameras 132c, 132d (i.e., third camera and fourth camera) each mounted to provide a side image 133 of the side environment of the vehicle 102.

[0027] In some implementations, the rear camera 132a, the front camera 132b, and the side view vehicle cameras 132c, 132d include a fisheye lens having an ultra-wide-angle lens that produces strong visual distortion intended to create a wide panoramic or hemispherical image. Fisheye cameras capture images having an extremely wide angle of view. Moreover, images captured by the fisheye camera have a characteristic convex non-rectilinear appearance. Other types of cameras may also be used to capture the images 133. In some embodiments, the cameras 132a-132d are monocular cameras.

[0028] The sensor system 130 may also include other sensors 134 that detect the vehicle motion, i.e., speed, angular speed, position, etc. The other sensors 134 may include an inertial measurement unit (IMU) configured to measure the vehicle's linear acceleration (using one or more accelerometers) and rotational rate (using one or more gyroscopes). In some examples, the IMU also determines a heading reference of the vehicle 102. Therefore, the IMU determines the pitch, roll, and yaw of the vehicle 102. The other sensors 134 may also include, but are not limited to, radar, ultrasonic (sonar), LIDAR (Light Detection and Ranging, which can entail optical remote sensing that measures properties of scattered light to find range and/or other information of a distant target), LADAR (Laser Detection and Ranging), HFL (High Resolution 3D Flash LIDAR), etc., which are disposed around the vehicle 102. In some implementations, the sensor system 130 may provide external sensor data received from other systems or vehicles, such as by way of V2X communication or any other communication.

[0029] The controller 140 includes a computing device (or processor) 142 (e.g., central processing unit having one or more computing processors) in communication with non-transitory memory 144 (e.g., a hard disk, flash memory, random-access memory) capable of storing instructions executable on the computing processor(s) 142. The controller may be supported by the vehicle 102. In some examples, the controller 140 includes a trailer hitch assist system 150 that assists a vehicle driver in maneuvering the vehicle 102 towards a trailer for connection thereto.

[0030] In example embodiments, the trailer hitch assist system 150 includes a tow ball position estimator module or algorithm 151. In general terms, the tow ball position estimator 151 estimates the position of the tow ball 104. The estimated position of the tow ball 104 may then be used in performing a trailer hitch assist operation by the trailer hitch assist system 150. The trailer hitch assist operation may involve maneuvering the vehicle 102 towards an unconnected trailer (not shown) and/or aligning the tow ball 104 relative to the hitch coupler of the trailer for coupling to the tow ball.

[0031] The tow ball position information in 3-D world coordinates is very useful for the trailer hitch assist system 150. The tow ball's height information can be used for truck height adjustment (to adjust the tow ball's height for alignment with the trailer coupler of the trailer), and the position information can be used for collision avoidance between the coupler and the tow ball 104. The tow ball position estimator module 151 estimates the tow ball's 3-D position by leveraging the calibration of the rear camera 132a and tow ball parameters. Specifically, the approach is to estimate the longitudinal distance of the tow ball 104 and/or the height of the tow ball so that the position of the tow ball 104 is known.

[0032] For the example embodiments, the rear camera 132a is a monocular camera. Since a monocular camera cannot provide distance information without motion, the approach taken by the tow ball position estimator 151 leverages the measured information from tow ball specifications and calibration of the camera 132a. If the longitudinal distance of the tow ball 104 is known, the tow ball position estimator 151 estimates the height information for the tow ball 104 using a least squares method or the like. If the longitudinal distance of the tow ball 104 is unknown but its height is known, the tow ball position estimator 151 estimates the longitudinal distance using the measured tow ball height Z.
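The least squares "closest point" step that both estimation directions rely on can be sketched as follows. This is an illustrative implementation only, not code from the application; the function name and example coordinates are hypothetical.

```python
def closest_point_between_lines(p1, d1, p2, d2):
    """Least-squares 'intersection' of two 3-D lines.

    Each line is given by a point p and a direction d. Returns the
    midpoint of the segment of closest approach, i.e. the single point
    that minimizes the summed squared distance to both lines.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w0 = [p1[i] - p2[i] for i in range(3)]
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only if the lines are parallel
    t = (b * e - c * d) / denom    # parameter along line 1
    s = (a * e - b * d) / denom    # parameter along line 2
    q1 = [p1[i] + t * d1[i] for i in range(3)]  # closest point on line 1
    q2 = [p2[i] + s * d2[i] for i in range(3)]  # closest point on line 2
    return [(q1[i] + q2[i]) / 2 for i in range(3)]

# Example: a camera 1.0 m above ground and a vertical line at a known
# longitudinal distance of 1.2 m (all coordinates hypothetical, metres).
ray_origin = [0.0, 0.0, 1.0]     # camera center (x, y, z)
ray_dir = [1.0, 0.0, -0.5]       # toward the selected tow ball point
vline_point = [1.2, 0.0, 0.0]    # vertical line at x = 1.2, y = 0
vline_dir = [0.0, 0.0, 1.0]
print(closest_point_between_lines(ray_origin, ray_dir, vline_point, vline_dir))
```

In this noise-free example the ray and the vertical line meet exactly, so the returned midpoint is the true intersection; with noisy inputs the same formula yields the point closest to both lines, as the description contemplates.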

[0033] FIG. 6 illustrates typical specification information of a tow ball 104 which may be maintained in memory 144. It is assumed that the tow ball 104 is located along the longitudinal center line, in this case the fore-aft axis Yv.

[0034] FIG. 3 illustrates parameters associated with the tow ball 104. The position of the tow ball 104 is (X, 0, Z), wherein either the tow ball longitudinal distance X to the rear axle of the vehicle 102 or the tow ball height Z relative to ground is unknown. In one scenario, the longitudinal distance X is known and obtained from the vehicle specification (X1, corresponding to the distance from the rear axle of the vehicle 102 to the pinhole center of the tow ball 104) and the specification of the tow ball 104 (X2, corresponding to the distance from the pinhole center to the tow ball center). As shown in FIG. 3, the longitudinal distance X is the sum of X1 and X2. In another scenario, the longitudinal distance X is unknown and the height Z of the tow ball 104 from the ground is known. In addition, the extrinsic values of the rear camera 132a are known, including the position of the center of the camera relative to the center of the rear axle of the vehicle 102.
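Both scenarios begin by forming a camera ray from the selected image point. A minimal sketch of that unprojection step under an ideal pinhole model is shown below; the function name and intrinsic values are hypothetical, and the fisheye camera described in this application would additionally require its distortion to be removed before such an ideal pinhole step applies.

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Unproject pixel (u, v) to a unit-length ray direction in the
    camera frame, using ideal pinhole intrinsics: focal lengths
    (fx, fy) and principal point (cx, cy)."""
    x = (u - cx) / fx
    y = (v - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)   # normalize (x, y, 1)
    return [x / n, y / n, 1.0 / n]

# Hypothetical 1920x1080 camera with its principal point at the image
# center; a pixel below the center yields a downward-looking ray.
ray = pixel_to_ray(960.0, 700.0, 800.0, 800.0, 960.0, 540.0)
print(ray)  # unit direction vector in the camera frame
```

The resulting direction, rotated and translated into the vehicle frame using the camera extrinsic values, plays the role of the ray R in the two scenarios described below.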

[0035] FIG. 4 illustrates the first scenario in which the longitudinal distance X (from the center of the rear axle of the vehicle 102) is known (from X1 and X2) but the tow ball height Z is unknown, and FIG. 7 illustrates a method for estimating tow ball position by estimating the tow ball height Z. Referring to FIG. 7, the method includes the controller 140, when executing instructions of the tow ball position estimator algorithm 151, receiving at 501 the longitudinal distance X of the tow ball 104 to the vehicle rear axle as well as the distances between the rear camera 132a and the center of the rear axle. These values may be known from specifications of the vehicle 102, the tow hitch, and the camera 132a, all of which may be maintained in the memory 144. In addition, the controller 140 receives one or more images captured by the rear camera 132a. At 502, a point on the tow ball 104 in at least one captured image is obtained for use in the estimation of tow ball position. The point may be provided via the user interface 120 by displaying the image and the vehicle user touching the touch screen of the user interface 120 at a location of the displayed image corresponding to a center of a representation of the tow ball 104. Alternatively, an object detection module of the trailer hitch assist system 150 may detect and/or select a center point on the tow ball 104 based on one or more images captured by the rear camera 132a. Based on the image and the selected point on the image representation of the tow ball 104 therein, the controller 140 obtains at 504 a camera ray R from the center (origin) of the camera 132a to the selected tow ball point in the image. The camera ray R provides direction information but not distance information. However, the camera ray R is associated with the known distance X from the vehicle rear axle to the tow ball center, and the extrinsic values such as the location (x, y, z) of the center of the rear camera 132a relative to the rear axle center. As shown in FIG. 4, a vertical line V (X, 0, Z) is obtained or determined at 506, from a plane of potential vertical lines, that passes through the selected tow ball point (i.e., has a y-coordinate value of zero and an x-coordinate value of X) and thus intersects camera ray R at that selected point. At 508, the intersection point between camera ray R and vertical line V is determined. The intersection determination utilizes the longitudinal distance X from the tow ball 104 to the center of the rear axle, the distance of the center of the camera 132a to the rear axle center, and the ray R. Due to noise, the ray R and the vertical line V may not intersect at a single point in world coordinates, and a least squares method or other comparable method may be utilized to find the point closest to both the ray R and the vertical line V. The z-component of the determined intersection point (X, 0, Z) between the camera ray R and the vertical line V corresponds to the height of the tow ball 104. The height of the tow ball 104 may then be used in performing a drive assist operation, such as a trailer hitch assist operation.
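Under the simplifying assumptions that the selected point lies on the vehicle centerline (y = 0) and the inputs are noise-free, this first scenario collapses to a single parametric step, sketched below with hypothetical coordinates; in practice, the least-squares fit described above replaces the direct intersection.

```python
def estimate_tow_ball_height(cam_pos, ray_dir, x_known):
    """First scenario (FIG. 4): longitudinal distance X is known, height
    Z is not. Intersect the camera ray with the vertical line at
    x = x_known, y = 0 and return the z-component (the tow ball height).
    Noise-free inputs are assumed; a least-squares fit would be used
    when the ray and line do not meet exactly.
    """
    t = (x_known - cam_pos[0]) / ray_dir[0]  # ray parameter at x = x_known
    return cam_pos[2] + t * ray_dir[2]       # z-coordinate at that point

# Hypothetical numbers: camera 1.0 m above ground in the rear axle
# plane, ray sloping down toward a tow ball 1.2 m behind the axle.
z = estimate_tow_ball_height([0.0, 0.0, 1.0], [1.0, 0.0, -0.5], 1.2)
print(round(z, 3))  # 0.4
```

The estimated height (0.4 m here) is the z-component of the intersection point (X, 0, Z), matching step 508 of FIG. 7.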

[0036] FIG. 5 illustrates the second scenario in which the longitudinal distance X is unknown but the tow ball height Z is known, and FIG. 8 illustrates a method for estimating tow ball position by estimating the longitudinal distance X from the tow ball 104 to the rear axle of the vehicle 102. Referring to FIG. 8, at 602 the controller 140, when executing instructions of the tow ball position estimator module 151, receives the height Z of the tow ball 104 as well as distance information (extrinsic values) between the rear camera 132a and the center of the vehicle rear axle. In addition, one or more images are received from the rear camera 132a. In one aspect, the height Z of the top of the tow ball 104 is measured by a vehicle user and provided to the trailer hitch assist system 150 via the user interface 120, for example. At 604, a point on the tow ball 104 is obtained for use in the estimation. As discussed above, the point may be provided via the user interface 120 by displaying an image captured by the rear camera 132a and the vehicle user touching the touch screen at a location of the displayed image corresponding to a center of the tow ball 104. Alternatively, an object detection module of the trailer hitch assist system 150 may detect or select a center point on the tow ball 104 based on images captured by the rear camera 132a. Based on the selected point on the tow ball 104 and the captured image, the controller 140 obtains at 606 a camera ray R from the center (origin) of the camera 132a to the selected tow ball point. The camera ray R provides direction information but not distance information. However, the camera ray R is based on the known height Z to the tow ball point from ground, the extrinsic values of the rear camera 132a relative to the rear axle, and the selected ball center. At 608, a horizontal line H is obtained or determined which passes through the selected tow ball point.
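The second scenario can be sketched in the same style; as before, the function name and coordinates are hypothetical, and a direct parametric step stands in for the least-squares fit used when noise keeps the ray and line from meeting exactly.

```python
def estimate_tow_ball_distance(cam_pos, ray_dir, z_known):
    """Second scenario (FIGS. 5 and 8): tow ball height Z is known,
    longitudinal distance X is not. Intersect the camera ray with the
    horizontal line at z = z_known, y = 0 and return the x-component
    (the longitudinal distance from the rear axle).
    """
    t = (z_known - cam_pos[2]) / ray_dir[2]  # ray parameter at z = z_known
    return cam_pos[0] + t * ray_dir[0]       # x-coordinate at that point

# Same hypothetical geometry as the first scenario, inverted: the
# height 0.4 m is known and the distance 1.2 m is recovered.
x = estimate_tow_ball_distance([0.0, 0.0, 1.0], [1.0, 0.0, -0.5], 0.4)
print(round(x, 3))  # 1.2
```

The recovered x-component is the distance X of the intersection point (X, 0, Z) determined at step 610 of FIG. 8.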
Based upon the known height Z of the selected point of the tow ball 104 as well as the ray R (including the distance in the x direction of the tow ball center relative to the camera position) and the known extrinsic distance of the rear camera 132a (including the known distance in the x and/or z directions of the rear camera relative to the rear axle center), the intersection point (X, 0, Z) of the horizontal line H and the camera ray R is determined at 610, from which the distance X between the tow ball 104 and the rear axle is identified.

[0037] Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

[0038] These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal.
The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.

[0039] Implementations of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Moreover, subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The terms “data processing apparatus”, “computing device” and “computing processor” encompass all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
[0040] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multi-tasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

[0041] A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.