Title:
COMPUTER-IMPLEMENTED METHOD FOR SELF-LEARNING OF AN ANTHROPOMORPHIC ROBOT AND RELATED SELF-LEARNING SYSTEM
Document Type and Number:
WIPO Patent Application WO/2022/049500
Kind Code:
A1
Abstract:
The computer-implemented method (M) for self-learning of an anthropomorphic robot comprises the following steps: acquisition at predefined time intervals and by means of a three-dimensional vision sensor (V) of a sequence of images (I) relating to a machining operation carried out by an operator (O) on a reference object (R) within a learning area (step 12); processing, for each of the acquired images (I) of said sequence of images, of a point cloud (N) relating to the operator (O) and to the reference object (R) within the framed learning area (step 13); for each processed point cloud (N), identification of a tool (U) used by the operator (O) during the machining operation and of different parts of the body of the operator (O) (step 21); determination of coordinates (X, Y, Z) of movement axes relating to the tool (U) and to the different parts of the body of the operator (O) (step 22); calculation of a roto-translation matrix (RX, RY, RZ) of the tool (U) in space starting from the coordinates (X, Y, Z) of movement axes relating to the tool (U) and to different parts of the body of the operator (O) (step 31); calculation of the complete trajectory (T) of the tool (U) starting from the succession of the coordinates (X, Y, Z) of the movement axes of the tool (U) and from the succession of the angles of the roto-translation matrix (RX, RY, RZ) that identify the spatial inclination of the movement axes of the tool (U); conversion of the calculated trajectory (T) into a program module (P) for an anthropomorphic robot provided with a tool (U), wherein the program module (P) comprises a sequence of instructions for the movement of the tool (U) along the trajectory (T).

Inventors:
PASSONI DAVIDE (IT)
Application Number:
PCT/IB2021/057981
Publication Date:
March 10, 2022
Filing Date:
September 01, 2021
Assignee:
SIR SOC ITALIANA RESINE SPA (IT)
International Classes:
B25J9/16
Domestic Patent References:
WO2014093822A12014-06-19
Foreign References:
US20160303737A12016-10-20
US20120071892A12012-03-22
CN110561450A2019-12-13
Attorney, Agent or Firm:
GRANA, Daniele (IT)
Claims:
CLAIMS

1) Computer-implemented method (M) for self-learning of an anthropomorphic robot, characterized by the fact that it comprises at least the following steps:

- acquisition at predefined time intervals and by means of at least one three-dimensional vision sensor (V) of a sequence of images (I) relating to a machining operation carried out by an operator (O) on a reference object (R) within a learning area (step 12);

- processing, for each of the acquired images (I) of said sequence of images, of a point cloud (N) relating to the operator (O) and to the reference object (R) within said framed learning area (step 13);

- for each processed point cloud (N), identification of at least one tool (U) used by said operator (O) during said machining operation and of different parts of the body of said operator (O) (step 21);

- determination of coordinates (X, Y, Z) of movement axes relating to said tool (U) and to said different parts of the body of the operator (O) (step 22);

- calculation of a roto-translation matrix (RX, RY, RZ) of said tool (U) in space starting from said coordinates (X, Y, Z) of movement axes relating to said tool (U) and to said different parts of the body of the operator (O) (step 31);

- calculation of the complete trajectory (T) of said tool (U) starting from the succession of said coordinates (X, Y, Z) of the movement axes of the tool (U) and from the succession of the angles of said roto-translation matrix (RX, RY, RZ) that identify the spatial inclination of said movement axes of the tool (U);

- conversion of said calculated trajectory (T) into a program module (P) for an anthropomorphic robot provided with at least one tool (U), wherein said program module (P) comprises a sequence of instructions for the movement of said tool (U) along said trajectory (T).

2) Computer-implemented method (M) according to claim 1, characterized by the fact that said acquisition step (12) is carried out by means of a plurality of three-dimensional vision sensors (V).

3) Computer-implemented method (M) according to claim 2, characterized by the fact that said processing step (13) comprises at least one union step for the union of the individual point clouds (N) obtained from the images (I) acquired in a specific instant by each of said three-dimensional vision sensors (V), to obtain a single union point cloud (N’) of the whole framed learning area.

4) Computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that it comprises at least one calibration step (11) of said at least one three-dimensional vision sensor (V) with respect to at least one three-dimensional reference marker.

5) Computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that said coordinates (X, Y, Z) of movement axes relating to said tool (U) comprise coordinates relating to the end positions of the index finger and thumb of the hand used by said operator (O) for the machining operation or coordinates relating to a reference tool (U) held by said operator (O) during the machining operation.

6) Computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that said coordinates (X, Y, Z) of movement axes relating to different parts of the body of said operator (O) comprise coordinates relating to the knuckles of the index, middle, ring and little finger of the same hand.

7) Computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that said calculation step of the roto-translation matrix (RX, RY, RZ) comprises:

- receiving at input the data relating to the sequence of said coordinates (X, Y, Z) of the tool (U) and of said coordinates of the four knuckles of said operator (O);

- calculating the average of the coordinates of the four knuckles;

- calculating the roto-translation matrix between the tool (U) and the median position (X, Y, Z) of the knuckles, obtaining the Euler angles (RX, RY, RZ) of the two joined fingers or of the reference tool (U) in space.

8) Computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that it comprises at least one normalization and correction step (33) of said calculated trajectory (T).

9) Self-learning system (S) for anthropomorphic robot, comprising means for the performance of the computer-implemented method (M) according to one or more of the preceding claims, characterized by the fact that it comprises:

- a learning station comprising at least one processing unit (E) and said at least one three-dimensional vision sensor (V), said at least one three-dimensional vision sensor (V) being operationally connected to said processing unit (E) and being configured to carry out said acquisition step (12) of said sequence of images (I);

- wherein said processing unit (E) is configured to carry out said processing step (13) of said point cloud (N), said identification step (21) for identifying at least one tool (U) and different parts of the body of said operator (O), said determination step (22) for determining the coordinates (X, Y, Z) of the movement axes, said calculation step (31) for calculating a roto-translation matrix (RX, RY, RZ), said calculation step (32) for calculating the complete trajectory (T) of said tool (U), and said conversion step (34) for converting said calculated trajectory (T) into a program module (P) for an anthropomorphic robot.

10) System (S) according to the preceding claim 9, characterized by the fact that said at least one three-dimensional vision sensor (V) is a high-speed stereoscopic sensor.

11) System (S) according to one or more of claims 9 and 10, characterized by the fact that said learning station comprises a plurality of three-dimensional vision sensors (V).

12) System (S) according to claim 11, characterized by the fact that said processing unit (E) is configured to carry out said union step (14) of the single point clouds (N) obtained from the images (I) acquired in a given instant by each of said three-dimensional vision sensors (V), to obtain a single reconstructed union point cloud (N’) of the whole framed learning area.

13) System (S) according to one or more of claims 9 to 12, characterized by the fact that said processing unit (E) is configured to carry out said calibration step (11) for calibrating said at least one three-dimensional vision sensor (V) with respect to at least one three-dimensional reference marker.

14) System (S) according to claim 13, characterized by the fact that it comprises at least a first three-dimensional reference marker placed inside said training station.

15) System (S) according to claim 14, characterized by the fact that it comprises at least a second three-dimensional reference marker associated with said reference object (R) to be machined.

16) System (S) according to one or more of claims 9 to 15, characterized by the fact that said processing unit (E) is configured to carry out said at least one normalization and correction step (33) of said calculated trajectory (T).

Description:
COMPUTER-IMPLEMENTED METHOD FOR SELF-LEARNING OF AN ANTHROPOMORPHIC ROBOT AND RELATED SELF-LEARNING SYSTEM

Technical Field

The present invention relates to a computer-implemented method for self-learning of an anthropomorphic robot and to a related self-learning system.

Background Art

With reference to robotics for industrial applications, known technologies for programming anthropomorphic robots are based on the teaching of points in the field or on offline path programming.

Such technologies, however, involve long programming times and require at least one specialized programmer.

Description of the Invention

The main aim of the present invention is to devise a computer-implemented method and a self-learning system for an anthropomorphic robot which allow carrying out automatic learning (auto-teach) of complex paths for an anthropomorphic robot, by means of “imitation” of the same movements performed by a human operator.

Another object of the present invention is to devise a computer-implemented method and a self-learning system for an anthropomorphic robot which allow considerably reducing the programming time of an anthropomorphic robot.

The aforementioned objects are achieved by the present computer-implemented method for self-learning of an anthropomorphic robot according to the characteristics of claim 1.

The aforementioned objects are achieved by the present self-learning system for an anthropomorphic robot according to the characteristics of claim 9.

Brief Description of the Drawings

Other characteristics and advantages of the present invention will become more apparent from the description of a preferred, but not exclusive, embodiment of a computer-implemented method and a self-learning system for an anthropomorphic robot, illustrated by way of an indicative, yet non-limiting example, in the accompanying tables of drawings wherein:

Figure 1 is a general diagram of the computer-implemented method according to the invention;

Figure 2 schematically shows the self-learning system according to the invention;

Figure 3 schematically shows the point cloud detectable by the method and the system according to the invention on the hand of an operator.

The method and system according to the invention envisage carrying out automatic learning (auto-teach) of complex paths for an anthropomorphic robot, by means of “imitation” of the movements performed by a human operator. In this way, the movements of the machining operations carried out by means of the anthropomorphic robot, in particular the finishing operations, can be easily programmed in an extremely short time.

In particular, with reference to the devised method and system, the operator performs a movement or a trajectory which, after being identified and interpreted by means of appropriate devices and a suitable software, may be easily repeated by an anthropomorphic robot, thus performing a real “teaching” process towards the robot itself.

Embodiments of the Invention

With particular reference to Figure 1, letter M globally denotes a computer-implemented method for self-learning of an anthropomorphic robot.

The method M comprises a first phase 1 of managing at least one three-dimensional vision sensor V and acquired point clouds N.

In particular, the method M comprises a step 12 of acquisition at predefined time intervals and by means of at least one three-dimensional vision sensor V of a sequence of images I relating to a machining operation carried out by an operator O on a reference object R within a learning area.

The reference object R to be machined consists of a physical element (e.g., a mechanical part) that is exactly the same as the object to be machined by the anthropomorphic robot in the industrial application.

Prior to the step 12 of acquisition, the method M comprises at least one calibration step 11 of the three-dimensional vision sensor V with respect to at least one three-dimensional reference marker.

Furthermore, the method M comprises, for each of the acquired images of the sequence of images I, a step 13 of processing a point cloud N relating to the operator O and to the reference object R within the framed learning area.

Preferably, the acquisition step 12 of the image sequence I is carried out by means of a plurality of three-dimensional vision sensors V which are arranged so as to cover the framed learning area by 360 degrees.

In case of using a plurality of three-dimensional vision sensors V, the method M comprises at least one union step 14 of the individual point clouds N obtained from the images I acquired in a specific instant by each of the three-dimensional vision sensors V, to obtain a single reconstructed point cloud N’ of the whole framed learning area.
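By way of illustration only (the patent does not give an implementation), the union step 14 may be sketched as follows in Python, assuming that the calibration step 11 has yielded, for each sensor, a 4×4 homogeneous transform from the sensor frame to the common marker frame; the function names are hypothetical.

```python
# Minimal sketch of the union step 14 (assumed implementation, not from the patent):
# merge the per-sensor point clouds N, acquired at the same instant, into a single
# union cloud N' expressed in the common marker (user) frame.
import numpy as np

def to_homogeneous(points):
    """points: (n, 3) array -> (n, 4) array with a trailing column of ones."""
    return np.hstack([points, np.ones((points.shape[0], 1))])

def merge_point_clouds(clouds, sensor_to_marker):
    """clouds: list of (n_i, 3) arrays, one per three-dimensional vision sensor V.
    sensor_to_marker: list of 4x4 calibration matrices from step 11."""
    merged = []
    for cloud, T in zip(clouds, sensor_to_marker):
        in_marker_frame = (T @ to_homogeneous(cloud).T).T[:, :3]
        merged.append(in_marker_frame)
    return np.vstack(merged)  # the union point cloud N'
```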

Next, as schematically shown in Figure 1, the method M comprises a phase 2 of managing a tool used by the operator O (or operator tool).

In particular, for each processed point cloud N or, in case of use of a plurality of three-dimensional vision sensors V, for the union point cloud N’, the computer-implemented method M comprises a step 21 of identifying at least one tool U used by the operator O during the machining operation and different parts of the body of the operator O.

Furthermore, the method M comprises a step 22 of determining coordinates X, Y, Z of movement axes relating to the tool U and to different parts of the body of the operator O.

In particular, starting from the succession of reconstructed images I, and therefore from the succession of the union point clouds N’, the computer-implemented method M is able to recognize, for each of the point clouds, as schematically shown in Figures 2 and 3, the elbow joint, the wrist, the knuckles, the phalanges and the extremities of the operator’s fingers.

Preferably, the coordinates X, Y, Z of the movement axes relating to the tool U comprise coordinates relating to the end positions of the index finger and thumb of the hand used by the operator O for the machining operation or coordinates relating to a reference tool held by the operator O during the machining operation.

Still according to a preferred embodiment, the coordinates X, Y, Z of movement axes relating to different parts of the body of the operator O comprise coordinates relating to the knuckles of the index, middle, ring and little finger of the same hand.

In particular, the positions of the joined index finger and thumb, which symbolize the tool U or machining end-effector of the anthropomorphic robot, are very important. The movement of these two fingers in a joined position on the element to be machined, identified by a succession of points X, Y, Z, is substantially the trajectory that the tool or end-effector on board the robot will have to follow, where the tool U of the robot means the extremity position of the end-effector itself (e.g., the milling cutter of a spindle).

The extremity position of the tool U represents in fact the point of contact with the actual workpiece during the machining operation.
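Purely as an illustration (the landmark names and the keypoint detector are assumptions, not part of the patent), the coordinates used in step 22 could be assembled as follows once a hand-keypoint detector has extracted named landmarks from each point cloud.

```python
# Minimal sketch (hypothetical landmark names): for one acquired frame, derive the
# tool point U from the joined index finger and thumb, and collect the four knuckles
# used later for the roto-translation matrix.
import numpy as np

def tool_point(keypoints):
    """keypoints: dict mapping landmark names to (x, y, z) coordinates in the user frame."""
    return 0.5 * (np.asarray(keypoints["index_tip"]) + np.asarray(keypoints["thumb_tip"]))

def knuckle_points(keypoints):
    """Knuckles of the index, middle, ring and little finger of the same hand."""
    names = ["index_knuckle", "middle_knuckle", "ring_knuckle", "little_knuckle"]
    return np.stack([np.asarray(keypoints[name]) for name in names])
```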

Next, as schematically shown in Figure 1, the computer-implemented method M comprises a phase 3 of running a simulation (off-line).

In particular, the method M comprises at least one step 31 of calculating a roto-translation matrix RX, RY, RZ of the tool U in space starting from the coordinates X, Y, Z of movement axes relating to the tool U and to the different parts of the body of the operator O.

In particular, the step 31 of calculating the roto-translation matrix RX, RY, RZ comprises:

- receiving at input the data relating to the sequence of said coordinates X, Y, Z of the tool U, relating to the two joined fingers of the operator O or of the reference tool U, and of the coordinates of the four knuckles of the same operator;

- calculating the average of the coordinates of the four knuckles;

- by knowing the condition of “angle 0” of the operator’s hand, calculating the roto-translation matrix between the tool U (two joined fingers or reference tool) and the median position X, Y, Z of the knuckles, thus obtaining the Euler angles RX, RY, RZ of the two joined fingers or of the reference tool in space.

Furthermore, the method M comprises at least one step 32 of calculating the complete trajectory T of the tool U (think, e.g., of the milling cutter of a spindle) starting from the succession of coordinates X, Y, Z of the movement axes of the tool U and from the succession of the angles of the roto-translation matrix RX, RY, RZ which identify the spatial inclination of the movement axes of the tool itself.

Therefore, the macro included in the off-line simulation software will automatically draw the trajectory T of the tool U within the simulation. This trajectory T will comprise dimensions and angles and will be positioned on the virtual representation of the element to be machined.
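The following is a minimal reconstruction of steps 31 and 32 (my own sketch, not the patent's code): the average of the four knuckles and the tool point define the current tool axis, whose rotation with respect to an assumed "angle 0" axis of the hand gives the Euler angles RX, RY, RZ; stacking one [X, Y, Z, RX, RY, RZ] pose per frame yields the trajectory T.

```python
# Sketch of steps 31 and 32 under the stated assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed tool axis of the hand in the "angle 0" condition (an assumption of this sketch).
ZERO_ANGLE_AXIS = np.array([0.0, 0.0, 1.0])

def frame_pose(tool_xyz, knuckles_xyz):
    """tool_xyz: (3,) tool point U; knuckles_xyz: (4, 3) knuckles of the same hand."""
    median = knuckles_xyz.mean(axis=0)               # average of the four knuckles (step 31)
    axis = tool_xyz - median
    axis = axis / np.linalg.norm(axis)               # current tool axis in space
    # Rotation taking the "angle 0" axis onto the current axis; with a single vector
    # pair the roll about the axis is under-determined and is left at zero here.
    rot, _ = Rotation.align_vectors([axis], [ZERO_ANGLE_AXIS])
    rx, ry, rz = rot.as_euler("xyz", degrees=True)   # Euler angles RX, RY, RZ
    return np.array([*tool_xyz, rx, ry, rz])

def trajectory(tool_points, knuckle_sets):
    """Complete trajectory T (step 32): one [X, Y, Z, RX, RY, RZ] pose per acquired frame."""
    return np.array([frame_pose(t, k) for t, k in zip(tool_points, knuckle_sets)])
```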

The difference in roto-translation between the human end-effector (the fingers) and the robot end-effector in the virtual environment (the spindle cutter, with its dimensions), will be automatically computed by the simulator by moving the reference triad from the virtual position of the joined fingers to that of the end of the cutter (roto-translation of the tool triad).

As an alternative to the joined fingers, the manual operator could hold, during the auto-teach phase, a dummy tool of the same size as the actual tool U, e.g. the milling cutter: in this case the X, Y, Z coordinates of the extreme point of the tool U will no longer be those of the two joined fingers, but those of the tip of the dummy tool, also identified by the sensors, or, if this identification is not possible (very small tool), those related to a three-dimensional marker positioned on the dummy tool itself.

Conveniently, the method M comprises at least one step 33 of normalization and correction of the calculated trajectory to obtain a normalized trajectory T’.

In particular, after the drawing of the trajectory T has been superimposed on the virtual piece within the offline simulator, it is possible, through a special function, to normalize portions of the trajectory (selectable by sequences of points), reducing them to straight or circular segments, or portions of the inclinations (again selectable by sequences of points), in order to correct any errors due to the unsteady hand of the human operator. The path on the simulator is therefore fully editable, so it is possible to correct dimensions or angles if these involve areas of collision or singularity, or positions outside the work area of the robot axes. It is also possible to move the complete trajectory T closer to or further away from the surface of the element to be machined, so that the tool can exert more or less force during machining by contact.

The same corrections may be applied to the rounding data of the trajectory T (passage accuracy at fly-by points) and to the path execution speed data.

The refinement of the trajectory T on the simulator occurs quickly (since the trajectory has already been made) and is assumed to involve only a few corrections. At the end of the process, by means of a run key, it will be possible to preview (simulate) the behavior of the robot in carrying out the normalized trajectory T’.
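A minimal sketch of the straight-segment case of the normalization step 33 (an assumed implementation, covering only one of the corrections described above): the selected portion of trajectory points is projected onto its least-squares line, removing the tremor of the operator's hand.

```python
# Straighten a user-selected portion of the trajectory T (assumed implementation).
import numpy as np

def straighten_portion(points):
    """points: (n, 3) array, a selected portion of the trajectory T.
    Returns the same points projected onto their best-fit (least-squares) line."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # principal direction of the portion
    direction = vt[0]
    t = (points - centroid) @ direction           # position of each point along the line
    return centroid + np.outer(t, direction)      # straightened points
```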

Next, the method M comprises at least one step 34 of converting the calculated, possibly normalized, trajectory T into a program module P for an anthropomorphic robot provided with at least one tool. Specifically, the program module P comprises a sequence of instructions for the movement of the tool along said trajectory T by an anthropomorphic robot.

Therefore, once the correctness of the trajectory T has been verified in the virtual representation of the simulator, the program module P of the robot (the sequence of movement instructions that make up the trajectory) is transferred via Ethernet or another bus to the robot of the actual cell.
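For illustration only, step 34 might be sketched as below; the move-instruction syntax is invented for this example (real controllers use their own languages, e.g. RAPID or KRL), so it only shows the idea of serializing the trajectory into a textual program module P before transferring it to the robot.

```python
# Convert the (possibly normalized) trajectory into a program module P
# (illustrative, invented instruction syntax).
def to_program_module(trajectory, speed_mm_s=50, zone_mm=1):
    """trajectory: iterable of [X, Y, Z, RX, RY, RZ] poses in the workobject frame."""
    lines = ["MODULE AUTOTEACH"]
    for x, y, z, rx, ry, rz in trajectory:
        lines.append(
            f"  MOVE_LIN xyz=({x:.2f},{y:.2f},{z:.2f}) "
            f"rpy=({rx:.2f},{ry:.2f},{rz:.2f}) v={speed_mm_s} z={zone_mm}"
        )
    lines.append("ENDMODULE")
    return "\n".join(lines)
```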

As schematically shown in Figure 1, the computer-implemented method M comprises a phase 4 of managing the actual robot within the actual cell.

In particular, the method M comprises a step 41 of executing the trajectory T or the normalized trajectory T’ on the actual cell by means of the anthropomorphic robot.

The robot (real world), being properly calibrated with the virtual world and with the auto-teach world and sharing with such worlds the same reference triads, is consequently able to actually repeat the trajectory (i.e., in the real world), with the same results displayed in the offline simulation. The computer-implemented method may also comprise a further step 42 of fine correction of the trajectory T or of the normalized trajectory T’, for possible corrections aimed at achieving optimal machining quality.

The self-learning system S for an anthropomorphic robot is schematically shown in Figure 2 and comprises means for the execution of the computer-implemented method M described above.

The system S comprises a learning station comprising at least one processing unit E and at least one three-dimensional vision sensor V.

The three-dimensional vision sensor V is operationally connected to the processing unit E and is configured to carry out the acquisition step 12, at predefined time intervals, of the sequence of images I during a machining operation carried out by an operator O on the reference object R, within a learning area.

The processing unit E is configured to carry out the processing step 13, for each of the acquired images I of the image sequence, of the point cloud N relating to the operator O and to the reference object R within the framed learning area.

Preferably, the three-dimensional vision sensor V employed is a high-speed stereoscopic sensor.

Still according to a preferred embodiment, the learning station comprises a plurality of three-dimensional vision sensors V arranged to cover said framed learning area by 360 degrees.

In this case, the processing unit E is configured to carry out the union step 14 of the single point clouds N obtained from the images I acquired at a given instant by each of the three-dimensional vision sensors V, to obtain a single reconstructed point cloud N’ of the whole framed learning area.

Therefore, in the case in which the three-dimensional vision sensors V are more than one, they are connected to each other and to the processing unit E which, after having received the images I from all the sensors, carries out a union operation on the single images I obtained, so as to obtain a single reconstructed point cloud N’ of the whole framed field, starting from the single point clouds N of each three-dimensional vision sensor V. If the three-dimensional vision sensor V is just one, the union point cloud N’ generated by the processing unit E corresponds to the point cloud N at the output of the single sensor.

The processing unit E may carry out the computation of the union point cloud N’ at runtime, if the computational resources permit, or the computation may be carried out at a later time after storing the single clouds N.

In particular, in order to store the movements of the operator O during the machining operation on the reference object R, n successive images must be acquired from each of the three-dimensional vision sensors V for the time needed to complete the machining operation, according to a time sampling providing one acquisition every m milliseconds. At the end, n × x single point clouds will be obtained, where x is the number of sensors, from which n union point clouds N’ are reconstructed, one for the whole scene at each time interval (union of the x single clouds N at time m, union of the x single clouds N at time 2·m, and so on).
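A worked example with purely illustrative numbers (not taken from the patent): with x = 4 sensors, one acquisition every m = 100 ms and a gesture lasting 30 s, each sensor produces n = 300 single clouds, i.e. 1200 single clouds N in total, merged into 300 union clouds N’.

```python
# Illustrative count of acquired clouds (example numbers only).
x_sensors = 4
m_ms = 100                                       # sampling period in milliseconds
duration_s = 30                                  # duration of the machining gesture
n_per_sensor = duration_s * 1000 // m_ms         # 300 acquisitions per sensor
total_single_clouds = n_per_sensor * x_sensors   # 1200 single clouds N
total_union_clouds = n_per_sensor                # 300 union clouds N'
```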

The three-dimensional vision sensors V shall have a common calibration point (a triad in space) so that all three-dimensional images can be registered according to a known reference.

The use of a plurality of three-dimensional vision sensors V may be necessary to avoid undercuts and because some movements may be visible, at certain instants, to one sensor and not to another (or vice versa). However, nothing prohibits simplifying the learning station to only one sensor.

The reference object R to be machined consists of a physical element (e.g. a mechanical part) which is exactly the same as the object to be machined by the robot in the industrial application.

This reference object R must be positioned within the learning station on an attachment tool F equal to the one the robot will use in the actual machining operation.

Conveniently, the processing unit E is configured to carry out the calibration step 11 for calibrating the at least one three-dimensional vision sensor V with respect to the at least one three-dimensional reference marker.

In particular, the system S comprises at least a first three-dimensional reference marker placed within the learning station. Preferably, said first marker is associated with at least one attachment tool F of the reference object R within the learning station.

This first marker represents the origin of the user coordinates of the workobject (main coordinates of the system).

Such first marker is also common to the real robot and to the simulated representation of the same.

Furthermore, the system S comprises at least a second three-dimensional reference marker associated with the reference object R to be machined.

This second marker represents the origin of the object coordinates of the workobject of the system and will be used to correct any deviations in roto-translation of the position of the real piece with respect to the main triad defined by the first marker (user).

The three-dimensional vision sensors V are calibrated with respect to the coordinates of the first marker (user coordinates of the workobject), which represents the origin of the coordinates and the reference triad of the whole system S.
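As a sketch of how the second-marker correction might be applied (an assumption of this illustration, not a procedure spelled out in the text): the deviation between the nominal and the measured pose of the object marker, both expressed in the user frame defined by the first marker, is turned into a 4×4 correction transform and applied to the trajectory points.

```python
# Assumed implementation of the roto-translation correction via the second marker.
import numpy as np

def correction_transform(T_nominal, T_measured):
    """Both arguments are 4x4 poses of the second marker in the user (first-marker) frame."""
    return T_measured @ np.linalg.inv(T_nominal)

def correct_points(points, T_corr):
    """points: (n, 3) trajectory points in the user frame; returns the corrected points."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (T_corr @ homog.T).T[:, :3]
```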

The processing unit E is also configured to carry out the step 21 of identifying at least one tool U used by the operator O during the machining operation and of the different parts of the operator’s body, and the step 22 of determining coordinates X, Y, Z of movement axes relating to the tool U and to the different parts of the operator’s body.

The processing unit E of the learning station comprises offline robotics simulation software of a known type. Such simulation software comprises a virtual CAD representation of an actual machining cell, including the robot (with its motion control), the attachment tool, the object to be machined, and the actual tool which will be mounted on board the robot (e.g., a spindle with a milling cutter).

The reference systems (workobject triads) of the virtual representation will be the same as those of the vision sensors and of the cell with the real robot.

In particular, the processing unit E is configured to carry out the step 31 of calculating a roto-translation matrix RX, RY, RZ of the tool U in space starting from the coordinates X, Y, Z of movement axes relating to the tool U and to the different parts of the body of the operator O.

In addition, the processing unit E is configured to carry out the step 32 of calculating the complete trajectory T of the tool U starting from the succession of the coordinates X, Y, Z of the movement axes of the tool U and from the succession of angles of the roto-translation matrix RX, RY, RZ identifying the spatial inclination of the movement axes of the tool itself.

Conveniently, the processing unit E is configured to carry out the step 33 of normalization and correction of the calculated trajectory T.

Furthermore, the processing unit E is configured to carry out the step 34 for converting the calculated trajectory T, or the normalized trajectory T’, into a program module P for an anthropomorphic robot provided with at least one tool, wherein the program module P comprises a sequence of instructions for moving the tool along such trajectory.

It has in practice been ascertained that the described invention achieves the intended objects.

In particular, the fact is emphasized that the computer-implemented method and the self-learning system for an anthropomorphic robot according to the invention allow carrying out an automatic learning (auto-teach) of complex paths for an anthropomorphic robot, by “imitation” of the same movements performed by a human operator.

In addition, the computer-implemented method and the self-learning system for an anthropomorphic robot according to the invention make it possible to considerably reduce the programming time of an anthropomorphic robot.