Title:
MULTI-VIEWER FOR INTERACTING OR DEPENDING OBJECTS
Document Type and Number:
WIPO Patent Application WO/2015/006199
Kind Code:
A1
Abstract:
A multi-viewer for interacting or depending objects. The objects can be two robots, or a robot and one or more other objects. The multi-viewer displays the sequences or steps to be performed by the objects and visually identifies in the display where those sequences or steps interact with each other or depend on each other. The multi-viewer can also display the different timing and instruction statistics for each robot program and the statistics related to coordination points.

Inventors:
ROSSANO GREGORY F (US)
MARTINEZ CARLOS (US)
MURPHY STEPHEN H (SE)
HEDELIND MIKAEL (SE)
FUHLBRIGGE THOMAS A (US)
Application Number:
PCT/US2014/045552
Publication Date:
January 15, 2015
Filing Date:
July 07, 2014
Assignee:
ABB TECHNOLOGY AG (CH)
ROSSANO GREGORY F (US)
MARTINEZ CARLOS (US)
MURPHY STEPHEN H (SE)
HEDELIND MIKAEL (SE)
FUHLBRIGGE THOMAS A (US)
International Classes:
B25J9/16
Foreign References:
EP0269737A1 (1988-06-08)
US20040030452A1 (2004-02-12)
EP0642067A1 (1995-03-08)
EP1341066A2 (2003-09-03)
EP2546030A2 (2013-01-16)
Attorney, Agent or Firm:
RICKIN, Michael M. (29801 Euclid Avenue, Wickliffe, OH, US)
Claims:
What is claimed is:

1. A system comprising:

an object that is a robot that is capable of running or performing an associated procedure that causes said robot to perform a sequence of operations;

one or more other objects that are each capable of running or performing an associated procedure that causes said one or more other objects to perform a sequence of operations which when performed interact with said robot or depend on said robot when said robot performs said robot sequence of operations; and

a multi-viewer having the capability to display either before, during and/or after said robot and each of said one or more other objects run or perform said associated procedure said sequence of operations of said robot, said sequence of operations of said one or more other objects and one or more interactions between said robot sequence of operations and said one or more other objects' sequence(s) of operations.

2. The system of claim 1 wherein each of said one or more interactions occurs at an associated one of one or more coordination points and each of said one or more coordination points is displayed in said multi-viewer by an associated one of one or more graphics and/or text.

3. The system of claim 2 wherein said associated procedure for said robot object and said associated procedure for each of said one or more other objects comprise a set of one or more instructions and said multi-viewer displays said associated graphics and/or text for a coordination point that indicates that either said robot can perform a specified one or more of said one or more robot set of instructions only after said one or more other objects have performed a specified one or more of said associated one of said one or more set of instructions or said one or more other objects can perform a specified one or more of said associated one of said one or more set of instructions only after said robot has performed a specified one or more of said one or more robot set of instructions.

4. The system of claim 2 wherein said associated procedure for said robot object and said associated procedure for each of said one or more other objects comprise a set of one or more instructions and said multi-viewer displays said associated graphics and/or text for a coordination point that indicates that said robot object and a specified one of said one or more other objects must simultaneously perform one or more instructions in their associated set of instructions.

5. The system of claim 2 wherein the multi-viewer graphically or textually indicates two or more coordination points that may generate a deadlock condition.

6. The system of claim 2 wherein said associated procedure for said robot object and said associated procedure for each of said one or more other objects comprise a set of one or more instructions and said multi-viewer displays a representation of said set of instructions for said robot object and for each of said one or more other objects and said system modifies said representation of said displayed set of instructions for said one or more sets of instructions so that said one or more coordination points are aligned with each other on said multi-viewer display.

7. The system of claim 6 wherein said alignment modification of said displayed representation of said set of instructions for one or more sets of instructions so that said one or more coordination points are aligned with each other is accomplished by adding extra space in said multi-viewer display between instructions in at least one of said displayed sets of instructions.

8. The system of claim 6 wherein said alignment modification of said displayed sets of instructions for one or more sets of instructions so that said one or more coordination points are aligned with each other is accomplished by compressing said display between instructions in at least one of said displayed sets of instructions.

9. The system of claim 1 wherein said sequence of operations for at least one of said one or more other objects is a list of instructions for a mechanical device.

10. The system of claim 1 wherein said sequence of operations for at least one of said one or more other objects is a list of instructions for a computing device.

11. The system of claim 1 wherein said sequence of operations for at least one of said one or more other objects is a list of instructions to be performed by a human operator.

12. The system of claim 1 wherein said multi-viewer has the capability to display time related information for a selected one or more of each of said robot procedure, said associated procedures of said one or more other objects and said one or more interactions between said robot sequence of operations and said one or more other objects' sequence(s) of operations.

13. The system of claim 1 wherein said multi-viewer has the capability to display selected categories for a selected one or more of each of said robot procedure, said associated procedures of said one or more other objects and said one or more interactions between said robot sequence of operations and said one or more other objects' sequence(s) of operations.

14. The system of claim 13 wherein said selected categories each have a predetermined set of operations that identify which instructions belong to each category.

15. The system of claim 14 wherein said selected categories for display are selected from a group consisting of motion commands, hand commands, and waiting time.

16. The system of claim 13 wherein said display shows the sum of all time that all robots and other objects in the system spend executing instructions for the selected categories.

17. The system of claim 13 wherein said display shows the sum of all time that an individual robot or other object spends executing instructions for the selected categories.

Description:
Multi-viewer for Interacting or Depending Objects

1. Field of the Invention

This invention relates to systems that have two or more objects that interact or depend on each other and more particularly to visualizing the operation of each object, and visually identifying when the objects interact with or depend on each other.

2. Description of the Prior Art

There are many systems that have two or more objects that interact or depend on each other. Examples of such systems are those that have two robots (as the term robot is defined below), those that have a robot and another object, or a system that is a combination of the foregoing systems.

Traditionally the term "robot" has meant a single mechanical unit that has one arm. As used herein the term "robot" is broader and includes a single mechanical unit that has one or more actuated axes or a mobile platform.

There are systems that have two or more robots that need to work with each other to perform a task or share a work area and avoid colliding with each other. There are also two or more systems, each with one or more robots, that while operating independently of each other need to communicate with each other since, for example, they share a work area and must avoid colliding with each other.

Further there are systems that have a robot and at least one other object which is not a robot in which the robot and the at least one other object need to work with each other to perform a task or share a work area and avoid colliding with each other. The at least one other object may for example be any object that follows a procedure that requires the object to interact with or depend on a robot. The other object could run or perform a procedure within the system or in a separate system.

One of the main challenges of a programmer or sequence planner of such systems is to understand the sequence of operations performed by the objects before, during and after their performance and when they interact with or depend on each other.

Summary of the Invention

A system has an object that is a robot that is capable of running or performing an associated procedure that causes the robot to perform a sequence of operations. The system also has one or more other objects that are each capable of running or performing an associated procedure that causes the one or more other objects to perform a sequence of operations which when performed interact with the robot or depend on the robot when the robot performs the robot sequence of operations. The system further has a multi-viewer having the capability to display, either before, during or after the robot and each of the one or more other objects run or perform the associated procedure, the sequence of operations of the robot, the sequence of operations of the one or more other objects and the interaction between the robot sequence of operations and the one or more other objects' sequence(s) of operations.

Description of the Drawing

Fig. 1 shows a robot system that has two or more robots that interact with each other.

Fig. 2 shows a screen from a standard text comparison tool.

Fig. 3 shows one example of the user interface for the multi-viewer utility described herein.

Fig. 4 shows a program in which three robot arms interact in a system.

Fig. 5 is an example of the present user interface that shows where robot arms interact with each other.

Fig. 6 shows two pointers in each program viewer, one of which represents the last known position of the robot and the other of which shows the instruction that is being or will be executed.

Fig. 7 shows a flowchart for the procedure to identify the coordination points between two or more robot programs.

Fig. 8 shows a flowchart for validating the content of the programs that have been loaded at 702 of the flowchart of Fig. 7.

Fig. 9 shows a flowchart for the steps that are performed to draw the viewer.

Fig. 10 shows an embodiment for the data structure of the coordination points.

Fig. 11 shows a system that has a robot arm that interacts with a robot that is a mobile platform.

Fig. 12 shows a system that has a robot arm that interacts with a communication module.

Fig. 13 shows an example of a multi-viewer editor that includes a performance profiling tool for the two robot arms whose programs are shown in the editor shown in Fig. 3.

Fig. 14 shows a Gantt chart with the profiling data shown in Fig. 13.

Fig. 15 shows a flowchart for identifying the candidate steps.

Detailed Description

One example of a robot system that has two or more robots interacting with each other is shown in Fig. 1.

In this system, as described in more detail in US Patent Application Publication No. 2007/0163107 (in which the present Fig. 1 appears as Fig. 2), two robots 100 and 200 cooperate with each other and with a set of stationary tools 300 to install a piston in the associated cylinder bore of an engine block. The cooperation of the robots is, as is well known to those in this art, under the control of associated programs that are executed either in one or more robot controllers or in one or more computing devices such as a PC, neither of which is shown in Fig. 1.

The arm of robot 100 picks up an engine block from a pallet 400 and orients the engine block so that robot 200 can perform the piston insertion. The arm of robot 200 has a gripper 201 that robot 200 uses to pick up a piston subassembly. An example of a robot system that has two or more robots working in coordination on separate workpieces in the same work cell is a system known as the FlexArc 250R MultiMove Cell available from ABB. In this system, two robots each have one arm that holds an arc welding tool. Each of the robots has an associated fixture for holding a workpiece to be welded by that robot.

The second robot performs an arc welding operation on its workpiece while a workpiece previously welded by the first robot is unloaded from the first robot's fixture and a new workpiece to be arc welded by the first robot is loaded onto that fixture. After the second robot has completed arc welding its workpiece, the system indexes and the first robot begins to arc weld its associated workpiece. At the same time the workpiece that was welded by the second robot is unloaded from the fixture for that robot and a new workpiece to be welded is loaded onto that fixture. The system indexes to begin a new cycle of arc welding by both robots after the first robot has finished arc welding its workpiece.

There presently exist several different ways to coordinate objects in systems such as the systems described above.

The preferred approach to coordinate objects is to use a semaphore or flag implemented by shared signals or data. Some systems provide built-in functionality which automates and facilitates the interaction of two or more robot arms. An example of this built-in functionality is the ABB MultiMove® software, which provides an ABB RAPID® instruction to coordinate two or more robot arms during their operation.

To understand the whole operation of a cell in which two or more robot arms interact, programmers need to initially understand the sequence of each arm's program and identify when an arm interacts with the other arm(s). Text and graphic editors now exist to allow programmers to quickly visualize the sequence or program of a single robot arm. Even though programmers could put two of these editors side by side, there is no connection between the editors to indicate where the robot arms interact with each other.

Fig. 2 shows a screen from Araxis Merge, a standard text comparison tool available from Araxis Ltd. As shown in Fig. 2, the standard comparison tool compares the text of two programs identified as "a1" and "a2". This tool provides a connection between both editors, with the goal of identifying differences between two or more texts. However, this tool does not have the capability of parsing programs and identifying when these programs interact with each other. As can be appreciated from Fig. 2, this standard text comparison tool does not provide its user with a sense of flow and state in the operation of two programs.

There is described below a multi-viewer editor, that is, a utility, which serves as a visual aid in which programs or procedures are displayed side by side, similar to text comparison tools but with functionality and features that are not in standard text comparison tools. In contrast to the text comparison tool described above, the multi-viewer utility promotes a programmer's understanding of the sequence of each robot arm and of where the robot arms interact with or depend on each other before, during and after their performance, by displaying each robot arm's program and graphically showing where the robot arms interact with each other. The multi-viewer utility also promotes understanding by a programmer or a sequence creator of a robot and an object such as a mobile platform, communication object or other object, such as a human, that will interact with the robot before, during or after the performance by the robot and the object. The showing of the interaction can be, but is not limited to, graphics and/or text such as, for example and without limitation, lines, color or font matching, icons, symbols and the like. Also, different types of communication points could be displayed by the same utility. One example of the user interface for this multi-viewer utility for two arms is shown in Fig. 3. The two arms are identified as the left and right arms.

As shown in Fig. 3, the multi-viewer utility 30 has three columns 32, 34 and 36. Two of the three columns 32 for the left arm and 36 for the right arm are of a robot program viewer. The two columns 32 and 36 of the program viewer are separated from each other by one column 34 of a coordination viewer.

The multi-viewer utility 30 parses each of the programs for the two robots and identifies in the coordination viewer column 34 the different coordination points between the two robot arms in this example. Each coordination point is represented inside the robot program in different ways: some of them are represented by a single instruction, while others are represented in a data structure. There are different types of coordination points, such as WaitForArm, or dependencies of robot arm tasks, which can be a single instruction or a group of instructions. As is described in more detail below, column 34 has an icon that is unique for each type of coordination point.

Depending on which part of the overall program is being shown, the multi-viewer tool 30 displays the coordination points and the robot programs. The viewer makes distinctions between the different types of coordination points by using different graphical representations and/or text. The viewer also adjusts the robot program lines to be aligned to the coordination points.

Each of the robot program viewers 32 and 36 show their respective programs in the order the program is executed. A programmer can scroll down to understand the overall sequence of each robot arm. The coordination viewer 34 in the center column shows where the left and right arms interact with each other.

Fig. 3 shows a system in which the right arm, whose robot program is shown in program viewer column 36, is picking a PCB and placing it in a position where the left arm, whose robot program is shown in program viewer column 32, can clean a camera lens on the PCB. The left arm also needs to pick a frame before it starts cleaning the lenses.

There are two types of coordination points presented in this example. The first coordination point is a condition that is represented by a "stop light" 38. This icon indicates that the right arm can place the PCB only after the left arm has picked the frame. The second coordination point is a synchronization point represented by a "plug" 40. This icon indicates that both robots must be executing that specific instruction at the same time.

This view of all of the robot programs and their interactions allows programmers to understand the overall sequence of the entire system. Users could either scroll programs independently or at the same time depending on the coordination points. If the programs are not connected by coordination points, then a user might want to scroll only one of the programs. If the programs are connected by coordination points and the program viewers are already aligned, a user could scroll all of the viewers at the same time to keep the alignment, although a user is able to scroll only one of the programs. If a user scrolls within coordination points within a coordination viewer, then the programs which are linked to that coordination viewer should also be scrolled.
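The scrolling rules just described can be modeled compactly. The following Python sketch is illustrative only; the function and its parameters (viewers, linked, aligned) are hypothetical names added for the sketch, and a real implementation would drive an actual user interface rather than a dictionary of scroll positions.

    def scroll(viewers: dict[str, int], source: str, delta: int,
               linked: set[str], aligned: bool) -> dict[str, int]:
        # Viewers joined by coordination points and already aligned scroll
        # together to keep the alignment; otherwise only the source viewer
        # moves. `viewers` maps a viewer name to its top visible line.
        targets = (linked | {source}) if aligned else {source}
        return {name: top + delta if name in targets else top
                for name, top in viewers.items()}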

The utility 30 provides automatic padding to align robot programs based on their coordination points. One example of the padding is the empty white lines shown in Fig. 3 added in the right column 36 of the robot program viewer to align the stop light. Another example is the marker 42 shown in Fig. 3 added in the right column 36 of the robot program viewer so that both arms have their WaitForArm instructions aligned. The marker 42 can also represent adjusting the padding of coordinated instructions by transforming and compressing the display of non-coordinated instructions such that the coordinated instructions are aligned.

Thus the multi-viewer tool shown in Fig. 3 displays each robot program within a robot program viewer that has a viewer column 32 for the left robot arm and a viewer column 36 for the right robot arm, and a column 34 that displays the coordination points by an icon that is unique for each type of coordination point.

It should be appreciated that the user interface of Fig. 3 can be modified to support systems that use more than two robot arms. One way to support more than two arms is to only show two robots at a time, with the user selecting which two robots to view.

Another way to support more than two arms is to show additional robot program viewer and coordination viewer columns. Fig. 4 shows a program in which three robot arms interact in a system.

The user can arrange the columns based on the dependencies of the arms in adjacent columns. In this example, dependencies of arms that are not in adjacent columns can be highlighted graphically with a special icon, color or text. Also, the lines for coordination points between two non-adjacent arms could be shown selectively by several means, such as showing them only when the user moves the mouse over a coordination icon, when the user presses a button, or by other similar means.

In another example, a program for a robot arm can interact with two other objects that are not robot arms. There can be coordination points between the robot arm and the two other objects as well as coordination points between the two other objects.

In addition, the user interface can also be used to graphically show when possible deadlock scenarios exist (i.e. when both arms are stopped because they are waiting for each other). Fig. 5 shows an example of showing possible deadlocks by crossed lines 50 and 52.

It should be appreciated that it is also possible to indicate a potential deadlock by adding textual annotations to the coordination points causing the deadlock such as 'Possible Deadlock'. This aspect of the viewer is very useful in the sequence planning phase, before the sequence is executed.

The viewer can also display runtime information, such as a pointer to the current or next instruction to be executed, a pointer to the motion instruction currently being executed by the robot, the running status of the robot, etc. Some robot systems, such as the ABB IRB5®, provide a mechanism to track the current instructions. In an ABB system, this is traceable via ABB's PC SDK® software using event handlers. The viewer can use a PC SDK® event handler to update the location of the pointer icons when the robot is running.

Fig. 6 shows two pointers in each program viewer. One is a "robot" 60 to represent the motion instruction currently being executed by the robot, and the other is an "arrow" 62 to show the instruction that will be executed next. The pointers show the sequence while the objects are performing their operations (i.e. at runtime), updating the viewer with information about which step each object has completed, is waiting to complete, and/or is currently performing.

Referring now to Fig. 7, there is shown a flowchart 700 for the procedure to identify the coordination points between two or more robot programs.

At 702, either the whole program or only selected portions (e.g. the current execution, a predefined routine or function, or just specific lines of the programs) are loaded into the multi-viewer utility. The next two steps 704 and the optional step 706 are to discover the coordination points.

At 704 the loaded programs are parsed to identify the items to be shown in the viewer: (1) instructions and/or functions and (2) coordination points. These items could be, but are not limited to:

Logical Flow - These instructions are usually combined with expressions, which control the program execution.

Synchronization - These instructions contain or represent dependencies or coordination points between different resources (i.e. robot programs).

At 706 is the optional step for obtaining the metadata. Some of the coordination points could be defined inside of the programs (e.g. data structures) or outside of the program (files saved in the robot memory or other software devices).

The next two steps 708 and 710 are for obtaining the information of the coordination points. At 708, the question is asked if more points are needed. If the answer is yes, the flow proceeds to 710 where all dependencies are searched and linked for the needed information. When the searching and linking of all dependencies are completed, the flow returns to 708 to ask again if more points are needed.

If the answer to the inquiry in 708 is no, then the flow 700 to identify the coordination points is ended.
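As a rough illustration of the parsing at 704, the Python sketch below classifies each loaded program line as a synchronization instruction, a logical flow instruction, or an ordinary instruction. Only the instruction names WaitForArm and WaitForDevice come from the text above; the data layout, the flow keywords, and every other name are assumptions made for the sketch, not part of the disclosure.

    from dataclasses import dataclass

    # WaitForArm and WaitForDevice appear in the text; a real robot
    # language would define its own synchronization and flow keywords.
    SYNC_INSTRUCTIONS = {"WaitForArm", "WaitForDevice"}
    LOGICAL_FLOW = {"IF", "WHILE", "FOR", "GOTO"}

    @dataclass
    class ParsedItem:
        program: str   # which loaded program the line came from
        line_no: int   # unique sequence ID inside that program (see Fig. 8)
        text: str
        kind: str      # "sync", "flow", or "plain"

    def parse_program(name: str, lines: list[str]) -> list[ParsedItem]:
        """Step 704: classify each line of a loaded program."""
        items = []
        for i, raw in enumerate(lines, start=1):
            parts = raw.split()
            word = parts[0] if parts else ""
            if word in SYNC_INSTRUCTIONS:
                kind = "sync"
            elif word.upper() in LOGICAL_FLOW:
                kind = "flow"
            else:
                kind = "plain"
            items.append(ParsedItem(name, i, raw.rstrip(), kind))
        return items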

Referring now to Fig. 8, there is shown a flowchart 800 for validating the content of the programs that have been loaded at 702 of flowchart 700.

At 802, the context of the viewer is defined. This is either the whole program or only selected portions (e.g. the current execution, a predefined routine or function, or just specific lines of the programs). At 804, the coordination points' information is obtained.

The next five steps 806, 808, 810, 812 and 814 are to validate the content of the program(s).

At 806 it is asked if there are more coordination points in the context. If the answer is no, the flow ends. If the answer is yes, the flow proceeds to inquiry 808 where it is asked whether each of the coordination points is complete. A coordination point is considered complete only if all its dependencies are properly configured. For example, there could be scenarios in which the information of the dependency has not been configured or its status is not known. If the answer is no, then the point is flagged as incomplete at 810.

If the answer to the query of 808 is yes, that is, the point is complete, the flow proceeds to 812 to determine if there is a dead-lock. In order for a coordination point not to generate a dead-lock, its dependencies must be either all before or all after the dependencies of other coordination points. One embodiment to accomplish this check is:

(1) For each program, enumerate all program lines inside the context. Each program line will have a unique sequence ID inside its program.

(2) Get the first coordination point (baseline).

(3) Get the second coordination point (test).

(4) Get the sequence IDs for all dependencies from the baseline and test points.

(5) Check that the sequence IDs of the baseline point are either all before or all after the sequence IDs of the test point.

(6) If they are not all before or all after, this coordination point could generate a dead-lock.

If the determination in 812 is that the coordination point can generate a dead-lock, then that point is flagged as a dead-lock at 814. If the determination in 812 is that the coordination point cannot generate a dead-lock, then the flow 800 proceeds to 806 described above.
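The check at 812 lends itself to a short sketch. Assuming each coordination point's dependencies reduce to sets of sequence IDs per program (step (4) above), the Python below reads steps (5) and (6) as requiring a consistent all-before or all-after ordering in every program that two points share, so the crossed ordering of Fig. 5 is flagged; the function name and data layout are illustrative, not taken from the disclosure.

    from itertools import combinations

    def may_deadlock(points: dict[str, dict[str, set[int]]]) -> list[tuple[str, str]]:
        # points maps a coordination point ID to {program name: sequence IDs
        # of that point's dependencies inside the program}.
        suspects = []
        for (a, deps_a), (b, deps_b) in combinations(points.items(), 2):
            relations = set()
            for prog in deps_a.keys() & deps_b.keys():
                if max(deps_a[prog]) < min(deps_b[prog]):
                    relations.add("before")   # a entirely before b here
                elif min(deps_a[prog]) > max(deps_b[prog]):
                    relations.add("after")    # a entirely after b here
                else:
                    relations.add("mixed")    # interleaved sequence IDs
            if len(relations) > 1 or "mixed" in relations:
                suspects.append((a, b))       # flagged as a dead-lock at 814
        return suspects

    # The crossed lines 50 and 52 of Fig. 5, reduced to line numbers:
    print(may_deadlock({"p1": {"left": {5}, "right": {10}},
                        "p2": {"left": {8}, "right": {3}}}))  # [('p1', 'p2')]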

Referring now to Fig. 9, there is shown a flowchart 900 for the steps that are performed to draw the viewer.

At 902 the context of the viewer is defined. Either the whole programs of interest or only selected portions, for example, the current execution or a predefined routine or function or specific lines of the programs, can be shown in the viewer. For ease of description, the defined context of the viewer is referred to below as the "programs" which can as described above be the programs of interest in their entirety or selected portions of those programs.

At 904, the text of the programs is obtained along with the coordination point information.

At 906 the content of the programs is validated using the technique described for flowchart 800 shown in Fig. 8. As described above for Fig. 8, validation of the programs includes validating that all of the coordination points are complete and that there is not a possible dead-lock. Step 906 is an optional step because the viewer could be used to only show programs independent of their validity.

The next three steps 908, 910 and 912 are the padding to align the coordination points.

At 908 it is asked if there are more coordination points in the context. If the answer is no, the flow proceeds to 914 which is described below. If the answer is yes, the flow proceeds to 910 where coordination points for the programs that are dependent on each other and the dependency are identified.

At 912 padding is added to the display in the viewer to align the points that are dependent and their dependencies. Empty lines can be inserted either at the dependent or dependency programs, based on the need to align the coordination points.

Each coordination point can have a different strategy to add padding. For example:

Task or Program Line dependency will insert padding in the dependent robot program.

Synchronization will insert padding in the program which has the lesser number of lines (defined in the context). In this manner, the program with the greater number of lines is kept the same, that is, no padding is inserted in that program.
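A minimal Python sketch of the padding loop of steps 908-912, under the assumption that each coordination point is known by the display line it occupies in each program viewer. The simple rule used here pads every other program up to the point's lowest row, reproducing the empty white lines of Fig. 3; the names and data layout are invented for the sketch.

    def pad_to_align(progs: dict[str, list[str]],
                     points: list[dict[str, int]]) -> dict[str, list[str]]:
        # progs maps viewer name -> program lines; points lists, in program
        # order, {viewer name: 0-based line index of the coordination point}.
        out = {name: list(lines) for name, lines in progs.items()}
        offset = {name: 0 for name in progs}       # padding inserted so far
        for point in points:
            rows = {p: idx + offset[p] for p, idx in point.items()}
            target = max(rows.values())            # align to the lowest row
            for p, row in rows.items():
                pad = target - row
                out[p][row:row] = [""] * pad       # empty white lines (Fig. 3)
                offset[p] += pad
        return out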

While validation and padding are shown in Fig. 9 as occurring one after another in time, it should be appreciated that validation and padding can occur at the same time.

At 914 and 916, respectively, the program viewers and the coordination viewers are drawn.

Referring now to Fig. 10, there is shown an embodiment for the data structure of the coordination points. The embodiment shows one example of how the information required to represent a coordination point can be stored.

It should be appreciated that the program for the multi-viewer described herein may be resident in a robot controller or in a separate computing device such as a PC. It should further be appreciated that the program to draw the viewer may take the form of a computer program product on a tangible computer-usable or computer-readable medium having computer-usable program code embodied in the medium. The tangible computer-usable or computer-readable medium may be any tangible medium such as, by way of example but without limitation, a portable computer diskette, a flash drive, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, or a magnetic storage device.
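Fig. 10 itself is not reproduced here, so the following Python sketch is only one plausible shape for a coordination point record, assembled from details already given in the text: the stop light and plug types of Fig. 3, the per-program sequence IDs of Fig. 8, and the incomplete and dead-lock flags set at 810 and 814. All names are hypothetical.

    from dataclasses import dataclass, field
    from enum import Enum

    class PointType(Enum):
        CONDITION = "stop light"    # Fig. 3: one arm proceeds only after another
        SYNCHRONIZATION = "plug"    # Fig. 3: both arms execute together

    @dataclass
    class Dependency:
        program: str                # robot program (resource) it lives in
        sequence_ids: set[int]      # line numbers inside that program (Fig. 8)

    @dataclass
    class CoordinationPoint:
        point_id: str
        point_type: PointType
        dependencies: list[Dependency] = field(default_factory=list)
        incomplete: bool = False    # flagged at 810 if a dependency is unconfigured
        deadlock_risk: bool = False # flagged at 814 by the check of Fig. 8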

It should also be appreciated that the above embodiment could be easily adapted to support procedures for objects other than robot arms. One or more of the procedures could be a sequence of steps to be executed by a mobile platform, communication module, or other object. The sequences can also be for a robot and a human who will interact with the robot. During the planning phase, the viewer can be used before the robot and the human each perform their sequence of steps to determine if when the sequences are performed there will be a conflict. Note that if the sequence of steps is not a program, it needs to be written in a consistent format so that the viewer's parser can identify the coordination points. For example, text descriptions such as "WaitForDevice StepNumber = 5" would describe a coordination point that is synchronized with step number 5 to be performed by another object.
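For sequences that are not programs, the consistent text format just described is easy to parse. The sketch below recognizes the example line "WaitForDevice StepNumber = 5" from the text; the regular expression, function name, and sample task list are assumptions made for the sketch.

    import re

    # Matches the example format from the text, e.g. "WaitForDevice StepNumber = 5".
    STEP_SYNC = re.compile(r"^\s*WaitForDevice\s+StepNumber\s*=\s*(\d+)\s*$")

    def find_text_coordination_points(steps: list[str]) -> list[tuple[int, int]]:
        # Returns (own step number, other object's step number) pairs.
        points = []
        for i, line in enumerate(steps, start=1):
            m = STEP_SYNC.match(line)
            if m:
                points.append((i, int(m.group(1))))
        return points

    # A human operator's task list with one coordination point:
    tasks = ["Fetch tray", "WaitForDevice StepNumber = 5", "Load tray onto fixture"]
    print(find_text_coordination_points(tasks))  # [(2, 5)]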

Fig. 11 shows an example of a system that has a robot arm that interacts with a robot that is a mobile platform. In this example, the robot arm is mounted on the mobile platform, and the platform with the robot on it moves between two tables identified as the InTable and the OutTable. When the robot with the arm is docked at the InTable, the robot arm picks a part from the InTable that the robot arm will place on the OutTable when the mobile platform is docked at that table.

Fig. 11 shows three columns 1102, 1104 and 1106. Column 1102 is a viewer for the program for the robot arm. Column 1106 is a viewer for the operation of the program that controls the motion of the mobile platform. Column 1104 shows the coordination points between the two viewers.

Column 1104 shows three "stop lights" 38. The first stop light 38 is after the mobile platform has moved the robot arm to the InTable and has docked with that table. Thereafter the robot arm can perform the steps to pick a part from the InTable. The second stop light 38 is after the robot has picked the part from the InTable. The mobile platform remains stationary until the robot has completed the picking. Only then does the mobile platform move the robot holding the picked part to the OutTable. The third stop light 38 is after the mobile platform has docked with the OutTable.

After that occurs the robot arm can perform the steps to place the picked part on the OutTable.

Fig. 12 shows an example of a system that has a robot arm that interacts with a communication module. In this example, the robot arm picks a part from a supply of parts such as a bin. The part has a bar code on it and the communication module communicates with a bar code reader that reads the bar code on the part and writes that information to a computing device.

Fig. 12 shows three columns 1202, 1204 and 1206. Column 1202 is a viewer for the robot program. Column 1206 is a viewer for the operation of the communication module that communicates with the bar code reader. Column 1204 shows the coordination points between the two viewers.

Column 1204 shows two "stop lights" 38. The first stop light 38 is after the robot arm has picked the part with the bar code on it and moved to the location of the bar code reader. By the end of that operation the communication module should be fully initialized. After that has happened the bar code reader can read the bar code on the picked part, process the data and write that data to the computing device. The second stop light 38, which is after the communication module has performed the reading, processing and writing steps, ensures that the robot arm will not move prematurely to the position where the robot will place the picked part for further processing. Upon completion of the data processing resulting from the bar code reading, the robot arm will move to the position to place the picked part for further processing.

Referring now to Fig. 13, there is shown one example of a multi-viewer editor that includes a performance profiling tool for the same two robot arms whose programs are shown in the editor of Fig. 3. This editor shows the different timing and instruction statistics for each robot program and the statistics related to coordination points. An instruction can represent one or more operations to be performed. The term "time" as used below includes not only absolute time but also averages, maximums, minimums and other timing information. That is, "time" means time related information.

Fig. 13 shows the time used to execute a single instruction, a group of instructions, as well as the idle time for coordination points, e.g. waiting for a condition to be set by the other system or waiting to start a synchronous operation. The execution time information is displayed in the "left arm" and "right arm" columns. Depending on the program's profiling capabilities, the viewer could display the execution time for each instruction, a group of instructions, or for the whole operation. Wait times are displayed in the center column which also shows the coordination points between the robots. Specific colors or images, not shown in Fig. 13, could be used to highlight which operation, group of operations or waiting times are the longest, shortest or similar time related attributes.

Visually detecting the idle time of a specific robot in combination with the executing time of other operations allows users to easily understand and change the load of work or the final sequence in order to optimize the cycle time of the operation. Users can save configuration files for a profile, which enables the user interface to highlight and record when a step, operation or coordination point takes more time than specified. For example, in Fig. 13, the difference in timing between the "left arm" and the "right arm" is two seconds. The profiling viewer shows that the "right arm" waits ten seconds for the "left arm" to finish the group of operations named "PickFrame". This coordination point is indicated by stop light 38. The user can move the last "MoveL" instruction from "PickFrame" to "CleanCameraLens" to reduce the waiting time. The two coordination points associated with the wait times of 6 seconds and 1 second are each indicated by a plug 40.

Users can, as shown in Fig. 14, also generate a Gantt chart with the profiling data. The chart shows the sequence of steps of both arms, the categories of operations performed in those steps, and the coordination points, along with the whole cycle time. The Gantt chart of Fig. 14 shows an embodiment with three operation categories, namely, motion commands, hand commands and waiting time. The total time for executing all of the commands in the motion and hand commands categories and the total waiting time category are shown in the figure. In another embodiment, the Gantt chart can show the total time for each operation category further broken down to show operation category timing information for each arm individually as well, e.g. showing the total time one arm has spent executing motion commands. The user can also use the profiling data to calculate the percentage of time each operation category took inside each step or during the whole cycle. The idle time (i.e. wait times) can also be calculated and displayed.
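The per-category and per-arm totals behind a chart in the style of Fig. 14 amount to a simple aggregation. In the Python sketch below the profile records are hypothetical sample numbers (the ten second wait echoes the Fig. 13 example); only the three category names come from the text.

    from collections import defaultdict

    # (arm, category, seconds) profile records; sample data only.
    records = [
        ("left arm", "motion", 14.0), ("left arm", "hand", 4.0),
        ("left arm", "waiting", 1.0),
        ("right arm", "motion", 9.0), ("right arm", "hand", 3.0),
        ("right arm", "waiting", 10.0),
    ]

    totals = defaultdict(float)               # per (arm, category), as in Fig. 14
    for arm, category, seconds in records:
        totals[(arm, category)] += seconds

    cycle = defaultdict(float)                # whole cycle time per arm
    for arm, _, seconds in records:
        cycle[arm] += seconds

    for (arm, category), seconds in sorted(totals.items()):
        share = 100.0 * seconds / cycle[arm]  # percentage inside the cycle
        print(f"{arm:9s} {category:8s} {seconds:5.1f} s  ({share:4.1f}%)")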

Also included is a method to generate a list of candidate undone operations. This list can be used to identify the operations that can be performed by an idle robot. The final list can be organized by, but is not limited to, the amount of time required to execute the operation (from the performance profile), the order in which the steps must be executed (i.e. order dependencies between operations), or the robots or resources required by the operations. The list can be filtered depending on the input data provided by users. For example, the list can include only independent steps if there is a data structure to store the precedents and dependencies among steps. This helps the user to pick the best feasible step in order to optimize the work load and the cycle time.

Referring now to Fig. 15, there is shown a flowchart 1500 for identifying the candidate steps. At action 1502, the program(s) are loaded. Either the whole program or only selected portions, e.g. the current execution, a predefined routine or function, or just specific lines of the programs, are loaded. At action 1504, the loaded program(s) are parsed. The goal of the parsing is to identify steps, that is, groups of one or more instructions and/or functions, by looking for: (1) single and grouped instructions and/or functions, and (2) coordination points. These steps could be, but are not limited to:

Logical Flow - these instructions are usually combined with expressions, which control the program execution; and

Synchronization - these instructions contain or represent dependencies or coordination points.

The flow then proceeds to the optional action 1506 of obtaining metadata and then to action 1508 where the performance profile is obtained. At action 1508, at any point of execution (either while the program is running or not), each step, that is, each single or grouped set of instructions and/or functions, has the following information:

Mark:

+ Scheduled: it has not been started or prepared to be started; however, it is planned to be executed in the cycle.

+ Executing: it is currently in progress.

+ Finished: it has been done.

Cycle time: either from the last cycle or an average of the historical cycle times.

The flow then proceeds to query 1510 where it is asked if there are more candidate steps that need to be processed. If the answer is yes, then the flow proceeds to query 1512 where the next candidate step is obtained and it is asked if this candidate step is already scheduled. If the answer is yes, the flow returns to query 1510 to determine if there are more candidate steps to be processed. When at query 1510 there are no more candidate steps to be processed, the answer at that query is no and the flow proceeds directly to action 1516, described below.

If the answer at query 1512 is no, that is the obtained next step to be processed has not yet been scheduled, then the flow proceeds to action 1514 where the identified not yet scheduled step is added to the list of unscheduled steps. It should be appreciated that the result of queries 1510 and 1512 with the associated functions described above and action 1514 is a list of the unscheduled steps.

Action 1516 sorts the list of candidate steps and also has an input from action 1518, query 1520 and action 1522. Action 1518 is triggered when one of the candidates changes its status. If the program is running, the changes in the step status will modify the list. Query 1520 asks if the step status is "Set to Scheduled?" If the answer is no, no action is taken. If the answer is yes, then at action 1522 the newly scheduled step is removed from the list of steps to be sorted by action 1516.

Action 1516 sorts the list of steps based on the user preference, either by:

o Cycle time (last or historical)

o Order of execution

o Required resources.

Action 1524 filters the list of steps based on the user preference. One example is to filter the list to include only the independent steps. Action 1526 shows the filtered list of steps.
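Putting flow 1500 together, the sketch below builds, sorts, and filters the candidate list in Python. The Step record and the extra "unscheduled" mark (for steps not yet planned into the cycle) are assumptions added for the sketch; the sort keys follow action 1516 and the independent-steps filter follows action 1524.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        name: str
        mark: str                    # "unscheduled", "scheduled", "executing", "finished"
        cycle_time: float            # last or historical-average time (action 1508)
        order: int                   # position in the ordered sequence
        resources: set[str] = field(default_factory=set)
        depends_on: set[str] = field(default_factory=set)

    SORT_KEYS = {
        "cycle_time": lambda s: s.cycle_time,
        "order": lambda s: s.order,
        "resources": lambda s: sorted(s.resources),
    }

    def candidate_steps(steps: list[Step], sort_key: str = "cycle_time",
                        independent_only: bool = False) -> list[Step]:
        # 1510-1514: collect the steps that are not yet scheduled.
        unscheduled = [s for s in steps if s.mark == "unscheduled"]
        # 1516: sort by the user preference.
        unscheduled.sort(key=SORT_KEYS[sort_key])
        # 1524: optionally keep only the independent steps.
        if independent_only:
            unscheduled = [s for s in unscheduled if not s.depends_on]
        return unscheduled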

In addition to the above, timing information can be based on multiple cycles. The user can choose to view the average, minimum, or maximum times for operations and idle time, as well as standard deviations or other timing and statistical information. Viewing this information helps the user to decide how to change the sequence in order to optimize it.
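Multi-cycle timing of the kind just described can be summarized with the Python standard library; a minimal sketch, with the sample wait times invented for illustration:

    import statistics

    def timing_summary(times: list[float]) -> dict[str, float]:
        # Average, minimum, maximum, and standard deviation over several cycles.
        return {
            "avg": statistics.fmean(times),
            "min": min(times),
            "max": max(times),
            "stdev": statistics.stdev(times) if len(times) > 1 else 0.0,
        }

    print(timing_summary([9.8, 10.3, 10.0, 9.9]))  # wait time across four cycles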

It is to be understood that the description of the foregoing exemplary embodiment(s) is (are) intended to be only illustrative, rather than exhaustive, of the present invention. Those of ordinary skill will be able to make certain additions, deletions, and/or modifications to the embodiment(s) of the disclosed subject matter without departing from the spirit of the invention or its scope, as defined by the appended claims.