Title:
AN OPTICAL POSITION DETECTOR
Document Type and Number:
WIPO Patent Application WO/2001/038823
Kind Code:
A1
Abstract:
An optical position detector comprises an image processing system (20) having an optical axis (16) directed towards an object to be located, and a target (18) positioned with a known relationship with respect to the object. The target has a pattern of lines defining a scale and the image processing system (20) is arranged to receive an image of the target and determine therefrom the location of the object relative to the optical axis. The target has a scale whose critical dimensions are encoded into the target image, so that the image processing system can compute the scaling factor between the measured image size and the known, real-world image size and use this scaling factor to determine the misalignment of the object from the optical axis. The target (18) preferably comprises a series of concentric shapes.

Inventors:
MORCOM JOHN (GB)
APPERLEY RALPH (GB)
Application Number:
PCT/GB2000/004496
Publication Date:
May 31, 2001
Filing Date:
November 24, 2000
Assignee:
INSTRO PREC LTD (GB)
MORCOM JOHN (GB)
APPERLEY RALPH (GB)
International Classes:
G01B11/27; (IPC1-7): G01B11/27
Foreign References:
US 5943783 A (1999-08-31)
US 5974365 A (1999-10-26)
US 4272191 A (1981-06-09)
US 4155648 A (1979-05-22)
Attorney, Agent or Firm:
Elkington, And Fife (Prospect House 8 Pembroke Road Sevenoaks Kent TN13 1XR, GB)
Claims:
CLAIMS
1. An optical position detector, comprising: an image processing system having an optical axis directed towards an object to be located; a target positioned with a known relationship with respect to the object and arranged at least partially in line with said optical axis, wherein the target has a pattern of lines defining a scale, the physical dimensions of the scale being encoded into the pattern, and wherein the image processing system is arranged to receive an image of the target and determine therefrom the location of the object relative to said optical axis.
2. A detector as claimed in claim 1, wherein the target has a plurality of concentric shapes.
3. A detector as claimed in claim 2, wherein any adjacent pair of shapes encodes the physical dimensions of the scale.
4. An optical position detector as claimed in claim 3, wherein the ratio of the radii of each adjacent pair of shapes is different.
5. A detector as claimed in claim 2, 3 or 4, wherein each shape defines a fixed point on the target.
6. A detector as claimed in claim 5, wherein the fixed point is the centre of the concentric shapes.
7. An optical position detector as claimed in claim 6, wherein the centre is obtained with centroiding software.
8. An optical position detector according to any one of claims 2 to 7, in which the shapes comprise circles and in which the image processing system is arranged to image at least two of the circles and determine the location of the object relative to said optical axis by comparing the radii of the at least two circles to obtain a scale, and by locating the centre of the concentric pattern to determine the position of the target.
9. An optical position detector according to claim 8, in which the area between every alternate pair of concentric circles is filled in.
10. An optical position detector according to any preceding claim, wherein the image processing system measures an offset between the optical axis and an identifiable point of the pattern by measuring the offset in first units, and converts the measurement in first units into real dimensions based on the physical dimensions of the scale.
11. A method of measuring the offset of an object from an optical axis, comprising: attaching a target to the object in a known positional relationship, the target having a pattern of lines defining a scale, the physical dimensions of the scale being encoded into the pattern; using image processing software to measure an offset between the optical axis and an identifiable point on the target in first units; establishing a relationship between the first units and physical dimensions based on the physical dimensions of the scale, which are obtained using the image processing software; and converting the offset in first units into a physical distance using the established relationship.
12. A method as claimed in claim 11, wherein the identifiable point is the centre of the pattern, and is obtained with centroiding software.
13. A method as claimed in claim 11 or 12, wherein the identifiable point is the centre of the pattern.
Description:
An Optical Position Detector

Field of the Invention

The present invention relates to an optical position detector, in particular for measuring the misalignment of an object or objects from a predetermined axis.

Background to the Invention

Optical position detectors are typically used to enable the displacement of an object to be accurately measured with respect to a reference datum line in space. The position of the object may then be adjusted by the measured displacement to bring it into alignment with the reference datum line. Typical applications for these systems include the alignment of, amongst others, wing spars, machine beds and propeller shaft bearings. In each of these examples, the accuracy of the alignment is an important factor.

Figure 1 shows an example of a conventional optical position detector. The system includes a target 2, target holder 4 and an alignment telescope 6. An operator 8 views the target 2 through the alignment telescope 6. The telescope defines an image line of sight 13 and an object line of sight 14. The target holder 4 holds the target 2 in a precise spatial relationship to the object (not shown) whose alignment is to be measured. The position of the image of the target 2 is adjusted using calibrated controls 10 on the alignment telescope 6 until its centre is aligned with a fixed reticule (not shown) within the alignment telescope eyepiece. The displacement of the target 2 is then read from the calibrated controls 10.

One possible principle of operation of the alignment telescope controls will now be described with reference to Figure 2, which shows an arrangement commonly used in such devices. A parallel block of dielectric 12, having a different refractive index from that of the cavity of the alignment telescope 6, is rotated by the controls 10. Rotation of the block 12 causes a displacement D in the object side line of sight 14 of the alignment telescope and hence in the position of the image of the target. The displacement D of the target, and hence of the object, is proportional to the angle θ of the parallel block relative to its original position, which is in turn set by the position of the control 10. Hence, the displacement of the line of sight 14, and of the imaged target, can be measured. The same principle can also be employed to measure displacement in two orthogonal axes.

Alignment telescopes have been in use for many years and have proven to be a reliable means of measuring and adjusting the alignment of many mechanical systems. However, they have a number of disadvantages.

Firstly, the measurement of displacement is dependent upon the operator's interpretation of the centre position of the target and estimation of the displacement from the instrument controls. This introduces the opportunity for error and variability between operators. Secondly, the measurement process is labour intensive and time consuming, as careful viewing and adjustment of the instrument controls are necessary to achieve an accurate result. To address these problems, video cameras have been used in place of the operator's eye, coupled to an image acquisition and computer-based processing system with centroiding software to automate the measurement.

This approach has provided some success as alignment telescopes usually have good linearity over their field of view and the CCDs used in the video camera have very accurate geometry.

However, the magnification of the alignment telescopes varies significantly with focus over the operating range of the instrument. Therefore, a correction factor is needed to take account of this variation if accuracy is to be maintained.

In practice, this means that either the operator has to manually enter the focus position into the software for each measurement, reducing the benefits of the system and providing a potential source of user error, or some form of digital encoder coupled to the focus control is required to enable the computer to make the necessary correction for magnification automatically. The latter substantially increases the cost of the instrument and makes retrofitting more difficult.

In addition, it has been shown that magnification varies between instruments, so the software needs to be individually calibrated to a particular instrument, which is a time consuming process.

Summary of the Invention

According to the present invention there is provided an optical position detector, comprising: an image processing system having an optical axis directed towards an object to be located; a target positioned with a known relationship with respect to the object and arranged at least partially in line with said optical axis, wherein the target has a pattern of lines defining a scale, the physical dimensions of the scale being encoded into the pattern, and wherein the image processing system is arranged to receive an image of the target and determine therefrom the location of the object relative to said optical axis.

The present invention provides a position detector using a target with a scale whose critical dimensions are encoded into the target image. The image processing system can thus compute the scaling factor between the measured image size (which may be expressed in pixels of the image sensor) and the known, real-world image size, and use this scaling factor to determine the actual offset of the object from the optical axis.

Any pattern which encodes a scale and in which a known part of the target, for example the centre, can be identified is suitable. The known part of the target, for example the centre of the target, is positioned in a known relationship relative to the object.

The pattern may be a plurality of concentric shapes, and any adjacent pair of shapes can then encode the physical dimensions of the scale. This can be achieved by ensuring that the ratio of the radii of each adjacent pair of shapes is different. Thus, by measuring the ratio of a visible pair of shapes, the shapes can be identified uniquely, and the real dimensions can then be known (as they are constant and can thus be stored in a memory).
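By way of illustration only (this is not part of the original disclosure), the uniqueness requirement on adjacent ratios can be expressed in a few lines of Python. The ring diameters below are hypothetical, chosen merely so that the adjacent ratios come out as 1.5, 1.4 and 1.3; only the 1.5 ratio appears in the worked example later in the description.

def adjacent_ratios(diameters):
    """Ratios of each adjacent pair of ring diameters, largest first."""
    d = sorted(diameters, reverse=True)
    return [d[i] / d[i + 1] for i in range(len(d) - 1)]

def ratios_are_unique(diameters, tolerance=0.01):
    """True if no two adjacent pairs share (nearly) the same ratio, so
    that any visible pair of rings identifies itself unambiguously."""
    r = adjacent_ratios(diameters)
    return all(abs(a - b) > tolerance
               for i, a in enumerate(r) for b in r[i + 1:])

# Hypothetical diameters in mm, giving adjacent ratios of about 1.5, 1.4, 1.3.
print(adjacent_ratios([30.0, 20.0, 14.2857, 10.989]))    # ~[1.5, 1.4, 1.3]
print(ratios_are_unique([30.0, 20.0, 14.2857, 10.989]))  # True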

This arrangement means that the physical dimensions of the scale are encoded all over the target, and at different sizes. Thus, if the target is on a far away object, the image processing system may only be able to resolve the outer shapes. This is nevertheless sufficient to determine the physical size of the scale and thereby calibrate the measurement of the offset from the optical axis.

Each shape also defines a fixed point on the target, for example the centre of the concentric shapes.

The shapes preferably comprise circles and the image processing system is arranged to image at least two of the circles and determine the location of the object relative to said optical axis by comparing the radii of the at least two circles to obtain a scale, and by locating the centre of the concentric pattern to determine the position of the target.

The image processing system preferably measures an offset between the optical axis and an identifiable point of the pattern by measuring the offset in first units, and converts the measurement in first units into real dimensions based on the physical dimensions of the scale.

The invention also provides a method of measuring the offset of an object from an optical axis, comprising: attaching a target to the object in a known positional relationship, the target having a pattern of lines defining a scale, the physical dimensions of the scale being encoded into the pattern; using image processing software to measure an offset between the optical axis and an identifiable point on the target in first units; establishing a relationship between the first units and physical dimensions based on the physical dimensions of the scale, which are obtained using the image processing software; and converting the offset in first units into a physical distance using the established relationship.

The applicant has recognised that it is possible to generate a target whose critical dimensions are encoded into the target image, thereby removing the need for complex decoding of received signals and removing the need for a digital encoder coupled to the focus control, which is required in conventional systems to enable the associated computer to make the necessary correction for magnification automatically. In this case, no knowledge of the instrument magnification is required because the system is self-calibrating. Furthermore, the system can then be used for measuring alignment over a wide range of distances from the image processing system.

Brief Description of the Drawings

An example of the present invention will now be described in detail with reference to the accompanying drawings, in which:

Figure 1 shows an example of a conventional optical position detector;

Figure 2 shows an example of a control system used in the optical position detector shown in Figure 1;

Figure 3 shows an optical position detector according to the present invention;

Figure 4 shows a scaled pattern used on a target used in an optical position detector according to the present invention; and

Figure 5 shows a table of values used in the optical position detector of the present invention.

Detailed Description

Figure 3 shows an example of an optical position detector for measuring the displacement of an object relative to an optical axis of the position detector according to the present invention. The system has a target 18 and an image processing system 20 having an optical axis 16. The target 18 is arranged at least partially in line with the optical axis of the position detector in that the target intercepts the optical axis 16 of the system. The image processing system is arranged to receive an image of the target 18 and from this determine the displacement of the centre of the target 18 from the optical axis 16 of the image processing system. The target may reflect ambient light, or it may be transmissive and a light source may then be provided behind the target.

The target 18 has a scaled pattern arranged on the surface which faces the image processing system and, as will be explained below, this is used to determine the displacement of the article being measured automatically, without the need for operator intervention.

Figure 4 shows an example of a scaled pattern used on the target used in the optical position detector according to the present invention. The pattern has a number of concentric rings 22 having predetermined diameters. The diameter of the circular edge of one ring (either a light to dark or dark to light transition) is encoded in its ratio to the diameter of the next smallest circular edge (see Figure 5). In this example, the image processing system measures the diameters of the two largest whole edges it can detect at any time and from this uses a preprogrammed algorithm to determine the position of the target relative to the optical axis of the image processing system and hence the position of the object.
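The patent does not prescribe how the edge diameters are measured. Purely as an illustrative sketch (assuming OpenCV 4, a reasonably clean image, and that the two largest edges lie wholly within the field of view), the circular boundaries of the filled rings could be extracted as contours and circumscribed; the function name is an assumption, not part of the disclosure.

import cv2

def two_largest_edge_diameters(gray_image):
    """Diameters, in pixels, of the two largest circular edges of the
    concentric-ring target (largest first). Illustrative only; the
    patent does not prescribe a detection method."""
    # Binarise so the dark/light ring boundaries become contours.
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Circumscribe each contour and keep the two largest diameters,
    # assuming both corresponding edges are fully visible.
    diameters = sorted((2.0 * cv2.minEnclosingCircle(c)[1] for c in contours),
                       reverse=True)
    if len(diameters) < 2:
        raise ValueError("fewer than two circular edges detected")
    return diameters[0], diameters[1]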

The ratio of the diameters of the two rings enables the system to detect which of the concentric circles of the pattern the system is viewing, and the absolute sizes of these circles are of course known. From this the system obtains the scale of the target with respect to the image acquisition system. Centroiding software enables the centre of the pattern to be obtained with high accuracy.
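The centroiding software itself is not detailed in the patent. A minimal stand-in, assuming a binarised image in which target pixels are non-zero, is simply the mean position of those pixels, which by the symmetry of the concentric pattern coincides with its centre:

import numpy as np

def pattern_centre(binary_image):
    """Centroid (x, y), in pixel coordinates, of the non-zero pixels of
    a binarised target image. A minimal stand-in for the centroiding
    software mentioned in the text."""
    ys, xs = np.nonzero(binary_image)
    if xs.size == 0:
        raise ValueError("no target pixels found")
    return float(xs.mean()), float(ys.mean())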

In use, the image of the target is obtained by the image processing system 20, and a measurement is made of the displacements between the edges of the pattern. For example, the edges of two consecutive circles in the pattern may be detected (e.g. black to white boundary and white to black boundary) with diameters of 600 pixels and 400 pixels respectively. The ratio of these diameters is 1.5 and so, from the table in Figure 5, the image processing system can deduce that the largest whole edge it can see must be 30 mm in diameter. Therefore, the scaling factor between real world units and pixels is 30 mm / 600 pixels, or 0.05 mm/pixel.
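The contents of the Figure 5 table are not reproduced here, so the sketch below uses a small lookup table in which only the 1.5 → 30 mm entry is taken from the worked example above; the other rows, and the function and variable names, are invented for illustration.

# Hypothetical extract of the Figure 5 table: the ratio of a ring edge's
# diameter to the next smaller edge, mapped to that edge's real diameter
# in mm. Only the 1.5 -> 30 mm row comes from the text.
RATIO_TO_DIAMETER_MM = {
    1.5: 30.0,
    1.4: 20.0,   # hypothetical
    1.3: 14.3,   # hypothetical
}

def scaling_factor_mm_per_pixel(d_large_px, d_small_px, tolerance=0.02):
    """Identify the larger visible edge from the measured diameter ratio
    and return the mm-per-pixel scaling factor."""
    ratio = d_large_px / d_small_px
    for table_ratio, real_diameter_mm in RATIO_TO_DIAMETER_MM.items():
        if abs(ratio - table_ratio) <= tolerance:
            return real_diameter_mm / d_large_px
    raise ValueError(f"measured ratio {ratio:.3f} not found in the table")

# The worked example: 600 px and 400 px edges give a ratio of 1.5,
# identifying the 30 mm edge, hence 30 mm / 600 px = 0.05 mm/pixel.
print(scaling_factor_mm_per_pixel(600, 400))  # 0.05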

If the displacement (X, Y) of the centre of the largest whole edge circle is measured to be (20, 15) pixels from the centre of the camera optical axis (or some other useful datum in the field of view), then in real world terms the displacement is (1.0, 0.75) mm. It is important to note that no knowledge of the system magnification was used in this measurement, because of the encoding of the target's real world dimensions in the target image.
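As a minimal sketch of this final conversion (the function and argument names are assumptions, not taken from the patent):

def real_world_offset(centre_px, datum_px, mm_per_pixel):
    """Convert the pixel offset of the pattern centre from the optical
    axis (or other datum in the field of view) into millimetres."""
    dx_mm = (centre_px[0] - datum_px[0]) * mm_per_pixel
    dy_mm = (centre_px[1] - datum_px[1]) * mm_per_pixel
    return dx_mm, dy_mm

# Reproducing the worked example: a (20, 15) pixel offset at
# 0.05 mm/pixel corresponds to a (1.0, 0.75) mm displacement.
print(real_world_offset((20, 15), (0, 0), 0.05))  # (1.0, 0.75)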

The distance of the target from the imaging system does not matter. For a distant target, the resolution of the system does not even need to be sufficient to be able to detect the smallest rings, since the largest visible rings can be used for the image processing. Any adjacent pair of rings encodes the real dimensions of the scale.

It may also be noted that if the pixels of the image acquisition system and processing system are not square, then the system would need to calculate two scaling factors, one related to the camera's horizontal scan axis and the other to the vertical scan axis.
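Under that assumption, the conversion simply carries separate horizontal and vertical factors, each obtained from edge diameters measured along the corresponding scan axis; a sketch (names again hypothetical):

def real_world_offset_xy(offset_px, mm_per_pixel_x, mm_per_pixel_y):
    """Offset conversion with separate horizontal and vertical scaling
    factors, for cameras whose pixels are not square."""
    return offset_px[0] * mm_per_pixel_x, offset_px[1] * mm_per_pixel_y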

Many alternative forms of pattern for the target could be employed, including, for example, square, rectangular or linear targets, provided that care is taken to encode their real world dimensions in such a way that they can be determined by the image processing system and used to calibrate the displacement measurement.

This system can be used in the alignment of a number of objects where it is required that they are accurately lined up. For example, the system could be used to arrange a series of elements such as wing spars, machine beds and propeller shaft bearings (amongst others) in a precise lined-up formation. Once each of the elements has been arranged in its correct position, it can be fixed there by any suitable means.

The optical axis may for example be defined by aligning the optical system with the first and the last objects in a line.

This principle can also be applied to other optical metrology systems including, for example, autocollimators. In these instruments, the target image is projected onto a reflective surface through a telescope and the reflected image is collected by the telescope and viewed through a beam splitter. The displacement of the reflected image, which is dependent upon the rotation of the reflective surface relative to the instrument line of sight, can then be measured using an appropriate target and processing system.