Modular Visual Servo Tracking for Industrial Robots
Robert Fuelep Biro
robert.biro@gtri.gatech.edu
Georgia Institute of Technology
Atlanta, GA, 30332
Gary McMurray
Georgia Tech Research Institute
Atlanta, GA, 30332
Harvey Lipkin
Mechanical Engineering
Georgia Institute of Technology
Atlanta, GA, 30332
Abstract
Visually servoing a robot to track a moving workpiece has been demonstrated in the literature using specially customized equipment. In this work, a simple modular architecture is presented using off-the-shelf components with serial line interfaces. The method has widespread application for existing industrial robots, whose capabilities can be upgraded without altering proprietary original equipment proven to be reliable. As proof of concept, an ADEPT robot is visually servoed to track workpieces on a conveyor belt moving up to 400 in/min. Simulation results are shown to compare reasonably well with experimental data. Advantages and limitations of the implementation are discussed, including the crucial effect of delays.
1 Introduction
1.1 Visual servoing
The purpose of visual servoing is to track an object of unknown prior location using only relative visual feedback information provided by a camera.
Traditionally, the location and orientation of an object in an absolute frame must be determined before operation begins. Look and move schemes analyze a single picture off-line to provide the world coordinates of the object. Visual servoing is different in that the robot trajectory depends on a continuously updated error signal in a relative frame.
1.2 Dynamic visual servoing
The goal of dynamic visual servoing is to achieve accurate, high-speed robotic tracking, performing trajectory control in the image plane. As robot speed increases, the update rate of the feedback device must increase. While customized high-speed vision systems exist, most commercially available cameras provide 30 or 60 frames per second, resulting in an overall processing rate of 2 to 15 images per second.
The dominant effects of using slowly updating sensors are overshoot and oscillations, which lead to limit cycles and instability. Consider a discrete closed-loop system consisting of a feedforward integrator and a zero-order hold in the feedback, simulating sensor delay. The effects of changing the delay are shown in Figure 1. The system represented by a solid line has the same sampling rate of 16 ms throughout, while the dotted-line system has a feedback delay of 200 ms. Given a step input, the first system exhibits a first-order exponential decay response while the other shows dramatic overshoot and oscillation.
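The effect is easy to reproduce numerically. The following minimal sketch simulates such a loop under a unit step input; the integrator gain is an assumed illustrative value, while the 16 ms loop rate and 200 ms feedback hold follow the description above.

```python
import numpy as np

def step_response(T=0.016, delay=0.016, k=0.1, t_end=3.0):
    """Discrete feedforward integrator with a zero-order hold of the
    given delay in the feedback path, driven by a unit step input."""
    hold = max(1, int(round(delay / T)))  # cycles the feedback is held
    y, y_fb, out = 0.0, 0.0, []
    for i in range(int(t_end / T)):
        if i % hold == 0:                 # feedback refreshes every `delay`
            y_fb = y
        y += k * (1.0 - y_fb)             # integrate the (stale) error
        out.append(y)
    return np.array(out)

fast = step_response(delay=0.016)  # matched rates: first-order decay
slow = step_response(delay=0.200)  # 200 ms hold: overshoot, oscillation
```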
1.3 Industrial implementation
Most published work does not document implementation issues, relying on computer simulations[7] or high-performance specialized equipment[4][10] to validate theories. In this paper, a paradigm is presented for retrofitting current industrial robots using separate, PC-based vision and controls modules. Only relative coordinate frames are used, eliminating the need for absolute coordinate frames. For demonstration purposes, an ADEPT ONE robot has been used for visual servoing.
2 Previous work
The problem with applying current research to an industrial environment can be divided into four areas: absolute coordinate frames, performance characteristics, stability, and specialized hardware.
2.1 Absolute coordinate frame
Some researchers[5][6] use the look and move approach, which involves transforming the features to an absolute coordinate frame and then commanding the end-effector to move to that location. Bishop[2] concentrates on improved camera models. For these systems, the error signal is formed by combining a world object position, calculated from the camera output, with the end-effector location, determined by applying a kinematic model to joint encoder data. The advantage of a relative frame is that the camera directly produces an error signal.
2.2 Performance characteristics
Another approach is to only perform static visual servoing where the goal is to reach a target position without regard to time[3]. By using very low controller gains, the serious effects of latency due to time delays and different sampling rates may be ignored.
2.3 Stability
While many researchers provide proofs that the tracking error converges to zero, they do not mention the bounded-input bounded-output stability of the state variables and control effort[7][10][12][13]. There is no discussion of the possibility that the estimated parameters may tend towards non-realizable values arising from amplifier saturation or motor torque limits.
Most algorithms are derived in the continuous-time domain, whereas digital implementation is strictly in the discrete-time domain. When designing in the discrete-time domain, it is straightforward to account for time delays due to implementation, including the sampling rate of the vision system, time for mathematical computations, and operating system overhead. Hosoda and Asada[8] find that their gains need to be bounded, even though the continuous-time proof shows stability for all gains.
2.4 Specialized hardware
It is necessary to consider the design constraints of the available hardware when formulating a strategy for visual servoing. Both Corke[4] and Nelson[10] use a PUMA 560, but override the internal controller and implement their own controllers. While this method simplifies development for a specific robot system, it does not provide for a general solution applicable to most industrial robots.
3 Adaptation of current industrial robots
The most economical means of attaining visual servoing is to retain as much of the existing robot as possible, augmenting it with commercially available platform independent components. The methodology should be applicable to a wide variety of industrial robots. Open architectures will allow for performance improvements in computers and image processing hardware.
3.1 Methodology
Our strategy is to utilize the real-time trajectory modification capabilities available on many robots, such as the ADEPT ONE robot through the Alter command. The vision and trajectory control systems are located on separate personal computers, connected by serial interfaces as depicted in Figure 2. The vision computer captures an image and determines the Cartesian error vector from the center of gravity of the object to the camera lens. This information is then sent to the controls computer, which computes a velocity command. Finally, the information is passed to the ADEPT controller, which implements the velocity commands. At no point is customized hardware used.
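A minimal sketch of the coordinator loop on the controls computer is given below. The three-float message framing, port names, and proportional gain are assumptions for illustration; the paper specifies only the 19.2 kbps serial links and the IEEE single-precision format (the 12-byte robot message is discussed in Section 4.5.1), and the actual control law is described in Section 3.2.2.

```python
import struct
import serial  # pyserial

# Port names are placeholders; both links run at 19.2 kbps.
vision = serial.Serial("/dev/ttyS0", 19200)
robot = serial.Serial("/dev/ttyS1", 19200)

def control_cycle(gain=0.5):
    """Read the Cartesian error vector from the vision computer, compute
    a velocity command, and forward it to the robot controller."""
    ex, ey, ez = struct.unpack("<3f", vision.read(12))  # error to object CG
    cmd = (gain * ex, gain * ey, gain * ez)             # illustrative P law
    robot.write(struct.pack("<3f", *cmd))               # 12-byte message
```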
3.2 Experimental setup
3.2.1 Industrial robot
The interface between the visual servoing components and the standard industrial robot is simply a serial communications connection at 19.2 kbps. The ADEPT MC robot controller calculates the inverse kinematics, using a velocity input from the controls computer. Alter is a fairly standard command whose input is simply a differential change in end-effector location. The only programming required on the robot is a host program that reads in the incremental movements over the serial communications line, using the single-precision IEEE floating-point format, and places the movements into the Alter command, which runs as a background task.
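For clarity, the behavior of that host program can be paraphrased as follows. The actual task is written in the ADEPT controller's own language, so the Python below is only an emulation, with `read_bytes` and `alter` as stand-ins for the serial read and the Alter call.

```python
import struct

def host_task_cycle(read_bytes, alter):
    """One cycle of the host task: read a 12-byte incremental move from
    the serial line and hand it to the Alter background task."""
    raw = read_bytes(12)                    # blocks until a full packet arrives
    dx, dy, dz = struct.unpack("<3f", raw)  # single-precision IEEE floats
    alter(dx, dy, dz)                       # differential end-effector move
```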
3.2.2 Controls computer
The controls computer, a 100 MHz Pentium-based personal computer, acts as the system coordinator. Connected by serial cables to both the industrial robot and the vision system, it reads in the processed vision data, using the IEEE floating-point format, and calculates a velocity command. It can also display diagnostic information on the screen.
Most industrial robots use older processors and interpreted languages which execute slowly. Instead of being limited to the controller provided by the manufacturer, one can use any advanced control routine.
The ability exists to easily test several different algorithms on the same experimental setup. At first, a simple PID control system with integrator anti-windup was used. Later, a crude form of predictive control, using an exponential decay factor for the vision input, was added to compensate for delays in the system. Recently, there has been some experimentation with adaptive self-tuning controllers. Details of this control system can be found in [1].
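A sketch of the first two of these schemes is shown below, assuming conditional-integration anti-windup and illustrative numeric values; [1] contains the actual design.

```python
class PID:
    """Discrete PID with integrator anti-windup via conditional
    integration; gains and the output limit are placeholders."""
    def __init__(self, kp, ki, kd, dt, u_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt, self.u_max = dt, u_max
        self.integral = 0.0
        self.e_prev = 0.0

    def update(self, e):
        d = (e - self.e_prev) / self.dt
        self.e_prev = e
        u = self.kp * e + self.ki * self.integral + self.kd * d
        if abs(u) >= self.u_max:
            # Saturate and skip integration so the integral cannot wind up.
            return self.u_max if u > 0 else -self.u_max
        self.integral += e * self.dt
        return u

def decayed_error(e_latest, cycles_since_update, alpha=0.8):
    """Crude predictive correction: between vision updates the held
    error sample is decayed, on the assumption that the robot is
    closing on the target (alpha is an assumed tuning factor)."""
    return e_latest * alpha ** cycles_since_update
```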
3.2.3 Vision computer
The current method utilizes a Pulnix TMC-74 color camera mounted on the robot end-effector in the eye-in-hand configuration. An Imaging Technologies ITIML Frame Grabber and AM-CLR Acquisition Module reside in a 33 MHz 486 host computer. Pixels of specific colors are separated from the background and used to calculate a center of gravity. This allows object recognition without prior knowledge of shape, size, or placement. The color contrast method is generally insensitive to changing light conditions, not requiring additional lighting sources. When the height above the object must be determined, an additional algorithm uses the apparent size of the object[11].
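A minimal sketch of this feature extraction, assuming a simple per-pixel color threshold (the actual segmentation and the tolerance value are not specified here):

```python
import numpy as np

def color_centroid(rgb, target, tol=30):
    """Separate pixels near the target color from the background and
    return their center of gravity in pixel coordinates, along with the
    pixel count, which can serve as the size cue for height estimation."""
    diff = np.abs(rgb.astype(int) - np.asarray(target))
    mask = np.all(diff < tol, axis=-1)       # pixels of the specified color
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, 0                        # object not in field of view
    return (xs.mean(), ys.mean()), int(mask.sum())
```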
The vision system presents the largest bandwidth bottleneck of the system. Originally a Data Translation DT-2871 Frame Grabber with a DT-2858 Coprocessor was used, but it was only able to provide an update rate of 2 Hz. The current system can require over 60 ms to acquire an image and another 25 ms for processing, for an overall rate of roughly 10 Hz. As soon as one cycle is completed, the features are sent to the controls computer and image acquisition begins again.
4 Results
4.1 Cycle times
The time to process one image and transport the information through the module to the ADEPT varies significantly, from less than 80 ms to almost 250 ms with a mode of roughly 100 ms. This has a tremendous impact on the controller gains. To compensate for this, a system of fixed cycle times has been developed to provide consistent sampling rates and allow for accurate modeling. Each module is forced to operate with a cycle time of 120 ms. In the event that a module has not finished processing data when the cycle is over, the last set of data is retransmitted.
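The scheme can be summarized by the following sketch of one module's outer loop; `step` and `transmit` are placeholders for the module's processing and its serial output.

```python
import time

def run_module(step, transmit, period=0.120):
    """Fixed 120 ms cycle: if the module has no fresh result when the
    cycle expires, the previous data set is retransmitted."""
    last = None
    while True:
        t0 = time.monotonic()
        fresh = step()              # returns None if not finished in time
        if fresh is not None:
            last = fresh
        if last is not None:
            transmit(last)          # may resend the stale data set
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```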
An unforeseen benefit of the fixed cycle times is that they correct a problem inherent in PID controllers that run at a faster rate than the feedback. Since integral control is a sum of all previous error inputs, the control action ramps up as the same data is used for several cycles, causing overshoot. Likewise, when the feedback data is not updated, the rate of change of the error signal is zero, eliminating the influence of derivative control.
4.2 Simulation accuracy
While computer simulations provide much insight into the effects of delays and multiple sampling rates, it is imperative that the simulation provide accurate results, leaving data collection only for fine-tuning the PID parameters.
The system has been modelled using discrete-time elements in Simulink, a graphical simulation environment for MATLAB. The model includes all known delays in the system. Step responses of the simulation at various gains are compared to actual results to fine-tune the simulation. As shown in Figure 4, the simulations provide a good approximation of the actual system step response for a stationary object approximately 4 in (100 mm) from the camera starting point.
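The structure of such a model can be suggested by the following stand-in, which delays the measured error and the transmitted command by whole 120 ms cycles; the delay counts and gain are illustrative, not the identified values.

```python
import numpy as np

def servo_model(kp=2.0, n_steps=100, T=0.120, vision_delay=2, robot_delay=1):
    """Proportional control of end-effector position toward a target at
    1.0, with the vision measurement and the velocity command each
    delayed by a whole number of cycles."""
    x = 0.0
    err_buf = [0.0] * vision_delay   # stale vision measurements in flight
    cmd_buf = [0.0] * robot_delay    # commands in transit to the robot
    history = []
    for _ in range(n_steps):
        err_buf.append(1.0 - x)              # new image of the current error
        cmd_buf.append(kp * err_buf.pop(0))  # act on the oldest measurement
        x += cmd_buf.pop(0) * T              # robot integrates the command
        history.append(x)
    return np.array(history)
```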
4.3 Tracking performance utilizing Dynamic Visual Servoing
The system is able to track an object on a conveyor belt moving at 400 in/min with 0.4 in (10 mm) steady-state error. The limiting factor in conveyor belt tracking is initial image acquisition. The stationary end-effector initially moves towards the object at high speed as soon as it comes into view. If the vision update rate is too slow, the end-effector will pass beyond and overshoot the object before the next image is captured. Tracking will only be possible if the object remains within the field of view as the end-effector decelerates and reverses direction.
When the object moves slower than 200 in/min, steady-state error is less than one millimeter. Owing to limited workspace, it is not possible to determine the steady-state error when the object moves faster than 300 in/min, but it is less than 1 in (25 mm) and decreasing. The fifth line shows the response when the object initially moves at 300 in/min and then accelerates to 500 in/min at 6 seconds. While overshoot is too great to track an object moving initially faster than 400 in/min, once the error is small, object speed may be greatly increased.
Importantly, the system is able to track a randomly moving object with velocities up to 150 in/min while maintaining the object within the camera field of view. The major limitation in tracking random movement arises from a change in direction between vision updates. The object may fall out of the camera field of view before corrective measures can be taken.
While better feedback bandwidth does not necessarily lead to better conveyor tracking, faster vision has a tremendous impact on random movements. Using the older vision system, the robot was only able to track random movements with a maximum velocity of 12 in/min.
4.4 Three-dimensional operation
When only tracking an object on a conveyor, some overshoot may be acceptable. On the other hand, when servoing vertically to grasp the workpiece, overshoot can damage the object, the robot, and the workcell. As the camera approaches the object, the field of view shrinks perceptibly, allowing for the possibility that the object may disappear from the vision system. This is a problem inherent in high-speed dynamic visual servoing using an offset eye-in-hand configuration.
This is alleviated by using a two-stage vertical axis profile. The vertical axis trajectory, handled by a separate algorithm, allows fast servoing to a specified distance above the object, then slower, more precise movement as the end-effector nears the object. Final closure does not begin until the object is within a given radial distance from the center of the end-effector. The visual servoing system is able to grasp objects spaced 12 inches apart and moving at 75 in/min. A picture of the industrial visual servoing system in operation can be seen in Figure 6.
Figure 6: Three-dimensional visual servoing system
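A sketch of the vertical-axis logic described above, with all distances and speeds as assumed placeholders rather than the experimental values:

```python
def vertical_velocity(height, radial_error, standoff=2.0, r_close=0.25,
                      v_fast=8.0, v_slow=1.0):
    """Two-stage profile (units assumed: inches and in/s): fast descent
    to a standoff height above the object, then a hold until the object
    is centered, then slow final closure."""
    if height > standoff:
        return -v_fast               # stage 1: rapid approach
    if radial_error > r_close:
        return 0.0                   # wait until the object is centered
    return -v_slow                   # stage 2: slow, precise closure
```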
4.5 Hardware limitations
4.5.1 Robot controller
The ADEPT provides real-time trajectory control only in the end-effector Cartesian frame. Direct joint control is not possible. The simple host program on the ADEPT still requires on average 80 ms to complete one full cycle of reading in 12 bytes, placing the converted numbers into the real-time trajectory algorithm, and performing four safety checks. While hardware upgrades exist to alleviate these problems, the relative cost for an ADEPT upgrade is extraordinary, approximately 50% of the original robot/controller purchase cost.
4.5.2 System robustness
Performance is limited not only by hardware constraints, but by safety measures as well. Robot workspace must be constrained to a rectangular area over the conveyor belt, preventing the end-effector from attempting to leave the workspace. The maximum robot velocity is clipped to eliminate jerky motion. Additional safety checks are required to ensure proper communication between the industrial robot and the controls computer. All of these measures must be run on the robot controller, slowing the controller cycle time.
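The flavor of these checks is illustrated below; the workspace box, limit values, and function names are assumptions, since the actual checks run in the robot controller's own language.

```python
def apply_safety(cmd, pos, box, v_max):
    """Clip each commanded velocity component and zero any motion that
    would carry the end-effector out of the rectangular workspace."""
    safe = []
    for c, p, (lo, hi) in zip(cmd, pos, box):
        c = max(-v_max, min(v_max, c))               # velocity clipping
        if (p >= hi and c > 0.0) or (p <= lo and c < 0.0):
            c = 0.0                                  # stop at the boundary
        safe.append(c)
    return safe
```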
4.5.3 Image processing
The Imaging Technologies hardware has a sampling rate of up to 10 Hz. The addition of digital image processing hardware will not only increase the feedback bandwidth but also allow for more sophisticated image processing techniques, such as area-of-interest processing, which dramatically reduces computation time. The increased performance will free time for more complex software routines, enabling better noise rejection and more advanced image recognition algorithms.
4.6 Accurate system modeling
It would be desirable to form a single discrete-time transfer function from input/output data. The Simulink model has been revised to include an adaptive least-squares estimation algorithm to determine the input/output discrete-time transfer function. Since the system is greatly affected by the internal multiple sampling rates, the estimator never converges to a single set of coefficients regardless of how persistently exciting the input signal may be. However, the variance in the individual coefficients is small enough, less than 10%, that an attempt to implement least-squares parameter estimation of the visual servoing system is warranted.
While it is possible to collect the input and output data, synchronizing the time stamps of each piece of data proves to be difficult. Data flow is one-directional, both from the vision computer to the controls computer and from the controls computer to the industrial robot. Further modifications are necessary to collect the data centrally inside the controls module, where current robot positions can be matched to image processing information, allowing for discrete-time system identification.
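Once synchronized records are available, a standard recursive least-squares fit of an ARX model could serve; the model orders and forgetting factor below are assumptions, not values from this work.

```python
import numpy as np

def rls_arx(u, y, na=2, nb=2, lam=0.98):
    """Recursively estimate a discrete-time transfer function of the form
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]
    from synchronized input u and output y arrays."""
    n = na + nb
    theta = np.zeros(n)              # [a1..a_na, b1..b_nb]
    P = np.eye(n) * 1e3              # large initial covariance
    for k in range(max(na, nb), len(y)):
        phi = np.concatenate((-y[k - na:k][::-1], u[k - nb:k][::-1]))
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta
```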
5 Conclusions
The industrial visual servoing system presented in this paper has demonstrated the feasibility of adapting currently installed industrial robots to the visual servoing paradigm. The system is modular in nature, so it can incorporate advances in both control theory and image processing. By moving trajectory control from the robot controller to an external module through existing interfaces, improvements in computer hardware and programming techniques become directly applicable to robot control.
The main drawback of this system is still the system bandwidth, both in the feedback loop and in the feedforward connection from the controls computer to the industrial robot. Accurate modeling and predictive control measures may be able to compensate, providing performance that meets industrial requirements.
References
[1] Robert Biro. "Visual Servoing Using Industrial Components." Master's Thesis, George W. Woodruff School of Mech. Eng., Georgia Inst. of Tech., Atlanta, Georgia, 1996.
[2] Brad Bishop, S. Hutchinson, and M. Spong. "On the Performance of State Estimation for Visual Servo Systems." Proc. 1994 IEEE Conf. Robotics and Automation, San Diego, California, May 1994, pp. 168-73.
[3] François Chaumette. "Visual Servoing Using Image Features Defined Upon Geometric Primitives." Proc. 33rd Conf. Decision and Contr., Lake Buena Vista, Florida, December 1994, pp. 3782-87.
[4] Peter Corke. "High Performance Visual Closed-Loop Robot Control." Ph.D. Thesis, Dept. of Manufacturing Eng., Univ. of Melbourne, Melbourne, Australia, July 1994.
[5] P.Y. Coulon and M. Nougaret. "Use of a TV Camera System in Closed-Loop Position Control of Mechanisms." Proc. 6th Int. Conf. Robot Vision and Sensory Contrs., Paris, France, June 1986, pp. 119-27.
[6] John Feddema and O. Mitchell. "Vision-Guided Servoing with Feature-Based Trajectory Generation." IEEE Trans. Robotics and Automation, 5:691-700, 1989.
[7] Hideki Hashimoto, T. Kubota, M. Sato, and F. Harashima. "Visual Servo Control of Robotic Manipulators Based on Artificial Neural Network." Proc. 1989 IEEE Industrial Electronics Conf., Philadelphia, Pennsylvania, November 1989, pp. 770-74.
[8] Koh Hosoda and M. Asada. "Versatile Visual Servoing without Knowledge of True Jacobian." Proc. IROS, August 1994, pp. 186-93.
[9] Martin Jägersand. "Perception Level Learning, Planning and Robot Control." Ph.D. Thesis Proposal, Dept. of Comp. Sci., Univ. of Rochester, Rochester, New York, 1995.
[10] Bradley Nelson and P. Khosla. "Strategies for Increasing the Tracking Region of an Eye-in-Hand System by Singularity and Joint Limit Avoidance." The Int. J. Robotics Res., 14:255-69, 1995.
[11] Magnus Rognvaldsson. "Machine Vision Approach for Visual Servo Controlled Robotics." Master's Thesis, School of Industrial and Systems Eng., Georgia Inst. of Tech., Atlanta, Georgia, 1996.
[12] Arthur Sanderson and L. Weiss. "Adaptive Visual Servo Control of Robots." Robot Vision, New York, New York.