Chain of refined perception in self-optimizing assembly of micro-optical systems
Today, the assembly of laser systems still requires a large share of manual operations because of the complexity of optimally aligning the optics. Although the feasibility of automated alignment of laser optics has been demonstrated in research labs, the development effort required to automate assembly does not meet economic requirements, especially for low-volume laser production. This paper presents a model-based, sensor-integrated assembly execution approach for flexible assembly cells consisting of a macro-positioner covering a large workspace and a compact micromanipulator with a camera attached to the positioner. To make full use of the available models from computer-aided design (CAD) and optical simulation, sensor systems at different levels of accuracy are used to match perceived information with model data. This approach, named the "chain of refined perception", allows for automated planning of complex assembly tasks across all major phases of assembly, such as collision-free path planning, part feeding, and active and passive alignment. The paper focuses on the in-process image-based metrology and information extraction used to identify and calibrate local coordinate systems, and on the exploitation of that information in a part feeding process for micro-optics. Results are presented for the automated calibration of the robot camera as well as of the local coordinate systems of the part feeding area and the robot base.
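The calibration of local coordinate systems described above ultimately serves one purpose: expressing a part pose detected in one frame (e.g. the part feeding area) in another frame (e.g. the robot base) by chaining calibrated transforms. A minimal sketch of this frame chaining, using 2D homogeneous transforms and purely illustrative values (the frame names, function, and numbers below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def make_transform(rotation_deg, translation):
    """Build a 2D homogeneous transform (rotation about z in degrees,
    translation in mm) as a 3x3 matrix."""
    t = np.radians(rotation_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, translation[0]],
                     [s,  c, translation[1]],
                     [0.0, 0.0, 1.0]])

# Hypothetical calibration results (illustrative values only):
# pose of the part feeding area in the robot base frame,
# and a part pose detected by the camera in the feeder frame.
T_base_feeder = make_transform(90.0, (120.0, 40.0))
T_feeder_part = make_transform(0.0, (5.0, 2.5))

# Chaining the calibrated frames yields the part pose in the robot
# base frame, which the macro-positioner can drive to directly.
T_base_part = T_base_feeder @ T_feeder_part
part_xy_in_base = T_base_part[:2, 2]
```

In a real cell the same composition extends to 3D (4x4 matrices) and to more links in the chain, e.g. camera-to-tool and tool-to-base, each refined by its own calibration step.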