Microscopy & Automation
Long Way from Motorization to True Automation
Automation in Imaging?
In the mid-1990s, microscopes evolved into so-called imaging systems, mainly characterized by the switch from analogue to digital cameras, by step-by-step motorization of components, and by advances in the software control of components and detectors/cameras. Over the last one to two decades, advanced data processing and storage, as well as robotic integration (fig. 1), were often added to motorized microscopes to further boost the throughput and efficiency of imaging systems.
This has already helped achieve some of the automation goals, at least partially. These imaging systems save the researcher labor and time and improve the accuracy, quality and precision of imaging experiments, and, last but not least, their reproducibility. When Nature surveyed about 1,500 scientists on reproducibility, the majority of participants agreed that "there is a 'crisis' of reproducibility". "Low statistical power" and "poor analysis" were among the factors considered most responsible for the reproducibility problem.
But is a so-called "fully automated imaging system" really fully automated when you have to interact with the system at many points? During the setup and execution of experiments, the operator is often still required, and much time has to be invested to acquire, process, analyze and export images and data that fulfill high research and publication standards.
What are the steps during a typical workflow at the imaging system that need a high degree of automation to perfectly support the researcher?
Some of the more advanced microscope systems on the market offer features from Table 1. An example of a proper realization of many of these features is ZEISS Celldiscoverer 7, which is fully controlled by the ZEN software and exhibits a variety of automation features that render it a truly automated system. Among many other capabilities, it can identify the type of sample carrier, measure the bottom type and thickness, and calibrate the carrier, all without the user having to interact with the system or even knowing which steps are currently being carried out automatically to set the stage for the imaging experiment. The software will then, for example, screen a large area at low magnification for specific objects (rare events), which are automatically acquired at high magnification in 3D over a longer time period (fig. 2). Of course, automation does not stop at this point, but the above steps alone save the researcher several minutes or even hours each time a new sample is inserted, let alone the hassle that is avoided.
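The screening workflow just described can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of the logic only; the function and class names are illustrative assumptions and do not correspond to any real vendor API such as ZEN:

```python
# Hypothetical sketch of the workflow: overview scan at low magnification,
# rare-event detection, then high-magnification 3D time-lapse acquisition.
# All names here are illustrative, not part of any real microscope API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Position:
    x: float
    y: float


def overview_scan(grid: List[Position],
                  is_rare_event: Callable[[Position], bool]) -> List[Position]:
    """Screen a large area tile by tile and return positions of rare events."""
    return [pos for pos in grid if is_rare_event(pos)]


def acquire_timelapse(targets: List[Position],
                      z_slices: int, timepoints: int) -> int:
    """Revisit each detected object at high magnification and acquire a
    3D stack at every timepoint; returns the total number of images."""
    return len(targets) * z_slices * timepoints


# Toy example: a 3x3 tile grid in which exactly one tile contains a rare event.
grid = [Position(x, y) for x in range(3) for y in range(3)]
targets = overview_scan(grid, is_rare_event=lambda p: (p.x, p.y) == (1, 2))
n_images = acquire_timelapse(targets, z_slices=20, timepoints=10)
print(n_images)  # 1 target x 20 slices x 10 timepoints = 200
```

The point of the sketch is the decoupling: the low-magnification screen produces a target list, and the high-magnification acquisition consumes it without further user interaction.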
There are a number of features that a researcher should look out for when aiming for an increase in automation.
First and foremost, a proper integration of all motorized components, sensors and input devices into a single software environment is essential. State-of-the-art machine learning, object recognition and advanced processing that can interact with, and influence, the acquisition engine form the basis for successful imaging experiments of the next generation. Ideally, the software not only allows pre-made configurations, but also provides well-documented interfaces for including new executables and code snippets, and lets the user edit and complement databases for dyes, hardware, etc. Only then can the most recent innovations from the scientific community boost intelligent automation when needed.
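What such an extension interface might look like can be illustrated with a short, hypothetical sketch: an acquisition engine that exposes a documented hook through which user-supplied code, for instance a classifier trained by the researcher, steers which objects are kept for acquisition. The class and method names are assumptions for illustration, not a real API:

```python
# Illustrative sketch (hypothetical names): an acquisition engine with a
# documented extension point, so user code can influence acquisition.

from typing import Callable, List


class AcquisitionEngine:
    """Minimal engine whose object selection can be extended by plug-ins."""

    def __init__(self) -> None:
        self._classifiers: List[Callable[[float], bool]] = []

    def register_classifier(self, fn: Callable[[float], bool]) -> None:
        """Documented extension point: add a user-supplied object classifier."""
        self._classifiers.append(fn)

    def select_objects(self, intensities: List[float]) -> List[float]:
        """Keep only objects that every registered classifier accepts."""
        return [v for v in intensities
                if all(fn(v) for fn in self._classifiers)]


engine = AcquisitionEngine()
# A user-supplied snippet, e.g. a threshold learned by a machine-learning model:
engine.register_classifier(lambda intensity: intensity > 0.5)
selected = engine.select_objects([0.2, 0.7, 0.9])
print(selected)  # [0.7, 0.9]
```

The design choice matters more than the code: because the classifier is registered through a documented interface rather than hard-wired, the community's latest recognition methods can be swapped in without modifying the vendor software itself.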