
Glossary


Arming Trigger

Capturing a single event requires arming the streak camera system or high-speed camera. For streak cameras this is done by activating the single-shot mode and starting a snap-shot acquisition. For high-speed video cameras the recording is activated. Both camera types remain in the armed state until the acquisition is triggered.

Backtrace

Triggered sweep units generate a deflection voltage to sweep the electron beam over the phosphor screen of the streak tube. This movement, initiated by the trigger signal, starts from the initial position and has a defined sweep speed. After this regular or forward sweep the deflection voltage returns to its initial value, resulting in a sweep in the reverse direction. This sweep is called backtrace and has no defined sweep speed. Photoelectrons generated during the backtrace period produce a signal (trace) superimposed on the measurement signal. Gating or blanking techniques can be used to eliminate this backtrace signal.

Bayer-Pattern

CCD and CMOS detectors convert light intensity into an electrical signal but cannot provide information about the color. Therefore, the pixels of color sensors are covered by color filters. A typical way to arrange these filters is called the Bayer pattern. A group of 4 pixels (one red, one blue and two green) allows intensity and color information to be calculated.
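
As a minimal sketch, intensity and color information could be derived from one 2x2 RGGB cell as follows (the raw pixel counts and luminance weights are assumed for illustration; real cameras interpolate over neighboring cells):

# One 2x2 RGGB Bayer cell with assumed raw pixel counts
cell = {"R": 120, "G1": 200, "G2": 196, "B": 80}

g = (cell["G1"] + cell["G2"]) / 2               # average the two green pixels
r, b = cell["R"], cell["B"]
intensity = 0.299 * r + 0.587 * g + 0.114 * b   # common luminance weighting
print(f"RGB = ({r}, {g:.0f}, {b}), intensity = {intensity:.1f}")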

Blanking

Generally, blanking describes the inhibition of a signal for a defined period. In contrast to gating, the term blanking is used to describe a technique where a voltage is applied to the deflection plates to sweep the electron beam off the phosphor screen.

Blemish

Micro-channel plates and phosphor screens of image intensifiers and streak tubes may have small areas of imperfect operation. If the transmission or gain in such an area is lower than in its surrounding area, it appears as a dark spot called a blemish.

C-mount

C-mount defines a type of lens mount with fixed thread diameter (1” = 25.4mm) and fixed flange focal length (17.526mm). Generally, C-mount lenses can be used for small image dimensions of up to about 16mm diagonal.

CCD Sensor

Light sensitive semiconductor device used to convert image information into an electrical signal. The quantity of light hitting the sensor surface generates an equivalent electrical charge. After the exposure the charge is shifted to the output node where it is converted into an electrical output signal. CCD (Charge Coupled Device) sensors have high sensitivity and low dark signal noise. They are still typically used for sensitive and high-quality cameras.

CMOS Sensor

Light sensitive semiconductor device used to convert image information into an electrical charge image. In contrast to CCD devices, the charge is not shifted but locally detected before reset for the next integration period. The semiconductor technology (CMOS) allows integration of different processing units (e.g. analog-to-digital converters) on the same chip. As no charge shift is required, the readout architecture can easily be organized as a parallel process to attain higher readout speeds. With these advantages CMOS sensors are the preferred devices used for high-speed video cameras.

CTF

This abbreviation stands for Contrast Transfer Function and can be compared to the MTF. The spatially modulated input signal has a rectangular (square-wave) form. The measurement setup for CTF measurements is simpler than that for the MTF measurement.

CW Laser

Contrary to pulsed lasers, continuous wave (CW) lasers provide a constant, non-intensity-modulated laser beam.

Dark Noise

A light-sensitive detector generates a signal even in complete darkness. A photocathode emits thermally generated electrons. This signal is given in electrons per unit area and time. See also EBI.
CCD and CMOS sensors have leakage currents that are accumulated in each pixel and detected during readout. The leakage current is characterized by its mean value (electrons per area and time) and its statistical fluctuation. The fluctuation increases – when a normal distribution is presumed – proportionally to the square root of the number of electrons and is called dark noise. Cooling the sensor (which lowers the leakage current) or reducing the integration time reduces the accumulated dark signal and consequently also the dark noise.
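
A short numerical sketch of this square-root relation (the dark current and integration time are assumed values, not datasheet figures):

import math

dark_current_e_per_s = 50.0    # assumed mean dark signal per pixel in electrons/s
integration_time_s = 0.1       # assumed integration time

dark_electrons = dark_current_e_per_s * integration_time_s
dark_noise = math.sqrt(dark_electrons)   # fluctuation grows with the square root
print(f"mean dark signal: {dark_electrons:.1f} e-, dark noise: {dark_noise:.2f} e- rms")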

Dark Signal Noise

See dark noise

Dark Signal Uniformity

A dark signal is generated by each CCD and CMOS sensor even without any input light. Together with the transfer characteristics of the readout electronics each pixel may have its own dark signal. The dark signal uniformity describes this systematic signal variation. The temporal signal fluctuation needs to be considered separately.

Deflection Speed

See sweep speed

Deflection Unit

See sweep unit

Distortion

Optical and electro-optical imaging systems might have different magnification depending on the spatial position on the active area. This results in a non-proportional image. Pincushion and barrel distortion are typical for tapers and streak tubes with large photocathodes.

Dual Sweep

Streak tubes may provide two pairs of deflection plates in an orthogonal arrangement that allow dual-sweep operation. Typically, the first pair of deflection plates, close to the photocathode, is used to generate a fast sweep with a high deflection frequency. The second pair, close to the phosphor screen, is used for a slow sweep at lower frequency. If no second sweep is required, the second pair of deflection plates can remain without drive voltage.

EBI

The equivalent background illumination (EBI) can be defined for streak tubes and image intensifiers. It specifies the illumination level, typically given in µlux, that would generate the same number of photoelectrons as are generated thermally in darkness. To measure the EBI, black body radiation at 2850 K is used, calibrated with a luxmeter. The EBI therefore depends not only on the absolute dark current but also on the spectral sensitivity of the detector.

EMVA1288

The EMVA1288 is a standard in the field of machine vision used to characterize and describe sensors and cameras. This allows comparable camera data and radiometric information to be obtained.

Exposure Time

Period during which a CCD or CMOS sensor is sensitive.

F-mount

F-mount is the mechanical mount for lenses manufactured for Nikon cameras. It defines the bayonet fixing and also the flange focal length.

Fill Factor

Image pixels of CCD and CMOS sensors have a specified fill factor, defined as the ratio of the light-sensitive pixel area to its total physical area.

Frame Rate

Number of images per unit of time, typically given in frames per second (fps).

FWHM

The full width at half maximum (FWHM) specifies the width of a signal that is characterized by a maximum (e.g. a peak). The temporal or spatial distance between the points at 50 % of the maximum value is measured. Assuming a Gaussian signal form, there is a fixed relationship between the FWHM and the standard deviation σ.
FWHM = 2.35 · σ
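
Expressed with the exact factor 2·sqrt(2·ln 2) ≈ 2.35, the relation can be evaluated for an assumed Gaussian pulse width:

import math

sigma_ps = 4.0   # assumed standard deviation of a Gaussian pulse in ps
fwhm_ps = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_ps
print(f"sigma = {sigma_ps} ps  ->  FWHM = {fwhm_ps:.2f} ps")   # about 9.42 ps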

Gain Uniformity

The gain or sensitivity of electro-optical components like image intensifiers, streak tubes or CCD and CMOS sensors may vary depending on the spatial position on the active area. The deviation of the gain from its mean value can be described as gain non-uniformity; nevertheless, it is typically called gain uniformity.

GigE

This characterizes a camera interface based on the Gigabit Ethernet standard. GigE, also called GigE Vision™, provides a specified set of control commands.

Gating

This term is typically used to describe the fast switching ON or OFF of an electro-optical gate or detector. During the ON time the detector is active, while during the OFF time no signal is generated. The suppression ratio between the ON and OFF states is referred to as the shutter ratio. The photocathodes of streak tubes or image intensifiers are typically used for gating purposes. The shutter ratio may reach values of more than 10⁶.

High-Speed Camera

Video camera allowing frames to be captured at a significantly higher frame rate than a standard video camera at 25 fps.

Image Intensifier

This is a vacuum tube integrating a photocathode and a phosphor screen. Photoelectrons emitted from the photocathode are accelerated towards the phosphor screen and generate a brighter image than the input. First generation tubes (Gen. 1) use an electro-static focusing system between the photocathode and the screen. Second generation tubes (Gen. 2) are built with an MCP to increase the number of electrons. Proximity focusing is used between the photocathode and the MCP as well as between the MCP and the screen. Higher generation tubes (Gen. 3) use different photocathode materials to obtain higher quantum efficiency.

Interframing Time

During the capture of successive images with CCD or CMOS cameras there is a time gap between two images where it is not defined whether an optical signal contributes to one or the other image. This time is called interframing time. Cameras used for PIV applications should have a short interframing time.

IR

This abbreviation stands for infrared radiation and describes the non-visible part of light with wavelengths between 780 nm and 1 mm. Related to high-speed cameras and streak cameras, the range between 780 nm and 1,500 nm (NIR) can partly be detected.

Jitter

Jitter describes fast and random fluctuations in the temporal relation of different signals or of parts of the same signal. In streak camera applications, the delay of an optical pulse relative to the trigger signal of the sweep unit, or the sweep voltage itself, may fluctuate statistically. If many measurement signals are superimposed on the phosphor screen and the pulses do not appear at exactly the same screen position, the resulting pulse will be spatially and consequently temporally broadened.

Limiting Resolution

This describes how many small elements an optical system can render as separately visible. The limiting resolution can be determined by visual observation of a spatially and sinusoidally modulated signal. The spatial frequency at which the signal is still perceived as modulated is given; the modulation at this frequency is typically 5 %. Assuming that the MTF and the LSF are Gaussian functions, the FWHM of the LSF and the limiting resolution RL are related.
FWHM = 0.92 / RL
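
A small sketch of this conversion (the limiting resolution value is assumed for illustration):

R_L = 40.0             # assumed limiting resolution in line pairs per mm
fwhm_mm = 0.92 / R_L   # FWHM of the LSF under the Gaussian assumption above
print(f"R_L = {R_L} lp/mm  ->  LSF FWHM = {fwhm_mm * 1000:.1f} um")   # 23.0 um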

LSF

The Line Spread Function (LSF) is the profile of a very thin line transmitted through an optical system.

Luminous Gain

This characterizes the perceived gain of an electro-optical system. Input and output signals are measured weighted by the spectral sensitivity of the eye. Typically, a black body radiator with a defined temperature is used to illuminate the system; an additional IR filter might be used. The output signal is measured with a lux meter.

Machine Vision

Machine vision means the capture and processing of images, typically for industrial applications. Optronis uses the term machine vision for cameras of the CamPerform-Series that capture images at high speed and transmit them in real time via special interfaces such as CoaXPress or CameraLink. These cameras are often used for industrial applications.

MCP

This abbreviation, related to electro-optical detectors, stands for micro-channel plate. It describes a thin plate having a huge number of tiny holes or channels. The distance between the holes might be in the 10 to 12 µm range and the hole diameter might be about 6 µm. The inner walls are covered with a highly resistive layer. When a photoelectron hits this layer, secondary electrons are emitted. Due to the secondary electron emission characteristics, each channel operates as an electron multiplying device. MCPs are used in Gen. 2 image intensifiers to obtain a high gain that can be controlled over a wide range by varying the MCP voltage.

Modulation

The term typically describes the variation of a carrier signal. Related to optical signals and streak cameras, modulation describes the image intensity variation as a function of time or space. If, for example, pulses with a given pulse width are placed closely side by side, the resulting measurement signal looks like a continuous signal with a remaining modulation on its top. The modulation m is calculated from the maximum (Imax) and minimum (Imin) intensity.
m = (Imax – Imin) / (Imax + Imin)
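
For example, with assumed intensity readings the modulation is obtained as:

I_max, I_min = 1200.0, 800.0   # assumed maximum and minimum intensity counts
m = (I_max - I_min) / (I_max + I_min)
print(f"modulation m = {m:.2f}")   # 0.20 for these values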

MTF

The modulation transfer function is abbreviated by MTF. The spatial resolution of an electro-optical system can be described by this function. A spatially and sinusoidally modulated signal is applied on the input of the system. The modulation as function of the spatial frequency is measured.

NIR

Near infrared radiation. See also IR.

Noise

Generally, all statistical fluctuations at the output of an optical detector are referred to as noise. This noise can be separated into a temporal fluctuation and a fluctuation related to the spatial position on a two-dimensional sensor. Temporal fluctuations are related to the dark noise of the sensor and the shot noise caused by the statistical uncertainty of the number of photoelectrons. CCD and CMOS sensors may additionally show fixed pattern noise (FPN) that is related to the pixel position. From the physical point of view, this fixed pattern noise is a deterministic signal that can be compensated completely by mathematical operations.
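
A minimal sketch of such a compensation by dark-frame subtraction (array sizes and signal levels are assumed for illustration):

import numpy as np

rng = np.random.default_rng(0)
fpn = rng.normal(100, 10, size=(4, 4))               # pixel-dependent offset (FPN)
dark_frame = fpn + rng.normal(0, 2, size=(4, 4))     # dark image: FPN plus temporal noise
image = 500 + fpn + rng.normal(0, 2, size=(4, 4))    # exposed image containing the same FPN

corrected = image - dark_frame   # removes the deterministic FPN; temporal noise remains
print(corrected.round(1))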

Optical Trigger

This function is used in high-speed cameras and can also be called image trigger. It allows image capture to be triggered when part of the image changes.

Photocathode

Light-sensitive layer that emits photoelectrons under illumination. The photoelectron current is proportional to the illumination intensity over a wide range. Photocathodes are characterized by their spectral sensitivity. Typical sensitivity curves for visible-light detection are named, for example, S1, S20 or S25. The electron emission takes place within less than one picosecond.

PIV

Particle imaging velocimetry (PIV) describes a measurement method where two pictures of particles are taken. As the time difference between these pictures is well defined, the position difference of each particle allows the local particle speed to be calculated. High-speed cameras extend traditional PIV applications as they can acquire successive PIV images at high repetition rates.
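
The underlying calculation is simple (displacement and time difference are assumed example values):

dt_s = 1e-4               # time between the two images: 100 µs
displacement_mm = 0.25    # measured particle displacement between the images

speed_m_per_s = (displacement_mm / 1000.0) / dt_s
print(f"local particle speed: {speed_m_per_s:.1f} m/s")   # 2.5 m/s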

Pixel (picture element)

Smallest element of a digitally captured or processed image.

Quantum Efficiency

The quantum efficiency (QE) describes the ratio between the number of emitted electrons and the number of received photons. The QE of photocathodes depends on the wavelength and is typically in the range of 1 to 15 %. For CCD and CMOS sensors the quantum efficiency may reach up to 80 %.
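
As a numerical illustration with assumed photon and electron counts:

photons_in = 1_000_000    # assumed number of incident photons
electrons_out = 120_000   # assumed number of emitted photoelectrons

qe = electrons_out / photons_in
print(f"QE = {qe:.0%}")   # 12 %, within the typical photocathode range quoted above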

Readout Area

The area on the streak tube phosphor screen captured by the readout camera is called readout area. Its dimension along the sweep direction is typically limited by the range of the constant sweep speed. Its dimension along the slit length depends on the electro-optical magnification of the streak tube and the length of the slit image on the photocathode.

Readout Camera

The image on the phosphor screen of the streak tube is captured by a sensitive camera and transferred to a computer for further processing. A streak camera readout camera should provide high sensitivity and low noise but also high coupling efficiency to the phosphor screen.

Readout Noise

Highly sensitive CCD and CMOS cameras are provided with a readout noise specification. The readout noise defines the temporal fluctuation of the output signal generated by the camera electronics. In order to relate this noise to the CCD characteristics, it is given in electrons. The readout noise should not be confused with the noise behavior of the detector.

Slow-Motion

Slow-motion describes the display of image sequences at a lower rate than the rate at which they were captured. This slow display allows fast processes to be analysed precisely. Optronis uses the term slow-motion for cameras of the CamRecord-Series that capture images at high rate and store them in real time. This storage feature is the key difference compared to the machine vision cameras of the CamPerform-Series.

Spatial Resolution

This describes the ability of an optical system to image or to reproduce small structures. The spatial resolution can be defined as limiting resolution or FWHM.

Streak Tube

The streak tube is the key component of a streak camera. This vacuum tube consists of a photocathode, a focusing system, deflection plates and a phosphor screen. Electrons generated by the photocathode are accelerated towards the phosphor screen. When passing the deflection plates, they can be deviated orthogonally to the propagation direction before they hit the phosphor screen. The image on the phosphor screen can be shifted proportionally to the applied deflection voltage. Streak tubes may be built with one pair of deflection plates or with two pairs that are oriented orthogonally with respect to each other.

Streak Unit

See sweep unit

Sweep Speed

A voltage ramp applied to the deflection plates results in a moving image on the streak tube phosphor screen. The speed of this movement is called sweep speed and is typically given in time units per length unit (e.g. ps/mm). This simplifies the conversion of position information into the time information of the measured signals.
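
A short sketch of this position-to-time conversion (sweep speed and screen position are assumed values):

sweep_speed_ps_per_mm = 10.0   # assumed selected sweep speed
position_mm = 3.2              # assumed distance from the sweep start position

time_ps = position_mm * sweep_speed_ps_per_mm
print(f"{position_mm} mm on the screen corresponds to {time_ps:.0f} ps")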

Sweep Speed Non-Linearity

The sweep speed of an image that moves over the streak tube phosphor screen may change slightly along the sweep direction. This sweep speed variation referred to its mean speed is called sweep speed non-linearity. The sweep speed non-linearity may depend on the selected sweep speed.

Sweep Unit

Part of a streak camera that provides the deflection voltages to drive the deflection plates of the streak tube. The sweep unit offers a range of selectable sweep speeds. The sweep unit might be integrated in the streak camera body or it might be a separate and exchangeable module. Sweep units are divided into two types according to the temporal behavior of the deflection voltage. Sweep units generating a linear voltage ramp after each trigger pulse are called triggered sweep units (TSU). If a high-frequency sinusoidal voltage is generated, the corresponding unit is called a synchroscan sweep unit (SSU).

Synchronization

Synchronization generally describes the temporal coincidence of two processes or signals. Related to streak cameras synchronization means the coincidence between the electron deflection and the optical signal that has to be measured. Related to high-speed cameras synchronization describes the capture of individual images controlled by an external signal.

Synchroscan

In synchroscan mode a sinusoidal sweep voltage is applied to the deflection plates of a streak tube. This mode allows the streak system to be operated at high deflection frequencies of 40 to 250 MHz. As many sweeps can be superimposed on the phosphor screen, this technique allows very faint signals to be measured.

Taper

This describes rigid bundles of light-guiding fibers that are fused together and have different active areas at the two ends. Tapers are typically used to guide light onto the small area of CCD or CMOS sensors.

Temporal Resolution

This describes the ability of a system to capture a short optical pulse and to indicate its actual duration. The temporal resolution is often described as pulse duration indicated by the system when a very short pulse is measured.

Timebase

Related to streak cameras, this describes the duration of the measurement. It depends on the adjusted sweep speed and the size of the phosphor screen area captured by the readout camera.
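
In other words, the timebase is the sweep speed multiplied by the readout length along the sweep direction (values assumed for illustration):

sweep_speed_ps_per_mm = 10.0   # assumed selected sweep speed
readout_length_mm = 18.0       # assumed readout-area size along the sweep direction

timebase_ps = sweep_speed_ps_per_mm * readout_length_mm
print(f"timebase = {timebase_ps:.0f} ps")   # 180 ps measurement window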

Trigger Mode

See triggered sweep

Triggered Sweep

In triggered sweep mode, the electrons emitted from the photocathode are swept over the phosphor screen only once per trigger pulse. This allows the trigger mode to be used for single-shot applications. The sweep is controlled by a ramp voltage. The sweep speed can be adjusted over a few orders of magnitude.

UV

This abbreviation stands for ultraviolet radiation and describes the non-visible part of light with wavelengths between about 10 nm and 400 nm.

Video Camera

This describes a camera that captures a sequence of images showing real objects. In contrast, streak cameras capture images that show not real objects but intensity distributions, in order to measure their temporal evolution.

Wavelength

Physical description for a property of light that is perceived as color.

X-ray

Electro-magnetic radiation with photon energies above about 100 eV.

YAG Laser

This describes a solid-state laser using an Yttrium Aluminum Garnet crystal as amplifying medium. The crystal can be doped with Neodymium (Nd) to obtain a Nd:YAG laser.

Zoom Lens

Object lens with variable focal length. Compared to an object lens with fixed focal length, it provides higher flexibility when object distance and field of view have to be selected. Imaging performance and lens aperture are typically lower than those of fixed focal length lenses.
