How to choose a vision sensor?
Machine vision is used in more and more contemporary applications, and knowing how to choose a machine vision sensor is a skill worth learning. This article takes a closer look at how to make that choice.
The camera is the eyes of the machine vision system, and the heart of the camera is the image sensor. Sensor selection depends on the accuracy, throughput, sensitivity, and cost requirements of the machine vision system, and on a good understanding of the application. A basic grasp of a sensor's main properties helps developers quickly narrow their search to the right part.
Most users of machine vision systems recognize that the camera is a key element of the system, often thinking of it as the "chip" of the vision system. The camera itself is a complex system: it includes the lens, signal processor, communication interface, and, at its core, the device that converts photons into electrons: the image sensor. The lens and other components support the camera's function, but the sensor ultimately sets the upper limit on the camera's performance.
Much of the discussion in the industry has focused on process technology and the relative merits of CMOS and CCD sensors. Both technologies have their advantages and disadvantages, and sensors built on each process perform differently. What the end user cares about is not how the sensor is made, but how it will perform in the end application.
In a given application, three key elements determine sensor selection: dynamic range, speed, and responsivity. Dynamic range determines the quality of the image the system can capture, that is, its ability to capture detail. Sensor speed refers to how many images the sensor can produce per second and, correspondingly, how much image data the system can receive. Responsivity refers to how efficiently the sensor converts photons into electrons; it determines how much light the system needs to capture a useful image. A sensor's technology and design together determine these characteristics, so system developers must have their own metrics when selecting sensors, and a detailed study of these characteristics helps in making sound judgments.
Correct understanding of dynamic range
Because machine vision systems are digital, the sensor's dynamic range is its most confusing and misunderstood characteristic. The dynamic range of an image consists of two parts: the exposure range (the ratio of brightest to darkest usable light levels) over which the sensor can work, and the number of bits with which the sensor can digitize the pixel signal level. These two parts are usually closely related.
The exposure dynamic range represents the range of brightness levels over which the sensor can function properly. Electrons are generated when photons strike the active pixel area of the image sensor; the sensor captures and stores them for reading by the system. The more photons that hit the active area, the more electrons are produced, and the longer the interval between readings, the more electrons are stored. One parameter that bounds the exposure dynamic range is the exposure required to fill the storage wells. The semiconductor process and circuit design used to make the sensor together determine the volume, or depth, of the wells.
Electronic noise sets the minimum exposure level at which the sensor can work: the image sensor generates electrons thermally (dark current) even when no photons hit the active pixel area. To produce a recognizable signal, enough photons must strike the active pixel area that the storage well holds more electrons than dark-current noise alone would produce. The minimum useful exposure is therefore the one that generates at least as many photoelectrons as noise electrons; the sensor produces useful information only above this noise-equivalent exposure level.
The exposure dynamic range of a sensor is a function of its physical and circuit design, while the digital dynamic range is a function of its circuit design alone. The digital dynamic range of an image sensor only indicates the number of exposure levels it can report to the vision system: an 8-bit sensor has 256 gray levels, a 10-bit sensor has 1024, and so on. The number of bits does not necessarily reflect the full exposure range the sensor can respond to, but the two usually correspond.
A signal smaller than the dark-current noise level yields no useful information, and similarly no additional information is gained by digitizing values above the sensor's maximum signal level. In practice, a sensor is designed so that the smallest digital step matches the dark-current noise level, with enough steps to reach the saturation signal level. Designed this way, the digital dynamic range and the exposure dynamic range describe the same thing: the ratio of the saturation exposure to the noise-equivalent exposure.
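The relationship above can be sketched numerically: dynamic range is the ratio of the full-well (saturation) signal to the noise floor, and the bit depth needed to digitize it follows directly. The figures used below (a 10,000-electron well and a 10-electron noise floor) are illustrative assumptions, not data for any particular sensor.

```python
import math

def dynamic_range(full_well_electrons, noise_electrons):
    ratio = full_well_electrons / noise_electrons   # saturation / noise floor
    db = 20 * math.log10(ratio)                     # same ratio in decibels
    bits = math.log2(ratio)                         # bits needed to digitize it
    return ratio, db, bits

# Hypothetical sensor: 10,000-electron wells, 10-electron noise floor.
ratio, db, bits = dynamic_range(10_000, 10)
print(f"{ratio:.0f}:1 = {db:.0f} dB, needs ~{bits:.0f} bits")
```

A 1000:1 ratio works out to 60 dB, or roughly 10 bits, which is why bit depth and exposure range usually correspond in a well-designed sensor.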
Interactions determine trade-offs
The dynamic range of the sensor determines, to a large extent, the quality of the image the machine vision system produces: the higher the number of bits, the finer the detail the system can resolve. The growing demand for lower dark-current noise and higher precision makes such sensors increasingly expensive. Not all applications require finely detailed images, however, so designers offer sensors with different dynamic ranges. For parcel sorting or electronics production inspection, for example, 8 bits of dynamic range works effectively, while medical imaging and aerial reconnaissance may require 14 bits.
Application requirements also drive the sensor's second key characteristic: speed
Speed is a more intuitive characteristic than dynamic range: it is simply a measure of how fast the sensor can acquire and transmit images to the system. Sensor speed also has two aspects: the frame rate, which reflects the time the sensor needs to transmit pixel data to the system, and the exposure time the sensor needs to capture a useful image. A frame can never be shorter than its exposure, so frame rate serves as a general measure of sensor speed.
In process inspection applications, the speed of the sensor determines the throughput of the system. If each image represents one part under inspection, the system can inspect no more parts per second than the sensor can deliver frames per second. When the imaged object is in motion, a short exposure is also required to prevent image blur. High-speed sensors are therefore needed both for high-throughput inspection systems and for imaging fast-moving objects.
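The throughput bound described above can be sketched in a few lines: one part per frame, with the frame period never shorter than the exposure. The 100 fps and 5 ms figures are illustrative assumptions.

```python
def max_parts_per_second(frame_rate_fps, exposure_time_s):
    # A frame can never be shorter than its exposure, so throughput is
    # limited by whichever is slower: sensor readout or exposure.
    exposure_limited_fps = 1.0 / exposure_time_s
    return min(frame_rate_fps, exposure_limited_fps)

# Hypothetical 100 fps sensor with a 5 ms exposure: the exposure alone
# would allow 200 fps, so readout (100 fps) is the bottleneck.
print(max_parts_per_second(100, 0.005))  # 100
```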
Speed and dynamic range are interrelated: to deliver an image quickly, the sensor must digitize every pixel quickly, which means the analog-to-digital converter must settle to a stable output fast.
Physics and circuit design force a trade-off between speed and dynamic range. The faster a circuit runs, the more heat it generates, and a sensor's dark-current noise increases with temperature. A high-speed sensor is therefore noisier than a slow one and offers lower dynamic range.
Sensor speed is also related to its third key characteristic: responsivity
The higher the frame rate an application requires, the less time is available for exposure. To shorten the exposure, designers must either increase the brightness of the illumination or choose a more responsive sensor.
Responsivity is the signal voltage (V) produced under a given exposure. In an image sensor, three factors control responsivity. The first is quantum efficiency: the number of electrons produced per photon. The second is the capacitance (C) of the sensor's output circuit that stores the charge (q); the signal voltage follows V = q/C. The third is the gain of the sensor's output amplifier. Gain alone, however, does not improve usable responsivity, because it amplifies the noise by the same factor as the signal.
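The chain above can be sketched as a short calculation: photons become electrons via quantum efficiency, electrons become charge, and charge becomes voltage through V = q/C, scaled by amplifier gain. The numbers used below (10,000 photons, QE of 0.5, a 25 fF capacitance) are illustrative assumptions, not figures for any particular sensor.

```python
E_CHARGE = 1.602e-19  # charge of one electron, in coulombs

def signal_voltage(photons, quantum_efficiency, capacitance_f, gain=1.0):
    electrons = photons * quantum_efficiency   # photoelectrons generated
    q = electrons * E_CHARGE                   # stored charge, coulombs
    return gain * q / capacitance_f            # V = q/C, times amplifier gain

v = signal_voltage(10_000, 0.5, 25e-15)
print(f"signal: {v * 1e3:.2f} mV")
```

Note that raising `gain` scales the output voltage, but since it scales any noise voltage identically, it does not improve the signal-to-noise ratio.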
Developers must trade off the three key elements of dynamic range, speed, and responsivity when choosing sensors for their machine vision systems. High speeds and low light levels increase noise and reduce dynamic range. A high demand for imaging detail, which implies smaller and therefore less responsive pixels, requires increased light intensity to compensate. The physics of the sensor inevitably forces a balance among these three key elements.
The three key elements above are not the only considerations in sensor selection. Two other important factors, the resolution of the sensor and its pixel pitch, also affect image quality and interact with the three key elements.
Resolution refers to how many pixels make up an image; it is determined by the sensor's size and pixel pitch. The resolution an application requires depends on several related factors: the field of view, the working distance, the sensor size and pixel pitch, and the number of pixels the system needs to capture spatial detail. The higher the resolution of the sensor, the faster its clock must run to achieve a desired frame rate, so resolution has a very large effect on speed.
The pixel pitch defines the size of a single pixel, and together with the sensor size it determines the sensor's resolution. Since sensors are usually available in only a limited range of sizes, the smaller the pixel pitch, the higher the resolution. Pixel pitch also affects responsivity: the smaller the pitch, the smaller the active area over which each pixel can collect photons.
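The size-versus-pitch relationship can be sketched as follows. The 8.8 x 6.6 mm sensor dimensions and the two pitch values are illustrative assumptions chosen to show the trade-off, not specifications of a real part.

```python
def sensor_resolution(width_mm, height_mm, pixel_pitch_um):
    # Number of pixels that fit across a sensor of fixed size at a given pitch.
    px_w = round(width_mm * 1000 / pixel_pitch_um)
    px_h = round(height_mm * 1000 / pixel_pitch_um)
    return px_w, px_h

# Hypothetical 8.8 x 6.6 mm sensor: halving the pitch doubles the pixel
# count on each axis but quarters each pixel's light-collecting area.
print(sensor_resolution(8.8, 6.6, 5.5))    # (1600, 1200)
print(sensor_resolution(8.8, 6.6, 2.75))   # (3200, 2400)
```

This is the interaction the text describes: within a fixed sensor size, finer pitch buys resolution at the cost of per-pixel responsivity.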
Ultimately, all of these sensor elements interact with the rest of the camera. The resolution of a camera lens is measured by its modulation transfer function (MTF), and the lens's resolution must match the sensor's pixel pitch for ideal image quality. For example, a lens that can only resolve 5-micron features, mounted on a sensor with a 3-micron pixel pitch, will render a black-and-white line pattern as gray even though the sensor's resolution could capture it. When purchasing a sensor, therefore, the other system components must be matched to it.
The most important point is to fully understand the application's requirements for sensor dynamic range, speed, and responsivity. These requirements determine what performance is acceptable and, ultimately, the requirements on the system's other components.
What are the differences between industrial cameras and ordinary cameras? Here are the main ones:
(1) The shutter time of industrial cameras is very short, allowing them to capture fast-moving objects.
For example, an industrial camera photographing a computer's cooling fan spinning at high speed can clearly resolve the manufacturer's logo on the fan blades, an effect that is impossible with an ordinary camera. The difference lies mainly in exposure time: industrial cameras can achieve exposures as short as 1/10,000 or even 1/100,000 of a second.
(2) The image sensor of an industrial camera uses progressive scanning, while the image sensor of an ordinary camera is interlaced, or may even skip scan lines.
Progressive-scan image sensors are difficult to produce, with low yields and small shipment volumes; only a few companies in the world can supply them, and they are expensive. A megapixel-class progressive-scan CCD costs anywhere from several thousand to tens of thousands of yuan. Only with a progressive-scan image sensor is it possible to capture fast-moving objects clearly, without smear.
(3) The shooting speed of industrial cameras is much higher than that of ordinary cameras.
Industrial cameras can shoot tens to one hundred frames per second, and high-speed industrial cameras can exceed a thousand, while ordinary cameras can shoot only two or three frames per second.
(4) Industrial cameras output raw data, and their spectral range is often relatively wide, which suits high-quality image-processing algorithms. Pictures taken by ordinary cameras, by contrast, cover a spectral range suited only to human vision.
(5) Industrial cameras can work continuously for long periods, for example 24 hours a day for six months, which consumer cameras cannot.
(6) Industrial cameras can work in relatively harsh environments, such as low temperatures. Ordinary digital cameras cannot.
(7) Industrial cameras are more expensive than ordinary cameras.
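The exposure-time difference in point (1) can be sketched numerically: the blur an object leaves on the image equals its speed multiplied by the exposure time, divided by the size one pixel covers on the object. The figures below (a blade tip moving at 10 m/s, imaged at 0.1 mm per pixel) are illustrative assumptions, not measurements of a real setup.

```python
def motion_blur_pixels(object_speed_mm_s, exposure_s, mm_per_pixel):
    # Distance the object travels during the exposure, expressed in pixels.
    return object_speed_mm_s * exposure_s / mm_per_pixel

# Hypothetical fan-blade tip at 10 m/s (10,000 mm/s), 0.1 mm per pixel:
print(motion_blur_pixels(10_000, 1 / 100_000, 0.1))  # industrial: ~1 px
print(motion_blur_pixels(10_000, 1 / 100, 0.1))      # ordinary: ~1000 px
```

At a 1/100,000 s exposure the blur stays near one pixel, so the logo on the blade remains readable; at an ordinary camera's 1/100 s it smears across roughly a thousand pixels.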
Copyright © 2020 Shenzhen JeeNew Intelligent Equipment Co., Ltd. All rights reserved. 粤ICP备17030005号-1 粤公网安备 44030702001749号