In image sensors, signal formation does not end when photons generate photoelectrons. After exposure, the collected charge must still be read out, measured, and converted into digital values before it can appear as image data.
This digitization process plays an important role in how scientific cameras represent signal. It affects not only how image intensity is expressed numerically, but also how performance parameters such as bit depth, readout speed, and data interpretation should be understood.
This article explains how sensor signal moves from collected charge to digital output, and why that process matters in scientific imaging.
What Happens After Photoelectrons Are Collected?
At the end of an exposure, each pixel contains collected charge generated by incoming light. At this stage, the signal still exists as stored photoelectrons rather than as digital image data.
How that charge enters the readout chain depends on sensor architecture. In rolling shutter designs, the signal is typically read from the pixel well. In global shutter designs, it may first be transferred to a dedicated storage node before readout begins. In either case, the important point is that the signal has been collected, but it has not yet been measured or digitized.
This distinction matters because image formation in a scientific camera involves more than photon detection alone. After charge collection, the signal must still pass through several stages of readout and conversion before it becomes the digital gray level value seen by the user.
How Is Sensor Signal Read Out and Digitized?
Once exposure is complete, the collected charge is transferred into the readout chain row by row. The goal of this process is to convert the stored signal into a stable digital value that can be used to form the image.
Although this conversion happens very quickly inside the camera, it involves several distinct steps. The collected charge is first converted into a measurable voltage, then buffered to preserve its value during readout, and finally digitized by the analog-to-digital converter (ADC).
Figure 1: Pixel exposure and measurement process
The four stages of typical signal exposure and measurement
From Charge to Voltage
The collected signal is not read out directly as an electron count. Instead, the charge is first transferred onto a small capacitance, often called the sense node, and the voltage across that capacitance is then measured.
This step is essential because the rest of the sensor electronics works by measuring voltage rather than directly counting photoelectrons. In this way, the stored charge is converted into an analog electrical representation of the signal.
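The relation behind this step is simply V = Q / C: the collected charge divided by the sense-node capacitance gives the measurable voltage. The sketch below illustrates this with a hypothetical 1.6 fF capacitance and an arbitrary electron count; these numbers are assumptions for illustration, not specifications of any particular sensor.

```python
# Illustrative charge-to-voltage conversion at the sense node (V = Q / C).
# The capacitance and electron count are hypothetical example values.

E_CHARGE = 1.602e-19  # elementary charge in coulombs

def electrons_to_voltage(n_electrons: int, capacitance_farads: float) -> float:
    """Convert a collected electron count to a sense-node voltage via V = Q / C."""
    return n_electrons * E_CHARGE / capacitance_farads

# Example: 1000 photoelectrons on an assumed 1.6 fF sense-node capacitance
v = electrons_to_voltage(1000, 1.6e-15)
print(f"{v * 1e3:.3f} mV")  # about 100 mV, i.e. roughly 100 µV per electron
```

With these example values the conversion gain works out to about 100 µV per electron, which shows why even a few collected electrons produce a voltage that is small but measurable.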
Why the Pixel Amplifier Is Needed
The voltage generated by a small number of collected electrons can be very weak. Before that signal can be measured reliably, it must be buffered so that its value is preserved during readout.
This is the role of the pixel amplifier. Often implemented as a source follower, the amplifier helps isolate the signal from the rest of the readout circuitry and maintain its integrity during measurement. It does not create the signal itself, but it helps ensure that the signal can be read out accurately.
Where the ADC Converts Signal into Digital Data
The actual digitization takes place in the analog-to-digital converter, or ADC. At this stage, the analog voltage is measured and assigned a digital value.
That digital output becomes the pixel’s gray level intensity in the final image. In CMOS architectures, banks of column-parallel ADCs operate simultaneously, allowing every pixel in a row to be digitized at once. This parallel readout is one reason CMOS cameras can achieve high-speed digitization and efficient signal output.
What Does the Digital Output Represent?
The final digital output does not represent light directly. Instead, it represents the measured signal level after the collected charge has passed through the full readout and digitization chain.
By the time the signal appears as image data, it has already undergone several stages of conversion: photoelectrons were collected, transformed into a measurable voltage, buffered during readout, and then assigned a digital value by the ADC. The resulting number is the pixel’s digital gray level intensity.
This is important because image data should not be understood as a direct count of photons. What the user ultimately sees and processes is a digitized representation of the sensor signal. That representation reflects both the collected charge and the way the camera converts that signal into numerical output.
Understanding this helps explain why digital image values are meaningful, but also why they depend on more than exposure alone. They are the result of the full signal chain, not just photon detection at the sensor surface.
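The chain described above can be sketched end to end: photoelectrons become a sense-node voltage, the buffer ideally preserves that voltage, and the ADC quantizes it into a gray level. All numeric parameters below (capacitance, ADC reference voltage, bit depth) are illustrative assumptions, and the buffer is modeled as ideal unity gain.

```python
# Minimal sketch of the readout chain described above:
# photoelectrons -> sense-node voltage -> buffered voltage -> ADC code.
# All numeric parameters are illustrative assumptions, not real camera specs.

E_CHARGE = 1.602e-19  # coulombs per electron

def readout_chain(n_electrons: int,
                  capacitance_f: float = 1.6e-15,  # assumed sense-node capacitance
                  v_ref: float = 1.0,              # assumed ADC full-scale voltage
                  bits: int = 12) -> int:
    """Return the digital gray level produced for a given electron count."""
    voltage = n_electrons * E_CHARGE / capacitance_f   # charge-to-voltage conversion
    buffered = voltage                                 # ideal source follower (gain ~ 1)
    levels = 2 ** bits
    code = int(buffered / v_ref * (levels - 1))        # quantization by the ADC
    return min(code, levels - 1)                       # clip at full scale

print(readout_chain(1000))  # gray level for 1000 collected electrons
```

Note that the output is a quantized code, not an electron count: the same gray level can correspond to a small range of electron counts, which is exactly why the digital value is a representation of the signal rather than the signal itself.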
How Does Digitization Affect Camera Performance?
Signal digitization does more than turn analog sensor data into a digital image. It also affects how precisely the signal can be represented, how quickly it can be read out, and how reliably image data can be interpreted in scientific applications.
Bit Depth and Signal Representation
Bit depth determines how many discrete digital levels are available to represent the measured signal. A higher bit depth allows the output to describe smaller differences in signal intensity with finer numerical resolution.
This does not create additional photons or improve the sensor’s physical light collection, but it does affect how accurately the collected signal can be expressed in digital form. In scientific imaging, this is especially important when small intensity differences need to be distinguished or measured.
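One way to see the effect of bit depth is to compare how finely different bit depths divide the same analog range. The 1.0 V full scale below is an assumed example value.

```python
# Illustrative comparison of quantization granularity at different bit depths.
# Assumes a hypothetical 1.0 V analog full scale.

def quantization_step_mv(full_scale_v: float, bits: int) -> float:
    """Smallest voltage difference one digital level can represent, in mV."""
    return full_scale_v / (2 ** bits) * 1e3

for bits in (8, 12, 16):
    step = quantization_step_mv(1.0, bits)
    print(f"{bits:>2}-bit: {2 ** bits:>6} levels, step = {step:.4f} mV")
```

Under this assumption, moving from 8-bit to 16-bit output shrinks the smallest representable step from about 3.9 mV to about 0.015 mV, which is why higher bit depth can preserve small intensity differences that coarser quantization would merge into one level.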
Readout Speed and Frame Rate
Digitization is also part of the camera’s timing performance. Because analog-to-digital conversion is one of the most time-sensitive stages in the readout chain, it can strongly influence overall readout speed and frame rate.
In CMOS architectures, column-parallel ADCs digitize an entire row of pixels at once rather than one pixel at a time. This parallel operation is one reason CMOS cameras can support efficient high-speed readout.
Dynamic Range and Quantitative Interpretation
Dynamic range depends on more than digitization alone, but digitization still plays an important role in how signal levels are represented across the image. The analog signal must be converted with sufficient precision so that useful intensity differences are preserved in digital form.
This is especially important in quantitative imaging, where image values are used not only for visualization, but also for comparing signal magnitude across pixels, regions, or time points. In that context, digitization affects how faithfully the final digital output reflects the measured sensor signal.
Why Does Signal Digitization Matter in Scientific Imaging?
In scientific imaging, signal is often limited, and the numerical output of the camera is used not only for visualization, but also for analysis and comparison. This makes signal digitization more than a technical back-end process.
● Weak signals must be preserved through the full readout chain: In low-light and photon-limited imaging, the usefulness of the final image depends on how well the collected signal is maintained and represented during digitization.
● Digital values support measurement, not just display: In many scientific workflows, such as calcium imaging, pixel intensities are interpreted as meaningful data. This makes the reliability of the digitization process important for quantitative analysis.
● Camera performance depends on more than photon collection alone: Even when light is successfully detected at the pixel level, the signal must still be converted into digital form in a way that preserves useful intensity differences.
How Should You Read These Concepts in a Camera Datasheet?
Understanding signal digitization helps turn camera specifications into a more complete picture of sensor behavior.
● Bit depth indicates how finely the signal can be represented digitally: It describes the number of available output levels, not the amount of light collected by the sensor.
● Readout speed depends partly on how quickly the signal can be digitized: ADC architecture and parallel readout can influence how efficiently image data is produced.
● Digital output values are the result of a full signal chain: They reflect not only exposure and charge collection, but also voltage conversion, buffering, and analog-to-digital conversion.
● Performance specifications should be read in context: Understanding digitization helps users interpret image data, compare cameras more accurately, and better understand how numerical image values are formed.
Conclusion
Signal digitization is the process that turns collected charge into usable digital image data. After exposure, the signal must move through several stages, including charge storage, voltage conversion, buffering, and ADC measurement, before it becomes the gray level value seen in the final image.
Understanding this chain helps explain how scientific cameras represent signal and why digitization matters for image interpretation, readout speed, and quantitative imaging performance.
Tucsen Photonics Co., Ltd. All rights reserved. When citing, please acknowledge the source: www.tucsen.com
2026/03/27