How pixel binning makes your Galaxy S22 photos great

The smartphone industry continues its all-out war for camera supremacy, with brands trying to cram as many pixels into as many cameras as possible. From those paltry 2-megapixel macro and depth cameras to 108-megapixel snappers on phones like the Galaxy S22 Ultra, the numbers only seem to go up.

Samsung’s 200-megapixel camera sensor will soon take things to the next level, but at the heart of all that megapixel magic is a technology called pixel binning – and it’s key to a camera’s success. However, not all pixel binning is the same. Samsung uses 4-in-1 “tetra” binning on the Galaxy S22 and 9-in-1 “nona” binning on the Galaxy S22 Ultra. Does any of this make a difference? We found out.

Why pixel binning is necessary

What does pixel binning do? In short, it allows adjacent pixels to function as one large “super pixel,” collecting more light to deliver brighter photos with more accurate color and less noise. Before getting into the technical details, it’s important to understand why it’s needed in the first place.

Your phone’s camera sensor is the component that collects and processes all of the optical information coming through the lens in front of it. The sensor itself is essentially a grid of pixels – millions of them, in fact. Much like the cells of a plant absorb sunlight, each pixel absorbs light, which is then converted into an electrical signal to produce the image you see on your phone’s screen.

Samsung Galaxy S22 Ultra and S22+ side by side.
Digital Trends / Andy Zahn

But here’s the catch. The more pixels a sensor has, the higher the resolution of the image, which allows for more detail and sharpness. However, as more pixels are added, the sensor should also grow to accommodate them – if pixel size stayed the same, going from 10MP to 200MP would require a camera sensor 20 times larger. Since space inside a smartphone’s chassis is limited, that increase in size simply can’t happen.

To solve the problem, pixel size is reduced so that more of these photosensitive elements fit on the sensor without making it much bigger. However, the smaller a pixel, the less light it gathers, resulting in dull detail and color. This is where pixel binning comes to the rescue, algorithmically creating larger effective pixels that absorb more light – and that means higher-quality photos.
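To make the idea concrete, here’s a minimal Python sketch – our own illustration, not Samsung’s actual image pipeline – of how summing a 2×2 block of noisy pixel readings trades resolution for a cleaner, brighter signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 patch of a sensor in low light: a faint, flat true signal
# plus random noise on every pixel reading.
true_signal = np.full((8, 8), 10.0)
readout = true_signal + rng.normal(0, 5, size=(8, 8))

# Bin each 2x2 block of neighbors into one "super pixel" by summing their signals.
binned = readout.reshape(4, 2, 4, 2).sum(axis=(1, 3))

# The signal adds up 4x, while uncorrelated noise only grows by about 2x (sqrt of 4),
# so each super pixel ends up with roughly double the signal-to-noise ratio.
print("per-pixel noise:       ", round(readout.std(), 2))
print("super-pixel noise / 4x:", round(binned.std() / 4, 2))
```

The exact figures depend on the noise model, but the trend is the point: four small pixels acting together behave like one pixel with four times the light-gathering area.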

The benefits of pixel binning are easy to see

When the binning algorithm kicks in, a larger super pixel is created that absorbs more light. This is especially important in low-light environments, where the camera sensor needs to collect as much light as possible. In the case of tetra binning on the Galaxy S22, four neighboring pixels of the same color are merged into one, increasing light sensitivity fourfold.

Galaxy S22 50-megapixel low-light camera sample

As a result, pixel-binned photos turn out brighter, with higher sharpness and contrast. The image above was captured at the native 50MP resolution of the Galaxy S22’s main camera – notice the level of grain and the blurry edges. Below is a 12.5-megapixel pixel-binned shot of the same subject captured by the S22, showing well-defined lines, much better color reproduction, and a brighter overall exposure.

Galaxy S22 12.5-megapixel low-light camera sample

But the benefits of pixel binning aren’t limited to low-light photography. The technology also elevates HDR (high dynamic range) output: when shooting a high-contrast subject or scene, pixel binning again produces tangible benefits.

Each pixel group (depending on its color filter) can be given its own sensitivity and exposure time, which means the sensor collects light information in a segmented way and with greater precision. As a result, when HDR processing is applied to the data gathered by each pixel group, photos look punchy, with higher color accuracy and improved dynamic range.

Samsung’s different approaches to pixel binning

The scale of pixel binning depends on the sensor’s pixel count. For example, a 48MP camera combines four pixels into one artificially enlarged super pixel to deliver 12MP photos – that’s why brands market it as 4-in-1 binning. Similarly, 50MP and 64MP camera sensors produce 12.5MP and 16MP images, respectively. In Samsung marketing jargon, you may come across the name “Tetracell” for this process.
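The arithmetic behind those output resolutions is simple – the binned resolution is just the native pixel count divided by the group size, as this quick back-of-the-envelope sketch shows:

```python
# Binned output = native megapixels divided by the number of pixels per group.
for native_mp, group in [(48, 4), (50, 4), (64, 4)]:
    print(f"{native_mp}MP with {group}-in-1 binning -> {native_mp / group}MP")

# 48MP with 4-in-1 binning -> 12.0MP
# 50MP with 4-in-1 binning -> 12.5MP
# 64MP with 4-in-1 binning -> 16.0MP
```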

Tetracell pixel binning on a Samsung camera sensor.

Technically, pixels don’t physically move or combine – the merging happens at the software level using remosaic algorithms. The individual pixel layout is still the usual RGB color filter affair. Tetracell’s job is to group pixels that share the same color filter into a 2×2 array and merge their signals into one larger artificial pixel that collects more light. Take a look at the image above to see how it works.

The Galaxy S22’s 50MP camera uses 1-micron pixels, but when pixel binning kicks in, it merges a 2×2 array of adjacent 1-micron pixels. This gives us a larger super pixel that measures 2 microns across. That’s the tetra method. But with a 108MP camera on a phone like the Galaxy S22 Ultra, the pixel size gets even smaller.
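Here’s a rough sketch of that layout in Python – purely illustrative, since the real remosaic work happens inside the sensor and image signal processor. In a Tetracell-style array, each Bayer color is repeated over a 2×2 patch, so same-color neighbors can be merged directly:

```python
import numpy as np

# Standard 2x2 Bayer tile, with each color repeated over a 2x2 patch
# to mimic a Tetracell-style color filter layout.
bayer = np.array([["G", "R"],
                  ["B", "G"]])
tetracell = np.repeat(np.repeat(bayer, 2, axis=0), 2, axis=1)
print(tetracell)
# [['G' 'G' 'R' 'R']
#  ['G' 'G' 'R' 'R']
#  ['B' 'B' 'G' 'G']
#  ['B' 'B' 'G' 'G']]

# Merging each same-color 2x2 patch recreates a plain Bayer grid at a quarter
# of the resolution, and four 1-micron pixels acting together gather roughly
# as much light as one 2-micron pixel.
print("effective pixel size:", 1.0 * 2, "microns")
```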

Nonacell pixel binning on a Samsung camera sensor.

Instead of a 4-in-1 grouping, this 108MP sensor relies on what Samsung calls “Nonacell” technology, which combines nine neighboring pixels into one. Merging a 3×3 pixel array creates a larger super pixel measuring 2.4 microns. The resolution drops from the native 108MP to 12MP, but the photos turn out brighter with better color accuracy. This is the nona pixel binning method.
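The numbers work the same way as with tetra binning, just with a 3×3 group – a quick sketch of the arithmetic, not of the sensor’s actual processing:

```python
# Nonacell-style numbers: 3x3 groups of 0.8-micron pixels.
native_mp, pixel_um, group = 108, 0.8, 3

print("binned resolution:   ", native_mp / group**2, "MP")             # 108 / 9 = 12.0
print("effective pixel size:", round(pixel_um * group, 1), "microns")  # 0.8 * 3 = 2.4
```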

Full-resolution comparison between the Galaxy S22 Ultra and the standard Galaxy S22
A cropped segment of a 108MP image taken by a Samsung Galaxy S22 Ultra (left) compared to a 50MP image taken by the Galaxy S22.

As mentioned above, smaller pixels struggle to collect light, so they lose detail in photos. The image above left is a segment of a 108MP full-resolution shot taken by the Galaxy S22 Ultra’s main camera sensor, which uses smaller 0.8-micron pixels. On the right is a cropped segment from a 50MP photo taken by the Galaxy S22’s main camera, which uses larger 1-micron pixels. Thanks to those larger pixels, the Galaxy S22’s camera sensor collects more light, and as a result you can see more detail on the leather strap, with improved sharpness and much better exposure.

However, when pixel binning kicks in, the Galaxy S22 Ultra’s camera sensor creates a larger 2.4-micron super pixel that collects more light than the Galaxy S22’s main camera, which creates a smaller 2-micron super pixel. Unsurprisingly, the results are reversed.

Galaxy S22 Ultra vs. Galaxy S22 pixel-binned Night mode comparison.
A 9-in-1 pixel-binned Night mode photo taken by a Samsung Galaxy S22 Ultra (right) compared against a 4-in-1 pixel-binned photo taken by a Galaxy S22.

As you can see from the image above, the Galaxy S22 Ultra’s larger super pixels provide better subject separation, tighter sharpness, more surface detail, and more accurate color. But pixel binning isn’t just about bringing out low-light detail – it also plays a huge role in color reproduction, dynamic range management, and other crucial parameters.

A cropped segment of a 50MP image taken by a Samsung Galaxy S22 (left) versus a 108MP image taken by the Galaxy S22 Ultra.

In the image above left, the Galaxy S22 does a much better job of subject exposure, depth estimation, and color reproduction in its 50MP full-resolution shot compared to the 108MP shot of the same scene from the Galaxy S22 Ultra. The smaller pixels of the Galaxy S22 Ultra’s main camera result in washed-out colors on the buildings and an overall less punchy profile.

Daylight pixel-binned comparison: standard Galaxy S22 vs. Galaxy S22 Ultra
A daylight pixel-binned sample from a Samsung Galaxy S22 (left) compared to an image taken by a Galaxy S22 Ultra.

Just like the low-light scenario, pixel binning again highlights the difference and reverses the results. Thanks to the larger super pixels created by the Galaxy S22 Ultra’s camera sensor, the image on the right above captures the grooves of the brick more accurately, and the colors turn out closer to reality than in the photo taken by the vanilla Galaxy S22. It should be emphasized, however, that pixel binning isn’t the only factor determining image quality – much depends on the sensor, the underlying algorithms, and the aperture, among other factors.

The future of pixel binning on smartphones

With no end to the pixel wars in sight, the next evolution is 200MP camera sensors. In fact, Motorola is rumored to be releasing the first phone with such powerful imaging hardware. In this case, remosaic algorithms will combine no fewer than 16 pixels into a single large unit. Take Samsung’s own 200MP ISOCELL HP1 sensor, for example, which introduces a new hybrid form of pixel binning.

4×4 pixel binning on the Samsung ISOCELL HP1 camera sensor.

Depending on the lighting conditions, it performs a hybrid 4×4 pixel binning process that takes place in two stages. First, the sensor performs 4-in-1 binning on a 2×2 array of 0.64-micron pixels. This creates a larger super pixel measuring 1.28 microns and produces photos at 50-megapixel resolution. Next, the sensor performs another round of 4-in-1 binning on a 2×2 array of those 1.28-micron pixels, creating an even larger super pixel that measures 2.56 microns. At the end of this process, the final image resolution drops to a manageable 12.5 megapixels.
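Here’s a short sketch of those two stages – the megapixel and pixel-size figures come from Samsung’s published HP1 specs, but the code itself is just our illustration of the arithmetic:

```python
# Two-stage hybrid binning on a 200MP sensor with 0.64-micron pixels.
resolution_mp, pitch_um = 200, 0.64

for stage in (1, 2):
    resolution_mp /= 4   # each 2x2 merge quarters the pixel count
    pitch_um *= 2        # and doubles the effective pixel size
    print(f"stage {stage}: {resolution_mp}MP at {pitch_um:.2f} microns")

# stage 1: 50.0MP at 1.28 microns
# stage 2: 12.5MP at 2.56 microns
```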

Therein lies the reason pixel binning is so necessary. As smartphone camera sensors gain more and more pixels, quality pixel binning becomes all the more important – and it’s a technology that is constantly evolving. Whether it’s tetra, nona, or the hybrid binning mentioned above, companies are still figuring out which methods work best for different cameras.
