Making a piece of software that dumps camera frames from V4L2 into a file is not very difficult; that's only a few hundred lines of C code. Figuring out why the resulting pictures look cheap is a way harder challenge.

For a long time Megapixels had some simple calibrations for the black level (to make the shadows a bit darker) and the white level (to make the light parts not grey), and later, after a bit of documentation studying, I added calibration matrices all the way back in part 4 of the Megapixels blog series.

The color matrix that was added in Megapixels is a simple 3x3 matrix that converts the color response of the sensor in the PinePhone to calibrated values for the rest of the pipeline. Just a simple 3x3 matrix is not enough for a more detailed correction though. Luckily the calibration software I used produces calibration files that contain several correction curves for the camera, for example the HSV curve that changes the hue, saturation and brightness of specific colors.
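To illustrate what that matrix does, here is a minimal sketch. The matrix values below are made up for the example; the real values come from the calibration profile.

```python
import numpy as np

# Hypothetical 3x3 forward matrix mapping sensor RGB to calibrated RGB.
# The rows sum to 1.0 so a neutral grey stays neutral.
color_matrix = np.array([
    [ 1.384, -0.337, -0.047],
    [-0.218,  1.361, -0.143],
    [ 0.018, -0.481,  1.463],
])

def apply_color_matrix(rgb, matrix):
    """Convert one sensor RGB triple to calibrated RGB."""
    return matrix @ np.asarray(rgb, dtype=float)

# A neutral grey on the sensor should come out neutral again.
print(apply_color_matrix([0.5, 0.5, 0.5], color_matrix))
```

This kind of matrix can shift overall color rendering, but it applies the same correction at every brightness level, which is exactly why it can't fix brightness-dependent casts on its own.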

Even though this calibration data is applied by Megapixels I still had issues with color casts. Occasionally someone mentions to me how "filmic" or "vintage" the PinePhone pictures look. This is the opposite of what I'm trying to do with the picture processing. The vintage look appears because color casts that are not linear with brightness are very similar to how cheap or expired analog film rolls reproduce colors. So where is this issue coming from?

I've taken a closer look at the .dcp files produced by the calibration software. With a bit of Python code I extracted the linearization curve from such a file and plotted it. It turns out that the curve generated after calibration was perfectly linear. That makes a bit of sense, since this calibration software was never made to create profiles for completely raw sensor data; it was made to create small corrections for professional cameras that already produce nice-looking pictures. Looks like I have to produce this curve myself.
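The linearity check itself is trivial once the curve has been pulled out of the profile; a sketch, assuming the samples have already been dumped as (input, output) pairs in the 0..1 range:

```python
import numpy as np

# Hypothetical curve samples extracted from a .dcp profile:
# each row is (input value, output value).
curve = np.array([
    [0.00, 0.00],
    [0.25, 0.25],
    [0.50, 0.50],
    [0.75, 0.75],
    [1.00, 1.00],
])

def is_linear(curve, tol=1e-3):
    """True when every output equals its input, i.e. an identity curve."""
    return bool(np.all(np.abs(curve[:, 1] - curve[:, 0]) < tol))

print(is_linear(curve))
```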

Getting a sensor linearization curve

As my first target I looked into the Librem 5. Mainly because that's the phone that currently has the most battery charge. I had hoped there was some documentation about the sensor response curves in the datasheet for the sensor. It turns out that even getting a datasheet for this sensor is problematic. So the solution is to measure the sensor instead.

Measuring this is pretty hard though; the most important part of most solutions is having a calibrated reference. I've thought about figuring out how to calibrate a light to produce precise brightness dimming steps, measuring the curve of the light with a colorimeter to fix any color casts of the lamp. Another idea was taking pictures of a printed grayscale curve, but that has the issue that the light falling on the grayscale print needs to be perfectly even.

But after thinking about this in the background for some weeks I had a thought: instead of producing a perfect reference grayscale gradient, it's way easier to point the camera at a constant light source and then adjust the shutter speed of the camera to produce the various light levels. Instead of depending on a lot of external factors like calibrated lights, which can throw off the measurements massively, I only have to assume that the shutter speed setting in the sensor is accurate.

The reason I can assume this is accurate is because the shutter speed setting in these phone sensors is specified in "lines". These cameras don't have mechanical shutters; it's all done with an electronic shutter in the sensor. If the shutter is set to 2 lines, the line being read out by the sensor at that moment was cleared only 2 scanlines earlier. This is the "rolling shutter" effect. If the shutter is set to 4 lines instead, every line has exactly twice the amount of time to collect light after being reset. This should result in a pretty much perfectly linear way to control the amount of light for calibrating the response.
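The relationship the whole calibration relies on can be written down directly; a small sketch, assuming the sensor clears and reads lines at a constant rate:

```python
# Electronic rolling shutter: exposure is an integer number of line
# periods, so the light gathered scales linearly with the line count.
def expected_brightness(lines, full_scale_lines, full_scale_value):
    """Predicted (linear) channel value for a given shutter setting."""
    return full_scale_value * lines / full_scale_lines

# Assuming 3118 lines just reaches full scale, as on the Librem 5,
# half the lines should give exactly half the brightness:
print(expected_brightness(1559, 3118, 1.0))
```

Any deviation of the measured channel values from this straight line is exactly the non-linearity the calibration is trying to capture.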

In the case of the Librem 5 this value can be set from 2 to 3118 lines, where the maximum value means that all lines of the sensor have been reset by the time the first line is read out, giving the maximum light-gathering time.

With libmegapixels I have enough control over the camera to make a small C application that runs this calibration. It goes through these steps:

  1. Open the specified sensor and set the shutter to the maximum value.
  2. Start measuring the brightness of the 3 color channels and adjust the sensor gain so that, with the current lighting, the sensor is close to clipping. If the light source is still too bright on the lowest gain setting, the tool asks to lower the lamp brightness.
  3. Once the target maximum brightness has been hit, the tool lowers the shutter speed in regular steps and saves the brightness of the color channels at each point.
  4. The calibration data is then written to a CSV file.
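The steps above can be sketched roughly like this; the actual tool is written in C against libmegapixels, so set_shutter() and capture_mean_rgb() here are hypothetical stand-ins for the real camera calls:

```python
import csv

def run_calibration(set_shutter, capture_mean_rgb, max_lines, steps, path):
    """Step the shutter down in equal increments and log channel means.

    set_shutter(lines) and capture_mean_rgb() are placeholders for the
    real sensor control and frame measurement code.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lines", "r", "g", "b"])
        # Walk from the maximum shutter value down in equal steps,
        # never going below the sensor's minimum of 2 lines.
        for i in range(steps, 0, -1):
            lines = max(2, max_lines * i // steps)
            set_shutter(lines)
            r, g, b = capture_mean_rgb()
            writer.writerow([lines, r, g, b])
```

In practice a few frames should be skipped after each shutter change so the new exposure has actually taken effect before measuring.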

The process looks something like this:

This is a short test run where only 30 equally spaced points are measured. I did a longer run for the actual calibration with it set to 500 points instead, which takes about 8 minutes. This is a plot of the resulting data after scaling the curves to hit 1.0 at the maximum value:
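Scaling the curves is just a per-channel normalization; a sketch with made-up numbers in place of the measured CSV data:

```python
import numpy as np

# Hypothetical measured values for one channel; the real data comes
# from the CSV the calibration tool writes.
red = np.array([0.02, 0.10, 0.31, 0.55, 0.81])

def normalize(channel):
    """Scale a channel so its brightest measurement becomes 1.0."""
    return channel / channel.max()

print(normalize(red))
```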

The response of the sensor is not very linear at all... This means that if a picture is white-balanced on the midtones, the shadows will have a teal color cast due to the red channel having lower values there. If the white balance were instead corrected for the darker colors, the brighter colors would turn magenta.

The nice thing is that I don't have to deal with actually correcting this. This curve can just be loaded into the .dng file metadata and the processing software will apply this correction at the right step in the pipeline.


It was at this point that I figured out that the LinearizationTable DNG tag is a grayscale correction table, so it can't fix the color cast. At least it will improve the brightness inconsistencies between the various cameras.

With some scripting I've converted the measured response curve into a correction curve for the LinearizationTable and wrote that table into some of my test pictures with exiftool.
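The conversion boils down to inverting the measured response, so that every raw code maps back to the linear light level that produced it. A sketch, using a synthetic gamma curve in place of the real measured data:

```python
import numpy as np

# Stand-in for the measured response: relative exposure (shutter
# fraction) versus normalized channel value. The gamma shape here is
# made up; the real curve comes from the calibration CSV.
light = np.linspace(0.0, 1.0, 9)
value = light ** (1 / 2.2)

def linearization_table(light, value, bits=8):
    """Map each raw code back to the linear light level that produced
    it, scaled to 16-bit as the DNG LinearizationTable expects."""
    codes = np.linspace(0.0, 1.0, 2 ** bits)
    # np.interp with swapped axes inverts the monotonic response curve.
    linear = np.interp(codes, value, light)
    return np.round(linear * 65535).astype(np.uint16)

table = linearization_table(light, value)
```

The resulting values can then be written into the DNG, for example with exiftool's -LinearizationTable tag as a space-separated list.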

This is the result. The left image is a raw sensor dump from the Librem 5 rear camera without any corrections applied except the initial whitebalance pass. On the right is the exact same image pipeline, but with the LinearizationTable tag set in the DNG before feeding it to dcraw.

The annoying thing here is that neither picture looks correct. The first one has the extreme gamma curve applied by the sensor, so everything looks very bright. The processed picture is a bit on the dark side, but that might be because the auto-exposure was run on the first picture, causing underexposure in the corrected data.

The issue with that though is that some parts of the image data are already clipping while they shouldn't be, and exposing the picture brighter would only make that worse.

Maybe I have something very wrong here but at this point I'm also just guessing how this stuff is supposed to work. Documentation for this doesn't really exist. This is all the official documentation:

No, chapter 5 is not helpful.

Maybe it all works slightly better if the input raw data is not 8-bit, but that's a bunch more kernel issues to fix on the Librem 5 side.


So, not as much progress on this as I had hoped. I made some nice tools that produce data that makes pictures worse. Once the clipping in the highlights is fixed this might be very useful though, since practically everything in the DNG pipeline expects the input raw data to be linear, and it just isn't.

The sensor measuring tool is included in the libmegapixels codebase now though.

To fix auto-exposure I also need to figure out a way to apply this correction curve before running the AE algorithms on the live view. More engineering challenges as always :)
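Applying the curve on the live view could be as simple as a lookup table indexed by the raw pixel values; a sketch, assuming 8-bit raw frames and a 256-entry table (the gamma-shaped table here is a made-up placeholder):

```python
import numpy as np

def linearize_frame(frame, table):
    """Replace every raw code in the frame via the lookup table."""
    return table[frame]

def mean_luma(frame):
    """Crude AE statistic: mean value of the (linearized) frame."""
    return float(frame.mean())

# Hypothetical 256-entry correction table mapping 8-bit raw codes to
# 16-bit linear values.
table = (np.linspace(0.0, 1.0, 256) ** 2.2 * 65535).astype(np.uint16)

# A fake 8-bit raw frame; metering happens on the linearized data.
frame = np.full((4, 4), 128, dtype=np.uint8)
print(mean_luma(linearize_frame(frame, table)))
```

Since the table is tiny, this per-pixel lookup is cheap enough to run on every preview frame before the AE statistics are computed.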

Development Funding

The current development of Megapixels is funded by... you! The end users. It takes a lot of time and a lot of weird expertise to make Linux cameras work, and I wouldn't have been able to do it without your support.

The donations are being used for the occasional hardware required for Megapixels development (like a nice Standard Illuminant A lamp for calibration) and the various other FOSS applications I develop for the Linux ecosystem. Every single bit helps so I don't have to do all this work entirely for free.