<p>Megapixels - BrixIT Blog: Megapixels and related tools (<a href="https://blog.brixit.nl/tag/megapixels/page/1">https://blog.brixit.nl/tag/megapixels/page/1</a>)</p> <h1>Megapixels contributions</h1> <p>Martijn Braam, Sat, 11 May 2024, <a href="https://blog.brixit.nl/megapixels-contributions/">https://blog.brixit.nl/megapixels-contributions/</a></p> <p>I've been working on the code that has become libmegapixels for a bit more than a year now. It has taken several thrown-away codebases to come to a general architecture I was happy with, and it has been quite a task to split off the media pipeline tasks from the original Megapixels codebase.</p> <p>After staring at this code for many months I thought I'd made libmegapixels a nearly perfect little library. That's the problem with working on a codebase without anyone else looking at it.</p> <p>About two weeks ago libmegapixels and the general Megapixels 2.x codebase had their first contact with external contributors and that has put a spotlight on all the low hanging fruit in documentation and codebase issues. A great example of that is this commit:</p> <pre><code>diff --git a/src/parse.c b/src/parse.c
index bfea3ec..93072d0 100644
--- a/src/parse.c
+++ b/src/parse.c
@@ -403,6 +403,8 @@ libmegapixels_load_file(libmegapixels_devconfig *config, const char *file)
     config_init(&amp;cfg);
     if (!config_read_file(&amp;cfg, file)) {
         fprintf(stderr, "Could not read %s\n", file);
+        fprintf(stderr, "%s:%d - %s\n",
+                config_error_file(&amp;cfg), config_error_line(&amp;cfg), config_error_text(&amp;cfg));
         config_destroy(&amp;cfg);
         return 0;
     }
</code></pre> <p>A simple patch that massively improves the usability for people writing libmegapixels config files: actually printing the parsing errors from libconfig when a file could not be read. Because I generally run libmegapixels through the IDE, with all the syntax highlighting for these files set up, I simply hadn't triggered this codepath enough to actually implement this part.</p> <p>These last two weeks there have also been some significantly more complicated fixes, like tracing segfault issues in Megapixels 2.x, which helps a lot with getting the new codebase ready for daily use, and figuring out some API issues in libmegapixels such as camera indexes not being set correctly in the returned data. The config files have also been updated to work with the latest versions of the PinePhone Pro kernel instead of the year-old build I've been developing against.</p> <h2>Video recording</h2> <p>I've been saying for a long time that video recording on the PinePhone won't be possible, especially not at the level of support on Android and iOS, due to hardware limitations. The only real hope for proper video recording would be that someone gets H.264 hardware encoding to work on the A64 processor.</p> <p>I can happily report that I was wrong.
Pavel Machek has made significant progress on PinePhone video recording with a few large contributions that implement the UI bits for it, plus a new, second postprocessing pipeline for running external video encoding scripts, just like Megapixels already lets you write your own custom scripts for processing the raw pictures into JPEGs.</p> <p>Video recording is a complicated issue though, mainly due to the sheer amount of data that needs to be processed to make it work smoothly. At the maximum resolution of the sensor in the PinePhone the framerate isn't high enough for recording normal videos (unless you enjoy 15fps video files), but at lower resolutions the pipeline can run at normal video framerates. The maximum framerates from the sensor for this are 1080p at 30fps and 720p at 60fps.</p> <p>For 720p60 the bandwidth of the raw sensor data is 442 Mbps and for 1080p30 it is 497 Mbps (roughly 1280×720 pixels × 8 bits × 60 fps and 1920×1080 pixels × 8 bits × 30 fps). This is a third of the expected bandwidth because the raw sensor data is essentially a greyscale image where every pixel has a different color filter in front. This is too much data to write out to the eMMC or SD card to process later, and the PinePhone already struggles to encode 720p30 video live without even running a desktop environment.</p> <p>There are two implementations of video recording right now. The first saves the raw DNG frames to a tmpfs, since RAM is the only thing that can keep up with the data rate. This should give you roughly 30 seconds of recording, and after that it will take a while to actually encode the video.</p> <p>Pavel has posted an <a href="https://social.kernel.org/notice/AhFxeCMdslrRIhQjE8">example of this video recording</a> on his mastodon.</p> <p>The second way is putting the sensor in a YUV mode instead of raw data. This gives worse picture quality on the sensor in the PinePhone, but the data format matches the way frames are stored in video files much more closely, so the expensive debayer step can be skipped while video recording. This, together with encoding H.264 video with the ultrafast preset, should make it just about possible to record real-time encoded video on the PinePhone.</p> <h2>Many thanks</h2> <p>It's great to see contributions to Megapixels 2 and libmegapixels. It's a big step towards getting the Megapixels 2.x codebase production ready and it's simply a lot more fun to work on a project together with other people.</p> <p>It's great to have contributors working on the UI code, the camera support fixes for devices and the many bugfixes to the internals. It's also very helpful to actually have issues created by people building and testing the code on other distributions. This has already ironed out a few issues in the build system.</p> <p>There have also been some nice contributions to the Megapixels 1.x codebase, all of which should by now already have been merged into your favorite PinePhone distribution :)</p> <p>The last few Megapixels update blogposts have all been about Megapixels 2.x and the supporting libraries, so none of the improvements are immediately usable by actual PinePhone{,Pro} and Librem 5 users until there is an actual release.
It will take a bunch more polish until feature parity with Megapixels 1.x is reached.</p> <h1>Fixing the Megapixels sensor linearization</h1> <p>Martijn Braam, Thu, 25 Jan 2024, <a href="https://blog.brixit.nl/fixing-the-megapixels-sensor-linearization/">https://blog.brixit.nl/fixing-the-megapixels-sensor-linearization/</a></p> <p>Making a piece of software that dumps camera frames from V4L2 into a file is not very difficult to do, that's only a few hundred lines of C code. Figuring out why the pictures look cheap is a way harder challenge.</p> <p>For a long time Megapixels had some simple calibrations for blacklevel (to make the shadows a bit darker) and whitelevel (to make the light parts not grey), and later, after a bit of documentation studying, I added calibration matrices all the way back in <a href="https://blog.brixit.nl/pinephone-camera-pt4/">part 4</a> of the Megapixels blog series.</p> <p>The color matrix that was added in Megapixels is a simple 3x3 matrix that converts the color sensitivity of the sensor in the PinePhone to calibrated values for the rest of the pipeline. Just a simple 3x3 matrix is not enough to do a more detailed correction though. Luckily the calibration software I used produces calibration files that contain several correction curves for the camera, for example the HSV curve that changes the hue, saturation and brightness of specific colors.</p> <p>Even though this calibration data is added by Megapixels I still had issues with color casts. Occasionally someone mentions to me how "filmic" or "vintage" the PinePhone pictures look. This is the opposite of what I'm trying to do with the picture processing. The vintage look comes from color casts that are not linear to brightness, which is very similar to how cheap or expired analog film rolls reproduce colors. So where is this issue coming from?</p> <p>I've taken a closer look at the .dcp files produced by the calibration software. With a bit of python code I extracted the linearization curve from this file and plotted it. It turns out that the curve generated after calibration was perfectly linear. It makes a bit of sense since this calibration software was never made to create profiles for completely raw sensor data. It was made to create small corrections for professional cameras that already produce nice looking pictures. Looks like I have to produce this curve myself.</p> <h2>Getting a sensor linearization curve</h2> <p>As my first target I looked into the Librem 5, mainly because that's the phone that currently has the most battery charge. I had hoped there was some documentation about the sensor response curves in the datasheet for the sensor. It turns out that even getting a datasheet for this sensor is problematic. So the solution is to measure the sensor instead.</p> <p>Measuring this is pretty hard though; for most solutions the most important part is having a calibrated reference. I've thought about figuring out how to calibrate a light to produce precise brightness dimming steps and measuring the curve of the light with a colorimeter to fix any color casts of the lights. Another idea was taking pictures of a printed grayscale curve, but that has the issue that the light on the grayscale curve needs to be perfectly flat.</p> <p>But after thinking about this in the background for some weeks I had a thought: instead of producing a perfect reference grayscale gradient it's way easier to point the camera at a constant light source and then adjust the shutter speed of the camera to produce the various light levels.
Instead of relying on a lot of external factors like calibrated lights, which can throw off measurements massively, I assume that the shutter speed setting in the sensor is accurate.</p> <p>The reason I can assume this is that the shutter speed setting in these phone sensors is in "lines". These cameras don't have shutters, it's all electronic shutter in the sensor. If the shutter is set to 2 lines, the line being read by the sensor at that moment was cleared only 2 scanlines before. This is the "rolling shutter" effect. If the shutter is set to 4 lines instead, every line has exactly twice the amount of time to collect light after resetting. This should result in a pretty much perfectly linear way to control the amount of light to calibrate the response with.</p> <p>In the case of the Librem 5 this value can be set from 2 lines to 3118 lines, where the maximum value means that all the lines of the sensor have been reset by the time the first line is read out, giving the maximum amount of light gathering time.</p> <p>With libmegapixels I have enough control over the camera to make a small C application that runs this calibration. It goes through these steps:</p> <ol><li>Open the specified sensor and set the shutter to the maximum value</li> <li>Start measuring the brightness of the 3 color channels and adjust the sensor gain so that with the current lighting the sensor will be close to clipping. If on the lowest gain setting the light source is still too bright the tool will ask to lower the lamp brightness.</li> <li>Once the target maximum brightness has been hit the tool will start lowering the shutter speed in regular steps and saving the brightness of the color channels at each point.</li> <li>The calibration data is then written to a CSV file</li> </ol> <p>The process looks something like this:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706114157/image.png" class="kg-image"></figure> <p>This is a short run for testing where only 30 equally spaced points are measured. I did a longer run for calibration with it set to 500 points instead, which takes about 8 minutes. This is a plot of the resulting data after scaling the curves to hit 1.0 at the max gain:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706114296/image.png" class="kg-image"></figure> <p>The response of the sensor is not very linear at all... This means that if a picture is whitebalanced on the midtones the shadows will have a teal color cast due to the red channel having lower values. If the picture were whitebalanced to correct the darker colors instead, the brighter colors would turn magenta.</p> <p>The nice thing is that I don't have to deal with actually correcting this. This curve can just be loaded into the .dng file metadata and the processing software will apply this correction at the right step in the pipeline.</p> <h2>Oops</h2> <p>It is at this point that I figured out that the LinearizationTable DNG tag is a grayscale correction table so it can't fix the color cast. At least it will improve the brightness inconsistencies between the various cameras.</p> <p>With some scripting I've converted the measured response curve into a correction curve for the LinearizationTable and then wrote that table into some of my test pictures with <code>exiftool</code>.</p>
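<p>The conversion script itself isn't shown here, but the idea behind it is simple enough to sketch: assuming the measured curve rises monotonically, the correction table is just its inverse, so for every possible raw value you look up which exposure fraction produced it. A rough C sketch of that idea, not the actual Python script:</p> <pre><code>#include &lt;stdint.h&gt;

#define N_SAMPLES  500   /* measured points, equally spaced in exposure */
#define TABLE_SIZE 256   /* one entry per possible raw value (8-bit data) */

/* measured[i] is the normalized sensor output (0.0-1.0) at exposure i/(N_SAMPLES-1).
 * The resulting table maps a raw value back to linear light, scaled to 16 bits,
 * which is what a DNG LinearizationTable holds. */
static void
build_linearization_table(const double measured[N_SAMPLES], uint16_t table[TABLE_SIZE])
{
    for (int raw = 0; raw &lt; TABLE_SIZE; raw++) {
        double level = (double)raw / (TABLE_SIZE - 1);

        /* find the first measurement that reaches this output level */
        int i = 0;
        while (i &lt; N_SAMPLES - 1 &amp;&amp; measured[i] &lt; level)
            i++;

        /* the exposure fraction that produced it is the linearized value */
        table[raw] = (uint16_t)((double)i / (N_SAMPLES - 1) * 65535.0);
    }
}
</code></pre>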
<figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706221766/compare-linearizationtable.jpg" class="kg-image"></figure> <p>This is the result. The left image is a raw sensor dump from the Librem 5 rear camera that does not have any corrections at all applied except the initial whitebalance pass. On the right is the exact same image pipeline but with the LinearizationTable tag set in the DNG before feeding it to <code>dcraw</code>.</p> <p>The annoying thing here is that neither picture looks correct. The first one has the extreme gamma curve that is applied by the sensor, so everything looks very bright. The processed picture is a bit on the dark side, but that might be because the auto-exposure was run on the first picture, causing underexposure on the corrected data.</p> <p>The issue with that though is that some parts of the image data are already clipping while they shouldn't be, and exposing the picture brighter would only make that worse.</p> <p>Maybe I have something very wrong here but at this point I'm also just guessing how this stuff is supposed to work. Documentation for this doesn't really exist. This is all the official documentation:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706222220/image.png" class="kg-image"><figcaption>No, chapter 5 not helpful</figcaption></figure> <p>Maybe it all works slightly better if the input raw data is not 8-bits, but that's a bunch more kernel issues to fix on the Librem 5 side.</p> <h2>Conclusion</h2> <p>So, not as much progress on this as I had hoped. I made some nice tools to produce data that makes pictures worse. Once the clipping in the highlights is fixed this might be very useful though, since practically everything in the DNG pipeline expects the input raw data to be linear and it just isn't.</p> <p>The <a href="https://gitlab.com/megapixels-org/libmegapixels/-/commit/f6686d7a5a176384da3b5a1eaf93985aeb29d7be">sensor measuring tool</a> is included in the libmegapixels codebase now though.</p> <p>To fix auto-exposure I also need to figure out a way to apply this correction curve before running the AE algorithms on the live view. More engineering challenges as always :)</p> <hr> <div style="width: 75%; margin: 0 auto; background: rgba(128,128,128,0.2); padding: 10px;"> <h4>Development Funding</h4> <p>The current developments of Megapixels are funded by... You! The end-users. It takes a lot of time and a lot of weird expertise to make Linux cameras work and I've not been able to do it without your support.</p> <p>The donations are being used for the occasional hardware required for Megapixels development (like a nice Standard Illuminant A for calibration) and the various other FOSS applications I develop for the Linux ecosystem. Every single bit helps to not do all this work entirely for free.</p> <a href="https://blog.brixit.nl/donations/">Donations</a> </div> <h1>The dilemma of tagging library releases</h1> <p>Martijn Braam, Sun, 14 Jan 2024, <a href="https://blog.brixit.nl/the-dilemma-of-tagging-library-releases/">https://blog.brixit.nl/the-dilemma-of-tagging-library-releases/</a></p> <p>I've been working on the libmegapixels library for quite a bit now. The base of the library, configuring a V4L2 pipeline so you can get camera frames on modern ARM platforms, is pretty solid.
Most of the work on the library side is figuring out the AWB/AE/AF code and how that will fit together with applications.</p> <p>Due to the AAA code not working yet, and the API for how those parts will fit together not being fully defined, I've been holding off on tagging an actual release of the libmegapixels library.</p> <p>A lot of my projects, especially libraries, are written in Python so I've long enjoyed the luxury of APIs being duck-typed and having the possibility of adding optional arguments to methods in the future. Sadly in C libraries I can't get away with never defining the types for arguments that might change in the future or adding optional arguments.</p> <p>My original plan was to tag a release of libmegapixels together with the first 2.x release of Megapixels, since these pieces of software are intended to fit together, but after thinking about it some more (and some convincing from other people interested in the libmegapixels release) I've decided to tag a 0.1 release.</p> <p>In an ideal world I could just release code when it's fully done and tested. In this case the long time it takes to get everything ready for use would mean that potential contributors to the code are also held back from experimenting with the codebase, especially since a large part of libmegapixels is the config files it ships for specific hardware configurations. If I don't make any releases then at some point users and developers will be forced to just ship random git commits, which is a way worse situation to be in for bug tracking.</p> <p>With this 0.1 release I want to make it possible to start writing config files for various phones and platforms to test camera pipelines. Hopefully this will also mean any issues with the configuration file format that people might hit will be figured out before I have to tag a "final" 1.x release.</p> <h2>The release</h2> <p>So the initial tagged release of <code>libmegapixels</code>:</p> <ul><li>Located at <a href="https://gitlab.com/megapixels-org/libmegapixels/-/tags/0.1.0">https://gitlab.com/megapixels-org/libmegapixels/-/tags/0.1.0</a></li> <li>Build instructions at <a href="https://libme.gapixels.me/building.html">https://libme.gapixels.me/building.html</a></li> <li>Comes with absolutely no guarantee of stability for the C API of the library</li> <li>Most likely the config file format is stable but it might get small tweaks before the 1.x release</li> </ul> <p>Hopefully this will allow people to start experimenting with the codebase and generate some feedback on it, so I'm not just developing this for months and completely overfitting it to the three devices I'm testing on.</p> <p>I'm planning to make a similar release for <code>libdng</code> soon. That library is also mostly stable but I need to fix up the last parts of the API to allow reading and writing all the required metadata.</p> <h1>Megapixels 2.0: DNG loading and Autowhitebalance</h1> <p>Martijn Braam, Fri, 22 Dec 2023, <a href="https://blog.brixit.nl/megapixels-2-0-dng-loading-and-whitebalancing/">https://blog.brixit.nl/megapixels-2-0-dng-loading-and-whitebalancing/</a></p> <p>After getting some nice DNG exporting code to work with libdng in the <a href="https://blog.brixit.nl/megapixels-2-0-dng-exporting/">last post</a> I decided to go mess with auto white-balancing again on the Librem 5.</p> <p>I got the Megapixels 2.x codebase to the point where it smoothly displays the camera feed on the Librem 5 and the PinePhone Pro. One of the things that Just Worked(tm) on the original PinePhone is the auto white-balance correction of the rear camera.
This has never worked on the front camera of that device, and the result of the lacking AWB code is very obvious: the pictures are very green.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1703201479/gc2145.jpg" class="kg-image"><figcaption>Great example of lack of AWB on the PinePhone front camera</figcaption></figure> <p>This was very easy on the rear camera of the PinePhone. The camera module inside the phone can automatically do white-balance corrections by having an AWB algorithm running on the 8051 core inside the sensor, which adjusts the analog gain of the camera. The only thing that Megapixels does on the PinePhone is turning that feature on and it just works. The front camera of the PinePhone should have a similar feature but it does not work due to the state of the Linux driver for that sensor.</p> <p>For the PinePhone Pro and the Librem 5 (and most other devices) the white-balancing is a lot harder to deal with. The sensor does not have any automatic way of dealing with this and it has to be done on the CPU side. For this there are two options:</p> <ul><li>Get the unbalanced camera feed and correct those frames in software while displaying them, and store the correction factors in the DNG files for the pictures that have been taken.</li> <li>Send the corrections back to the sensor instead so the camera feed is already balanced. This should lead to higher quality pictures because the use of the analog to digital converter in the sensor is more optimal. It's also harder because now there's latency between changing the gain and receiving the corrected data.</li> </ul> <p>But the nice thing of doing hardware support on multiple platforms is that I have to support both these cases :(</p> <p>In the case of the Librem 5 I'm implementing the first option, since the sensor driver for the rear camera on that device does not implement the necessary controls to do the second option; it's also a bit easier to get working right.</p> <h2>The Algorithm™</h2> <p>There are many ways to actually implement a white-balance algorithm. I'll be going with the simplest one: the gray-world algorithm.</p> <p>This algorithm works on the assumption that if you average out all the colors in your picture you'd get something that's roughly grey.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1703202886/grey-world.jpg" class="kg-image"></figure> <p>Intuitively you would think that nothing is ever that nicely balanced, or that colorful walls for example would skew the results massively. But as you can see in the demonstration above, the more you blur the picture the less saturated it becomes. This works for a surprising number of pictures.</p> <p>To calculate the white-balance correction for a picture it's not actually blurred as in the demonstration above. The correction is calculated by taking the average of all the pixels in the picture and then using the inverse of that result to set the gain.</p> <p>Another thing that's different for white-balancing is that it's the first step in the color processing pipeline. It should not be run on the final picture like this example but on the raw color values from the sensor, since there it is the closest in the pipeline to the ADC gain applied in the sensor.</p>
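<p>As a minimal illustration of the calculation just described (this is not the actual libmegapixels code), the whole thing fits in a few lines of C when run on an interleaved RGB buffer:</p> <pre><code>#include &lt;stddef.h&gt;

/* Gray-world sketch: average the channels and derive red/blue gains,
 * normalized so the green gain is 1.0. */
static void
grayworld_gains(const float *rgb, size_t pixels, float *gain_r, float *gain_b)
{
    double sum_r = 0.0, sum_g = 0.0, sum_b = 0.0;

    for (size_t i = 0; i &lt; pixels; i++) {
        sum_r += rgb[i * 3 + 0];
        sum_g += rgb[i * 3 + 1];
        sum_b += rgb[i * 3 + 2];
    }

    /* 1/average balances each channel; dividing by the green gain
     * normalizes it to 1.0 since only red and blue are adjustable. */
    *gain_r = (float)(sum_g / sum_r);
    *gain_b = (float)(sum_g / sum_b);
}
</code></pre>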
<h2>The Megapixels implementation</h2> <p>Taking the average of a full raw frame is not very fast. It also complicates the internal Megapixels code a lot to do extra image processing at the raw frame stage of the code. That's why in Megapixels I'm <i>not</i> running the white-balance code on the raw frames. It's instead being run on the preview feed shown to the user before taking the picture, since that way it takes advantage of the GPU debayering and scaling of the input data.</p> <p>To make this work the libmegapixels code averages an entire processed RAW frame to get the average R, G and B values. After running this average the code is no longer dealing with a lot of data, so it's a lot easier to write quick code. To correct for the preview processing, the inverse of the color matrix is run on those averages to get values that are close to what the average of the raw data would have been before scaling it and doing the preview color corrections.</p> <p>The result of that code gives new R, G and B values that represent the color balance of the picture. The new gain for each color channel is then calculated as <code>1/R</code> and normalized so the gain for the green channel is always 1.0, because on the sensors there's only a control for the red and blue gains.</p> <p>Except that on the Librem 5 there are no controls for the red and blue gains at all, so in that case the new gains are fed back into the GPU shader that calculates the preview, where they are applied as gains right after the debayering step.</p> <h2>The white-balance in the DNG output</h2> <p>With the two scenarios above there are also two cases for the DNG exporting: either the RAW data in the DNG is already balanced or it's completely unbalanced. Luckily the DNG specification has me covered!</p> <p>When the raw image data is completely unbalanced, like it is on most professional cameras, the gains for balancing the picture are stored in the <code>AsShotNeutral</code> tag. This tells the DNG developing software the gains the camera used to display the preview and it will be available in the white-balance section of the developing software as "As Shot" or "Camera" white-balance.</p> <p>In the case where the ADC gains are manipulated to apply the white-balance this doesn't work, since the gains written to the <code>AsShotNeutral</code> tag would be 1.0 for all channels. This <i>does</i> produce the correct picture for simple cases, except that the whitebalance shown for the image in the editing software would always be 5612K.</p> <p>Having the wrong whitebalance is not just an issue of metadata neatness though. Practically all the color pipeline calculations after loading the DNG file and applying the RAW white-balance are dependent on the color temperature. The metadata in the DNG stores two color matrices and two correction LUTs. The guideline for this calibration data is that one of the sets is for D65 lighting, which is basically outdoors on a cloudy day; pretty blue-ish lighting around 6500K. The second one is for "Standard Illuminant A", which is a reference tungsten light around 2856K. The developing software takes the data for both color temperatures and interpolates between the two to produce the matrices and curves for the color temperature of the white-balanced picture.</p> <p>To deal with the case where the sensor already produced white-balanced raw data using the ADC gains, the white-balance gains can be written to the <code>AnalogBalance</code> tag. This will be used to invert the white-balance gains from the sensor again before running the rest of the processing pipeline, which means the correct color temperature will be used.</p>
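<p>A rough illustration of the difference, with made-up numbers and based on my reading of the spec: if balancing the image needed a red gain of 2.0 and a blue gain of 1.25, the two cases would record that roughly like this:</p> <pre><code>Unbalanced raw data:       AsShotNeutral = 0.5 1.0 0.8    (developing software applies 1/value as gain)
Sensor-balanced raw data:  AnalogBalance = 2.0 1.0 1.25   (developing software divides these gains out again)
</code></pre>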
<h2>So does it work?</h2> <p>Yeah, mostly. It could use a bit of tweaking and the calibration I'm using for the sensor is just wrong.</p> <video src="https://brixitcdn.net/white-balance-encoded.mp4" controls style="max-width: 50%"></video> <p>The video here is extremely janky, most likely due to the auto-gain in this test build being completely broken and my code being sloppy. There are a few things that need to be fixed here aside from figuring out more performance regressions:</p> <ul><li>The whitebalance code stops working when there's not enough light and it jumps to the full green picture. At this point it should keep holding the old white-balance to make it less jarring.</li> <li>There needs to be smoothing applied to the whitebalance changes. It's mostly pretty solid since this doesn't have any latency with sensor adjustments, but when the camera moves to the pumpkins you can see it being unstable.</li> </ul> <p>Overall it mostly works though. The performance is a bit more stable when there's daylight. Cameras simply work better when there's more light available. The various sources of artificial light here are also throwing off the camera a lot, with a lot of light coming from my monitor and some very poor quality light coming from the room lighting.</p> <h2>The SEGFAULT button</h2> <p>So Megapixels somewhat balances the pictures, but the second half of the process is not something I've been able to test yet: storing DNG files with this whitebalance metadata. The Megapixels 1.x codebase had the code for saving the <code>AsShotNeutral</code> and <code>AnalogBalance</code> tags and I've re-implemented that in libdng. The issue is that in the current state of the Megapixels code pressing the shutter button just causes the whole application to segfault.</p> <p>This segfault occurs somewhere in the interaction with the color profile curves loaded from the calibration .dcp file with the libtiff library when saving through libdng. This being 3 threads deep into the Megapixels codebase makes it a bit annoying to debug, so I decided I needed to yak-shave this a bit further and add more tooling to the libdng codebase...</p> <h2>The mergedng tool</h2> <p>My solution for making this easier to debug is adding a utility to libdng that actually uses the feature to load a calibration file and append the curves to the final picture. Due to me just not stopping to write code I've implemented basic DNG reading support for this in libdng and, as a frontend, the <code>mergedng</code> utility.</p> <p>The functionality of this tool is pretty simple. It reads an input DNG file and takes the picture data and metadata from that. It then takes a .dcp file as the second argument, which provides the calibration curves for the camera, merges those TIFF tags and writes out a new DNG file. This is a utility I needed anyway since I've been searching for it; it makes it easy to "upgrade" pictures taken with earlier versions of Megapixels with new calibration data from more recent .dcp files.</p> <p>Writing the code for this functionality was pretty straightforward. The .dcp loading and appending code already existed in the libdng codebase since that's the code which already causes the SEGFAULT in Megapixels when taking a picture.
The extra added code in libdng is the new functions for reading a DNG file and taking that image metadata for writing a new picture.</p> <p>After implementing all this and adding some unit tests for the DCP loading code I've come to the realization that... it just works...</p> <p>In this simplified codebase everything touching the data simply works, so my original crashing issue in Megapixels is somewhere unrelated. This is where I'm at now and where I've decided to write a blog post instead of diving deep into the Megapixels codebase again :)</p> <hr> <div style="width: 75%; margin: 0 auto; background: rgba(128,128,128,0.2); padding: 10px;"> <h4>Development Funding</h4> <p>The current developments of Megapixels are funded by... You! The end-users. It takes a lot of time and a lot of weird expertise to make Linux cameras work and I've not been able to do it without your support.</p> <p>The donations are being used for the occasional hardware required for Megapixels development (like a nice Standard Illuminant A for calibration) and the various other FOSS applications I develop for the Linux ecosystem. Every single bit helps to not do all this work entirely for free.</p> <a href="https://blog.brixit.nl/donations/">Donations</a> </div> <h1>Megapixels 2.0: DNG exporting</h1> <p>Martijn Braam, Sat, 18 Nov 2023, <a href="https://blog.brixit.nl/megapixels-2-0-dng-exporting/">https://blog.brixit.nl/megapixels-2-0-dng-exporting/</a></p> <p>It seems overkill to make a whole separate library dedicated to replacing 177 lines of code in Megapixels that touches libtiff, but this small section of code causes significant issues for distribution packaging and compatibility with external photo editing software. Most importantly, the adjusted version in Millipixels for the Librem 5 does not output DNG files that are close enough to the Adobe specifications to be loaded into the calibration software.</p> <p>Making this a separate library would make it easier to test. In the Adobe DNG SDK there is a test utility that can verify if a TIFF file is up to DNG spec, and it can (with a lot of complications) be built for Linux.</p> <h2>The spec</h2> <p>The first thing after copying over the code block from Megapixels to a separate project is reading the Adobe DNG specification.</p> <p>When I wrote the original export code in Megapixels it was based around some example code for using libtiff I found on Github, which I can no longer find, and it resulted in something that's close enough to a valid DNG file for the <code>dcraw</code> utility. It also generates a DNG 1.0 file.</p> <p>I spent the next day reading the <a href="https://www.kronometric.org/phot/processing/DNG/dng_spec_1.4.0.0.pdf">DNG 1.4 specification</a> from Adobe to understand what a valid DNG file is absolutely minimally required to have. These are my notes from that:</p> <pre><code>## Inside a DNG file

* SubIFDType 0 is the original raw data
* SubIFDType 1 is the thumbnail data
* The recommendation is to store the thumbnail as the first IFD
* TIFF metadata goes in the first IFD
* EXIF tags are preferred
* Camera profiles are stored in the first IFD

## Required tags

* DNGVersion
* UniqueCameraModel
</code></pre>
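<p>Those two required tags map directly onto plain libtiff calls. A small sketch using the DNG tag constants that ship with libtiff (this is a sketch, not the libdng API):</p> <pre><code>#include &lt;stdint.h&gt;
#include &lt;tiffio.h&gt;

/* Sketch: write the two tags the DNG spec lists as strictly required. */
static void
write_required_dng_tags(TIFF *tif)
{
    /* DNGVersion is four bytes, 1.4.0.0 here */
    static const uint8_t dng_version[] = {1, 4, 0, 0};

    TIFFSetField(tif, TIFFTAG_DNGVERSION, dng_version);
    TIFFSetField(tif, TIFFTAG_UNIQUECAMERAMODEL, "LibDNG");
}
</code></pre>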
<h2>Validation</h2> <p>I also spent a long time building the official Adobe DNG SDK. This is mostly useless for developing any open source software due to licensing, but it does provide a nice <code>dng_validate</code> utility that can be used to actually test the DNG files. Building this utility is pretty horrifying since it requires some specific versions of dependencies and some patches to work on modern compilers.</p> <p>The libdng codebase now has the <a href="https://gitlab.com/megapixels-org/libdng/-/blob/master/adobe_dng_sdk.sh">adobe_dng_sdk.sh</a> script that will build the required libraries and the validation binary.</p> <p>With the Megapixels code adjusted with the info from the documentation above, I fed some random noise as data to the library to generate a DNG file and ran it through the validator.</p> <pre><code>$ dng_validate out.dng
Validating "out.dng"...
*** Warning: This file has Chained IFDs, which will be ignored by DNG readers ***
*** Error: Unable to find main image IFD ***
</code></pre> <p>Well that's not a great start... There's also a <code>-v</code> option to get some more verbose info.</p> <pre><code>$ dng_validate -v out.dng
Validating "out.dng"...
Uses little-endian byte order
Magic number = 42
IFD 0: Offset = 308, Entries = 10
NewSubFileType: Preview Image
ImageWidth: 20
ImageLength: 15
BitsPerSample: 8
Compression: Uncompressed
PhotometricInterpretation: RGB
StripOffsets: Offset = 8
StripByteCounts: Count = 300
DNGVersion: 1.4.0.0
UniqueCameraModel: "LibDNG"
NextIFD = 10042
Chained IFD 1: Offset = 10042, Entries = 6
NewSubFileType: Main Image
ImageWidth: 320
ImageLength: 240
Compression: Uncompressed
StripOffsets: Offset = 441
StripByteCounts: Count = 9600
NextIFD = 0
*** Warning: This file has Chained IFDs, which will be ignored by DNG readers ***
*** Error: Unable to find main image IFD ***
</code></pre> <p>Let's have a look at what the DNG spec says about this:</p> <blockquote>DNG recommends the use of SubIFD trees, as described in the TIFF-EP specification. SubIFD chains are not supported.<br><br>The highest-resolution and quality IFD should use NewSubFileType equal to 0. Reduced resolution (or quality) thumbnails or previews, if any, should use NewSubFileType equal to 1 (for a primary preview) or 10001.H (for an alternate preview). <br><br>DNG recommends, but does not require, that the first IFD contain a low-resolution thumbnail, as described in the TIFF-EP specification.</blockquote> <p>So I have the right tags and the right IFDs, but I need to make an IFD tree instead of a chain in libtiff. I have no idea how IFD trees work so on to the next specification!</p> <p>It seems like TIFF trees are defined in the Adobe PageMaker 6 tech notes from 1995.
That document describes that the NextIFD tag that libtiff used for me is primarily for defining multi-page documents, not multiple encodings of the same document like what happens here with a thumbnail and the raw data. You know this is a 1995 spec because it gives a fax as an example of a multi-page document.</p> <p>In the examples provided in that specification the first image is the main image and the NextIFD tag is just replaced by a SubIFD tag. In the case of DNG the main image is the thumbnail, for compatibility with software that can't read the raw camera data.</p> <p>Switching over to a SubIFD tag is surprisingly simple, just badly documented. Libtiff will create the NextIFD tag automatically for you, but if you create an empty SubIFD tag then libtiff will fill in the offset for the next IFD for you when closing the file:</p> <pre><code>TIFF *tif = TIFFOpen(path, "w");

// Set the tags for IFD 0 like normal here
TIFFSetField(tif, TIFFTAG_SUBFILETYPE, DNG_SUBFILETYPE_THUMBNAIL);

// Create a NULL reference for one SubIFD
uint64_t offsets[] = { 0L };
TIFFSetField(tif, TIFFTAG_SUBIFD, 1, offsets);

// Write the thumbnail image data here

// Close the first IFD
TIFFWriteDirectory(tif);

// Start IFD1 describing the raw data
TIFFSetField(tif, TIFFTAG_SUBFILETYPE, DNG_SUBFILETYPE_ORIGINAL);

// write raw data and close the directory again
TIFFWriteDirectory(tif);

// Close the tiff, this will cause libtiff to patch up the references
TIFFClose(tif);
</code></pre>
<p>So with the code updated, the validation tool neatly shows the new SubIFD tags and finds actual errors in my DNG file data now:</p> <pre><code>Uses little-endian byte order
Magic number = 42
IFD 0: Offset = 308, Entries = 11
NewSubFileType: Preview Image
ImageWidth: 20
ImageLength: 15
BitsPerSample: 8
Compression: Uncompressed
PhotometricInterpretation: RGB
StripOffsets: Offset = 8
StripByteCounts: Count = 300
SubIFDs: IFD = 10054
DNGVersion: 1.4.0.0
UniqueCameraModel: "LibDNG"
NextIFD = 0
SubIFD 1: Offset = 10054, Entries = 6
NewSubFileType: Main Image
ImageWidth: 320
ImageLength: 240
Compression: Uncompressed
StripOffsets: Offset = 453
StripByteCounts: Count = 9600
NextIFD = 0
*** Error: Missing or invalid SamplesPerPixel (IFD 0) ***
*** Error: Missing or invalid PhotometricInterpretation (SubIFD 1) ***
</code></pre> <p>Ah, so these two tags are actually required but not described as such in the DNG specification, since they are TIFF tags instead of DNG tags (while the spec does explicitly list other required TIFF data).</p> <p>Patching up these errors is easy, just slightly annoying since the validation tool seemingly gives only a single error per IFD, requiring me to iterate on the code a bit more. After a whole lot of iterating on the exporting code I managed to get the first valid DNG file:</p> <pre><code>Raw image read time: 0.000 sec
Linearization time: 0.002 sec
Interpolate time: 0.006 sec
Validation complete
</code></pre> <p>Now the next step is adding all the plumbing to make this usable as a library and making an actually nice command line utility.</p> <h2>First actual test</h2> <p>Now that I have written the first iterations of libmegapixels and libdng it should be possible to actually load a picture in some editing software. So let's try some end-to-end testing with this.</p> <p>With the <code>megapixels-getframe</code> utility from libmegapixels I can get a frame from the sensor (in this case the rear camera of the Librem 5) and then feed that raw data to the <code>makedng</code> utility from libdng.</p> <pre><code>$ getframe -o test.raw
Using config: /usr/share/megapixels/config/purism,librem5.conf
received frame
received frame
received frame
received frame
received frame
Stored frame to: test.raw
Format: 4208x3120
Pixfmt: GRBG
$ makedng -w 4208 -h 3120 -p GRBG test.raw test.dng
Reading test.raw...
Writing test.dng...
</code></pre> <p>No errors, and the file passes the DNG validation. Let's load it into RawTherapee :)</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1700184535/image.png" class="kg-image"><figcaption>The first frame loaded into RawTherapee</figcaption></figure> <p>I had to boost the exposure a bit since the <code>megapixels-getframe</code> tool does not actually control any of the sensor parameters like the exposure time, so the resulting picture is very dark. There's also no whitebalance or autofocus happening, so the colors look horrible.</p> <p>But...</p>
<figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1700184873/compare-checker.jpg" class="kg-image"></figure> <p>The colors are correct! The interpretation of the CFA pattern of the sensor and the orientation of the data is all correct.</p> <h2>Integration testing</h2> <p>The nice thing about having the separate library is that testing it becomes a lot easier than testing a GTK4 application. I have added the first simple end-to-end test to the codebase now, which feeds some data to makedng and checks if the result is a valid DNG file using the official Adobe tool.</p> <pre><code>#!/bin/bash
set -e

if [ $# -ne 1 ]; then
    echo "Missing tool argument"
    exit 1
fi

makedng="$1"

echo "Running tests with '$makedng'"

# This testsuite runs raw data through the makedng utility and validates the
# result using the dng_validate tool from the Adobe DNG SDK. This tool needs
# to be manually installed for these tests to run.

# Create test raw data
mkdir -p scratch
magick -size 1280x720 gradient: -colorspace RGB scratch/data.rgb

# Generate DNG
$makedng -w 1280 -h 720 -p RG10 scratch/data.rgb scratch/RG10.dng

# Validate DNG
dng_validate scratch/RG10.dng
</code></pre> <p>This is launched from ctest in my cmake files for now, since I'm developing most of this stuff using CLion which only properly supports cmake projects. This is why a lot of my C projects have both meson and cmake files to build them, but only the meson project file has install commands in it.</p> <p>For more advanced testing it would be neat to have raw sensor dumps of several sensors in different formats which are all pictures of a colorchecker like the picture above, and then have some (probably opencv) utility that can validate that a colorchecker is present in the picture with the right colors.</p> <p>There also needs to be a non-Adobe-proprietary validation tool that can be easily run as a testsuite for distribution packaging, so at build time it's possible to validate that the combination of libdng and the distribution version of libtiff can produce sane output. This has caused several issues in Megapixels before after all.</p> <h2>Overall architecture</h2> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1700232871/path4862-1-4.png" class="kg-image"><figcaption>I've spent too much time drawing this</figcaption></figure> <p>With the addition of libdng the architecture for Megapixels 2.0 starts to look like this.
Megapixels no longer has any pipeline manipulation code; that is all handled by the library, which after configuration just passes the file descriptor for the sensor node to Megapixels to handle the realtime control of the sensor parameters.</p> <p>The libdng code replaces the plain libtiff exporting done in Megapixels and generates the DNG files that will be read by postprocessd. Postprocessd reads the DNG files with the help of the dcraw library, which already has custom DNG reading code that does not use libtiff.</p> <p>The next step now is to flesh out the public interface of libdng so it can handle all the DNG metadata that Megapixels requires, and then hook it up to Megapixels to actually use it.</p> <hr> <h3>Funding update</h3> <p>Since my <a href="https://blog.brixit.nl/adding-hardware-to-libmegapixels/">previous post</a> about the libmegapixels developments and the <a href="https://blog.brixit.nl/megapixels-2-0/">Megapixels 2.0 post</a> I wrote before that, I've almost doubled the funding for actually working on all the FOSS contributions. I'm immensely thankful for all the new patrons, and it also made me notice that the <a href="https://blog.brixit.nl/donations/">donations</a> page on this site was no longer being regenerated. That is fixed now.</p> <p>I'm also still trying to figure out if I can add some perks for patrons to all of this, but practically all options just amount to making things slightly worse for non-patrons. I hope just making the FOSS ecosystem better one line of code at a time is enough :)</p> <h1>Adding hardware to libmegapixels</h1> <p>Martijn Braam, Mon, 13 Nov 2023, <a href="https://blog.brixit.nl/adding-hardware-to-libmegapixels/">https://blog.brixit.nl/adding-hardware-to-libmegapixels/</a></p> <p>Since in the last post I only showed off the libmegapixels config format and made some claims about configurability without demonstrating it, I thought that it might be a good idea to actually demonstrate and document it.</p> <p>As example device I will use my Xiaomi Mi Note 2 with a broken display, shown above. It is also known in postmarketOS under the codename <a href="https://wiki.postmarketos.org/wiki/Xiaomi_Mi_Note_2_(xiaomi-scorpio)">xiaomi-scorpio</a>. I picked this device as a demo since I have already used this hardware in Megapixels 1.x, so I know the kernel side of it is functional. I have not run any libmegapixels code on this device before writing this blogpost, so I'm writing it as I go along debugging it. Hopefully this device does not require any ioctl that has not been needed by the existing supported devices.</p> <p>What makes it possible to get camera output from this phone is two things:</p> <ul><li>The camera subsystem in this device is supported pretty well in the kernel, in this case it's a Qualcomm device which has a somewhat universal driver for this</li> <li>The sensor in this phone has a proper driver</li> </ul> <p>The existing devices that I used to develop libmegapixels are based around the Rockchip, NXP and Allwinner platforms, so this will be an interesting test of whether my theory works.</p> <h2>The config file name</h2> <p>Just like Megapixels 1.x the config file is based around the "compatible" name of the device. This is defined in the device tree passed to Linux by the bootloader.
Since this is a nice mainline Linux device, this info can be found in the kernel source: <a href="https://github.com/torvalds/linux/blob/b85ea95d086471afb4ad062012a4d73cd328fa86/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-scorpio.dts#L17">https://github.com/torvalds/linux/blob/b85ea95d086471afb4ad062012a4d73cd328fa86/arch/arm64/boot/dts/qcom/msm8996pro-xiaomi-scorpio.dts#L17</a></p> <pre><code>compatible = "xiaomi,scorpio", "qcom,msm8996pro", "qcom,msm8996";</code></pre> <p>This device tree specifies three names for this device, ranking from more specific to less specific. <code>xiaomi,scorpio</code> is the exact hardware name, <code>qcom,msm8996pro</code> is the variant of the SoC and the <code>qcom,msm8996</code> name is the inexact name of the SoC. Since this configuration defines both the SoC pipeline and the configuration for the specific sensor module, the only sane option here is <code>xiaomi,scorpio</code> since that describes that exact hardware configuration. Other <code>msm8996</code> devices might be using a completely different sensor.</p> <p>The most specific option is not always the best option; in the case of the PinePhone for example the compatible is:</p> <pre><code>"pine64,pinephone-1.1", "pine64,pinephone", "allwinner,sun50i-a64";</code></pre> <p>In this hardware the camera system for the 1.0, 1.1 and 1.2 revisions is identical, so the config file just uses the <code>pine64,pinephone</code> name.</p> <p>Knowing this, the config file name will be <code>xiaomi,scorpio.conf</code> and it can be placed in three locations: <code>/usr/share/megapixels/config</code>, <code>/etc/megapixels/config</code>, or just the plain filename in your current working directory.</p> <p>Now that we know what the config path is, the hard part starts: figuring out what to put in this config file.</p>
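<p>A quick sketch of what that lookup could look like in C; the function and the precedence order here are assumptions for illustration, not the actual libmegapixels API:</p> <pre><code>#include &lt;stdio.h&gt;
#include &lt;unistd.h&gt;

/* Try the current directory first, then the system locations. */
static int
find_config(const char *compatible, char *path, size_t size)
{
    const char *dirs[] = { ".", "/etc/megapixels/config", "/usr/share/megapixels/config" };

    for (size_t i = 0; i &lt; sizeof(dirs) / sizeof(dirs[0]); i++) {
        snprintf(path, size, "%s/%s.conf", dirs[i], compatible);
        if (access(path, R_OK) == 0)
            return 1;
    }
    return 0;
}
</code></pre> <p>With that, <code>find_config("xiaomi,scorpio", path, sizeof(path))</code> would pick up the new config file from whichever of those directories it ends up installed in.</p>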
<h2>The media pipeline</h2> <p>The next step is figuring out the media pipeline for this device. If the kernel has support for the hardware in the device it should create one or more <code>/dev/media</code> files. In the case of the Scorpio there's only a single one for the camera pipeline, but there might be additional ones for stuff like hardware accelerated video encoding or decoding.</p> <p>You can get the contents of the media pipelines with the <code>media-ctl</code> utility from <code>v4l-utils</code>. Use <code>media-ctl -p</code> to print the pipeline and you can use the <code>-d</code> option to choose another file than <code>/dev/media0</code> if needed. For the Scorpio the pipeline contents are:</p> <pre><code>Media controller API version 6.1.14

Media device information
------------------------
driver          qcom-camss
model           Qualcomm Camera Subsystem
serial
bus info        platform:a34000.camss
hw revision     0x0
driver version  6.1.14

Device topology
- entity 1: msm_csiphy0 (2 pads, 5 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev0
        pad0: Sink
                [fmt:UYVY8_2X8/1920x1080 field:none colorspace:srgb]
                &lt;- "imx318 3-001a":0 [ENABLED,IMMUTABLE]
        pad1: Source
                [fmt:UYVY8_2X8/1920x1080 field:none colorspace:srgb]
                -&gt; "msm_csid0":0 []
                -&gt; "msm_csid1":0 []
                -&gt; "msm_csid2":0 []
                -&gt; "msm_csid3":0 []

[ Removed A LOT of entities here for brevity ]

- entity 226: imx318 3-001a (1 pad, 1 link)
              type V4L2 subdev subtype Sensor flags 0
              device node name /dev/v4l-subdev19
        pad0: Source
                [fmt:SRGGB10_1X10/5488x4112@1/30 field:none colorspace:raw xfer:none]
                -&gt; "msm_csiphy0":0 [ENABLED,IMMUTABLE]

- entity 228: ak7375 3-000c (0 pad, 0 link)
              type V4L2 subdev subtype Lens flags 0
              device node name /dev/v4l-subdev20
</code></pre> <p>The header shows that this is a media device for the <code>qcom-camss</code> system, which handles cameras on Qualcomm devices. There is also a node for the <code>imx318</code> sensor which further confirms that this is the right media pipeline.</p> <p>Analyzing the pipeline in this format is pretty hard when there are more than two nodes though, which is why there is a neat option in media-ctl to output the media graph as an actual graph using Graphviz.</p> <pre><code>$ apk add graphviz
$ media-ctl -d 0 --print-dot | dot -Tpng &gt; pipeline.png</code></pre> <p>Which produces this image:</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1699888898/pipeline.png" class="kg-image"></figure> <p>In a bunch of cases you can copy most of the configuration of this graph from another device that uses the same SoC, but since this is the first Qualcomm device I'm adding I have to figure out the whole pipeline.</p> <p>The only part that's really specific to the Xiaomi Scorpio is the top two nodes. The <code>imx318</code> is the actual camera module in the phone, connected over MIPI to the SoC. The <code>ak7375</code> is listed as a "Motor driver", meaning it is the chip handling the lens movements for autofocus. There are no connections to this node since this device does not handle any graphical data; the entity only exists so you can set V4L2 control values on it to move the focus manually.</p> <p>All the boxes in the graph are called entities and correspond with the <code>Entity</code> blocks in the <code>media-ctl -p</code> output. The boxes are yellow if they are entities with the type <code>V4L</code>; these are the nodes that will show up as <code>/dev/video</code> nodes to actually get the image data out of this pipeline.</p> <p>The lines between the boxes are called links, where the dotted lines are disabled links and solid lines are enabled links. On this hardware a lot of the links are created by the kernel driver and are hardcoded. These links show up in the text output as <code>IMMUTABLE</code> and mostly describe fixed hardware paths for the image data.</p> <p>The goal of configuring this pipeline is to get the image data from the IMX sensor all the way down to one of the /dev/video nodes, figuring out the purpose of the entities in between along the way.
If you are lucky there is actual documentation for this. In this case I have found documentation at <a href="https://www.kernel.org/doc/html/v4.14/media/v4l-drivers/qcom_camss.html">https://www.kernel.org/doc/html/v4.14/media/v4l-drivers/qcom_camss.html</a> which is for the v4.14 kernel but for some reason was removed in later releases.</p> <p>This documentation has neat explanations for these entities:</p> <ul><li>2 CSIPHY modules. They handle the Physical layer of the CSI2 receivers. A separate camera sensor can be connected to each of the CSIPHY module;</li> <li>2 CSID (CSI Decoder) modules. They handle the Protocol and Application layer of the CSI2 receivers. A CSID can decode data stream from any of the CSIPHY. Each CSID also contains a TG (Test Generator) block which can generate artificial input data for test purposes;</li> <li>ISPIF (ISP Interface) module. Handles the routing of the data streams from the CSIDs to the inputs of the VFE;</li> <li>VFE (Video Front End) module. Contains a pipeline of image processing hardware blocks. The VFE has different input interfaces. The PIX (Pixel) input interface feeds the input data to the image processing pipeline. The image processing pipeline contains also a scale and crop module at the end. Three RDI (Raw Dump Interface) input interfaces bypass the image processing pipeline. The VFE also contains the AXI bus interface which writes the output data to memory.</li> </ul> <p>This documentation is not for this exact SoC so the number of entities of each type is different.</p> <p>Configuring the pipeline and connecting it all up is now just a lot of trial and error. In the case of the Scorpio it has already been trial-and-error'd, so there is an existing config file for the old Megapixels at <a href="https://gitlab.com/postmarketOS/megapixels/-/blob/master/config/xiaomi,scorpio.ini?ref_type=heads">https://gitlab.com/postmarketOS/megapixels/-/blob/master/config/xiaomi,scorpio.ini</a></p> <p>In this old pipeline description format the path is just enabling the links between the first <code>csiphy</code>, <code>csid</code>, <code>ispif</code> and <code>vfe</code> entity. Since that release of Megapixels did not really support further configuration it then just tried to set the resolution and pixel format of the sensor on all entities after it and hoped it worked.
On an unknown platform just picking the left-most path will pretty likely bring up a valid pipeline; the duplicated entities are mostly useful for cases where you are using multiple cameras at once.</p> <h2>Initial config file</h2> <p>The first thing I did was creating a minimal config file for the Scorpio with the minimal pipeline to stream unmodified data from the sensor to userspace.</p> <pre><code>Version = 1;
Make: &quot;Xiaomi&quot;;
Model: &quot;Scorpio&quot;;
Rear: {
    SensorDriver: &quot;imx318&quot;;
    BridgeDriver: &quot;qcom-camss&quot;;
    Modes: (
        {
            Width: 3840;
            Height: 2160;
            Rate: 30;
            Format: &quot;RGGB10&quot;;
            Rotate: 90;
            Pipeline: (
                {Type: &quot;Link&quot;, From: &quot;imx318&quot;, FromPad: 0, To: &quot;msm_csiphy0&quot;, ToPad: 0},
                {Type: &quot;Link&quot;, From: &quot;msm_csiphy0&quot;, FromPad: 1, To: &quot;msm_csid0&quot;, ToPad: 0},
                {Type: &quot;Link&quot;, From: &quot;msm_csid0&quot;, FromPad: 1, To: &quot;msm_ispif0&quot;, ToPad: 0},
                {Type: &quot;Link&quot;, From: &quot;msm_ispif0&quot;, FromPad: 1, To: &quot;msm_vfe0_rdi0&quot;, ToPad: 0},
                {Type: &quot;Mode&quot;, Entity: &quot;imx318&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_csiphy0&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_csid0&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_ispif0&quot;},
            );
        },
    );
};
</code></pre> <p>This can be tested with the <code>megapixels-getframe</code> command.</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>./megapixels-getframe
<span class="go">Using config: /etc/megapixels/config/xiaomi,scorpio.conf</span>
<span class="go">[libmegapixels] Could not link 226 -&gt; 1 [imx318 -&gt; msm_csiphy0] </span>
<span class="go">[libmegapixels] Capture driver changed pixfmt to UYVY</span>
<span class="go">Could not select mode</span>
</pre></div> <p>This command tries to output as much debugging info as possible, but the reality is that you'll most likely need to look at the kernel source to figure out what is happening and what arbitrary constraints exist.</p> <p>So the iterating and figuring out of errors starts. First, the most problematic line is the <code>UYVY</code> format one. This most likely means that the pipeline pixelformat I selected was not correct and to fix that the kernel helpfully selects a completely different one. <code>getframe</code> will detect this and show this happening. In this case the RGGB10 format is wrong and it should have been RGGB10p. The kernel implementation is a bit inconsistent about which format it actually is while MIPI only allows one of these two in the spec. Changing that removes that error.</p> <p>The other interesting error is the link that could not be created. If you look closely at the Graphviz output you'll see that this link is already enabled by the kernel and in the text output it is also <code>IMMUTABLE</code>. This config line can be dropped because this is not configurable.</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>./megapixels-getframe
<span class="go">Using config: /etc/megapixels/config/xiaomi,scorpio.conf</span>
<span class="go">VIDIOC_STREAMON failed: Broken pipe</span>
</pre></div> <p>Progress! At least somewhat. The mode setting commands succeed but now the pipeline cannot actually be started. This is because some drivers only validate options when starting the pipeline instead of when you're actually setting modes.
This is one of the most annoying errors to fix because there's no feedback whatsoever on <i>what</i> or <i>where</i> the config issue is.</p> <p>My suggestion for this is to first run <code>media-ctl -p</code> again and see the current state of the pipeline. This output shows the format for the pads of the pipeline so you can find a connection that might be invalid by comparing those. My pipeline state at this point is:</p> <ul><li><code>imx318</code>: <code>SRGGB10_1X10/3840x2160@1/30</code></li> <li><code>csiphy0</code>: <code>SRGGB10_1X10/3840x2160</code></li> <li><code>csid0</code>: <code>SRGGB10_1X10/3840x2160</code></li> <li><code>ispif0</code>: <code>SRGGB10_1X10/3840x2160</code></li> <li><code>vfe0_rdi0</code>: <code>UYVY8_2X8/1920x1080</code></li> </ul> <p>AHA! The last node is not configured correctly. It's always the last one you look at. It turns out the issue was that I was simply missing a mode command in my config file that sets the mode on that entity, so it was left at the pipeline defaults. Let's test the pipeline with that config added:</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>./megapixels-getframe
<span class="go">Using config: /etc/megapixels/config/xiaomi,scorpio.conf</span>
<span class="go">received frame</span>
<span class="go">received frame</span>
<span class="go">received frame</span>
<span class="go">received frame</span>
<span class="go">received frame</span>
</pre></div> <p>The pipeline is streaming! This is the bare minimum configuration needed to make Megapixels 2.0 use this camera. For reference, after all the changes above the config file is:</p> <pre><code>Version = 1;
Make: &quot;Xiaomi&quot;;
Model: &quot;Scorpio&quot;;
Rear: {
    SensorDriver: &quot;imx318&quot;;
    BridgeDriver: &quot;qcom-camss&quot;;
    Modes: (
        {
            Width: 3840;
            Height: 2160;
            Rate: 30;
            Format: &quot;RGGB10p&quot;;
            Rotate: 90;
            Pipeline: (
                {Type: &quot;Link&quot;, From: &quot;msm_csiphy0&quot;, FromPad: 1, To: &quot;msm_csid0&quot;, ToPad: 0},
                {Type: &quot;Link&quot;, From: &quot;msm_csid0&quot;, FromPad: 1, To: &quot;msm_ispif0&quot;, ToPad: 0},
                {Type: &quot;Link&quot;, From: &quot;msm_ispif0&quot;, FromPad: 1, To: &quot;msm_vfe0_rdi0&quot;, ToPad: 0},
                {Type: &quot;Mode&quot;, Entity: &quot;imx318&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_csiphy0&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_csid0&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_ispif0&quot;},
                {Type: &quot;Mode&quot;, Entity: &quot;msm_vfe0_rdi0&quot;},
            );
        },
    );
};</code></pre> <h2>Camera metadata</h2> <p>The config file not only stores information about the media pipeline but can also store information about the optical path. Every mode can define the focal length for example, because changing the cropping on the sensor will give you digital zoom and thus a longer focal length. With modern phones with 10 cameras on the back it is also possible to define all of them as the "rear" camera and have multiple modes with multiple focal lengths so camera apps can switch the pipeline for zooming once zooming is implemented in the UI.</p> <p>Finding out the values for this optical path is basically just using search engines to find datasheets and specs.
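</p> <p>Sometimes the pictures taken by the stock Android camera on the same phone already have the correct information for this in their EXIF metadata as well, which is easy to check with exiftool (the file name here is just an example):</p> <pre><code>$ exiftool -FocalLength -FNumber picture-from-android.jpg</code></pre> <p>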
This information is also mostly absent from sensor datasheets since those only describe the sensor itself; you either need to find this info for the camera module itself (which is the sensor plus the lens) or in the specifications for the phone.</p> <p>From spec listings and review sites I've found that the focal length for the rear camera is 4.06mm and the aperture is f/2.0. This can be added to the mode section:</p> <pre><code>Width: 3840;
Height: 2160;
Rate: 30;
Format: &quot;RGGB10p&quot;;
Rotate: 90;
FocalLength: 4.06;
FNumber: 2.0;</code></pre> <h2>Reference for pipeline commands</h2> <p>Since this is now practically the main reference for writing config files until I get documentation generation up and running for libmegapixels, I will put the complete documentation for the various commands here.</p> <p>While parsing the config file there are four values stored as state: <code>width</code>, <code>height</code>, <code>format</code> and <code>rate</code>. The values for these default to the ones set in the mode and they are updated whenever you define one of these values explicitly in a command. This prevents having to write the same resolution values repeatedly on every line but it still allows having entities in the pipeline that scale the resolution.</p> <h3>Link</h3> <pre><code>{
    Type: &quot;Link&quot;,
    From: &quot;msm_csiphy0&quot;,  # Source entity name, required
    FromPad: 1,           # Source pad, defaults to 0
    To: &quot;msm_csid0&quot;,      # Target entity name, required
    ToPad: 0              # Target pad, defaults to 0
}</code></pre> <p>Translates to a <code>MEDIA_IOC_SETUP_LINK</code> ioctl on the media device.</p> <h3>Mode</h3> <pre><code>{
    Type: &quot;Mode&quot;,
    Entity: &quot;imx318&quot;      # Entity name, required
    Width: 1280           # Horizontal resolution, defaults to previous in pipeline
    Height: 720           # Vertical resolution, defaults to previous in pipeline
    Pad: 0                # Pad to set the mode on, defaults to 0
    Format: &quot;RGGB10p&quot;     # Pixelformat for the mode, defaults to previous in pipeline
}</code></pre> <p>Translates to a <code>VIDIOC_SUBDEV_S_FMT</code> ioctl on the entity.</p> <h3>Rate</h3> <pre><code>{
    Type: &quot;Rate&quot;,
    Entity: &quot;imx318&quot;,     # Entity name, required
    Rate: 30              # FPS, defaults to previous in pipeline
}</code></pre> <p>Translates to a <code>VIDIOC_SUBDEV_S_FRAME_INTERVAL</code> ioctl on the entity.</p> <h3>Crop</h3> <pre><code>{
    Type: &quot;Crop&quot;,
    Entity: &quot;imx318&quot;,     # Entity name, required
    Width: 1280           # Area width, defaults to previous width in pipeline
    Height: 720           # Area height, defaults to previous height in pipeline
    Top: 0                # The vertical offset, defaults to 0
    Left: 0               # The horizontal offset, defaults to 0
    Pad: 0                # Pad to set the crop on, defaults to 0
}</code></pre> <p>Translates to a <code>VIDIOC_SUBDEV_S_CROP</code> ioctl on the entity.</p> <h2>The future of libmegapixels</h2> <p>It has been quite a bit of work to create libmegapixels and it has been a mountain of work to rework Megapixels to integrate it. The first 90% of this is done but the trick is always in getting the second 90% finished. In the <a href="https://blog.brixit.nl/megapixels-2-0/">Megapixels 2.0</a> post I already mentioned this has burned me out. On the other hand it's a shame to let this work go to waste.</p> <p>There are a few parts of autofocus, autoexposure and autowhitebalance that are very complicated and math heavy to figure out, and I can't figure them out.
The loop between libmegapixels and Megapixels exists to pass around the values but I can't stop the system from oscillating and can't get it to settle on good values. There seems to be no good public information available on how to implement this in any case.</p> <p>Another difficult part is sensor calibration. I have the hardware and software to create calibration profiles but this system expects the input pictures to come from... working cameras. The system completely lacks proper sensor linearisation which makes setting a proper whitebalance not really possible. You might have noticed the specific teal tint that gives away that a picture is taken on a Librem 5 for example. If that teal tint is corrected for manually then the midtones will look correct but highlights will become too yellow. Maybe there's a way to calibrate this properly or maybe this just takes someone messing with the curves manually for a long while to get correct.</p> <p>There also needs to be an alternative to libtiff for writing DNG files, so for my own sanity writing libdng is required. The last few minor releases of libtiff have all been messing with the TIFF tags relating to DNG files, which has caused taking pictures to not work for a lot of people. The only way around this seems to be to stop using libtiff, like all the Linux photography software has already done. This is not a terribly hard thing to implement, it has just been prioritized below getting color correct so far and I have not had the time to work on it.</p> <p>There are also still segfaults and crashes relating to the GPU debayer code in Megapixels for most of the pixel formats. This is very hard to debug due to the involvement of the GPU in the equation.</p> <h3>How can you help</h3> <p>If you know how to progress with any of this I gladly accept patches to push it forward.</p> <p>The harder part of this section is... money. I love working on photography stuff, I can't believe the Megapixels implementation has even gotten this far, but it basically takes me hyperfocusing for weeks for 12 hours per day on random camera code to get to this point, and that is not really sustainable. It's great to work on this for some days and make progress; it's really painful to work for weeks on that one 30 line code block and make no progress whatsoever. At some point my dream is that I can actually live off doing open source work but so far that has still been a distant dream.</p> <p>I've had the <a href="https://blog.brixit.nl/donations/">donations</a> page now for some years and I'm incredibly happy that people are supporting me to work on this at all. It's just forever stuck on receiving enough money that you feel a responsibility to produce progress but not nearly enough to actually fund that progress. So in practice it is only extra pressure.</p> <p>So I hate asking for money, but it would certainly help towards the dream of being an actual full time FOSS developer :)</p> Megapixels 2.0https://blog.brixit.nl/megapixels-2-0/87LinuxMartijn BraamThu, 09 Nov 2023 18:33:39 -0000<p>The Megapixels camera application has long been the most performant camera application on the original PinePhone. I have not gotten the Megapixels application to that point alone. There have been several other contributors that have helped slowly improving performance and features of this application.
Especially Benjamin has leaped it forward massively with the threaded processing code and GPU accelerated preview.</p> <p>All this code has made Megapixels very fast on the PinePhone but also has made it quite a lot harder to port the application to other hardware. The code is very much overfitted for the PinePhone hardware.</p> <h2>Finding a better design</h2> <p>To address the elephant in the room, yes libcamera exists and promises to abstract this all away. I just disagree with the design tradeoffs taken with libcamera and I think that any competition would only improve the ecosystem. It can't be that libcamera got this exactly right on the first try right?</p> <p>Instead of the implementation that libcamera has made that makes abstraction code in c++ for every platform I have decided to pick the method that libalsa uses for the audio abstraction in userspace.</p> <p>Alsa UCM config files are selected by soundcard name and contain a set of instructions to bring the audio pipeline in the correct state for your current usecase. All the hardware specific things are not described in code but instead in plain text configuration files. I think this scales way better since it massively lowers the skill floor needed to actually mess with the system to get hardware working.</p> <p>The first iteration of Megapixels has already somewhat done this. There's a config file that is picked based on the hardware model that describes the names of the device nodes in /dev so those paths don't have to be hardcoded and it describes the resolution and mode to configure. It also describes a few details about the optical path to later produce correct EXIF info for the pictures.</p> <div class="highlight"><pre><span></span><span class="k">[device]</span><span class="w"></span> <span class="na">make</span><span class="o">=</span><span class="s">PINE64</span><span class="w"></span> <span class="na">model</span><span class="o">=</span><span class="s">PinePhone</span><span class="w"></span> <span class="k">[rear]</span><span class="w"></span> <span class="na">driver</span><span class="o">=</span><span class="s">ov5640</span><span class="w"></span> <span class="na">media-driver</span><span class="o">=</span><span class="s">sun6i-csi</span><span class="w"></span> <span class="na">capture-width</span><span class="o">=</span><span class="s">2592</span><span class="w"></span> <span class="na">capture-height</span><span class="o">=</span><span class="s">1944</span><span class="w"></span> <span class="na">capture-rate</span><span class="o">=</span><span class="s">15</span><span class="w"></span> <span class="na">capture-fmt</span><span class="o">=</span><span class="s">BGGR8</span><span class="w"></span> <span class="na">preview-width</span><span class="o">=</span><span class="s">1280</span><span class="w"></span> <span class="na">preview-height</span><span class="o">=</span><span class="s">720</span><span class="w"></span> <span class="na">preview-rate</span><span class="o">=</span><span class="s">30</span><span class="w"></span> <span class="na">preview-fmt</span><span class="o">=</span><span class="s">BGGR8</span><span class="w"></span> <span class="na">rotate</span><span class="o">=</span><span class="s">270</span><span class="w"></span> <span class="na">colormatrix</span><span class="o">=</span><span class="s">1.384,-0.3203,-0.0124,-0.2728,1.049,0.1556,-0.0506,0.2577,0.8050</span><span class="w"></span> <span class="na">forwardmatrix</span><span class="o">=</span><span 
class="s">0.7331,0.1294,0.1018,0.3039,0.6698,0.0263,0.0002,0.0556,0.7693</span><span class="w"></span> <span class="na">blacklevel</span><span class="o">=</span><span class="s">3</span><span class="w"></span> <span class="na">whitelevel</span><span class="o">=</span><span class="s">255</span><span class="w"></span> <span class="na">focallength</span><span class="o">=</span><span class="s">3.33</span><span class="w"></span> <span class="na">cropfactor</span><span class="o">=</span><span class="s">10.81</span><span class="w"></span> <span class="na">fnumber</span><span class="o">=</span><span class="s">3.0</span><span class="w"></span> <span class="na">iso-min</span><span class="o">=</span><span class="s">100</span><span class="w"></span> <span class="na">iso-max</span><span class="o">=</span><span class="s">64000</span><span class="w"></span> <span class="na">flash-path</span><span class="o">=</span><span class="s">/sys/class/leds/white:flash</span><span class="w"></span> <span class="k">[front]</span><span class="w"></span> <span class="na">...</span><span class="w"></span> </pre></div> <p>This works great for the PinePhone but it has a significant drawback. Most mobile cameras require an elaborate graph of media nodes to be configured before video works, the PinePhone is the exception in that the media graph only has an input and output node so Megapixels just hardcodes that part of the hardware setup. This makes the config file practically useless for all other phones and this is also one of the reason why different devices have different forks to make Megapixels work.</p> <p>So a config file that only works for a single configuration is pretty useless. Instead of making this an .ini file I've switched the design over to libconfig so I don't have to create a whole new parser and it allows for nested configuration blocks. 
The config file I have been using on the PinePhone with the new codebase is this:</p> <div class="highlight"><pre><span></span><span class="k">Version</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">1</span><span class="err">;</span><span class="w"></span> <span class="k">Make</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;PINE64&quot;</span><span class="err">;</span><span class="w"></span> <span class="k">Model</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;PinePhone&quot;</span><span class="err">;</span><span class="w"></span> <span class="k">Rear</span><span class="err">:</span><span class="w"> </span><span class="p">{</span><span class="w"></span> <span class="w"> </span><span class="k">SensorDriver</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;ov5640&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">BridgeDriver</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;sun6i-csi&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FlashPath</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;/sys/class/leds/white:flash&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">IsoMin</span><span class="err">:</span><span class="w"> </span><span class="m">100</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">IsoMax</span><span class="err">:</span><span class="w"> </span><span class="m">64000</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Modes</span><span class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="w"></span> <span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">2592</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">1944</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rate</span><span class="err">:</span><span class="w"> </span><span class="m">15</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;BGGR8&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rotate</span><span class="err">:</span><span class="w"> </span><span class="m">270</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FocalLength</span><span class="err">:</span><span class="w"> </span><span class="m">3</span><span class="k">.</span><span class="m">33</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FNumber</span><span class="err">:</span><span class="w"> </span><span class="m">3</span><span class="k">.</span><span class="m">0</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Pipeline</span><span class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span 
class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Link&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">From</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;ov5640&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">FromPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="k">To</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;sun6i-csi&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">ToPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;ov5640&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">2592</span><span class="p">,</span><span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">1944</span><span class="p">,</span><span class="w"> </span><span class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;BGGR8&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">)</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="w"></span> <span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">1280</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">720</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rate</span><span class="err">:</span><span class="w"> </span><span class="m">30</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;BGGR8&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rotate</span><span class="err">:</span><span class="w"> </span><span class="m">270</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FocalLength</span><span class="err">:</span><span class="w"> </span><span class="m">3</span><span class="k">.</span><span class="m">33</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FNumber</span><span class="err">:</span><span class="w"> </span><span class="m">3</span><span class="k">.</span><span class="m">0</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Pipeline</span><span class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span 
class="s2">&quot;Link&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">From</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;ov5640&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">FromPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="k">To</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;sun6i-csi&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">ToPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;ov5640&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">)</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="p">}</span><span class="w"></span> <span class="w"> </span><span class="p">)</span><span class="err">;</span><span class="w"></span> <span class="p">}</span><span class="err">;</span><span class="w"></span> <span class="k">Front</span><span class="err">:</span><span class="w"> </span><span class="p">{</span><span class="w"></span> <span class="w"> </span><span class="k">SensorDriver</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;gc2145&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">BridgeDriver</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;sun6i-csi&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">FlashDisplay</span><span class="err">:</span><span class="w"> </span><span class="k">true</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Modes</span><span class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="w"></span> <span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">1280</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">960</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rate</span><span class="err">:</span><span class="w"> </span><span class="m">60</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;BGGR8&quot;</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Rotate</span><span class="err">:</span><span class="w"> </span><span class="m">90</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Mirror</span><span class="err">:</span><span class="w"> </span><span class="k">true</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="k">Pipeline</span><span 
class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Link&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">From</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;gc2145&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">FromPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="k">To</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;sun6i-csi&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">ToPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;gc2145&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">)</span><span class="err">;</span><span class="w"></span> <span class="w"> </span><span class="p">}</span><span class="w"></span> <span class="w"> </span><span class="p">)</span><span class="err">;</span><span class="w"></span> </pre></div> <p>Instead of having a hardcoded preview mode and main mode for every sensor it's now possible to make many different resolution configs. This config recreates the 2 existing modes and Megapixels now picks faster mode for the preview automatically and use higher resolution modes for the actual picture. </p> <p>Every mode now also has a <code>Pipeline</code> block that describes the media graph as a series of commands, every line translates to one ioctl called on the right device node just like Alsa UCM files describe it as a series of amixer commands. 
Megapixels no longer has the implicit PinePhone pipeline so here it describes the one link it has to make between the sensor node and the csi node and it tells Megapixels to set the correct mode on the sensor node.</p> <p>This simple example of the PinePhone does not really show off most of the config options so lets look at a more complicated example:</p> <div class="highlight"><pre><span></span><span class="k">Pipeline</span><span class="err">:</span><span class="w"> </span><span class="p">(</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Link&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">From</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;imx258&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">FromPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">,</span><span class="w"> </span><span class="k">To</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_csi&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">ToPad</span><span class="err">:</span><span class="w"> </span><span class="m">0</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;imx258&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;RGGB10P&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">1048</span><span class="p">,</span><span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">780</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_csi&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_isp&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_isp&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Pad</span><span class="err">:</span><span class="w"> </span><span class="m">2</span><span class="p">,</span><span class="w"> </span><span 
class="k">Format</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;RGGB8&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Crop&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_isp&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Crop&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_isp&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Pad</span><span class="err">:</span><span class="w"> </span><span class="m">2</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_resizer_mainpath&quot;</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Mode&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_resizer_mainpath&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Pad</span><span class="err">:</span><span class="w"> </span><span class="m">1</span><span class="p">},</span><span class="w"></span> <span class="w"> </span><span class="p">{</span><span class="k">Type</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;Crop&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Entity</span><span class="err">:</span><span class="w"> </span><span class="s2">&quot;rkisp1_resizer_mainpath&quot;</span><span class="p">,</span><span class="w"> </span><span class="k">Width</span><span class="err">:</span><span class="w"> </span><span class="m">1048</span><span class="p">,</span><span class="w"> </span><span class="k">Height</span><span class="err">:</span><span class="w"> </span><span class="m">768</span><span class="p">},</span><span class="w"></span> <span class="p">)</span><span class="err">;</span><span class="w"></span> </pre></div> <p>This is the preview pipeline for the PinePhone Pro. Most of the Links are already hardcoded by the kernel itself so here it only creates the link from the rear camera sensor to the csi and all the other commands are for configuring the various entities in the graph.</p> <p>The <code>Mode</code> commands are basically doing the <code>VIDIOC_SUBDEV_S_FMT</code> ioctl on the device node found by the entity name. To make configuring modes on the pipeline not extremely verbose it implicitly takes the resolution, pixelformat and framerate from the main information set by the configuration block itself. 
Since several entities can convert the frames into another format or size it automatically cascades the new mode to the lines below it.</p> <p>In the example above the 5th command sets the format to <code>RGGB8</code> which means that the mode commands below it for <code>rkisp1_resizer_mainpath</code> will also use this mode but the <code>rkisp1_csi</code> mode command above it will still be operating in <code>RGGB10P</code> mode.</p> <h2>Splitting of device management code</h2> <p>Testing changes in Megapixels is pretty hard. To develop the Megapixels code I'm building it on the phone and launching it over SSH with a bunch of environment variables set so the GTK window shows up on the phone and I get realtime logs on my computer. If anything goes wrong after the immediate setup code it is quite hard to debug because it's in one of the three threads that process the image data.</p> <p>To implement the new pipeline configuration I started a new empty project that builds a shared library and a few command line utilities that help test a few specific things. This codebase is <code>libmegapixels</code> and with it I have split off all hardware access from Megapixels itself, making both these codebases a lot easier to understand.</p> <p>It has been a lot easier to debug complex camera pipelines using the commandline utilities and only working on the library code. It should also make it a lot easier to build Megapixels-like applications that are not GTK4, so it can integrate better with other environments. One of the test applications for libmegapixels is <code>getframe</code> which is now all you need to get a raw frame from the sensor.</p> <p>Since this codebase is now split into multiple parts I have put it into a separate gitlab organisation at <a href="https://gitlab.com/megapixels-org">https://gitlab.com/megapixels-org</a> which hopefully keeps this a bit organized.</p> <p>This is also the codebase used for <a href="https://fosstodon.org/@martijnbraam/110775163438234897">https://fosstodon.org/@martijnbraam/110775163438234897</a> which shows off libmegapixels and Megapixels 2.0 running on the Librem 5.</p> <h2>Burnout</h2> <p>So now the worst part of this blog post. No, you can't use this stuff yet :(</p> <p>I've been working on this code for months, and now I've not been working on this code for months. I have completely burned out on all of this.</p> <p>The libmegapixels code is in a pretty good state but the Megapixels rewrite is still a large mess:</p> <ul><li>Saving pictures doesn&#x27;t really work and I intended to split that off to create libdng</li> <li>The QR code support is not hooked up at all at the moment</li> <li>Several pixelformats don&#x27;t work correctly in the GPU decoder and I can&#x27;t find out why</li> <li>Librem 5 and PinePhone Pro really need auto-exposure, auto-focus and auto-whitebalance to produce anything remotely looking like a picture. I have ported the auto-exposure from Millipixels which works reasonably well for this but got stuck on AWB and have not attempted Autofocus yet.</li> </ul> <p>The mountain of work that's left to do to make this a superset of the functionality of Megapixels 1.x and the expectations surrounding it have made this pretty hard to work on.
On the original Megapixels releases nothing mattered because any application that could show a single frame of the camera was already a 100% improvement over the current state.</p> <p>Another issue is that whatever I do or figure out will always instantly be put down with "Why are you not using libcamera" and "libcamera probably fixes this". </p> <p>Something people really need to understand is that an application not using libcamera does <i>not</i> mean other software on the system can't support libcamera. If Firefox can use libcamera to do videocalls that's great; that's not the usecase Megapixels is going for anyway.</p> <p>What also doesn't help is receiving bugreports for the PinePhone Pro while Megapixels does not support the PinePhone Pro. There's a patchset added on top to make it launch on the PinePhone Pro but there's a reason this patchset is not in Megapixels. The product of the Megapixels source with the ppp.patch added on top probably shouldn't've been distributed as Megapixels...</p> <p>What also doesn't help is that if Megapixels 2.0 were finished and released it would also create a whole new wave of criticism and comparisons to libcamera. I would have to support Megapixels for the people complaining that it's not enough... You could've not had a camera application at all...</p> <p>It also doesn't help that the libcamera developers are also the v4l2 subsystem maintainers in the kernel. During development of libmegapixels I tried sending a simple patch for an issue I noticed that would massively improve the ease of debugging PinePhone Pro cameras. I've sent this 3 character patch upstream to the v4l2 mailing lists and it got a Reviewed-by in a few days.</p> <p>Then after 2 whole months of radio silence it got rejected by the lead developer of libcamera on debatable grounds. Now this is only a very small patch so I'm merely disappointed. If I had put more work into the kernel side improving some sensor drivers I might have been mad but at this point I'm just not feeling like contributing to the camera ecosystem anymore. </p> <hr> <p><b>Edit:</b> I've been convinced to actually try to do this full-time and push the codebase forward enough to make it usable. This is continued at <a href="https://blog.brixit.nl/adding-hardware-to-libmegapixels/">https://blog.brixit.nl/adding-hardware-to-libmegapixels/</a></p> PinePhone Camera pt5https://blog.brixit.nl/pinephone-camera-pt5/620934a515b5040189838db3LinuxMartijn BraamSun, 13 Feb 2022 20:20:05 -0000<p>It's been a while since I've written anything about the Megapixels picture processing. The last post still showcases the old GTK3 version of Megapixels even!</p> <p>In the meantime users figured out how to postprocess the images better to get nicer results from the PinePhone camera. One of the major improvements that has landed was the sigmoidal contrast curve in ImageMagick.</p> <pre><code>convert img.tiff -sharpen 0x1.0 -sigmoidal-contrast 6,50% img.jpg</code></pre> <p>This command slightly sharpens the image and adds a nice smooth contrast curve to the image. This change has a major issue though: it is a fixed contrast curve added to all images and it does not work that great for a lot of cases. The best results came from running this against pictures that were taken with the manual controls in Megapixels so they had the right exposure.</p> <p>On the PinePhone the auto exposure in the sensor tends to overexpose images though. Adding more contrast after that will just make the issues worse.
In the header image of this post there are three images shown, generated from the same picture. The first one is the unprocessed image data, the second one is the .jpg created by the current version of Megapixels and the third one is the same data with my new post-processing software.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072511/image.png" class="kg-image"><figcaption>Waveform visualisation of the banner image</figcaption></figure> <p>This screenshot shows the waveform of the same header image. This visualizes the distribution of image data: the horizontal axis is the horizontal position in the image and on the vertical axis the brightness of all the pixels in that column is plotted. Here you can still see the 3 distinct images from the header image but with a different distribution of the color/brightness data.</p> <p>One of the main issues with the data straight from the sensor is that it's mostly in the upper part of the brightness range; there's no data at all in the bottom quarter of the brightness range and this is visible as images that have no contrast and look grayish. </p> <p>The sigmoidal contrast curve in the middle image takes the pixels above the middle line and makes them brighter, and takes the pixels below the middle line and makes them darker. The main improvement is the data extending further into the lower part here, but due to the curve the bright parts of the image become even brighter and the top line shows that the data is clipping.</p> <p>The third image with the new algorithm instead moves the data down by keeping the bright pixels in the same spot but "stretching" the image to the bottom. This corrects for the blacklevel of the sensor data and also creates contrast without clipping the data.</p> <h2>How</h2> <p>This started with me trying to make the postprocessing faster. Currently the postprocessing is done with a shell script that calls various image manipulation utilities to generate the final image.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/old.png" class="kg-image"></figure> <p>Megapixels takes a burst of pictures and saves those as separate .dng files in a temporary directory. From that series the second one is always used and the rest is ignored. With dcraw the image will be converted to rgb data and stored as tiff. Imagemagick will take that and apply the sharpness/contrast adjustment and save a .jpg.</p> <p>Because these tools don't preserve the exif data about the picture, exiftool is run last to read the exif from the original .dng files and save that in the final .jpg.</p> <p>Importing and exporting the image between the various stages is not really fast, and for some reason the processing in Imagemagick is just really really slow. My plan was to replace the 3 separate utilities with a native binary that uses libraw, libjpeg, libtiff and libexif to deal with this process instead. </p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/postprocessd-v1.png" class="kg-image"><figcaption>version 1 of postprocessd</figcaption></figure> <p>The new tool is postprocessd (because it's supposed to run in the background and queue processing). It uses libraw to get rgb data, the same library that's used in dcraw.
Then the resulting data is written directly to libjpeg to create the final jpegs without any processing in between. This is what actually generated the first image shown in the banner. Processing a single .dng to a .jpg in this pipeline is pretty fast compared to the old pipeline; a full processing run takes 4 seconds on the PinePhone.</p> <p>The downside is that the image looked much worse due to the missing processing. Also just having a bunch of .jpeg files isn't ideal. The solution I wanted is still image stacking to get less noise. With the previous try to get stacking running with HDR+ it turned out that that process is way way way too slow for the PinePhone and the results just weren't that great. In the meantime I've encountered <a href="https://github.com/luigi311/Low-Power-Image-Processing">https://github.com/luigi311/Low-Power-Image-Processing</a> which uses opencv to do the stacking instead. This seemed easy to fit in.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/postprocessd-v2.png" class="kg-image"><figcaption>Version 2 with opencv for stacking</figcaption></figure> <p>This new code takes all the frames and converts them with libraw. Then the opencv code filters out all the images that are low contrast or fully black, because sometimes Megapixels glitches out. The last .dng file is then taken as a reference image and all the other images are aligned on top of that with a full 4 point warping transform to account for the phone slightly moving between taking the multiple pictures. After the aligning the pictures are averaged together to get a much less noisy image without running an actual denoiser.</p> <p>This process produced an image that's exactly the same as the output files from v1 but with less noise. </p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/stacked.png" class="kg-image"><figcaption>Before and after stacking</figcaption></figure> <p>This is a zoomed in crop of a test image that shows the difference in noise. The results are amazing for denoising without having artifacts that make the image blurry. But for every upside there's a downside. This is very slow.</p> <p>Stacking 2 images together with the current code takes 38 seconds. For great results it's better to stack more than 2 images though.</p> <h2>Color processing</h2> <p>Now that the opencv dependency is added it's pretty easy to just use that to handle the rest of the postprocessing tasks.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/blacklevel-correction.png" class="kg-image"></figure> <p>The main improvement here is the automatic blacklevel and whitelevel correction. The code slightly blurs the image and then finds the darkest and brightest point. Then it simply subtracts the value of the darkest point to shift the colors in the whole image down, which removes the colored haze. Then the pixels get multiplied with a calculated value to make the brightest pixel pure white again, which "stretches" the brightness range so it fills the full spectrum.
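</p> <p>Expressed as a manual ImageMagick command the effect is roughly the following, except that postprocessd calculates the black and white points per image instead of using fixed percentages (the numbers here are made up):</p> <pre><code>convert stacked.tiff -level 8%,97% corrected.tiff</code></pre> <p>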
This process adds contrast like the old imagemagick code did, but in a much more carefully tuned way.</p> <p>After this a regular "unsharp mask" sharpening filter is run that's fairly aggressive, but tuned for the sensor in the PinePhone so it doesn't look oversharpened.</p> <p>A last thing that's done is a slight gamma correction to darken the middle gray brightness a bit to compensate for the PinePhone sensor overexposing most things. The resulting contrast is pretty close to what my other Android phones produce, except the resolution for those phones is a lot better.</p> <h2>What's left to do</h2> <p>The proof of concept works; now the integration work needs to happen. The postprocessing is quite CPU intensive so one of the goals of postprocessd is to make sure it never processes multiple images at the same time but instead queues the processing jobs up in the background so the CPU is free to actually run Megapixels. There are also still some bugs with the exif processing, and the burst length in the current version of Megapixels is a bit too short. This can probably be made dynamic to take more pictures in the burst when the sensor gain is set higher.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072513/compare.jpg" class="kg-image"></figure> DNG is not greathttps://blog.brixit.nl/dng-is-not-great/6189d3144e67c22384930706PhotographyMartijn BraamMon, 29 Nov 2021 09:00:00 -0000<p>Many who read this will probably not know DNG beyond "the annoying second file Megapixels produces". DNG stands for Digital Negative, an old standard made by Adobe to store the "raw" files from cameras.</p> <p>The standard has good ideas and it is even an open standard. There's a history of the DNG development on the <a href="https://en.wikipedia.org/wiki/Digital_Negative">wikipedia page</a> that details the timeline and goals of this new specification. My problem with the standard is also neatly summarized in one line of this article:</p> <blockquote><i>Format based on open specifications and/or standards</i>: DNG is compatible with <a href="https://en.wikipedia.org/wiki/Tag_Image_File_Format_/_Electronic_Photography">TIFF/EP</a>, and various <a href="https://en.wikipedia.org/wiki/Open_format">open formats</a> and/or <a href="https://en.wikipedia.org/wiki/Open_standard">standards</a> are used, including <a href="https://en.wikipedia.org/wiki/Exchangeable_image_file_format">Exif metadata</a>, <a href="https://en.wikipedia.org/wiki/Extensible_Metadata_Platform">XMP metadata</a>, <a href="https://en.wikipedia.org/wiki/IPTC_Information_Interchange_Model">IPTC metadata</a>, <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space">CIE XYZ coordinates</a> and <a href="https://en.wikipedia.org/wiki/JPEG">JPEG</a></blockquote> <p>This looks great at first glance, more standards! Reusing existing technologies! The issue is that it's so many standards though.</p> <h2>TIFF</h2> <p>DNG is basically nothing more than a set of conventions around TIFF image files. This is possible because TIFF is an incredibly flexible format. The problem is that TIFF is an incredibly flexible format. The format is flexible to the point that it's completely arbitrary where your data is. The only thing that's solid is the header that describes that the file is a TIFF file and a pointer to the first IFD chunk. The ordering of image data and IFD chunks within the file is completely arbitrary.
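</p> <p>You can see this for yourself by dumping the structure of any DNG file; exiftool for example can produce an annotated hexdump that shows where every IFD and blob of image data ended up in the file (output file name made up):</p> <pre><code>$ exiftool -htmldump picture.dng &gt; structure.html</code></pre> <p>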
<p>This makes parsing a TIFF file more complicated. It's not really possible to parse TIFF from a stream unless you buffer the full file first, since it's basically a filesystem that contains metadata and images.</p> <p>This format supports having any number of images inside a single file and every image can have its own metadata attached and its own encoding. This is used to store thumbnails inside the image for example. The format doesn't just support having multiple images, it supports an actual tree of image files and blobs of metadata.</p> <p>Every image in a TIFF file can have a different colorspace, color format, byte ordering, compression and bit depth. This is all without adding any of the extensions to the TIFF format.</p> <p>To get information about every image in the file there are the TIFF metadata tags. A tag is a number for the identifier and one or more values. Every extension and further version of the TIFF specification adds more tags to describe more detailed things about the image, and the DNG specification also adds a lot of new tags to store information about raw sensor data.</p> <p>All these tags are not enough though, there are more standards to build upon! There's a neat tag called 0x8769, also known as "Exif IFD". This tag is a pointer to another IFD that contains EXIF tags, from JPEG fame, that also describe the image. To make things complete, the information that you can describe with TIFF tags and with EXIF tags overlaps and can of course contradict each other in the same file.</p> <p>The same way it is also possible to add XMP metadata to an image. This is made possible by the combination of letters developers will start to fear: TIFFTAG_XMLPACKET. Because everything is better with a bit of XML sprinkled on top.</p> <p>Then lastly there's the IPTC metadata format, which I luckily have never heard of and never encountered, and I look forward to never learning about it.</p> <p>Shit, I looked it up anyway. This is a standard for... what... newspaper metadata? Let's quickly close this tab.</p> <h2>Writing raw sensor data to a file</h2> <p>So what would be the bare minimum to just write sensor dumps to a file? Obviously that's just <code>cat sensor > picture</code>, but that will lack the metadata to actually show the picture.</p> <p>The minimum data to render something that looks roughly like a picture would be:</p> <ul><li>width and height of the image</li> <li>pixel format as fourcc</li> <li>optionally the color matrices for making the color correct</li> </ul> <p>The first two are simple. This would just be two numbers for the dimensions, since it's unlikely that 3-dimensional pictures would be supported, and the pixel format can be encoded as the 4 ASCII characters representing the fourcc. The Linux kernel already has a lot of them defined in the v4l2 subsystem.</p>
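<p>As a rough illustration, writing such a header is only a few lines of Python. The exact byte layout below is made up for the example (a magic string, the dimensions and the fourcc); the point is just how little is needed compared to a DNG.</p> <pre><code>#!/usr/bin/env python3
# Sketch of dumping raw sensor data with a tiny made-up header: a 4 byte magic,
# the width and height as 32-bit integers and the pixel format as a fourcc.
# The field layout is only an example, it is not a real file format.
import struct
import sys

def write_raw(path, width, height, fourcc, sensor_bytes):
    with open(path, "wb") as f:
        f.write(b"PRAW")                              # made-up magic
        f.write(struct.pack("&lt;II", width, height))    # image dimensions
        f.write(fourcc.encode("ascii"))               # e.g. "BA81" for 8-bit BGGR
        f.write(sensor_bytes)                         # raw sensor dump

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        write_raw("picture.raw", 1920, 1080, "BA81", f.read())
</code></pre>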
<p>To do proper color transforms a lot more metadata would be needed, which would probably mean that it's smarter to have a generic key/value storage in the format.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072509/text859-5-7-5.png" class="kg-image"></figure> <p>This format can be extremely simple to read and write, except for the extra metadata that needs a bit of flexibility. The extra metadata should probably be an encoding that stores the number of entries and then the key length and value length, written as length-prefixed strings.</p> <p>The absolute minimum to test a sensor would be writing 16 bytes, which can even be done by hand, to make a header for a specific resolution and then appending the sensor bytes to that.</p> <h2>The hard part</h2> <p>Making up a random image file format is easy, getting software to support it is hard. Luckily there are open source image editors and picture editors, so some support could always be patched in initially for testing. Also this has quite a high XKCD-927 factor.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072509/standards-1.png" class="kg-image"><figcaption>Source: XKCD of course!</figcaption></figure> <p>It would still be great to know why a file format for this couldn't be this simple.</p> PinePhone Camera pt4https://blog.brixit.nl/pinephone-camera-pt4/5f79b6a67725960d859ca4fdPhonesMartijn BraamMon, 05 Oct 2020 19:39:01 -0000<p>I keep writing these because things keep improving. One of the main improvements is visible in the picture above: autofocus is working.</p> <p>The OV5640 sensor in the PinePhone is pretty small; this limits the possible image quality, but as an upside it means the camera has a way larger area of the picture in focus. Due to this it can get away with not having autofocus at all. The camera just sets the lens to infinity focus when starting (which produces the clicking sound when accessing the camera) and then the focus is mostly fine for landscapes.</p> <p>The downside is that things that are close to the camera aren't in focus. This is quite a problem for me because half the pictures I take with my phone are photos of labels on devices like routers, so I don't have to write the password down to enter it on a device in another room. These photos would be quite out of focus on the PinePhone.</p> <p>The autofocus support is actually only a single line change in Megapixels to make the basic functionality work. The main changes for this have been done in the kernel driver for the ov5640. The sensor chip has a built-in driver for the focus coil in the camera module. It only needs some firmware loaded to make the focus work and some commands need to be sent to configure the focussing. The firmware upload is needed because the sensor doesn't have any built-in storage for the 4kB of firmware that can be sent; it's just stored in RAM when the sensor is powered up by Linux.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072499/focus.jpg" class="kg-image"></figure> <p>The firmware itself is a closed blob from Omnivision.
It basically runs on the embedded 8051 core (that hasn't been used at all so far) and it gets an 80x60 pixel downscaled version of the camera view; from there it sends values to the focus distance register in the sensor that would normally be used for manual focussing. To trigger focus there are a few registers on the ov5640 you can write to in order to send commands to the firmware. The datasheet for example defines command 0x03 to focus on the defined area and 0x07 to release the focus and go back to infinity.</p> <p>After implementing this Megi figured out that there's an undocumented command 0x04 to trigger continuous autofocus in the sensor. This is way more user friendly and is what's now enabled by default by Megapixels.</p> <p>One of the remaining issues is that V4L2 doesn't seem to have controls defined to select <i>where</i> the focus should be measured. The current implementation just makes the ov5640 focus on the center of the image, but the firmware allows defining the zones it should use to get focus.</p> <h2>User facing manual controls</h2> <p>One of the new developments in Megapixels is a UI that allows users of the app to switch from automatic exposure to manual controls.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072499/ui.png" class="kg-image"></figure> <p>In the top row of the image the current state of the controls is shown. In this case it's the gain and exposure controls from V4L2. These controls don't have defined ranges, so those are set by the config file for Megapixels.</p> <p>If you tap the control at the top, the adjustment row at the bottom of the screenshot will open, allowing you to change the value by dragging the slider, or to enable the in-sensor automatic mode for it by clicking the "Auto" toggle button.</p> <p>These controls also slightly work for the GC2145 front camera; the main issue is that the datasheets don't define the range for the built-in analog and digital gain, so it can't really be mapped to useful values in the UI. The automatic gain can also only be disabled if you first disable the automatic exposure, something that can't really be enforced in Megapixels currently, so it's not super user friendly.</p> <p>The next step for this would be implementing the whitebalance controls for the cameras. That would involve some math since the UI would show the whitebalance as a color temperature and tint, but V4L2 deals with whitebalance with R/G/B offsets.</p>
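<p>The gain and exposure controls described in this section are regular V4L2 controls, so they can also be poked at from a script for experimenting before anything is wired up in the UI. A quick sketch using v4l2-ctl; the device node, the control names and the values here are assumptions for the PinePhone rear camera, so check <code>v4l2-ctl --list-ctrls</code> to see what your kernel actually exposes.</p> <pre><code>#!/usr/bin/env python3
# Sketch of setting the V4L2 gain/exposure controls manually via v4l2-ctl.
# Device node and control names are assumptions; they depend on the driver.
import subprocess

DEVICE = "/dev/video1"  # assumed device node for the rear camera

def set_ctrl(name, value):
    subprocess.run(["v4l2-ctl", "-d", DEVICE, "--set-ctrl", f"{name}={value}"],
                   check=True)

# Roughly what the "Auto" toggles in the UI do: switch to manual exposure,
# then set a fixed exposure and gain value
set_ctrl("auto_exposure", 1)   # assumed: 1 = manual on this driver
set_ctrl("exposure", 500)
set_ctrl("gain", 64)
</code></pre>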
<h2>Color calibration</h2> <p>Another huge step in image quality for the rear camera is the color calibration matrices.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072499/lumariver.PNG" class="kg-image"></figure> <p>The calibration for this is done by making correctly exposed pictures of a calibrated color target like the X-Rite ColorChecker Passport above. Even the slightest amount of exposure clipping ruins the calibration result, but due to the manual controls in Megapixels I was now able to get a good calibration picture.</p> <p>For the calibration two photos are needed. Those need to be as far away from each other as possible in the color spectrum. A common choice for this is using a Standard D65 illuminant and a Standard A illuminant, which is a fancy way of saying daylight on a cloudy day and an old tungsten lightbulb. By knowing the color/spectrum of the light used and the color of the paint in the calibration target it's possible to calculate a color matrix and curve to transform the raw sensor data into correct RGB colors.</p> <p>To do this calculation I used <a href="http://lumariver.com/">Lumariver Profile Designer</a>, which is a closed source tool that gives a very nice UI around <a href="http://rawtherapee.com/mirror/dcamprof/dcamprof.html">DCamProf</a> from the RawTherapee people. The license is paid for by the donations from <a href="https://www.patreon.com/martijnbraam?">my Patreon</a> sponsors and the license cost is used by Lumariver to continue development on DCamProf.</p> <p>After running all the steps in the calibration software I get a .dcp file that contains the color matrices and curves for the sensor/lens combination. The matrices from this file are then written to the hardware config file in Megapixels in the colormatrix and forwardmatrix keys. Megapixels doesn't actually process any of this information itself; it only gets added to the exported raw files as metadata, and the post processing tools will use it to produce better output files.</p> <p>The result of the matrices is that it now takes way less messing with raw tools like RawTherapee and Darktable to get a good looking picture. The pictures just look better by default.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072499/color-demo.jpg" class="kg-image"></figure> <p>It might look like it's just a saturation change, but it also corrects a few color balance issues. The second thing the calibration tool outputs is the calibration curves for the sensor. These make brightness-dependent or hue-dependent changes to the image. These are a bit too large to store in the config file, so to use them I need to find a way to merge the calibration data from a .dcp file into the generated .dng raw files instead.</p> <h2>New post processing pipeline</h2> <p>Just writing the metadata doesn't magically do anything. To take advantage of it I rewrote the processing pipeline for the photos, using the DNG code that I wrote to test the burst function. I removed the debayer code from Megapixels that did the good quality debayer when taking a photo and I removed the jpeg export function. Megapixels now only writes the burst of photos to a temporary directory as .dng files with all the metadata set. After taking a photo Megapixels calls a shell script which then takes those files and runs them through the new processing pipeline.</p> <p>Megapixels ships with a postprocess.sh file by default that is stored in /usr as the default fallback. You can override the processing by copying or creating a new shell script and storing that in ~/.config/megapixels/postprocess.sh or /etc/megapixels/postprocess.sh.</p> <p>The included processing script takes the burst and saves the first .dng file into ~/Pictures as the raw photo. Then it runs the same file through dcraw to do a very good quality debayer and at the same time apply the color matrices stored in the .dng files. Then it runs imagemagick to convert the resulting tiff file into a .jpg in ~/Pictures as the final jpeg image.</p> <p>The output .jpg files are what is used in the garden pictures above.</p>
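<p>The steps of that included script boil down to roughly the following. This is just a sketch of the equivalent calls in Python, not the postprocess.sh that actually ships with Megapixels; the dcraw flags, the burst directory and the file names here are assumptions.</p> <pre><code>#!/usr/bin/env python3
# Rough sketch of the steps the default postprocessing script performs.
# Not the actual postprocess.sh; flags and paths are assumptions.
import shutil
import subprocess
from pathlib import Path

def postprocess(burst_dir, name):
    pictures = Path.home() / "Pictures"
    first = sorted(Path(burst_dir).glob("*.dng"))[0]

    # keep the first raw frame of the burst as the .dng
    shutil.copy(first, pictures / f"{name}.dng")

    # high quality debayer with dcraw, using the color matrix embedded in the DNG
    subprocess.run(["dcraw", "-w", "+M", "-q", "3", "-T", str(first)], check=True)

    # convert the resulting .tiff into the final .jpg with imagemagick
    tiff = first.with_suffix(".tiff")
    subprocess.run(["convert", str(tiff), str(pictures / f"{name}.jpg")], check=True)

postprocess("/tmp/megapixels-burst", "photo")  # assumed burst location
</code></pre>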
<h2>HDR+ and stacking</h2> <p>Last but not least is the further development into the <a href="https://www.timothybrooks.com/tech/hdr-plus/">hdr-plus</a> implementation. After my previous blog post one of the developers in the #pinephone channel (purringChaos) got the hdr-plus repository to build, which I couldn't get working last time.</p> <p>The hdr-plus tool takes a burst of raw frames and aligns those to do noise reduction and get a larger dynamic range. Then it runs a tonemap on the HDR file and makes a nice processed image. It's basically an implementation of the photo pipeline that Google made for the Pixel phones.</p> <p>So far running the hdrplus binary on the photos has resulted in very overexposed or very weirdly colored images, which might be a result of the camera not providing the raw files in a way the hdr tool expects. Hopefully that can be solved in the future.</p> <p>The hdr-plus repository does have another tool in it that just does the stacking and noise reduction though, and that binary <i>does</i> work. If the postprocess.sh script in Megapixels detects that the stack_frames command is installed, it will use it to stack together the 5 frames of the burst capture and then use the .dng from that tool in the rest of the post-processing pipeline. It seems to reduce the noise in images quite a lot, but it also loses most of the metadata in the .dng file, so this would also need some improvements to work correctly.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072499/demo.jpg" class="kg-image"></figure> <p>I think with these changes the ov5640 is basically stretched to the limit of what's possible with the photo quality. The rest of the planned tasks are mainly UX improvements and supporting more different camera pipelines. Since postmarketOS supports a lot of phones and more and more of them run mainline Linux, it should be possible to also run Megapixels on some of those.</p> <p>I've also received a few patches to Megapixels now with interesting features, like support for more than 2 cameras and a proof-of-concept QR code reading feature. Hopefully I can also integrate those in this new code soon.</p> PinePhone camera adventures, part 3https://blog.brixit.nl/pinephone-camera-adventures-part-3/5f6cb41a7725960d859ca40cPhonesMartijn BraamFri, 25 Sep 2020 17:15:32 -0000<p>Armed with the knowledge gathered by making the python-pinecamera implementation I started making a GTK3 version. This time using C instead of Python, partially for performance, partially because the docs are hard enough for C and the Python abstraction makes it way harder.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072498/megapixels-bar.png" class="kg-image"></figure> <p>The app is far from perfect and still has a bunch of low hanging fruit for performance improvements. At this moment it's still decoding the images in the UI thread, making the whole app feel not very responsive. It's also decoding at a different resolution than it's displayed at, which can probably be sped up by decoding at exactly the right resolution and rotating while decoding.</p> <p>One of the ways the image quality is increased on the sensors is by using a raw pixel format instead of a subsampled one. Resolution on cameras is basically a lie. This is because while displays have subpixels, sensors don't. When you display an image on a 1080p monitor, you're actually sending 5760x1080 subpixels to the display and those subpixels are neatly in red/green/blue order.</p> <p>On camera sensors when you have a 1080p sensor, you actually only have 1920x1080 subpixels.
These subpixels are laid out in a bayer matrix where you have (in the ov5640 case) on the first line a row of blue,green,blue,green... pixels and on the next row green,red,green,red... Then camera software takes every subpixel, gets the nearest other colored pixels and makes that one "real" pixel. This makes for a slightly blurrier image than a 4k sensor being downscaled to 1080p.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072498/subpixels.png" class="kg-image"></figure> <p>To make the quality issue worse the image will be subsampled to have a 1080p grayscale image and the color channel will be quarter resolution. To make the quality even worse the camera will do quite aggressive noise reduction, causing the oil paint look in bad pictures.</p> <p>The fun part of debayering the image in the sensor and then subsampling it is that the bandwidth required for the final picture is about twice as much as the original raw sensor data. On higher resolutions this causes the sensor to hit a bandwidth limit when sending the video stream at a reasonable framerate. This is why it wasn't possible before to get a 5 megapixel picture from the 5 megapixel sensor. But by using a raw mode you can get the full sensor resolution at 15fps. This is what Megapixels does.</p> <h2>Speed</h2> <p>One of the major improvements over the previous solutions for the cameras is that the preview has a way higher framerate. The ffmpeg based solutions could run in the 0.5-2 FPS region based on the rotation and the resolution the sensor was running at. These were also running in YUV mode instead of raw. The issue with YUV for the preview is that the sensor takes the RGB subpixels and returns the image as YUV pixels. The display in the phone is RGB again, so this has to be converted back, which takes quite some CPU power.</p> <p>The easiest way to speed up data processing in computers is to just process less data, so that's what it does. Instead of debayering a full frame properly at 5 megapixels, it just does a quick'n'bad debayer by taking a single red, green and blue pixel from every 12x12 block of raw pixels and discarding the rest, with no interpolation at all.</p> <p>This is technically not a correct solution and it also produces way worse image quality than proper debayering. That's why, when you take a picture, the whole raw frame is run through a proper debayering implementation.</p>
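<p>In numpy terms that quick'n'bad preview debayer boils down to strided slicing. A minimal sketch, assuming an 8-bit BGGR raw frame and ignoring edge handling and color correction; the file name and frame size are assumptions, not what Megapixels itself does internally.</p> <pre><code>#!/usr/bin/env python3
# Sketch of the decimating "debayer" used for the preview: take one blue, green
# and red sample per NxN block of the BGGR raw frame and throw the rest away.
# No interpolation at all, just like the preview path described above.
import numpy as np

def preview_debayer(raw, block=12):
    # BGGR layout: B at (0,0), G at (0,1), R at (1,1) within each 2x2 cell
    b = raw[0::block, 0::block]
    g = raw[0::block, 1::block]
    r = raw[1::block, 1::block]
    h = min(b.shape[0], g.shape[0], r.shape[0])
    w = min(b.shape[1], g.shape[1], r.shape[1])
    return np.dstack([r[:h, :w], g[:h, :w], b[:h, :w]])  # small RGB preview

if __name__ == "__main__":
    # assumed: a full resolution 8-bit dump of the 5 megapixel ov5640
    frame = np.fromfile("rear.raw", dtype=np.uint8).reshape(1944, 2592)
    print(preview_debayer(frame).shape)
</code></pre>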
<iframe width="100%" height="360" src="https://www.youtube.com/embed/n_BfUV0v3UM?feature=oembed"> <h2>Image quality</h2> <p>The main issue with the image quality currently is the lack of focussing, making all images slightly soft if you're photographing things far away, or very blurry if you try to take a close-up. The second big issue is that the auto exposure algorithm in the ov5640 is pretty horrible and by default overexposes large parts of the image to try to get the average exposure correct.</p> <p>To test exposure improvements I made a local build of Megapixels that sets a fixed exposure on the sensor. I also added some WIP .dng raw exporting code so the sensor data can actually be loaded in photo processing software as raw data.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072498/darktable.jpg" class="kg-image"></figure> <p>The resulting image quality is a lot better. With the post-processing happening after taking the photo, the processing software (in this case Darktable) can run way more CPU intensive debayering algorithms and you also have way more control over the exposure and colors of the final image.</p> <p>This is how the image looks with all the post processing disabled in Darktable:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072498/raw.JPG" class="kg-image"></figure> <p>Another feature I added for testing is burst capture; the photo above is one of the 5 images captured in a burst (at full resolution, 15fps). One of the possibilities is running this burst through the HDR+ software Google runs, but I haven't found any open source implementation that actually compiles. Another possibility is the Handheld-Multi-Frame-Super-Resolution algorithm that's also used by Google. I also haven't been able to compile any of those implementations.</p> <p>If you want to try to post-process a PinePhone raw image yourself, here's the 5-frame burst used for the images above: <a href="http://brixitcdn.net/pinephone-burst/">http://brixitcdn.net/pinephone-burst/</a></p> <h2>Conclusion</h2> <p>Lots more to do, but it's getting better every release.</p> PinePhone camera part 2https://blog.brixit.nl/pinephone-camera-part-2/5f00b2357725960d859ca25bLinuxMartijn BraamSat, 04 Jul 2020 17:46:51 -0000<p>So some progress has been made since the <a href="/camera-on-the-pinephone/">previous post</a> about the cameras in the PinePhone.</p> <p>The previous post had the camera scripts running on the 5.6 kernel, which only supported the ov5640 rear camera. Now I've built a kernel with 5.7 and megi's camera patches on top. This enables the front camera (gc2145), because by default the camera module for the Allwinner A64 doesn't support multiple cameras on the same bus.</p> <p>The patches also make the kernel support more modes for the camera in the correct way and disable some denoising and sharpening in the rear camera, which improves the image quality.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072495/picture-2020.07.04-17.40.47-1.jpg" class="kg-image"><figcaption>A photo taken in raw 1080p mode, postprocessed to have something resembling natural colors</figcaption></figure> <p>Post processing the raw photos takes some time but gives way sharper results. In my case I used the <a href="https://github.com/jdthomas/bayer2rgb">bayer2rgb</a> tool to convert the raw images to debayered RGB images with a tiff header, then I used Darktable to give the image better colors, contrast, sharpness and some denoising. To make this workflow better the sensor data would need to be profiled against a color chart to make a better transformation from the raw data to image colors.
</p> <p>Here's a comparison between the same photo taken in UYVY mode with the latest kernel and a photo taken in raw mode and postprocessed:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072496/comparison.png" class="kg-image"><figcaption>Left images are RAW, Right images are UYVY</figcaption></figure> <p>Also important to note: these photos aren't just bleak because of the sensor, it has also been raining all day.</p> <p>The commands used to make these images:</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>camera.py still rear.raw -c rear -r 1080p1 --raw --pixfmt <span class="gp">$ </span>bayer2rgb -i rear.raw -w <span class="m">1920</span> -v <span class="m">1080</span> -b <span class="m">8</span> -f BGGR -t -o rear.tiff <span class="gp">$ </span>darktable rear.tiff </pre></div> <h2>Front camera</h2> <p>The front camera is still in pretty rough shape... but it does work!</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072496/gc2145.jpg" class="kg-image"></figure> <p>As you can see above, the image quality isn't great. Auto whitebalance isn't working at all, causing the image to be very green. The auto-exposure is also not working correctly, causing it to overexpose most images.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072496/testje2.jpg" class="kg-image"></figure> <p>The whitebalance can be fixed with some postprocessing, but the exposure is an issue that would need to be fixed in the camera driver. The nice thing is that the front camera can already take pictures at the full native resolution of the sensor (1600x1200).</p> <h2>Camera.py</h2> <p>I've updated camera.py to support these camera features in the same repository as the original script: <a href="https://git.sr.ht/~martijnbraam/python-pinecamera">https://git.sr.ht/~martijnbraam/python-pinecamera</a></p> Camera on the PinePhonehttps://blog.brixit.nl/camera-on-the-pinephone/5efb398b7725960d859ca1feLinuxMartijn BraamTue, 30 Jun 2020 16:03:01 -0000<p>The camera on the PinePhone isn't very well tested since there is no easy way to get frames from the rear or front camera, even though the kernel has supported the cameras for a while.</p> <p>This is a Python script that can do various camera tasks on the PinePhone when running on postmarketOS; it might work in other distributions if they have the right dependencies and kernel config.</p> <p>The required dependencies for postmarketOS are:</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>apk add python3 v4l-utils imagemagick ffmpeg </pre></div> <p>Then you need the camera script itself, you can fetch it from the <a href="https://git.sr.ht/~martijnbraam/python-pinecamera">sourcehut repository</a>:</p> <div class="highlight"><pre><span></span><span class="gp">$ </span>wget https://git.sr.ht/~martijnbraam/python-pinecamera/blob/master/camera.py </pre></div> <p>This script allows taking pictures and recording movies. It also automatically corrects the orientation of photos based on data from the accelerometer.
Here is some example use:</p> <pre><code>$ ./camera.py still photo.jpg $ ./camera.py still photo.png $ ./camera.py still --resolution 720p photo.jpg $ ./camera.py movie --resolution 1080p15 pinecones.mkv ^c to end recording $ ./camera.py --help usage: camera.py [-h] [--resolution {1080p,max,720p60,720p25,720p,1080p15,720p30,1080p10,720p24,720p50}] [--camera {rear,front}] [--debug] {still,movie} filename PinePhone camera tool positional arguments: {still,movie} filename optional arguments: -h, --help show this help message and exit --resolution {1080p,max,720p60,720p25,720p,1080p15,720p30,1080p10,720p24,720p50}, -r {1080p,max,720p60,720p25,720p,1080p15,720p30,1080p10,720p24,720p50} --camera {rear,front}, -c {rear,front} --debug, -d </code></pre> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072495/pic9.jpg" class="kg-image"><figcaption>Photo taken in 1080p mode</figcaption></figure> <p>It's also interesting to note that switching between 720p and 1080p mode changes the cropping of the sensor significantly. The 1080p photos produce more detail, but the 720p ones have a wider field of view.</p> <figure class="kg-card kg-gallery-card"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brixit.nl/image/w600//static/files/blog.brixit.nl/1670072495/pic10.jpg" class="kg-image" width="600" height="337"></div><div class="kg-gallery-image"><img src="https://blog.brixit.nl/image/w600//static/files/blog.brixit.nl/1670072495/pic11.jpg" class="kg-image" width="600" height="337"></div></div></div></figure> <p>It should be possible to get better picture quality out of these sensors. A lot of detail is lost because of the aggressive noise reduction happening on the OV5640 sensor. It should be possible to produce better results by post processing the raw photos that aren't denoised by the sensor.</p> <p>Finally a video recording made using camera.py in 1080p15 mode:</p> <iframe width="100%" height="360" src="https://www.youtube.com/embed/UgN1sLwxA6Q?feature=oembed"> <p>This isn't using the hardware h264 encoder yet; it produces a relatively high bitrate h264 stream using the software encoder running on the ultrafast preset.</p>
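<p>For reference, that kind of software encode is roughly an x264 ultrafast invocation on the raw frames coming out of the sensor pipeline. A sketch of an equivalent ffmpeg call follows; the pixel format, frame size, framerate and file names are assumptions for illustration, not the exact parameters camera.py uses.</p> <pre><code>#!/usr/bin/env python3
# Sketch of a software x264 encode of raw UYVY frames on the ultrafast preset.
# The pixel format, size, framerate and file names are assumptions.
import subprocess

subprocess.run([
    "ffmpeg",
    "-f", "rawvideo",              # raw frames, no container
    "-pixel_format", "uyvy422",    # assumed sensor output format
    "-video_size", "1920x1080",
    "-framerate", "15",
    "-i", "frames.raw",            # assumed dump of captured frames
    "-c:v", "libx264",
    "-preset", "ultrafast",        # trade compression efficiency for encode speed
    "pinecones.mkv",
], check=True)
</code></pre>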