Photography - BrixIT Blog (https://blog.brixit.nl/tag/photography/page/1)

Fixing the Megapixels sensor linearization (Megapixels, Martijn Braam, Thu, 25 Jan 2024) https://blog.brixit.nl/fixing-the-megapixels-sensor-linearization/

<p>Making a piece of software that dumps camera frames from V4L2 into a file is not very difficult to do, that's only a few hundred lines of C code. Figuring out why the pictures look cheap is a way harder challenge.</p> <p>For a long time Megapixels had some simple calibrations for blacklevel (to make the shadows a bit darker) and whitelevel (to make the light parts not grey), and later, after a bit of documentation studying, I added calibration matrices all the way back in <a href="https://blog.brixit.nl/pinephone-camera-pt4/">part 4</a> of the Megapixels blog series.</p> <p>The color matrix that was added in Megapixels is a simple 3x3 matrix that converts the color sensitivity of the sensor in the PinePhone to calibrated values for the rest of the pipeline. Just a simple 3x3 matrix is not enough for a more detailed correction though. Luckily the calibration software I used produces calibration files that contain several correction curves for the camera, for example the HSV curve that changes the hue, saturation and brightness of specific colors.</p> <p>Even though this calibration data is added by Megapixels I still had issues with color casts. Occasionally someone mentions to me how "filmic" or "vintage" the PinePhone pictures look. This is the opposite of what I'm trying to do with the picture processing. The vintage look appears because color casts that are not linear to brightness are very similar to how cheap or expired analog film reproduces colors. So where is this issue coming from?</p> <p>I've taken a closer look at the .dcp files produced by the calibration software. With a bit of Python code I extracted the linearization curve from this file and plotted it. It turns out that the curve generated after calibration was perfectly linear. That makes a bit of sense, since this calibration software was never made to create profiles for completely raw sensor data; it was made to create small corrections for professional cameras that already produce nice looking pictures. Looks like I have to produce this curve myself.</p>
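<p>For reference, pulling that curve out of the profile only takes a few lines. This is a minimal sketch that assumes dcamprof is installed to convert the profile to json first; the exact key layout of that json is an assumption here:</p> <pre><code>import json
import subprocess
import matplotlib.pyplot as plt

# Convert the binary camera profile to readable json first.
subprocess.run(['dcamprof', 'dcp2json', 'camera.dcp', 'camera.json'], check=True)

with open('camera.json') as f:
    profile = json.load(f)

# Assumed layout: a list of [input, output] control points.
curve = profile['ProfileToneCurve']
xs = [point[0] for point in curve]
ys = [point[1] for point in curve]

plt.plot(xs, ys, label='profile tone curve')
plt.plot([0, 1], [0, 1], ':', label='perfectly linear')  # reference line
plt.legend()
plt.show()</code></pre>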
<h2>Getting a sensor linearization curve</h2> <p>As my first target I looked into the Librem 5, mainly because that's the phone that currently has the most battery charge. I had hoped there was some documentation about the sensor response curves in the datasheet for the sensor, but it turns out that even getting a datasheet for this sensor is problematic. So the solution is to measure the sensor instead.</p> <p>Measuring this is pretty hard though: most solutions need a calibrated reference. I've thought about figuring out how to calibrate a light to produce precise brightness dimming steps, and measuring the curve of the light with a colorimeter to fix any color casts of the lights. Another idea was taking pictures of a printed grayscale curve, but that has the issue that the light on the grayscale chart needs to be perfectly flat.</p> <p>But after thinking about this in the background for some weeks I had a thought: instead of producing a perfect reference grayscale gradient, it's way easier to point the camera at a constant light source and then adjust the shutter speed of the camera to produce the various light levels. Instead of a lot of external factors with calibrated lights, which can throw off measurements massively, I only have to assume that the shutter speed setting in the sensor is accurate.</p> <p>The reason I can assume this is accurate is that the shutter speed setting in these phone sensors is in "lines". These cameras don't have mechanical shutters, it's all electronic shutter in the sensor. If the shutter is set to 2 lines, the line being read out by the sensor at that moment was cleared only 2 scanlines earlier. This is the "rolling shutter" effect. If the shutter is set to 4 lines instead, every line has exactly twice the amount of time to collect light after resetting. This should result in a pretty much perfectly linear way to control the amount of light to calibrate the response with.</p> <p>In the case of the Librem 5 this value can be set from 2 lines to 3118 lines, where the maximum value means that all the lines of the sensor have been reset by the time the first line is read out, giving the maximum amount of light gathering time.</p> <p>With libmegapixels I have enough control over the camera to make a small C application that runs this calibration. It goes through these steps:</p> <ol><li>Open the specified sensor and set the shutter to the maximum value</li> <li>Start measuring the brightness of the 3 color channels and adjust the sensor gain so that with the current lighting the sensor will be close to clipping. If on the lowest gain setting the light source is still too bright the tool will ask to lower the lamp brightness.</li> <li>Once the target maximum brightness has been hit the tool will start lowering the shutter speed in regular steps, saving the brightness of the color channels at every point.</li> <li>The calibration data is then written to a csv file</li> </ol> <p>The process looks something like this:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706114157/image.png" class="kg-image"></figure> <p>This is a short run for testing where only 30 equally spaced points are measured. I did a longer run for calibration with it set to 500 points instead, which takes about 8 minutes. This is a plot of the resulting data after scaling the curves to hit 1.0 at the max gain:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706114296/image.png" class="kg-image"></figure> <p>The response of the sensor is not very linear at all... This means that if a picture is whitebalanced on the midtones, the shadows will get a teal color cast due to the red channel having lower values. If the picture were whitebalanced to correct the darker colors instead, the brighter colors would turn magenta.</p> <p>The nice thing is that I don't have to deal with actually correcting this. This curve can just be loaded into the .dng file metadata and the processing software will apply this correction at the right step in the pipeline.</p> <h2>Oops</h2> <p>It is at this point that I figured out that the LinearizationTable DNG tag is a grayscale correction table, so it can't fix the color cast. At least it will improve the brightness inconsistencies between the various cameras.</p> <p>With some scripting I've converted the measured response curve into a correction curve for the LinearizationTable and then wrote that table into some of my test pictures with <code>exiftool</code>.</p>
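<p>To give an idea of what that scripting looks like: the sketch below inverts a normalized gray response into a 16-bit LinearizationTable and writes it into a test picture. The CSV layout and exiftool accepting a write of this tag are assumptions here, not a documented workflow:</p> <pre><code>import csv
import subprocess
import numpy as np

# measured.csv is assumed to hold two normalized columns per row: the
# shutter fraction (the "real" brightness) and the gray value the sensor
# reported for it, sorted from dark to bright.
real, reported = [], []
with open('measured.csv') as f:
    for row in csv.reader(f):
        real.append(float(row[0]))
        reported.append(float(row[1]))

# For every possible raw value, interpolate the inverse of the measured
# response: which linear value the sensor actually meant.
raw = np.linspace(0, 1, 256)  # 8-bit raw data in this example
linear = np.interp(raw, reported, real)
table = np.round(linear * 65535).astype(np.uint16)

# Write the table into a test picture.
subprocess.run(['exiftool',
                '-LinearizationTable=' + ' '.join(map(str, table)),
                'test.dng'], check=True)</code></pre>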
<figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706221766/compare-linearizationtable.jpg" class="kg-image"></figure> <p>This is the result. The left image is a raw sensor dump from the Librem 5 rear camera without any corrections applied at all, except the initial whitebalance pass. On the right is the exact same image pipeline, but with the LinearizationTable tag set in the dng before feeding it to <code>dcraw</code>.</p> <p>The annoying thing here is that neither picture looks correct. The first one has the extreme gamma curve that is applied by the sensor, so everything looks very bright. The processed picture is a bit on the dark side, but that might be because the auto-exposure was run on the first picture, causing underexposure on the corrected data.</p> <p>The issue with that though is that some parts of the image data are already clipping while they shouldn't be, and exposing the picture brighter would only make that worse.</p> <p>Maybe I have something very wrong here, but at this point I'm also just guessing how this stuff is supposed to work. Documentation for this doesn't really exist. This is all the official documentation:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1706222220/image.png" class="kg-image"><figcaption>No, chapter 5 is not helpful</figcaption></figure> <p>Maybe it all works slightly better if the input raw data is not 8-bits, but that's a bunch more kernel issues to fix on the Librem 5 side.</p> <h2>Conclusion</h2> <p>So, not as much progress on this as I hoped. I made some nice tools to produce data that makes pictures worse. Once the clipping in the highlights is fixed this might be very useful though, since practically everything in the DNG pipeline expects the input raw data to be linear and it just isn't.</p> <p>The <a href="https://gitlab.com/megapixels-org/libmegapixels/-/commit/f6686d7a5a176384da3b5a1eaf93985aeb29d7be">sensor measuring tool</a> is included in the libmegapixels codebase now though.</p> <p>To fix auto-exposure I also need to figure out a way to apply this correction curve before running the AE algorithms on the live view. More engineering challenges as always :)</p> <hr> <div style="width: 75%; margin: 0 auto; background: rgba(128,128,128,0.2); padding: 10px;"> <h4>Development Funding</h4> <p>The current developments of Megapixels are funded by... You! The end-users. It takes a lot of time and a lot of weird expertise to make Linux cameras work and I would not have been able to do it without your support.</p> <p>The donations are being used for the occasional hardware required for Megapixels development (like a nice Standard Illuminant A for calibration) and the various other FOSS applications I develop for the Linux ecosystem. Every single bit helps to not do all this work entirely for free.</p> <a href="https://blog.brixit.nl/donations/">Donations</a> </div>

Expensive cameras are actually better (Photography, Martijn Braam, Wed, 16 Aug 2023) https://blog.brixit.nl/expensive-cameras-are-actually-better/

<p>It has been quite a while since I've bought a "new" camera. A lot of my photography has been done on a Nikon D3300. This is a 2014 entry-level DSLR.
While this camera is marked as entry level it has practically all the features you'd ever need for photography, and I never really had any issues with the picture quality of this camera.</p> <p>Now that the world is moving on to mirrorless cameras it has become almost affordable to buy a 10 year old second hand professional camera, so I snatched up the D750 right after Nikon introduced a new batch of Z-series mirrorless cameras. I've always wanted to use a full-frame camera but the price difference has been absurd.</p> <p>When I bought the D3300 somewhere in ~2015, if I remember correctly, I did a comparison between the specs of a bunch of the Nikon cameras current at the time and came to the conclusion that getting something more expensive than the entry level just wasn't worth it. A comparison of the models at the time:</p> <ul><li>The D3300 "entry level" for ~300 euro at the end of 2015.</li> <li>The D5300 "upper entry" for ~500 euro. This mostly adds a tilting screen.</li> <li>The D7200 "enthusiast" for ~1000 euro. Higher max ISO and a faster shutter.</li> <li>The D610 "high end" for ~1500 euro. This one is the cheapest full frame model but was old-gen by that time already.</li> <li>The D750 "high end" for ~2000 euro. The same generation but full frame.</li> <li>The D810 "professional" for ~3000 euro. Takes both SD cards and Compact Flash.</li> <li>The D4s "flagship" for ~6000 euro. Still only takes Compact Flash cards.</li> </ul> <p>Start with the most obvious spec in spec listings but the least important for 99% of the work: sensor resolution. Almost all these cameras have the same 24.7 megapixel resolution, except for the D810, which is slightly newer and has 36 megapixels, and the D4s, which is a bit older and has 16 megapixels.</p> <p>A lot of the specifications in webshop spec lists for these models are almost or exactly the same, mainly because these cameras are all based around the Expeed 4 SoC. When I chose the D3300 I was upgrading from a second hand D70 and my budget only allowed choosing between the two cheapest options. The main differentiators between those two models are that the screen tilts and that it has a lot more autofocus points. I deemed the tilting feature completely useless on a DSLR, because the main selling point of these cameras is that you have an optical viewfinder, and since I didn't need more autofocus points than the D70 I just got the cheapest one.</p> <p>The D3300 is a great camera; it has significantly better noise performance than the D70, which I practically had to keep below ISO 800 to not get super noisy results.</p> <h2>Looking at some benchmarks</h2> <p>I'm not sure if this data was available yet when I was deciding between models almost a decade back, but there is an amazing camera comparison website called DXOMARK. It has a <a href="https://www.dxomark.com/Cameras/Compare/Side-by-side/Nikon-D750-versus-Nikon-D3300___975_928">spec comparison between the D3300 and the D750</a>. This gives a better picture of camera differences through actual measurements of the sensor.</p> <p>For some reason it's really hard or impossible to find which sensor is in which camera model, which seems like an obvious spec to list since it's the most important component of the camera. Some of the major sensor differences between these two cameras:</p> <ul><li>The old one is a DX size sensor and the D750 is an FX camera. This has some upsides and downsides I'll get to later.</li> <li>The bit depth is 12 vs 14 bit.
And since that's not super intuitive: this means it captures 4x more levels (2^14 = 16384 vs 2^12 = 4096), not 1.16x the amount.</li> <li>11 vs 51 autofocus points.</li> <li>Up to 5dB improvement in noise level at high-ISO use. This is almost half the noise at max ISO.</li> <li>12.7dB vs 14.5dB of dynamic range at the minimum ISO. This means I can have a lot brighter highlights in my shot without the sensor clipping.</li> </ul> <p>After using the D750 for some weeks these differences turned out to make quite a bit of difference in which shots I could still make without things getting noisy or blurry.</p> <h2>User experience differences</h2> <p>Another difference you won't easily see from spec sheets is the massive difference in UX.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692136664/PXL_20230815_215628746.jpg" class="kg-image"><figcaption>The D750 on the left and the D3300 on the right</figcaption></figure> <p>At first glance the UI on the devices is pretty similar, and from the photos in camera reviews you'd think they behave pretty similarly. It could not be more different.</p> <p>The D3300 turns on the screen for practically any action you do that doesn't require looking through the viewfinder. The bottom two rows are the quick-access settings for changing some of the main behaviors of the camera, like autofocus/manual focus, the exposure and focus areas, and the ISO. Some of the settings can be changed by holding a combination of hardware buttons and scrolling the thumbwheel, but most of the configuration happens through this UI. The way you use those software quick-settings is with the d-pad to navigate through them.</p> <p>The D750 does seem to have roughly the same buttons on the back, but the UI is missing the two bottom rows of settings. It's not only missing those two rows, it's actually not possible at all to change many of these settings using the d-pad and the screen. This pretty quickly forced me to learn all the hardware buttons, which is probably a good thing since they are a lot faster to use.</p> <p>A few examples of the usage differences:</p> <ul><li>To change ISO on the D3300 I have to first move the camera out of the automatic modes using the mode dial on the top and then use the "i" button beside the screen to enter the quick settings menu. With the d-pad I move to the ISO option, and the "ok" button opens a dialog to select the ISO, again with the d-pad buttons. This does not allow changing between auto/manual ISO; that happens a few layers deep in the main menu. On the D750 I hold the button on the back that has "iso" above it and use the thumbwheel to scroll through the options. For this action the back screen lights up quickly to show what I'm doing, but there's also a tiny ISO indicator in the viewfinder to see the options while I'm scrolling through them. The list of options includes "auto", so no separate menu path is needed.</li> <li>To change between autofocus and manual focus on the D3300 there's another quick-setting menu, showing "MF" in the picture. Using the same procedure as for the ISO option this can be navigated to switch between "MF", "AF-S" and "AF-C". The availability of these options depends on which picture mode the camera is in. On the D750 there's an actual physical toggle on the side to switch between MF and AF very quickly.
There's a bunch more AF variants in this camera and also more focus point modes. These extra options can be accessed by pressing the button in the center of the toggle: with the thumbwheel the AF mode can be changed between continuous, single and auto, and the second wheel on the front changes the focus point mode at the same time between single-point, all points, groups and 3D tracking.</li> </ul> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692137711/PXL_20230815_221346196.jpg" class="kg-image"></figure> <ul><li>Changing the shutter speed or aperture on the D3300 depends on the mode. Since there's a single thumbwheel on this camera it will pick the "main" setting of the mode you're in. In shutter-priority mode it will control the shutter and in aperture-priority mode it will control the aperture. In program or manual mode the wheel controls the shutter speed; to access the aperture setting the exposure compensation button has to be held down. On the D750 there are two wheels, so the rear one is always shutter speed and the front one is always aperture control.</li> <li>To change between release modes like single-photo, burst, timer and quiet mode, the D3300 has the release mode button below the d-pad that pops up another menu to navigate. The D750 has a second ring below the mode selector dial that you can rotate to select these modes between Single, Continuous low-speed, Continuous high-speed, Quiet, Quiet-continuous, Timer and mirror-lock-up.</li> </ul> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692138374/PXL_20230815_222446696.jpg" class="kg-image"></figure> <ul><li>The mode dial on the D750 has more "professional" options than the D3300. The D3300 mode dial is filled with magic useless modes like portrait, sport, landscape, baby?, macro and night mode. These options are basically very slight variations on the auto program. It also has a dedicated position to open the in-camera manual for some reason. The D750 mode dial has all the special auto modes under a single "scene" position so you can easily skip them, and it adds the U1 and U2 positions instead. These are fully programmable positions that store the entire camera configuration. I have one of these programmed for focus trapping and one for a fixed ISO 100 and 1/160 shutter. Another small detail is that the mode dial has a locking button that prevents accidentally changing the mode.</li> </ul> <p>There are a lot more of these differences; practically all the settings you'd ever need are accessible using some button combination and the wheels, and you can see the settings you're adjusting through the viewfinder, so you never lose sight of the subject of the picture.
Some of these settings are especially nicely done, and I really like how it spells out "3D" using the focus points when selecting the 3D focus tracking mode.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692139159/PXL_20230815_223600021.jpg" class="kg-image"><figcaption>The D750 viewfinder when selecting the 3d focus mode</figcaption></figure> <p>Another difference isn't in the added physical buttons: there's a whole second display. I was used to having this display on the D70, but the smaller entry cameras are sadly missing it.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692139260/PXL_20230815_223453152.jpg" class="kg-image"><figcaption>The top LCD on the D750</figcaption></figure> <p>This top display is an old-school LCD with an optional backlight that can be triggered to read it in the dark. This picture shows basically all the settings you can read on this display, but one of the nice features is that this is an always-on display. When the camera is off the amount of remaining shots is still visible (which is the 1.2k above) and when the SD card is removed it shows [-E-] instead. This makes it really quick to see if the camera is ready while it's still in my case.</p> <h2>Using the camera</h2> <p>There are quite a few things I noticed while using this camera the first few days. One of the first things that was obvious is how much better the autofocus system on this camera is. Not only is there a bit more granularity with the extra autofocus points, the lenses also lock focus a lot quicker.</p> <p>On the D3300 I mainly kept the camera in single-point single-push focus mode when possible, because in the full-sensor mode it relatively often picks a completely wrong autofocus point. I also barely use the continuous focus modes, because it does not reliably pick the right moments to keep focussing and to stop again when reframing shots.</p> <p>Another point where the D3300 focus is lacking quite a bit is subjects that are moving quickly. The 3D tracking autofocus on the D750 can actually keep up with a puppy charging towards you at full speed.</p> <p>Another big change is mainly due to switching to a much larger sensor: it changes the lens selection I use quite a bit. Since I never planned to get a full-frame camera I never considered that as an option while buying lenses. My most used primes on the D3300 are the Nikkor 35mm and 50mm, and of those I mainly use the 50mm one. Due to the smaller sensor on the old camera most of my photos are effectively shot at 75mm.</p> <p>At first using the lenses was quite weird. The 50mm lens on the D750 behaves more like the 35mm lens does on the D3300, which would have a 52mm effective focal length in that setup. But when putting the 35mm lens on the D750 I got... also 52mm. This is because my 50mm prime is a full frame lens and the 35mm one is a DX lens. The D750 detects this and automatically crops down the pictures in this case.</p>
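<p>The crop factor explains those numbers: the DX sensor is roughly 1.5x smaller than full frame, so a lens gives the field of view of a 1.5x longer lens. A trivial sketch of that arithmetic:</p> <pre><code># Nikon DX sensors have a crop factor of roughly 1.5 vs full frame (FX).
DX_CROP_FACTOR = 1.5

def equivalent_focal_length(focal_mm, crop_factor=DX_CROP_FACTOR):
    """Full-frame equivalent field of view of a lens on a cropped sensor."""
    return focal_mm * crop_factor

print(equivalent_focal_length(50))  # 75.0: the 50mm prime on the D3300
print(equivalent_focal_length(35))  # 52.5: the 35mm on the D3300, or the
                                    # DX lens being cropped on the D750</code></pre>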
<p>This annoyed me for quite a bit and I considered getting a new prime in the 75-85mm range to get back my favorite focal length, but then I realized that I could just use my 70-300mm telephoto lens as my "standard" lens instead. While this would be quite unwieldy to use on the D3300, it works great for daily use on the D750, and since this is a lens with optical stabilization it helps even more with low light usage.</p> <p>I'm also quite happy that my most expensive lens, the 150-600mm, happened to also be a full-frame lens. Dragging both the camera and this lens around is quite heavy but the results are amazing. With the improved autofocus it becomes possible to snap pictures of birds in flight even!</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692140619/darktable.0PPA61_1.jpg" class="kg-image"><figcaption>This picture would've never been in focus on the D3300</figcaption></figure> <p>The lower effective focal length of the lenses on the FX sensor, combined with a lower noise floor, has helped a massive amount with successfully getting a lot of pictures I would never even have attempted on the D3300, because I would have had to push the ISO so high the results would become useless, or I would not have been able to focus quickly and accurately enough to get the shot.</p> <h2>Astrophotography</h2> <p>I find astrophotography interesting; it's incredible what you're able to see using a camera pointed at the sky that you could never see with the naked eye. But I never got into it enough to actually pick gear specifically for its astrophotography qualities. A few days ago I tried the D750 for astrophotography for the first time and the results have blown me away.</p> <p>One difference here is that the D3300 has a pentamirror and the D750 has a pentaprism. This is the part that sits between the lens and the eyepiece of the camera and is responsible for reflecting the image from the lens a few times, so that it lines up with your eye while looking through the eyepiece and the picture isn't mirrored.</p> <p>The pentamirror is the cheaper option, which consists of a few mirrors glued together. This means that the light has to go through a few more glass-to-air transitions before it reaches your eyes, which results in the image through the viewfinder being slightly darker. I had not really noticed any difference when using the D750 for normal photography, but for astrophotography it made seeing stars through the viewfinder a lot easier.</p> <p>Another difference is that on the D750 the autofocus system is actually good enough to autofocus on stars, which makes the whole process a lot less painful.</p> <p>And finally, due to the better low-light performance I was able to get way better results than my previous tries with the D3300.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1692143163/stack4_03.jpg" class="kg-image"><figcaption>The Andromeda Galaxy with the D750</figcaption></figure> <p>Here's an example of a stack of 30 pictures at ISO 12800 showing the Andromeda Galaxy. I would put a D3300 picture here for comparison, but at this focal length and shutter duration it just doesn't gather enough light to extract the data from.</p> <h2>Conclusion</h2> <p>I'm glad I picked up this camera. I've long thought the pro cameras were just a few thousand euro of added price for a second SD slot and a few more hardware buttons. The reviews of these cameras seem like a lot of cork sniffing, but if you compare the entry and pro cameras there are actually some significant improvements.
A lot of these improvements are also most likely artificial market segmentation, since both models have the same CPU and a lot of the user experience differences are just missing software features.</p> <p>There are some things that are surprisingly similar. One example is the video recording feature. The recording quality is practically the same on both cameras and the experience with anything relating to the live view is equally horrible. These cameras are not for recording video. This also makes the tilting screen a useless feature.</p> <p>For most pictures the D3300 is absolutely fine, and as a bonus it's also a lot lighter and smaller.</p>

Mobile Linux camera pt6 (Phones, Martijn Braam, Wed, 08 Mar 2023) https://blog.brixit.nl/mobile-linux-camera-pt-6/

<p>The processing with postprocessd has been working pretty well for me on the PinePhone. After I released it I had someone test it with the dng files from a Librem 5 to see how it deals with a completely different input.</p> <p>To my surprise the answer was: not well. With the same postprocessing for the PinePhone and the Librem 5, the Librem 5 pictures turn out way too dark and contrasty. The postprocessd code is supposed to be generic and has no PinePhone specific code in it.</p> <p>Fast forward to some time later: I now have a Librem 5, so I can do more camera development. The first thing to do is the sensor calibration process I did with the PinePhone in <a href="https://blog.brixit.nl/pinephone-camera-pt4/">part 4</a> of this blog series. This involves taking some pictures of a proper calibration target, which in my case is an X-Rite ColorChecker Passport, and feeding those into some calibration software.</p> <p>Because aligning color charts and handling all the file format conversions with the DCamProf calibration suite from RawTherapee is quite annoying, I got the paid graphical utility from the developers. By analyzing the pictures the software will generate a lot of calibration data. From that, currently only a small part is used by Megapixels: the ColorMatrix and the ForwardMatrix.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1678280014/image.png" class="kg-image"><figcaption>Calibration output snippet</figcaption></figure> <p>These are 3x3 matrices that do the colorspace conversion for the sensor. I originally just added these two to Megapixels because they have the least amount of values, so they fit in the camera config file, and they have a reasonable impact on image quality.</p> <p>The file contains two more important things though: the ToneCurve, which converts the brightness data from the sensor to linear space, and the HueSatMap, which contains three correction curves in a 3-dimensional space of hue, saturation and brightness. The latter is obviously the most data.</p> <h2>What is a raw photo?</h2> <p>The whole purpose of Megapixels and postprocessd is to take the raw sensor data and postprocess it with a lot of cpu power after taking the picture, to produce the best picture possible. The processing for this is built on top of existing open source photo processing libraries like libraw.</p> <p>The expectation this software has for "raw" image data is that it's high bit depth linear-light sensor data that has not been debayered yet. The data from the Librem 5 is exactly this; the PinePhone sensor data is weirder.</p> <p>Unlike most phones, which have the camera connected over MIPI-CSI, a nice high speed serial connection to push image data over, the PinePhone has its camera connected over a parallel bus.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1678281697/image.png" class="kg-image"><figcaption>Rear camera connection from the PinePhone 1.2 schematic</figcaption></figure> <p>This parallel bus provides hsync/vsync/clock and 8 data lines for the image data. The ov5640 sensor itself has a 10-bit interface though:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1678281824/image.png" class="kg-image"><figcaption>The D[9:0] is the 10 image data lines from the sensor</figcaption></figure> <p>Since only 8 of the 10 lines are available in the flatflex from the sensor module that holds the ov5640, the camera has to be configured to output 8-bit data. I made the assumption that the sensor just truncates two bits from the image data, but from the big difference in the brightness response I have the suspicion that the image data is no longer linear in this case. It might actually be outputting an image that's not debayered but <i>does</i> have an sRGB gamma curve.</p> <p>This is not really a case that raw image libraries deal with, and it would not traditionally be labelled "raw sensor data". But it's what we have. So instead of making assumptions again, let's just look at the data.</p> <p>I have pictures of the colorchecker for both cameras and the colorchecker contains a strip of grayscale patches. With this it's possible to make a very rough estimation of the gamma curve of the picture. I cropped out that strip of patches from both calibration pictures and put them in the same image, but in different colors. I also made sure to rescale the data to hit 0% and 100% with the darkest and brightest patches.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1678284482/image.png" class="kg-image"><figcaption>Waveform for the neutral patches, green is the PinePhone and pink is the Librem 5</figcaption></figure> <p>The result clearly shows that the data from the PinePhone is not linear. It also shows that the Librem 5 is not linear either, but in the opposite direction.</p> <p>These issues can be fixed though with the tonecurve calibration that's missing from the current Megapixels pictures.</p>
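<p>A rough way to put a number on this: sample the mean value of every gray patch, normalize, and fit a gamma exponent against the expected patch values. This is just a sketch; the file name, the patch coordinates and the assumption that the patch reflectances form linear steps are all simplifications:</p> <pre><code>import numpy as np
from PIL import Image

# Mean brightness per gray patch; the crop rectangles are made up here.
img = np.asarray(Image.open('gray_strip.png').convert('L'), dtype=float)
patch_boxes = [(x * 80 + 10, 10, x * 80 + 70, 70) for x in range(6)]
means = np.array([img[t:b, l:r].mean() for (l, t, r, b) in patch_boxes])

# Normalize so the darkest patch is 0.0 and the brightest is 1.0.
norm = (means - means.min()) / (means.max() - means.min())

# Pretend the patches are linear reflectance steps and fit output = input^gamma.
reference = np.linspace(0, 1, len(norm))
mask = (norm > 0) & (reference > 0)
gamma = np.polyfit(np.log(reference[mask]), np.log(norm[mask]), 1)[0]
print('estimated gamma exponent:', gamma)  # 1.0 would mean linear data</code></pre>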
<h2>postprocessd is not generic after all</h2> <p>So what happened is that I saw the output of postprocessd while developing it and saw that my resulting pictures were way too bright. I thought I must have had a gamma issue and added a gamma correction to the code.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1678285132/image.png" class="kg-image"></figure> <p>With this code added it looks way better for the PinePhone, but way worse for the Librem 5. This is all a side effect of developing it with the input of only one camera. The correct solution is to not have this gamma correction and to have the libraw step before it correct the raw data according to the tonecurve that's stored in the file.</p>
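<p>What that correction boils down to is mapping every raw value through the profile's ToneCurve so the rest of the pipeline gets linear data. A minimal numpy sketch of the idea, with an assumed two-column array of curve control points:</p> <pre><code>import numpy as np

def apply_tone_curve(raw, curve):
    """Map sensor values to linear light through a profile tone curve.

    raw:   float array of sensor values, normalized to 0..1
    curve: (N, 2) array of (input, output) control points from the profile
    """
    return np.interp(raw, curve[:, 0], curve[:, 1])

# Example: undoing a sensor that applied a 1/2.2 gamma to its output.
t = np.linspace(0, 1, 64)
curve = np.stack([t, t ** 2.2], axis=1)
print(apply_tone_curve(np.array([0.0, 0.5, 1.0]), curve))  # ~[0, 0.22, 1]</code></pre>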
<h2>Storing more metadata</h2> <p>The issue with adding more calibration metadata to the files is that it doesn't really fit in the camera ini file. I have debated just adding a quick hack to it and making a setting that generates a specific gamma curve to add as the tone curve. That would fix my current issue, but to fix it once and for all it's way better to include <i>all</i> the curves generated by the calibration software.</p> <p>So what is the output of this software? Lumariver Profiler outputs .dcp files, which are "Adobe Digital Negative Camera Profile" files. I have used the profile inspection output that turns this binary file into readable json and extracted the matrices from it before. It would be way easier to just include the .dcp file alongside the camera configuration files to store the calibration data.</p> <p>I have not been able to find any official file format specification for this DCP file, but I saw something very familiar when throwing the file in a hex editor... The file starts with <code>II</code>. This is the byte order mark for a TIFF file. The field directly after it is not 42 though, which makes this an invalid TIFF file. It turns out that a DCP file is just a TIFF file with a modified header that does not have any image data in it. This makes the Megapixels implementation pretty easy: read the TIFF tags from the DCP and save them in the DNG (which is also TIFF).</p> <p>In practice this was not that easy, mainly because I'm using libtiff and DCP is <i>almost</i> a TIFF file. Using libtiff for DNG files works pretty well, since DNG is a superset of the TIFF specification; the only thing I have to do is add a few unknown TIFF tags to the libtiff library at runtime. DCP is a subset of the TIFF specification instead, and it is missing some of the tags that are required by the TIFF specification. There's also no way in libtiff to ignore the invalid version number in the header.</p> <p>So I wrote my own TIFF parser for this. TIFF parsers are quite hard in general, since there's an enormous amount of possibilities for storing things in TIFF files, but since DCP is a smaller subset of TIFF it's quite reasonable to parse it manually. A parser for the DCP metadata is around 160 lines of plain C, so that is now embedded in Megapixels. The code searches for a .dcp file associated with a specific sensor and then embeds the calibration data into the generated DNG files. If the matrices are also defined in the camera ini files then those are overwritten by the ones from the DCP file.</p>
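<p>To give an idea of how little is needed for the happy path, here is a rough Python equivalent of such a parser. It assumes a little-endian profile with a single IFD and, just like the C version, deliberately skips the version check that libtiff insists on:</p> <pre><code>import struct

def read_dcp_tags(path):
    """Parse the IFD of a DCP (an almost-TIFF) into {tag: (type, count, value)}."""
    with open(path, 'rb') as f:
        data = f.read()
    byte_order, version, ifd_offset = struct.unpack_from('<2sHI', data, 0)
    assert byte_order == b'II'  # only handle little-endian profiles here
    # Note: 'version' is not checked; a DCP fails the usual 42 check on purpose.
    (count,) = struct.unpack_from('<H', data, ifd_offset)
    tags = {}
    for i in range(count):
        entry = ifd_offset + 2 + i * 12  # every IFD entry is 12 bytes
        tag, dtype, n, value = struct.unpack_from('<HHII', data, entry)
        tags[tag] = (dtype, n, value)  # 'value' is inline data or an offset
    return tags

# Tag ids come from the DNG specification, e.g. 0xC621 is ColorMatrix1.
print(read_dcp_tags('camera.dcp'))</code></pre>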
<h2>Results</h2> <p>The new calibration work is now in <a href="https://gitlab.com/postmarketOS/megapixels/-/merge_requests/30">megapixels#30</a> and needs to go through the testing and release process now. There's also an upcoming postprocessd release that removes the gamma correction.</p> <p>For the Librem 5 there's <a href="https://source.puri.sm/Librem5/millipixels/-/merge_requests/88">millipixels#88</a>, which adds correct color matrices for now, until it gets the DCP code added.</p>

Taking a good picture of a PCB (Photography, Martijn Braam, Sun, 27 Nov 2022) https://blog.brixit.nl/taking-a-good-picture-from-a-pcb/

<p>Pictures of boards are everywhere when you work in IT. Lots of computer components come as (partially) bare PCBs and single board computers are also a popular target. Taking a clear picture of a PCB is not trivial though; I suspect a lot of these product pictures are actually 3d renders.</p> <p>While updating <a href="https://hackerboards.com/">hackerboards</a> I noticed not all boards have great pictures available. I have some of them laying around and I have a camera... how hard could it be?</p> <p>Definitely the worst picture is the one in the header above: taken with a phone at an angle, with the flash, in bad lighting conditions. I've taken quite a bunch of pictures of PINE64 boards, like some of the pictures in the header of the pine64 subreddit and the picture in the sidebar. I've had mixed results, but the best board pictures I've taken were made using an external flash unit.</p> <h2>The ideal setup</h2> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072525/20221127_0002.jpg" class="kg-image"></figure> <p>So to create a great picture I've decided to make a better setup. I've used several components for this. The most important part is two external flashes controlled with a wireless transmitter. I've added softboxes to the flashes to minimize the sharp shadows usually created when using a flash. This produces quite nice board pictures with even lighting.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072526/darktable.NUSIU1.jpg" class="kg-image"><figcaption>All dust looks 1000x worse on pictures :(</figcaption></figure> <p>For all the pictures from this setup I've used a 50mm macro lens. Not only is it great for getting detail pictures of the small components, it's also the lens with the least distortion I have. Having low distortion in the lens is required to have a sharp picture all the way to the edges of the board and to not have the edges of the board look curved. The curvature can be fixed in software, but the focal plane not being flat into the corners can't be fixed.</p> <p>It's possible to get even fewer shadows on the board by using something like a ring light; while this gets slightly more clarity, I find it also just makes the pictures less aesthetically pleasing.</p> <p>So how to deal with the edges of the board? For a lot of website pictures you'd want a white background. I have done this by just using a sheet of paper and cleaning up the background using photo editing software. This is quite time consuming though. The usual issue with this is that the background is white but not perfectly clipped to 100% pure white in the resulting picture. There's also the issue of the board itself casting a slight shadow.</p> <p>I took my solution for this from my 3D rendering knowledge (which is not much): you can't have a shadow on an emitter. To do this in the real world I used a lightbox.</p> <p>Lightboxes are normally for tracing pictures and are quite easy to get. It doesn't give me a perfectly white background, but it gets rid of the shadows at least.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072526/20221126_0006_01.jpg" class="kg-image"></figure> <p>To get this from good to perfect there's another trick though. If I take a picture without the flashes turned on but everything else on the same settings, I get a terribly underexposed picture... except for the background.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072526/20221126_0003_01.jpg" class="kg-image"><figcaption>You never notice how many holes there are in a PCB until you put it on a lightbox</figcaption></figure> <p>All I need to do to get a clean background is increase the contrast of this picture to get a perfect mask. Then in gimp I can just overlay this on the picture with the layer mode set to lighten only.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072526/composite.jpg" class="kg-image"><figcaption>The final composite</figcaption></figure> <p>It's also possible to use the mask picture as the alpha channel for the color picture instead. This works great if there's a light background on the website; it shows the flaws though when the website has a dark background.</p> <p>Let's create the worst-case scenario and use a pure black background:</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072526/black.jpg" class="kg-image"></figure> <p>Now edges are visible on the cutouts. Due to the mismatch in light color temperature with the lightbox the edges are also blue here. A lot of the edges can be fixed by running the dilate filter in gimp on the mask layer to make the mask crop into the board by one pixel, but that makes the holes in the board too large. To get this perfect, manual touchup is still required.</p>
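<p>That one-pixel shrink doesn't have to happen by hand in gimp either. A sketch of the same step with Pillow, where growing the white background of the mask by one pixel crops into the board (file names are assumed, and the holes still grow along with it):</p> <pre><code>from PIL import Image, ImageFilter, ImageOps

# The backlit shot: white background, dark board.
mask = Image.open('mask.jpg').convert('L')

# Grow the white areas by one pixel so the mask crops into the board,
# hiding the blue edge fringe. Side effect: the holes grow too.
mask = mask.filter(ImageFilter.MaxFilter(3))

color = Image.open('board.jpg').convert('RGB')
color.putalpha(ImageOps.invert(mask))  # white background becomes transparent
color.save('board_cutout.png')</code></pre>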
<h2>Automating it further</h2> <p>Now that the input data is good enough to make the cutout with a few steps in gimp, it's also possible to automate this further with the magic of ImageMagick.</p> <pre><code>$ convert board.jpg \( mask.jpg \
    -colorspace gray \
    -negate \
    -brightness-contrast 0x20 \
    \) \
    -compose copy-opacity \
    -composite board.png</code></pre> <p>This loads the normal picture from board.jpg and the backlit picture as mask.jpg and composites them together into a .png with transparency.</p> <p>But it can be automated even further! I still have a bit of camera shake from manually touching the shutter button on the camera, and I need to remember to take both pictures every time I slightly nudge the device I'm taking a picture of.</p> <p>The camera I'm using here is the Panasonic Lumix GX7. One of the features of this camera is the built-in wifi. Using this wifi connection it's possible to use the atrocious Android application to take pictures and change a few settings.</p> <p>After a bit of reverse engineering I managed to create a <a href="https://git.sr.ht/~martijnbraam/remotecamera">Python module</a> for communicating with this camera. Now I can just script these actions:</p> <pre><code>import time

from remotecamera.lumix import Lumix

# My camera has a static DHCP lease
camera = Lumix("192.168.2.41")
camera.init()
camera.change_setting('flash', 'forcedflashon')
camera.capture()
time.sleep(1)
camera.change_setting('flash', 'forcedflashoff')
camera.capture()</code></pre> <p>Now I can just run the script and it will take the two pictures I need. It's probably also possible to fetch the images over wifi and automatically trigger the compositing, but that sadly requires changing the wifi mode on the camera itself between remote control and file transfer.</p> <h2>Not just for PCBs</h2> <p>This setup is not only useful for PCB pictures. It's pretty great for any product picture where the product fits on the lightbox.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072527/composite-1.jpg" class="kg-image"></figure> <p>Here's the composite of a SHIFT 6mq. Pictures of the screen itself are still difficult, due to the pixel pattern interfering with the pixels of the camera sensor and the display reflecting the light of the flashes.
This can probably be partially fixed once I get a polarizing filter that fits this lens.</p>

Taking pictures the hard way (Photography, Martijn Braam, Wed, 30 Mar 2022) https://blog.brixit.nl/getting-into-analog-photography/

<p>I've never really been that interested in taking pictures on film. My first camera was digital and digital seems superior in every way: I can take as many pictures as I want, they're instantly ready and they can be reviewed on the camera itself. Also one of the major upsides is that pictures are basically free.</p> <p>There's one thing that's not really cheap on digital though: larger sensor sizes. And with larger I mean full-frame 35mm and up. All my digital cameras are APS-C (~60% smaller than full frame) or below. This works absolutely great, but there are some benefits to having a larger sensor size. Here's a comparison between the sensor size of a phone and a digital camera:</p> <figure class="kg-card kg-gallery-card"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.brixit.nl/image/w600//static/files/blog.brixit.nl/1670072513/PXL_20220330_113033471.jpg" class="kg-image" width="600" height="800"></div><div class="kg-gallery-image"><img src="https://blog.brixit.nl/image/w600//static/files/blog.brixit.nl/1670072514/PXL_20220330_113043157.PORTRAIT.jpg" class="kg-image" width="600" height="800"></div><div class="kg-gallery-image"><img src="https://blog.brixit.nl/image/w600//static/files/blog.brixit.nl/1670072514/P1080027.jpg" class="kg-image" width="600" height="797"></div></div></div></figure> <p>This is a comparison between a Pixel 2 and a DSLR. The larger sensor of the DSLR camera makes it possible to have natural blurry backgrounds. The larger the sensor the blurrier the background can be, and this also changes at which distance you can take a picture of something and still have a blurry background. This is why portrait photography usually easily has a blurry background and landscape photography doesn't; it's in the combination of the sensor size, the distance to the subject and the aperture size of the lens.</p> <p>Smartphone cameras are a great example of sensor size, since the sensor in a phone is in the order of 10x smaller than the one in a professional camera. In the pictures above you can see that the phone picture on the left still has a lot of detail in the background; this is only the natural background blur from the lens/sensor in the phone. The middle picture is taken with the same phone in "portrait photography" mode. This uses AI magic to try to emulate the look of a professional camera with a large sensor, with a lot of artifacts like inconsistent blurring in the background and no blurring in reflections. The picture on the right is taken with the largest sensor camera I have, which is the APS-C sized one, resulting in a nice smooth background blur.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072514/Sensor_sizes_overlaid_inside.svg.png" class="kg-image"><figcaption>Sensor size comparison, from wikipedia</figcaption></figure> <p>It's possible to get a great APS-C sensor size digital camera for ~300 euro, but as soon as you go up in sensor size you're into 1000+ euro territory. The price increases even more when going above 35mm full frame; a medium format camera will start at 3000 euro.
This is all excluding lenses.</p> <p>For me the reason for getting into analog photography is twofold. Instead of paying thousands of euros for a larger sensor digital camera I can just get an old second hand film camera, since the 35mm full frame size I want is based on the size of a normal analog film roll. This brings the price down to below 50 euro for the camera.</p> <p>The second reason is that I got sent a blank 35mm film roll, which was the nudge I needed to get the camera and get into the ecosystem.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072514/PXL_20211006_161754889.jpg" class="kg-image"><figcaption>Kodak Kodacolor 200, my first roll of film</figcaption></figure> <p>The next part of this journey is figuring out which camera to get. Thanks to practically everyone having moved to digital photography it's very easy to find old analog cameras for as low as 15 euro. An example would be a Nikon Nikomat; since I already have Nikon F-mount lenses it would be an obvious choice.</p> <p>Instead of going for the cheapest option I decided to go for a Nikon F90. This is the newest, most featureful film camera by Nikon I could find for a reasonable price; I found the F90 for 49 euro excluding shipping. There are newer models, but those are more scarce and the price jumps up to 1000 euro again. I guess they have some collectors value I don't really care about.</p> <p>The main reason for picking this specific camera over a fully mechanical one is that getting into analog photography is scary when picking a fully mechanical/manual camera. I get only 36 shots and no feedback until I've taken all of them, and the quickest way to get annoyed by analog photography would be getting the first developed film roll back full of out-of-focus and under/over exposed pictures. The F90 gives me all the manual options and has compatibility with autofocus on most of my existing lenses, but it also has a full-auto point and shoot mode.</p> <p>This camera is also able to detect quite a list of faults with the film feeding and with having the film roll correctly inserted, which helps calm my nerves about everything working smoothly when taking pictures.</p> <p>I'm quite happy with how this decision turned out. I'll probably also get one of the older mechanical cameras next, now that I have slightly more experience.</p> <h2>The first developed pictures</h2> <p>Since this is color film and I did not want to buy a lot of tools and chemicals, I sent my film off to be developed at a lab and scanned. There are multiple options to get that done. The "old" way in the Netherlands would be to go to a supermarket and have someone put it in one of the large developing machines, and you'd have some nice prints an hour later. Before that it was done by getting a special envelope at the supermarket (several supermarket chains had these), putting the film roll in it and noting on the envelope at which size you would like the photos printed.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072514/PXL_20220330_102448569.jpg" class="kg-image"><figcaption>Return envelope with the developed film in it</figcaption></figure> <p>After sending off the envelope to a lab you'd receive another envelope with the developed film strips, 4 or 5 pictures per strip, and instructions for ordering re-prints of specific frame numbers at different sizes or in multiple copies.
To actually get those prints you'd of course have to hand over the film strips again to get them printed. A lot of back and forth to get some pictures developed.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072515/PXL_20220330_112352171.jpg" class="kg-image"><figcaption>Table for ordering more copies of a picture in one of 3 sizes</figcaption></figure> <p>This process later got modernized a bit, giving the possibility to order a photo CD instead of prints. This service is still offered to this day by some supermarket chains, but it seems like it never got modernized again in the last 20 years.</p> <p>I can't get over how stupid this is. Someone is still running a full lab to develop film and scan it, and they still make picture CDs, but for some reason when you order a CD, which can hold 700 MB, they only give you compressed, downscaled jpeg pictures. And with compressed I mean they could've delivered the full film roll on a 1.44 MB floppy.</p> <p>This made me opt to get the film roll scanned by an actual physical photography store (they still exist!), asking them to just email me the highest quality original scans.</p> <p>I got sent edited jpegs instead :(</p> <h2>The pictures</h2> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072515/000058470001.jpg" class="kg-image"></figure> <p>This is the first picture on my first film roll. Well... actually the first one I intended to take; the real first one is a blurry picture of my desk, because I accidentally pressed the shutter release button directly after loading the film roll. I took this with a Nikkor 50mm 1:1.8D lens, the only actual full frame compatible lens I own.</p> <p>I'm pretty happy with how this one turned out. I took it in the automatic mode and it's surprisingly non-blurry for how much the chickens moved around. The amount of film grain is more than I expected, but it looks pretty nice.</p> <p>Another picture I like quite a lot is this one:</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072515/000058470005.jpg" class="kg-image"></figure> <p>It's not a particularly interesting scene, except that it looks like this picture was taken 30 years ago. The film gives it an immediate vintage feel, combined with the street being unusually empty due to corona restrictions.</p> <p>One interesting thing is that the nice green trees were not green at all when this picture was taken. It was taken in fall and the trees were bright yellow with a bunch of brown trees in between.</p> <p>There's also the glaring scanning artifact in the center of the frame; this seems to have happened on quite a lot of the pictures.</p> <p>Since I couldn't find a picture hosting system that I liked and that could also save metadata about analog film photography, I made my own. Here's a gallery of some of the other pictures from this film roll: <a href="https://pictures.brixit.nl/album/b6f2d1af-441d-466d-a788-e98b5ad9a174">https://pictures.brixit.nl/album/b6f2d1af-441d-466d-a788-e98b5ad9a174</a></p> <h2>The next roll of film</h2> <p>While having a lot of fun shooting this first roll of film I ordered a box of film rolls so I could continue shooting. Since my parents always used Fujifilm rolls back when everyone took pictures this way, I decided to also try a Fujifilm roll next. I found the Fujicolor Pro 400H, which seemed great; it had everything.
The green box, the "pro" in the name. I made a slight mistake while reading the product page though.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072515/PXL_20220330_094259594.jpg" class="kg-image"></figure> <p>I accidentally ordered some 120 medium format film instead of the 135 film I wanted. So it seems the way forward now is getting one of the 15 euro medium format film cameras.</p> <p>Medium format film is something I know even less about. The idea seems simple: make the film frame way larger for more detail. It seems like the vast majority of medium format cameras have fixed focus lenses though, and the film scans I've seen online taken through such cameras make it look like there's way less detail in them than in my 35mm film pictures. The alternative is getting one of the medium format SLRs with an actually focusable lens, but those are again ridiculously overpriced for a box with a hole for a lens in it. The reason for that is pretty simple: they are Hasselblad cameras, so I'd be paying the price of a collectors item. There are also a few alternative cameras that are pretty similar to the Hasselblad, but they are also very overpriced.</p> <p>Anyway, after that I got a roll of Fujifilm Superia X-tra 400 from a nice physical photography store. It's still in my camera and almost full. Can't wait to see how it turns out!</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072516/P1080025.jpg" class="kg-image"></figure>

PinePhone Camera pt5 (Linux, Martijn Braam, Sun, 13 Feb 2022) https://blog.brixit.nl/pinephone-camera-pt5/

<p>It's been a while since I've written anything about the Megapixels picture processing. The last post still showcases the old GTK3 version of Megapixels even!</p> <p>In the meantime users have figured out how to postprocess the images better to get nicer results from the PinePhone camera. One of the major improvements that landed was the sigmoidal contrast curve in ImageMagick.</p> <pre><code>convert img.tiff -sharpen 0x1.0 -sigmoidal-contrast 6,50% img.jpg</code></pre> <p>This command slightly sharpens the image and adds a nice smooth contrast curve. This change has a major issue though: it's a fixed contrast curve added to all images and it does not work that great in a lot of cases. The best results came from running this against pictures that were taken with the manual controls in Megapixels, so that they have the right exposure.</p> <p>On the PinePhone the auto exposure in the sensor tends to overexpose images though, and adding more contrast after that will just make the issues worse. In the header image of this post three images are shown, generated from the same picture. The first one is the unprocessed image data, the second one is the .jpg created by the current version of Megapixels, and the third one is the same data with my new post-processing software.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072511/image.png" class="kg-image"><figcaption>Waveform visualisation of the banner image</figcaption></figure> <p>This screenshot shows the waveform of the same header image. It visualizes the distribution of the image data: the horizontal axis is the horizontal position in the image, and on the vertical axis the brightness of all the pixels in that column is plotted. Here you can still see the 3 distinct images from the header image, but with a different distribution of the color/brightness data.</p>
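<p>This kind of waveform plot is easy to reproduce for your own test shots. A small sketch with matplotlib that bins every pixel at (column, brightness); the input file name is just an example:</p> <pre><code>import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

img = np.asarray(Image.open('frame.png').convert('L'))
height, width = img.shape

# Every pixel is plotted at (its column, its brightness), like a video
# waveform monitor: columns map to x, brightness to y.
xs = np.tile(np.arange(width), height)
ys = img.ravel()
plt.hist2d(xs, ys, bins=(min(width, 512), 64))
plt.xlabel('image column')
plt.ylabel('pixel brightness')
plt.show()</code></pre>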
<p>In the waveform of the screenshot you can still see the 3 distinct images from the header image, but with a different distribution of the color/brightness data.</p> <p>One of the main issues with the data straight from the sensor is that it sits mostly in the upper part of the brightness range. There's no data at all in the bottom quarter of the range, which is visible as images that have no contrast and look grayish. </p> <p>The sigmoidal contrast curve in the middle image takes the pixels above the middle line and makes them brighter, and takes the pixels below the middle line and makes them darker. The main improvement is the data extending further into the lower part here, but due to the curve the bright parts of the image become even brighter, and the line at the top shows that the data is clipping.</p> <p>The third image with the new algorithm instead moves the data down by keeping the bright pixels in the same spot but "stretching" the image towards the bottom. This corrects for the blacklevel of the sensor data and also creates contrast without clipping the data.</p> <h2>How</h2> <p>This started with me trying to make the postprocessing faster. Currently the postprocessing is done with a shell script that calls various image manipulation utilities to generate the final image.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/old.png" class="kg-image"></figure> <p>Megapixels takes a burst of pictures and saves those as separate .dng files in a temporary directory. From that series the second one is always used and the rest is ignored. With dcraw the image is converted to RGB data and stored as a TIFF. ImageMagick takes that, applies the sharpness/contrast adjustment and saves a .jpg.</p> <p>Because these tools don't preserve the exif data about the picture, exiftool is run last to read the exif from the original .dng files and save it in the final .jpg.</p> <p>Importing and exporting the image between the various stages is not really fast, and for some reason the processing in ImageMagick is just really, really slow. My plan was to replace the 3 separate utilities with a native binary that uses libraw, libjpeg, libtiff and libexif to deal with this process instead. </p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/postprocessd-v1.png" class="kg-image"><figcaption>version 1 of postprocessd</figcaption></figure> <p>The new tool is postprocessd (because it's supposed to run in the background and queue processing). It uses libraw to get RGB data, the same library that's used in dcraw. The resulting data is then written directly to libjpeg to create the final jpegs, without any processing in between. This is what actually generated the first image shown in the banner. Processing a single .dng to a .jpg in this pipeline is pretty fast compared to the old pipeline: a full run takes 4 seconds on the PinePhone.</p> 
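<p>The v1 flow is simple enough to sketch in a few lines. This is not the postprocessd code itself, just a rough Python equivalent of the same pipeline using rawpy (LibRaw bindings) and Pillow, with placeholder filenames:</p> <pre><code>import rawpy
from PIL import Image

# Decode and debayer the raw sensor data to RGB, like dcraw would
with rawpy.imread("frame.dng") as raw:
    rgb = raw.postprocess()  # uint8 height x width x 3 array

# Write the RGB data straight to a jpeg, no processing in between
Image.fromarray(rgb).save("frame.jpg", quality=90)</code></pre>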
<p>The downside is that the image looked much worse due to the missing processing. Also, just having a bunch of .jpeg files isn't ideal. The solution I wanted is still image stacking to get less noise. With the previous try to get stacking running with HDR+ it turned out that that process is way, way too slow for the PinePhone and the results just weren't that great. In the meantime I've encountered <a href="https://github.com/luigi311/Low-Power-Image-Processing">https://github.com/luigi311/Low-Power-Image-Processing</a> which uses OpenCV to do the stacking instead. This seemed easy to fit in.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/postprocessd-v2.png" class="kg-image"><figcaption>Version 2 with opencv for stacking</figcaption></figure> <p>This new code takes all the frames and converts them with libraw. Then the OpenCV code filters out all the images that are low contrast or fully black, because sometimes Megapixels glitches out. The last .dng file is then taken as the reference image and all the other images are aligned on top of that with a full 4 point warping transform, to account for the phone slightly moving between taking the multiple pictures. After the alignment the pictures are averaged together to get a much less noisy image without running an actual denoiser.</p> <p>This process produced an image that's exactly the same as the output files from v1 but with less noise. </p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/stacked.png" class="kg-image"><figcaption>Before and after stacking</figcaption></figure> <p>This is a zoomed-in crop of a test image that shows the difference in noise. The results are amazing for denoising without introducing artifacts that make the image blurry. But for every upside there's a downside: this is very slow. Stacking 2 images together with the current code takes 38 seconds, and for great results it's better to stack more images.</p> <h2>Color processing</h2> <p>Now that the OpenCV dependency is added, it's pretty easy to also use it for the rest of the postprocessing tasks.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072512/blacklevel-correction.png" class="kg-image"></figure> <p>The main improvement here is the automatic blacklevel and whitelevel correction. The code slightly blurs the image and then finds the darkest and brightest points. Then it simply subtracts the value of the darkest point to shift the colors of the whole image down, which removes the colored haze. Then the pixels get multiplied by a calculated value to make the brightest pixel pure white again, which "stretches" the brightness range so it fills the full spectrum. This process adds contrast like the old ImageMagick code did, but in a way more carefully tuned way.</p> <p>After this a regular "unsharp mask" sharpening filter is run that's fairly aggressive, but tuned for the sensor in the PinePhone so it doesn't look oversharpened.</p> <p>A last thing that's done is a slight gamma correction to darken the middle gray brightness a bit, to compensate for the PinePhone sensor overexposing most things. The resulting contrast is pretty close to what my other Android phones produce, except the resolution on those phones is a lot better. A rough sketch of these processing steps is shown below.</p> 
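<p>For illustration, this is approximately what that chain of corrections looks like with OpenCV and numpy. The blur size, sharpening strength and gamma value here are made-up placeholders, not the values postprocessd actually uses:</p> <pre><code>import cv2
import numpy as np

img = cv2.imread("stacked.png").astype(np.float32) / 255.0

# Blur slightly so single outlier pixels don't define the levels
blurred = cv2.GaussianBlur(img, (15, 15), 0)
black = blurred.min()
white = blurred.max()

# Subtract the blacklevel, then stretch so the brightest point is white
img = np.clip((img - black) / (white - black), 0.0, 1.0)

# Unsharp mask: add back the difference with a blurred copy
soft = cv2.GaussianBlur(img, (0, 0), 2.0)
img = np.clip(img + 1.0 * (img - soft), 0.0, 1.0)

# Slight gamma correction to darken the midtones a bit
img = img ** 1.2

cv2.imwrite("corrected.jpg", (img * 255).astype(np.uint8))</code></pre>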
<h2>What's left to do</h2> <p>The proof of concept works; now the integration work needs to happen. The postprocessing is quite CPU intensive, so one of the goals of postprocessd is to make sure it never processes multiple images at the same time, but instead queues the processing jobs up in the background so the CPU is free to actually run Megapixels. There are also still some bugs with the exif processing, and the burst length in the current version of Megapixels is a bit too short. This can probably be made dynamic to take more pictures in the burst when the sensor gain is set higher.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072513/compare.jpg" class="kg-image"></figure> <p></p> DNG is not greathttps://blog.brixit.nl/dng-is-not-great/6189d3144e67c22384930706PhotographyMartijn BraamMon, 29 Nov 2021 09:00:00 -0000<p>Many who read this will probably not know DNG beyond "the annoying second file Megapixels produces". DNG stands for Digital Negative, an old standard made by Adobe to store the "raw" files from cameras.</p> <p>The standard has good ideas and it is even an open standard. There's a history of the DNG development on the <a href="https://en.wikipedia.org/wiki/Digital_Negative">Wikipedia page</a> that details the timeline and goals of this new specification. My problem with the standard is also neatly summarized in one line of this article:</p> <blockquote><i>Format based on open specifications and/or standards</i>: DNG is compatible with <a href="https://en.wikipedia.org/wiki/Tag_Image_File_Format_/_Electronic_Photography">TIFF/EP</a>, and various <a href="https://en.wikipedia.org/wiki/Open_format">open formats</a> and/or <a href="https://en.wikipedia.org/wiki/Open_standard">standards</a> are used, including <a href="https://en.wikipedia.org/wiki/Exchangeable_image_file_format">Exif metadata</a>, <a href="https://en.wikipedia.org/wiki/Extensible_Metadata_Platform">XMP metadata</a>, <a href="https://en.wikipedia.org/wiki/IPTC_Information_Interchange_Model">IPTC metadata</a>, <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space">CIE XYZ coordinates</a> and <a href="https://en.wikipedia.org/wiki/JPEG">JPEG</a></blockquote> <p>This looks great at first glance, more standards! Reusing existing technologies! The issue is that it's so many standards though.</p> <h2>TIFF</h2> <p>DNG is basically nothing more than a set of conventions around TIFF image files. This is possible because TIFF is an incredibly flexible format. The problem is that TIFF is an incredibly flexible format. The format is flexible to the point that it's completely arbitrary where your data is. The only thing that's fixed is the header that declares that the file is a TIFF file and a pointer to the first IFD chunk. The ordering of image data and IFD chunks within the file is completely arbitrary. If you want to store all the pixels for the image directly after the header and then have the metadata at the end of the file, that's completely possible. If you want to have half the metadata before the image and half after it, completely valid. As long as each IFD points to the correct offset of the next IFD and to the right start of the image data.</p> <p>This makes parsing a TIFF file more complicated. It's not really possible to parse TIFF from a stream unless you buffer the full file first, since it's basically a filesystem that contains metadata and images.</p> <p>This format supports having any number of images inside a single file, and every image can have its own metadata attached and its own encoding. This is used to store thumbnails inside the image for example. The format doesn't just support multiple images, it supports an actual tree of image files and blobs of metadata.</p> 
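<p>To give an idea of what reading this involves, here is a minimal Python sketch that only walks the chain of IFDs in a TIFF file and prints the raw tags. It ignores the sub-IFD trees and all tag semantics, and the filename is a placeholder:</p> <pre><code>import struct

def walk_ifds(path):
    with open(path, "rb") as f:
        data = f.read()  # TIFF isn't streamable, so buffer the whole file

    # 8 byte header: byte order marker, the magic number 42, IFD 0 offset
    order = {b"II": "<", b"MM": ">"}[data[:2]]
    magic, offset = struct.unpack(order + "HI", data[2:8])
    assert magic == 42, "not a TIFF file"

    while offset != 0:
        # An IFD is a 2 byte entry count followed by 12 byte entries
        (count,) = struct.unpack_from(order + "H", data, offset)
        for i in range(count):
            # Entry: tag id, value type, value count, then value or offset
            tag, typ, n, value = struct.unpack_from(
                order + "HHII", data, offset + 2 + i * 12)
            print(f"tag {tag:#06x} type {typ} count {n}")
        # The IFD ends with the offset of the next IFD, 0 marks the end
        (offset,) = struct.unpack_from(
            order + "I", data, offset + 2 + count * 12)

walk_ifds("picture.dng")</code></pre>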
<p>Every image in a TIFF file can have a different colorspace, color format, byte ordering, compression and bit depth. This is all without adding any of the extensions to the TIFF format.</p> <p>To get information about every image in the file there are the TIFF metadata tags. A tag is a number for the identifier plus one or more values. Every extension and further version of the TIFF specification adds more tags to describe more detailed things about the image, and the DNG specification adds a lot of new tags on top to store information about raw sensor data.</p> <p>All these tags are not enough though, there are more standards to build upon! There's a neat tag called 0x8769, also known as the "Exif IFD". This tag is a pointer to another IFD that contains EXIF tags, of jpeg fame, that also describe the image. To make things complete, the information that you can describe with TIFF tags and with EXIF tags overlaps and can of course contradict itself within the same file.</p> <p>In the same way it is also possible to add XMP metadata to an image. This is made possible by the combination of letters developers will start to fear: TIFFTAG_XMLPACKET. Because everything is better with a bit of XML sprinkled on top.</p> <p>Then lastly there's the IPTC metadata format, which I luckily have never heard of and never encountered, and I look forward to never learning about it.</p> <p>Shit, I looked it up anyway. This is a standard for... what... newspaper metadata? Let's quickly close this tab.</p> <h2>Writing raw sensor data to a file</h2> <p>So what would be the bare minimum to just write sensor dumps to a file? Obviously that's just <code>cat sensor > picture</code>, but that will lack the metadata to actually show the picture.</p> <p>The minimum data to render something that looks roughly like a picture would be:</p> <ul><li>width and height of the image</li> <li>pixel format as fourcc</li> <li>optionally the color matrices for making the color correct</li> </ul> <p>The first two are simple. This would just be 2 numbers for the dimensions, since it's unlikely that 3-dimensional pictures would be supported, and the pixel format can be encoded as the 4 ascii characters representing the pixel format. The Linux kernel already has a lot of them defined in the v4l2 subsystem.</p> <p>To do proper color transforms a lot more metadata would be needed, which probably means it's smarter to have a generic key/value storage in the format.</p> <figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072509/text859-5-7-5.png" class="kg-image"></figure> <p>This format would be extremely simple to read and write, except for the extra metadata that needs a bit of flexibility. The metadata block should probably be an encoding that stores the number of entries and then each key and value as length-prefixed strings.</p> <p>The absolute minimum to test a sensor would be writing a 16 byte header, which can even be done by hand for a specific resolution, and then appending the sensor bytes to that. A sketch of what this could look like follows below. </p> 
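<p>Since this format only exists as an idea, the layout below is just my own guess at how those 16 bytes could be spent: a 4 byte magic, u32 width and height, the fourcc, followed by the key/value metadata as length-prefixed strings. All the names here are made up:</p> <pre><code>import struct

def write_raw(path, width, height, fourcc, sensor_bytes, metadata=None):
    with open(path, "wb") as f:
        # 16 byte header: magic, dimensions and the pixel format fourcc
        f.write(b"RAW0" + struct.pack("<II", width, height))
        f.write(fourcc.encode("ascii"))
        # Metadata block: entry count, then length-prefixed strings
        entries = metadata or {}
        f.write(struct.pack("<I", len(entries)))
        for key, value in entries.items():
            for s in (key.encode(), value.encode()):
                f.write(struct.pack("<I", len(s)) + s)
        # The sensor dump follows directly after
        f.write(sensor_bytes)

# "RGGB" is the v4l2 fourcc for 8-bit bayer data (V4L2_PIX_FMT_SRGGB8)
with open("sensor_dump.bin", "rb") as dump:
    write_raw("picture.raw", 4032, 3024, "RGGB", dump.read(),
              {"gain": "4", "exposure-lines": "1200"})</code></pre>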
<h2>The hard part</h2> <p>Making up a random image file format is easy; getting software to support it is hard. Luckily there are open source image editors and picture viewers, so some support could always be patched in initially for testing. Also, this has quite a high XKCD-927 factor.</p> <figure class="kg-card kg-image-card"><img src="https://blog.brixit.nl/image/w1000//static/files/blog.brixit.nl/1670072509/standards-1.png" class="kg-image"><figcaption>Source: XKCD of course!</figcaption></figure> <p>Still, it would be great to know why a file format for this couldn't be this simple.</p>