FYI, I've found two quirks with file saving in the new version.
i) The 12-bit RAW packing that I'm seeing on my camera is different from what is advertised. If anyone else is having headaches decoding packed raw data, read on.
The changelog says:
Given a pair of 12-bit pixels, written in hexadecimal as (0x123, 0xabc), the bytes produced
by the Raw 12-bit packing mode change as follows:
v0.3.0 and earlier: (0xab, 0xc1, 0x23)
v0.3.1 and later: (0x23, 0x1c, 0xab)
But my test comparing the same data saved in 16-bit and 12-bit modes actually reveals:
Given a pair of 12-bit pixels, written in hexadecimal as (0x123, 0xabc), the bytes produced
by the Raw 12-bit packing mode are:
v0.3.0 and earlier: (0x23, 0xc1, 0xab)
v0.3.1 and later: (0xab, 0x1c, 0x23)
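To make the two layouts concrete, here is a small sketch of how I decode each byte triple back into the pixel pair. The bit arithmetic is inferred from the example triples above; the function names are my own, not pySciCam's:

```python
# Decode three packed bytes into a pair of 12-bit pixels for each
# firmware generation. Layouts inferred from the observed byte triples.

def unpack_v030(b0, b1, b2):
    """v0.3.0 and earlier: bytes (0x23, 0xc1, 0xab) -> (0x123, 0xabc)."""
    p0 = ((b1 & 0x0F) << 8) | b0   # low byte first, high nibble in the middle byte
    p1 = (b2 << 4) | (b1 >> 4)     # high byte last, low nibble in the middle byte
    return p0, p1

def unpack_v031(b0, b1, b2):
    """v0.3.1 and later: bytes (0xab, 0x1c, 0x23) -> (0x123, 0xabc)."""
    p0 = ((b1 >> 4) << 8) | b2     # high nibble in the middle byte, low byte last
    p1 = (b0 << 4) | (b1 & 0x0F)   # high byte first, low nibble in the middle byte
    return p0, p1

assert unpack_v030(0x23, 0xc1, 0xab) == (0x123, 0xabc)
assert unpack_v031(0xab, 0x1c, 0x23) == (0x123, 0xabc)
```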
I'm building a Python module for my students that handles the raw read-in to a NumPy array across various formats using PythonMagick; you can see it at https://github.com/djorlando24/pySciCam. There are sample RAW images demonstrating that the weird byte ordering above actually works, at least for the three cameras we have.
I took a look at the pySciCam code to see how you were doing the byte unpacking, and I think our implementations agree. In the read_chronos_raw() function, you read the three bytes and convert them to an integer, which on a little-endian machine should produce a 24-bit integer of 0xabc123 on v0.3.0, or 0x231cab on v0.3.1, using our example pixels of (0x123, 0xabc).
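Recomputing those words from the byte triples in the test above gives the following. The helper name here is hypothetical, not pySciCam's actual function:

```python
# Sketch of the little-endian read: three packed bytes interpreted as
# one 24-bit word. Byte triples are the corrected ones from the
# 16-bit vs 12-bit comparison earlier in the thread.

def read_triple_le(data: bytes) -> int:
    # int.from_bytes(..., "little") mimics reading three bytes into a
    # little-endian machine word
    return int.from_bytes(data[:3], "little")

v030 = read_triple_le(bytes([0x23, 0xc1, 0xab]))
v031 = read_triple_le(bytes([0xab, 0x1c, 0x23]))
print(hex(v030), hex(v031))  # 0xabc123 0x231cab

# In the v0.3.0 word the two pixels sit in contiguous 12-bit fields,
# so they can be masked straight out; the v0.3.1 word needs the
# nibble-level rearrangement instead.
assert (v030 & 0xFFF, v030 >> 12) == (0x123, 0xabc)
```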
For comparison, we updated our pyraw2dng.py tool to support the 12-bit packed files too.
So, the obvious question might be: why did we go to the trouble of changing the bit packing format for 12-bit raw mode at all? Well, I must admit that we never really tested the 12-bit packed format in enough depth to show that it generated data equivalent to the 16-bit modes. In preparing the v0.3.1 release we finally got around to testing it, and we found a number of troubling bugs:
- The video input port on the CPU has a bug where it swaps the R and B channels when reading 24-bit raw data from the camera, meaning that the first and last bytes were swapped for every pair of pixels before hitting the disk.
- The camera would randomly drop the first pixel when saving in 12-bit packed mode. This is probably not apparent when viewing monochrome video, but on a color camera it shifts the resulting image relative to its Bayer filter.
- The final scan line of an image would contain corrupted data.
So, the new packing order attempts to follow the DNG specification's recommendation on BitsPerSample when packing 12-bit data, which I interpreted to mean that bytes should be arranged in little-endian order, with big-endian fill order whenever a byte has to be split between pixels.
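For anyone writing files in this format, here is a sketch of the inverse operation: packing a pixel pair into the three-byte layout that v0.3.1 produces. The byte arithmetic is inferred from the example triple earlier in the thread, not taken from the camera firmware:

```python
# Pack a pair of 12-bit pixels into the three-byte layout observed
# in v0.3.1 files.

def pack_pair_v031(p0: int, p1: int) -> bytes:
    assert 0 <= p0 < 4096 and 0 <= p1 < 4096  # both values must fit in 12 bits
    b0 = p1 >> 4                          # high 8 bits of the second pixel
    b1 = ((p0 >> 8) << 4) | (p1 & 0x0F)   # split byte: p0 high nibble, p1 low nibble
    b2 = p0 & 0xFF                        # low 8 bits of the first pixel
    return bytes([b0, b1, b2])

assert pack_pair_v031(0x123, 0xabc) == bytes([0xab, 0x1c, 0x23])
```

Round-tripping this against a decoder on real v0.3.1 footage would be a good sanity check that the interpretation is right.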
ii) TIFF files are being written with unusual metadata tags that ImageMagick doesn't like; it would be nice to fix this so we don't see so many warnings flashing up when converting files.
$ identify test/chronos14_rgb_tiff/chronos14_rgb_001.tiff
test/chronos14_rgb_tiff/chronos14_rgb_001.tiff TIFF 1280x1024 1280x1024+0+0 8-bit sRGB 3.75391MiB 0.000u 0:00.009
identify: Unknown field with tag 42033 (0xa431) encountered.
`TIFFReadCustomDirectory' @ warning/tiff.c/TIFFWarnings/995.
The offending tag in this case is in the EXIF image metadata, and should be the BodySerialNumber as defined by the EXIF 2.3 standard. It looks like the version of libtiff being used by ImageMagick only supports tags up to EXIF 2.2. I guess I should make a note to remove that tag from the TIFF format if it's not widely supported by image processing software.
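In the meantime, affected tags can be detected by walking the TIFF directory structure with nothing but the standard library. This is only a hypothetical sketch, not the camera's actual file layout: in the real files the tag lives in the EXIF sub-IFD (hence the TIFFReadCustomDirectory warning), but the flattened in-memory demo below puts it in the main IFD for brevity:

```python
import struct

# Tags introduced in EXIF 2.3 that older libtiff builds may not know:
# CameraOwnerName, BodySerialNumber, LensSpecification, LensMake,
# LensModel, LensSerialNumber.
EXIF_23_TAGS = {0xA430, 0xA431, 0xA432, 0xA433, 0xA434, 0xA435}

def ifd_entry(tag, typ, count, value):
    # One 12-byte little-endian IFD entry: tag, type, count, value/offset.
    return struct.pack("<HHII", tag, typ, count, value)

def scan_ifd(data, offset):
    # Return the tag IDs of every entry in the IFD at the given offset.
    (n,) = struct.unpack_from("<H", data, offset)
    return [struct.unpack_from("<H", data, offset + 2 + 12 * i)[0]
            for i in range(n)]

# Minimal little-endian TIFF for the demo: header, then one IFD with
# three entries (the string offset for 0xA431 is a dummy value).
ifd = struct.pack("<H", 3)
ifd += ifd_entry(256, 3, 1, 1280)      # ImageWidth
ifd += ifd_entry(257, 3, 1, 1024)      # ImageLength
ifd += ifd_entry(0xA431, 2, 4, 0)      # BodySerialNumber (EXIF 2.3)
ifd += struct.pack("<I", 0)            # no next IFD
tiff = b"II" + struct.pack("<HI", 42, 8) + ifd

unsupported = [t for t in scan_ifd(tiff, 8) if t in EXIF_23_TAGS]
print([hex(t) for t in unsupported])  # ['0xa431']
```

As a workaround until the camera stops writing the tag, a metadata editor such as ExifTool can also delete it from existing files.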