Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - kaklik

Pages: [1]

For our scientific application, I need to know exactly how the data from the sensor are processed. We made some attempts to analyze that directly from the camera output. Our findings are:
  • The camera subtracts a dark frame even in the case of "RAW" data output
  • There is no explicit control over which dark-frame file is, or has been, used for the subtraction

Especially the first point is quite strange, because the "RAW" data are supposed to be raw; we therefore expect unprocessed values directly from the image sensor. Is there an option to disable dark-frame subtraction, at least for the raw12 data output? Where is the dark frame or black calibration applied? Is any other processing applied to the "RAW" data? Is it possible to back up the required data and process the image outside the camera?

I'm attaching a screenshot where the subtraction of an intentionally incorrect dark frame is visible in all output formats (RAW 12-bit, TIFF, TIFF-RAW).
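For context, this is roughly how we inspect the raw12 output. The 2-pixels-per-3-bytes packing layout is our assumption about the file format, and the zero-clipping check is just a heuristic sketch, not a definitive test:

```python
import numpy as np

def unpack_raw12(buf):
    # Assumed packing: two 12-bit pixels per 3 bytes,
    # [p0 bits 11..4][p0 bits 3..0 | p1 bits 11..8][p1 bits 7..0]
    b = np.frombuffer(buf, dtype=np.uint8).reshape(-1, 3).astype(np.uint16)
    p0 = (b[:, 0] << 4) | (b[:, 1] >> 4)
    p1 = ((b[:, 1] & 0x0F) << 8) | b[:, 2]
    return np.stack([p0, p1], axis=1).ravel()

def looks_dark_subtracted(pixels, zero_fraction=0.5):
    # A lens-capped, truly unprocessed readout should cluster around the
    # sensor's black level; a large fraction of hard zeros suggests a dark
    # frame was already subtracted (and clipped) in-camera.
    return float(np.mean(pixels == 0)) > zero_fraction
```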

Software Dev / PPS Timestamp input
« on: February 09, 2022, 11:16:48 AM »
Hello, I am wondering whether it would be possible to process the PPS signal to get accurate timestamping of video frames.
The idea is to combine the PPS signal with system time (synchronized by NTP) to get the accurate time of every video frame.

I hope it could be an alternative to time tagging relative to the trigger. Trigger-based time tagging has the issue that the exact time of the trigger start must be known. Control over the API is therefore not usable, because it has an unknown delay before recording starts. An accurate system time solves that: even if the start of the video recording is unknown, the whole video shot can be precisely aligned to data measured by other instruments using the absolute timestamps.
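To make the idea concrete, here is a minimal sketch of the arithmetic (the function and variable names are mine, just for illustration): the NTP-disciplined system clock is only needed to identify which whole second a PPS edge marks; the edge itself plus a frame counter then gives much finer frame times.

```python
def pps_absolute_second(sys_time_at_pps):
    # The PPS edge marks an exact whole second. The NTP-synced system clock
    # (accurate to a few milliseconds) is read at the edge and rounded to
    # recover which second that was.
    return round(sys_time_at_pps)

def frame_time(pps_second, ticks_since_pps, tick_hz):
    # Absolute frame time = last PPS second + hardware ticks since the edge.
    return pps_second + ticks_since_pps / tick_hz
```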

I need this to study lightning, where multiple cameras should be triggered by the same lightning event. Unfortunately, the cameras are spread over a range of more than 15 km, so it is not possible to trigger them by a single coaxial cable. I am only able to connect the cameras to a single Ethernet-based network with relatively low latency.

Chronos User Discussion / Time tagging
« on: July 15, 2021, 02:58:31 PM »
What is the best way to capture a video that can be time-aligned to absolute time (e.g. UTC)? The required precision of the time alignment should be comparable to the exposure time.
I noticed that it is possible to install ntpd in the camera OS, which is a good base for timing. But I do not know how to mark the system time precisely in the video.

Also, I noticed there is a time in the text overlay. It seems to always show a time of around 17xxx seconds, but it is unclear to me what the zero point of this time is.

Software Dev / Lightning capture - time discontinuity
« on: July 11, 2021, 05:51:23 AM »

I am trying to set up an all-sky system to capture lightning for a thunderstorm research project. One partially successful result is here:
It was captured manually by controlling the camera from the web interface.

Today I want to improve the setup with a prepared Python script, which I intend to use for saving the video recordings.
I assumed the camera should be activated by manually enabling "Recording" from the web interface. From the preliminary tests, this expectation looks correct.

Unfortunately, in the field during a thunderstorm, I captured the following video:
The video contains multiple time inconsistencies, visible in the movement of raindrops and clouds. The inconsistencies are at 0:14, 0:26, 0:36, 0:50, 1:01, etc. Although the text overlay claims the video is around 8 seconds long, the recorded video looks more like a timelapse of the sky taken during the saving of the video buffers (which takes several minutes).

Therefore, I modified the script during the thunderstorm to stop recording before saving.

Code: [Select]
+                post = requests.post('http://chronos.lan/control/stopRecording')
+                time.sleep(2)
                 post = requests.post('http://chronos.lan/control/startFilesave', json={'format': 'h264', 'device': 'mmcblk1p1'})
                 print("Camera recording: " + post.reason)

The result is in the following video:

Although the lightning was indeed captured in the middle of the video, there is at least one time glitch, at 2:21 of video time.

What is the correct workflow to capture and save the last few seconds (for example, the latest 3 seconds) of video from the video buffer using the HTTP API?
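For reference, here is the stop-then-save sequence from my snippet above as a self-contained sketch, using only the standard library. The chronos.lan address and the 2-second settle delay are from my setup, and the send=False dry-run hook is just for testing; I don't claim this is the intended workflow:

```python
import json
import time
from urllib import request

API = 'http://chronos.lan/control'  # camera address from my setup

def api_post(path, payload=None, send=True):
    # Build a JSON POST to the camera's control API; with send=False, return
    # the prepared request without touching the network (dry-run testing).
    data = json.dumps(payload).encode() if payload is not None else None
    req = request.Request(API + '/' + path, data=data,
                          headers={'Content-Type': 'application/json'},
                          method='POST')
    return request.urlopen(req) if send else req

def save_buffer(send=True):
    # Stop the ring-buffer recording first and let it settle, then save --
    # saving while still recording is what produced the time glitches above.
    api_post('stopRecording', send=send)
    if send:
        time.sleep(2)
    return api_post('startFilesave',
                    {'format': 'h264', 'device': 'mmcblk1p1'}, send=send)
```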

Thanks for a quick reply in advance; the thunderstorm season is short.
