Deploying camApp from Qt Creator

Matom:
Hi David,

Thank you for the answer. After updating the software to the newest beta version, the problem was solved, though not immediately: when the exposure trigger was on, the image froze, so I assumed it was still a problem with the FPGA version. However, after deselecting the exposure trigger I can now update the software and use the camera afterwards. Thanks!

Btw, I am searching for the part of the code that holds the image (and sends it from the CCD to the screen). So far I assume that the buffer (pbuffer) in video.cpp holds the image. Am I on the right track? Where is the easiest place in the code to access the image?

 

tesla500:
Hi Matom,

A frozen image is expected with the exposure trigger enabled; in that mode, no images are taken unless a source drives the external input.

Getting an image is a little bit of work. The video is all handled by an external framework called OMX (OpenMAX); there are calls that allow you to get a buffer containing video data, but I'm not too familiar with them. Foobar may be able to comment more on that.
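
For orientation only, here is a minimal sketch of the general shape of an OpenMAX IL buffer callback, which is where a filled video buffer becomes visible to the client. The types and the OMX_FillThisBuffer macro come from the standard IL header OMX_Core.h; how camApp actually registers and drives such a callback is assumed, not shown here.

--- Code: ---
/* Minimal sketch, not camApp code: an OpenMAX IL FillBufferDone-style
 * callback that copies one filled frame out of the pipeline. */
#include <OMX_Core.h>
#include <cstdio>

static OMX_ERRORTYPE onFillBufferDone(OMX_HANDLETYPE hComponent,
                                      OMX_PTR pAppData,
                                      OMX_BUFFERHEADERTYPE *pBuffer)
{
    (void) pAppData;

    /* pBuffer->pBuffer points at the frame data; nFilledLen is its size. */
    FILE *f = fopen("/tmp/frame.bin", "wb");
    if (f) {
        fwrite(pBuffer->pBuffer + pBuffer->nOffset, 1,
               pBuffer->nFilledLen, f);
        fclose(f);
    }

    /* Hand the buffer back so the pipeline keeps streaming. */
    return OMX_FillThisBuffer(hComponent, pBuffer);
}
--- End code ---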

Are you looking to get raw image sensor data or processed 8bpp data?

David

Matom:
Yes, thank you :)

I am looking for the raw image sensor data. I think I found it; however, I still haven't managed to open and view the image that I dumped. I dumped the values contained in the buffer from the "VIL_ClientCbFilledBufferDone" function.

The goal is to run a different video-processing cycle than the one currently on the camera. Because the camera will be working autonomously, I don't need the image to be displayed.
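
A common reason such a dump won't open is simply that it has no image header. Below is a hypothetical helper, assuming the dumped buffer is 8-bit grayscale at 1280x1024 (adjust the geometry and depth to whatever the pipeline actually produced), that wraps the dump in a binary PGM header so an ordinary image viewer can open it.

--- Code: ---
/* Hypothetical viewer aid: wraps a headerless raw dump in a binary PGM
 * header. Assumes 8-bit grayscale at 1280x1024. */
#include <cstdio>
#include <vector>

int main(int argc, char *argv[])
{
    const int width = 1280, height = 1024;  /* assumed frame geometry */

    FILE *in = fopen(argc > 1 ? argv[1] : "frame.bin", "rb");
    if (!in) return 1;
    std::vector<unsigned char> px(width * height);
    size_t got = fread(px.data(), 1, px.size(), in);
    fclose(in);
    if (got != px.size()) return 1;

    FILE *out = fopen("frame.pgm", "wb");
    if (!out) return 1;
    fprintf(out, "P5\n%d %d\n255\n", width, height);  /* PGM header */
    fwrite(px.data(), 1, px.size(), out);
    fclose(out);
    return 0;
}
--- End code ---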

tesla500:
I talked to Foobar; there's no easy way to get the live images from the OMX pipeline that's used for live display. It may be easier to use the GStreamer pipeline that's used for saving; check videoRecord.cpp. Both of these video frameworks have quite a steep learning curve. If you're not using the LCD, that makes things easier: it should be possible to spin up GStreamer instead of OMX in live display mode.
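
As a rough, self-contained illustration of what pulling a frame from a GStreamer appsink looks like: the sketch below is written against the GStreamer 1.x API, with videotestsrc standing in for the camera source. The camera's real pipeline is built in videoRecord.cpp, and older 0.10-era GStreamer uses gst_app_sink_pull_buffer instead of gst_app_sink_pull_sample.

--- Code: ---
/* Orientation sketch only: pull one frame from a GStreamer appsink. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* videotestsrc is a placeholder for the real capture source. */
    GstElement *pipe = gst_parse_launch(
        "videotestsrc num-buffers=1 ! "
        "video/x-raw,format=GRAY8,width=1280,height=1024 ! "
        "appsink name=sink", NULL);
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipe), "sink");

    gst_element_set_state(pipe, GST_STATE_PLAYING);

    /* Block until one frame arrives, then map it for CPU access. */
    GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
    if (sample) {
        GstBuffer *buf = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
            g_print("got frame: %u bytes\n", (unsigned) map.size);
            gst_buffer_unmap(buf, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipe, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipe);
    return 0;
}
--- End code ---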

To get raw image sensor data you need to configure the video pipeline differently; see the routines that set up RAW saving, also in videoRecord.cpp.
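
In GStreamer terms, requesting raw data is largely a caps change on the same pipeline. Purely illustrative, the format string below is an assumption; the authoritative setup is the RAW-saving code in videoRecord.cpp.

--- Code: ---
/* Hypothetical caps requesting unprocessed 16-bit grayscale instead of
 * the processed 8bpp stream; 'sink' is the appsink from the sketch
 * above. The real format must be taken from videoRecord.cpp. */
GstCaps *rawCaps = gst_caps_from_string(
    "video/x-raw,format=GRAY16_LE,width=1280,height=1024");
g_object_set(sink, "caps", rawCaps, NULL);
gst_caps_unref(rawCaps);
--- End code ---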

You could also get recorded frames, albeit slowly, using the GPMC access used for black cal. Take a look at the black cal functions to see how this works.
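
A hypothetical sketch of that approach: reading acquisition RAM through a memory-mapped window, the way the black cal routines do. GPMC_RAM_BASE and WINDOW_SIZE here are placeholders, and the real addressing and frame layout live in the camApp black cal and GPMC code.

--- Code: ---
/* Hypothetical sketch: read words of recorded frame data via /dev/mem.
 * The base address and window size are placeholders, not real values. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main()
{
    const off_t  GPMC_RAM_BASE = 0x01000000;  /* placeholder address */
    const size_t WINDOW_SIZE   = 0x00010000;  /* placeholder size */

    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) return 1;

    void *win = mmap(NULL, WINDOW_SIZE, PROT_READ, MAP_SHARED,
                     fd, GPMC_RAM_BASE);
    if (win == MAP_FAILED) return 1;

    /* Each pixel crosses the GPMC bus one word at a time, which is why
     * this path is slow compared to the streaming pipelines. */
    volatile uint16_t *ram = static_cast<volatile uint16_t *>(win);
    for (size_t i = 0; i < 8; i++)
        printf("word %zu: 0x%04x\n", i, ram[i]);

    munmap(win, WINDOW_SIZE);
    close(fd);
    return 0;
}
--- End code ---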

David

Matom:
Dear David,

Thanks for your answer. I spent the last few weeks trying to understand how the pipelines work, and I agree that the learning curve is quite steep ;) I now have the live image (the 8-bits-per-pixel version) from the OMX pipeline. So far, I have not managed to configure the GStreamer pipeline.

Do you have a flowchart of the software? It would be a great help to see how the pipelines are connected and how the image is transported from the sensor to the screen.
