That was simple.  Yes, the RMS software is happy to read the MIPI camera with just
device: 0
(with the resolution and bit depth set to match the camera: 1920x1080 at 30 FPS, 10 bits).

wow. simple.

The problem is that now the RPI is drowning in data (as expected).

The FITS files are 7.9 MB vs 3.9 MB due to the increased resolution, and compression time is 9.36 seconds instead of 3.36 seconds.  The program keeps calm and carries on, but makes a new FF file only every 47 seconds (!)
I suppose that file contains 256/30, or about 8.5 seconds, of exposure time. So about 38.5 seconds of no-exposure out of every 47 (?)
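A quick sanity check of that arithmetic (using the 256-frame block size that the RMS capture log reports, and the observed 47-second interval):

```python
# Back-of-the-envelope FF-file timing from the numbers in this post.
FRAMES_PER_FF = 256      # RMS grabs blocks of 256 frames per FF file (per its log)
FPS = 30                 # camera frame rate
WALL_CLOCK_S = 47        # observed interval between new FF files

exposure_s = FRAMES_PER_FF / FPS          # ~8.5 s of actual exposure per file
dead_time_s = WALL_CLOCK_S - exposure_s   # ~38.5 s spent compressing, not exposing
duty_cycle = exposure_s / WALL_CLOCK_S    # fraction of the night actually observed

print(f"exposure: {exposure_s:.1f} s, dead: {dead_time_s:.1f} s, duty cycle: {duty_cycle:.0%}")
```

So at this data rate the system is only exposing about 18% of the time.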

Anyway - I asked the vendor if they have a way to scan less of the image (perhaps via ROI support).

Is it possible that RMS could use the pixel binning plugin of Gstreamer?
https://github.com/atdgroup/gst-plugin-binning

If it could take the frame data directly to 960x540, then the RPI4 might even handle 60 FPS (certainly 50 FPS).
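Rough arithmetic on the raw data rate (assuming the 10-bit pixels unpack to 2 bytes each once in memory):

```python
# Raw data-rate comparison: current full-resolution capture vs. 2x2-binned
# capture at double the frame rate. Assumption: 2 bytes per pixel after
# the 10-bit data is unpacked.
BYTES_PER_PX = 2

def rate_mb_s(width, height, fps):
    return width * height * BYTES_PER_PX * fps / 1e6

full_30 = rate_mb_s(1920, 1080, 30)    # current setup
binned_60 = rate_mb_s(960, 540, 60)    # proposed: binned to quarter area, 60 FPS

print(f"1920x1080@30: {full_30:.1f} MB/s, 960x540@60: {binned_60:.1f} MB/s")
```

Even at 60 FPS the binned stream is half the data of the current 30 FPS full-resolution stream.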


Ken


Kees Meteor
 

Hi Ken,
What do you see in this setup if you set frame dropping to <true> in the .config file?
Overclocking the Pi and shutting down all unnecessary apps like TeamViewer and AnyDesk also helps a bit.

Regards, Kees.

On Monday, 17 May 2021, Ken Jamrogowicz via groups.io <ke2n=cs.com@groups.io> wrote:
[...]


Damir Šegon
 

Great stuff, Ken!
I agree with Ken - in case the camera does not support 1280x720 at 60 fps, binning should be the right way to go... it would boost the number of detected meteors while barely affecting astrometric precision.
All the best,
Damir


On Mon, May 17, 2021 at 7:52 AM Kees Meteor <meteorkees@...> wrote:
[...]


Richard Bassom
 

It looks as though RMS could do the binning if you set this parameter in the .config file to 2. I'm not sure whether that would be more or less CPU intensive than using a binning element in the gstreamer pipeline, though:

detection_binning_factor: 1 
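For reference, a binning factor of 2 means averaging (or summing) each 2x2 pixel block. A minimal pure-Python sketch of 2x2 average binning (illustration only, not RMS's actual implementation, which would operate on full frame arrays):

```python
# 2x2 average binning on a nested-list "frame": each output pixel is the
# mean of a 2x2 input block, halving both dimensions.
def bin2x2(frame):
    h, w = len(frame), len(frame[0])
    return [
        [(frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]) // 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

frame = [[10, 12, 20, 22],
         [14, 16, 24, 26],
         [30, 32, 40, 42],
         [34, 36, 44, 46]]
print(bin2x2(frame))  # [[13, 23], [33, 43]]
```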
 


 

Yes - lots of frame dropping, Kees:

2021/05/18 01:17:18-INFO-BufferedCapture-line:161 - Initializing the video device...
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/videoio_registry.cpp (187) VideoBackendRegistry VIDEOIO: Enabled backends(7, sorted by priority): FFMPEG(1000); GSTREAMER(990); INTEL_MFX(980); V4L2(970); CV_IMAGES(960); CV_MJPEG(950); FIREWIRE(940)
[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.
[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (886) open OpenCV | GStreamer warning: unable to start pipeline
[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
(DEBUG) V4L: opening /dev/video0
2021/05/18 01:17:19-INFO-BufferedCapture-line:212 - Video device opened!
2021/05/18 01:17:21-INFO-BufferedCapture-line:281 - Grabbing a new block of 256 frames...
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.175, convert: 0.000, assignment: 0.006
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.179, convert: 0.001, assignment: 0.006
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.179, convert: 0.001, assignment: 0.006
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.179, convert: 0.001, assignment: 0.006
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.182, convert: 0.001, assignment: 0.006
2021/05/18 01:17:22-INFO-BufferedCapture-line:324 - 5 frames dropped! Time for frame: 0.178, convert: 0.001, assignment: 0.006
etc

(I am not certain if the warning messages are a problem for this configuration)

This same debugging log says the effective frame rate is only 5 FPS, which is quite a bit slower than I expected from the 3X increase in file size.
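Checking that against the numbers in the log lines above (a quick sketch):

```python
# Each captured frame takes ~0.175-0.182 s per the log, during which
# 5 frames from the 30 FPS stream are lost.
time_per_frame = 0.178       # typical "Time for frame" value from the log
dropped_per_frame = 5        # "5 frames dropped!" per captured frame

effective_fps = 1 / time_per_frame           # ~5.6 frames actually grabbed per second
kept_fraction = 1 / (1 + dropped_per_frame)  # 1 kept of every 6 frames
print(f"effective: {effective_fps:.1f} FPS, kept: {30 * kept_fraction:.0f} of 30 FPS")
```

Both ways of reading the log agree on roughly 5 FPS effective.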

I also tried binning in the .config file - this RMS binning is apparently done at a point where it does not really help things, so it is not the answer.

I even tried putting in some numbers for ROI, but I don't think ROI is supported by the Pi OS v4l2 driver for this sensor - it has no effect.

So for the moment I am stuck.  Using GStreamer for binning ahead of any RMS processing would seem to be the only avenue left.

I should point out that the livestream program seems to work fine, and I was able to stream video to disk (a RAM disk, anyway) in 10-bit or 12-bit RAW format.  Overnight that would be a couple of hundred gigabytes...


Ken


Richard Bassom
 

You could try using the videoscale gstreamer element by adding this into the pipeline before the appsink. This should rescale your video to 960x540:

! videoscale ! 'video/x-raw,width=960,height=540' 


For example, the following gstreamer pipeline rescales the normal 720p RTSP video stream from the IP camera down to half size:

gst-launch-1.0 rtspsrc protocols=tcp tcp-timeout=5000000 retry=5 latency=1000 location='rtsp://192.168.42.10:554/user=admin&password=&channel=1&stream=0.sdp' ! rtph264depay ! queue ! h264parse ! v4l2h264dec ! queue ! videoscale ! 'video/x-raw,width=640,height=360' ! autovideosink


videoscale also seems to have a few different methods for performing the scaling. The default is bilinear (2-tap).


 

That sounds like what I had in mind, except this is a MIPI camera interface - it does not have an IP address; it is just device video0. The reason to use this was to eliminate the H.264 compression/expansion.

You will notice in the posting below some warning messages about not being able to start a pipeline with this device.

I don't know if there is some way to make /dev/video0 pass through a pipeline where binning/rescaling can take place. 

Although .. what you show could be interesting to someone who has an IP camera that streams at only 1080P for example.


Ken


Richard Bassom
 

For a v4l2 device on /dev/video0 all you would have to do is to add the videoscale into the pipeline after the v4l2src element. So for example, if your /dev/video0 device needed rescaling from 1080p to half size 960x540, your gstreamer command to display at half size would be:

gst-launch-1.0 v4l2src ! videoscale ! 'video/x-raw,width=960,height=540' ! autovideosink

... and your device line in the RMS .config file for your camera device on /dev/video0 would be:

device: v4l2src ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! appsink

The appsink may need additional parameters such as "max-buffers=25 drop=true sync=1" .
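A tiny illustrative helper (not part of RMS) that assembles a device string of this shape, so the caps and appsink options stay consistent when experimenting. Note it emits the caps without surrounding quotes, since the string goes to OpenCV/GStreamer rather than through a shell:

```python
# Assemble an RMS-style GStreamer device string from the scaling parameters.
# The element names and appsink options follow the examples in this thread.
def rms_device_string(width, height, src="v4l2src", max_buffers=25):
    caps = f"video/x-raw,width={width},height={height}"
    appsink = f"appsink max-buffers={max_buffers} drop=true sync=1"
    return f"{src} ! queue ! videoscale ! {caps} ! {appsink}"

print(rms_device_string(960, 540))
```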


Kees Meteor
 

Hi all,
I've been testing for a while now with the Veye CS-MIPI-IMX307 camera. Thanks to Tammo Jan and Yevhenii we could start with 720p at 60 fps using the v4l2 driver from Veye. That all looks good; still looking for the best settings regarding gain, fireball detection, etc. Let's call it fine-tuning. The imx307 is connected to the Pi 4 on the CSI port (camera port).

Since tonight I have been running the same camera at 1080p with 25 fps using the same v4l2 driver. I made a platepar file, and this Veye imx307 cam has nearly the same FOV as my imx291 RMS cam NL00009.
My first observations are:
The RMS system with the imx307 runs fine on the Pi 4 (I overclocked this Pi 4 a bit). Nearly no frame drops. I see more frame drops on the imx291 system running on a standard Pi 3B+ at 720p 25 fps!
I also noticed that RMS with the imx307 started detecting stars earlier than the imx291.

Now to wait and see what the stations come up with in the morning.

@Richard, thanks for the gstreamer strings. I will test them soon on these Veye cams, the imx307 and the imx327.

Regards, Kees.

On Wednesday, 19 May 2021 at 16:47, Richard Bassom via groups.io <rbringwood=yahoo.co.uk@groups.io> wrote:
[...]


Richard Bassom
 

If your v4l2 device on /dev/videoX supports the reduced frame size, then it would be better to use the device driver to do the frame scaling. You can use v4l2-ctl to check the supported frame formats:

v4l2-ctl -d /dev/video0 --list-formats-ext

If you do need to use gstreamer to do the frame scaling, then you should be able to do it with the videoscale in the device string. The device string I gave before may need a videoconvert or v4l2convert before the appsink:

device: v4l2src device=/dev/video0 ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! videoconvert ! appsink


Kees Meteor
 

@Richard, thx. I will do some tests soon.

imx307 versus imx291.
The imx291 captured 24 meteors in two parts (manual check in BinViewer).

The imx307 captured 208 detections; after manual confirmation in BinViewer, 38 meteors remain at 1080p/25fps. Not a bad result, I think?

Regards, Kees.


On Thursday, 20 May 2021 at 13:37, Richard Bassom via groups.io <rbringwood=yahoo.co.uk@groups.io> wrote:
[...]


 

The supported formats reported by the RPI OS driver:  
VIDIOC_ENUM_FMT
    Type: Video Capture
    [0]: 'Y10P' (10-bit Greyscale (MIPI Packed))
        Size: Discrete 1920x1080
        Size: Discrete 1280x720
    [1]: 'Y10 ' (10-bit Greyscale)
        Size: Discrete 1920x1080
        Size: Discrete 1280x720
    [2]: 'Y12P' (12-bit Greyscale (MIPI Packed))
        Size: Discrete 1920x1080
        Size: Discrete 1280x720
    [3]: 'Y12 ' (12-bit Greyscale)
        Size: Discrete 1920x1080
        Size: Discrete 1280x720

This is interesting because the vendor does not mention the lower size in the spec - or perhaps the driver supports this size but the camera will not run in that mode? The formats (e.g., Y12P) do match what is shown in the vendor info.  This enumeration does not mention frame rate.
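For what it's worth, the enumeration above is easy to parse if you want to check supported modes in code. A throwaway sketch (the regexes are just guesses at the v4l2-ctl layout, tested only on the excerpt above):

```python
import re

# Parse v4l2-ctl --list-formats-ext style output into {fourcc: [(w, h), ...]}.
SAMPLE = """\
[0]: 'Y10P' (10-bit Greyscale (MIPI Packed))
Size: Discrete 1920x1080
Size: Discrete 1280x720
[1]: 'Y10 ' (10-bit Greyscale)
Size: Discrete 1920x1080
Size: Discrete 1280x720
"""

def parse_formats(text):
    formats, current = {}, None
    for line in text.splitlines():
        m = re.search(r"\[\d+\]: '(.{4})'", line)   # format header, e.g. [0]: 'Y10P'
        if m:
            current = m.group(1).strip()
            formats[current] = []
            continue
        m = re.search(r"Discrete (\d+)x(\d+)", line)  # discrete frame size
        if m and current:
            formats[current].append((int(m.group(1)), int(m.group(2))))
    return formats

print(parse_formats(SAMPLE))
```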
= = =

I tried 
device: v4l2src device=/dev/video0 ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! videoconvert ! appsink

but it does not work:

(python:18428): GStreamer-CRITICAL **: 02:37:48.695: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: syntax error
[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/backend_plugin.cpp (329) getPluginCandidates VideoIO pluigin (INTEL_MFX): glob is 'libopencv_videoio_intel_mfx*.so', 1 location(s)
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/backend_plugin.cpp (336) getPluginCandidates     - /home/pi/vRMS/lib: 0
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/backend_plugin.cpp (340) getPluginCandidates Found 0 plugin(s) for INTEL_MFX
(DEBUG) V4L: opening v4l2src ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! appsink 'max-buffers=25 drop=true sync=1'
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_images.cpp (282) icvExtractPattern Pattern: v4l2src ! queue ! videoscale ! 'video/x-raw,width=%03d,height=540' ! appsink 'max-buffers=25 drop=true sync=1' @ 960
Can't open video stream: v4l2src ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! appsink 'max-buffers=25 drop=true sync=1'
Press any key to continue... 
 = = =
I also tried using 1280x720 for the size parameters, and it gets a bit further along, failing here:

...-INFO-BufferedCapture-line:211 - The video source could not be opened!
 
or in livestream
Can't open video stream: v4l2src ! queue ! videoscale ! 'video/x-raw,width=1280,height=720' ! appsink 'max-buffers=25 drop=true sync=1'
 

Ken

  



Richard Bassom
 

It looks as though you may be able to change the frame size to 1280x720 at the driver level, then.

Using gstreamer to do the resizing, it appears that the gstreamer string gets modified for some reason when used with OpenCV: the 'videoconvert' element is missing, and the video width setting gets mangled to '%03d'. I wonder if using the pipeline directly from the command line with gst-launch-1.0 works:

gst-launch-1.0 v4l2src device=/dev/video0 ! videoscale ! 'video/x-raw,width=960,height=540' ! videoconvert ! autovideosink

It may be worth experimenting with a few things in the gstreamer pipeline if the driver resizing doesn't work. Unfortunately I don't have a camera connected with a CSI/MIPI interface to try it out. I might try connecting a USB camera to an rpi4 to see if I can get the videoscale element working with OpenCV.


On Fri, May 21, 2021 at 04:24 AM, Ken Jamrogowicz wrote:
(DEBUG) V4L: opening v4l2src ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! appsink 'max-buffers=25 drop=true sync=1'
[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_images.cpp (282) icvExtractPattern Pattern: v4l2src ! queue ! videoscale ! 'video/x-raw,width=%03d,height=540' ! appsink 'max-buffers=25 drop=true sync=1' @ 960
Can't open video stream: v4l2src ! queue ! videoscale ! 'video/x-raw,width=960,height=540' ! appsink 'max-buffers=25 drop=true sync=1'
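Incidentally, the mangling in that log can be reproduced: OpenCV has fallen back to its image-sequence backend (icvExtractPattern in cap_images.cpp), which treats the device string as a filename pattern. A rough reconstruction of the behaviour, consistent with both log lines in this thread - the exact scanning logic here is a guess, not the real OpenCV code:

```python
import re

# Guess at icvExtractPattern's behaviour: take the text after the last '/'
# (as for a filename) and replace its first digit run with a printf-style
# pattern. In this pipeline the last '/' is the one in 'video/x-raw', so the
# first digit run after it is the width '960' -> '%03d', matching the log.
def mangle_like_opencv(device):
    slash = device.rfind("/")
    tail = device[slash + 1:]
    m = re.search(r"\d+", tail)
    if not m:
        return device, None
    pattern = tail[:m.start()] + f"%0{len(m.group(0))}d" + tail[m.end():]
    return device[:slash + 1] + pattern, int(m.group(0))

mangled, start = mangle_like_opencv(
    "v4l2src ! queue ! videoscale ! video/x-raw,width=960,height=540 ! appsink"
)
print(mangled, "@", start)
```

This also explains the "@ 960" in the log: the backend thinks the sequence starts at frame 960.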


Richard Bassom
 

So the problem with the gstreamer videoscale pipeline seems to be the single quotes. If they are removed, the gstreamer string should work in the RMS device configuration, e.g.:

device: v4l2src device=/dev/video0 ! queue ! videoscale ! video/x-raw,width=960,height=540 ! videoconvert ! appsink max-buffers=25 drop=true sync=1


 

Using this:
device: v4l2src device=/dev/video0 ! queue ! videoscale ! video/x-raw, width= 960, height=540 ! videoconvert ! appsink max-buffers=25 drop=true sync=1

 - that fixed the syntax error, but now I have an "internal data stream error":

[ WARN:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src0 reported: Internal data stream error.

debug info 

[ INFO:0] global /home/pi/opencv-4.1.2/modules/videoio/src/cap_images.cpp (282) icvExtractPattern Pattern: v4l2src device=/dev/video0 ! queue ! videoscale ! video/x-raw,width=%03d,height=540 ! videoconvert ! appsink max-buffers=25 drop=true sync=1 @ 960

(note the %03d )

I get the same thing from the command line

 gst-launch-1.0 v4l2src device=/dev/video0 ! videoscale ! 'video/x-raw,width=960,height=540' ! videoconvert ! autovideosink
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Execution ended after 0:00:00.006389593
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
 
I tried a few variations but no progress.



Ken


Richard Bassom
 

There is another topic on here that had some working gstreamer pipelines for MIPI cameras, so you may need to adapt one of those and try to add the videoscale into the pipeline before the appsink:

Advice needed for G-streamer pipeline for CS-Mipi-imx307 camera