My goal was to produce an output file that had the same properties as the file created in Windows. This means some of the options I used might not be the best ones. I used mediainfo to compare the properties of the files produced with ffmpeg and the Windows software.
So, here is the command line I came up with (this is a single line):
ffmpeg -f alsa -i hw:1 -f v4l2 -channel 1 -i /dev/video1 -c:a mp2 -b:v 8200k -minrate:v 8200k -maxrate:v 8200k -bufsize:v 2M -pix_fmt yuv420p -flags +ilme -bf 2 -vf mp=eq2=1:1:-0.05 -aspect 3:2 outputfile.ts
Looks complicated and scary, but it really isn't. Here are the options:
-f alsa (set ALSA format for the audio input)
-i hw:1 (use audio input #1; #0 is the microphone on my laptop)
-f v4l2 (set v4l2 format for the video input)
-channel 1 (set the capture input: 0 = composite, 1 = S-video)
-i /dev/video1 (use video1 for the video input; video0 is the camera on my laptop)
-c:a mp2 (set the audio codec to mp2)
-b:v 8200k (set the video bitrate to 8200k)
-minrate:v 8200k (set the minimum video bitrate to 8200k)
-maxrate:v 8200k (set the maximum video bitrate to 8200k)
-bufsize:v 2M (set the video buffer size to 2M)
-pix_fmt yuv420p (set the chroma format)
-flags +ilme (enable interlaced motion estimation)
-bf 2 (set the maximum number of B-frames)
-vf mp=eq2=1:1:-0.05 (video filter to adjust gamma:contrast:brightness; the default is 1:1:0)
-aspect 3:2 (set the aspect ratio, since -vf mp=eq2... causes an aspect error on playback)
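For readability, here is the same command again, just split across lines with backslash continuations (identical to the single line above):

ffmpeg \
    -f alsa -i hw:1 \
    -f v4l2 -channel 1 -i /dev/video1 \
    -c:a mp2 \
    -b:v 8200k -minrate:v 8200k -maxrate:v 8200k -bufsize:v 2M \
    -pix_fmt yuv420p -flags +ilme -bf 2 \
    -vf mp=eq2=1:1:-0.05 \
    -aspect 3:2 \
    outputfile.ts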
I spent way too much time searching for the "channel" option. ffmpeg kept insisting on using input 0 (composite). Even when I used v4l2-ctl to set the input to S-video, ffmpeg would change it back. I knew there had to be a way to set this, but I couldn't find it in any of the documentation I read. I finally found it in some archived post about ffmpeg.
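If you want to double-check which inputs your capture device actually exposes, v4l2-ctl (from the v4l-utils package) can list them; something like this should do it (the device path here is just my setup):

v4l2-ctl --device=/dev/video1 --list-inputs
v4l2-ctl --device=/dev/video1 --get-input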
The output files created with the Windows software had a constant bitrate of 9000 kbit/s. The bitrate settings above try to simulate a constant bitrate. For whatever reason, using 9000k resulted in the bitrate being too high; trial and error led me to settle on 8200k, which gives approximately 9000k in the output file.
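To check what a capture actually ended up with, something like either of these should report the overall bitrate (mediainfo is what I used for the comparisons; ffprobe works too):

mediainfo --Inform="General;%OverallBitRate%" outputfile.ts
ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 outputfile.ts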
I kept interlaced mode because that's what the Windows software creates. Anyway, I use HandBrake to crop, deinterlace (or more precisely, decomb), and convert to MP4.
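If you'd rather script that step, HandBrakeCLI can do roughly the same thing; a rough sketch (the crop values and quality setting here are just placeholders, not what I actually use):

HandBrakeCLI -i outputfile.ts -o outputfile.mp4 -e x264 -q 20 --decomb --crop 0:0:8:8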
The ffmpeg results seemed washed out compared to the Windows results. Reducing the brightness a bit seems to help (the -vf option above).
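One caveat: newer ffmpeg builds have dropped the mp= (MPlayer) filter wrapper, so if mp=eq2 isn't available in your build, the native eq filter is the usual replacement, something like:

-vf eq=gamma=1:contrast=1:brightness=-0.05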
There is one problem with the above command: there isn't any way to view the video as it is being captured. There are two possible solutions. If you know how long you want to capture and don't need to view the video stream, just add the -t option, as -t #secs or -t hh:mm:ss. If you do need to view the video, then another output must be added to the ffmpeg command (ffmpeg can handle multiple outputs and inputs). I do this by adding the following right before the output file name:
-f mpegts -b:v 1M udp://ipaddress:9999

You can then view the video using:
ffplay udp://ipaddress:9999
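Put together, the capture command with the extra monitoring output looks something like this (use whatever address and port you want to stream to):

ffmpeg -f alsa -i hw:1 -f v4l2 -channel 1 -i /dev/video1 -c:a mp2 -b:v 8200k -minrate:v 8200k -maxrate:v 8200k -bufsize:v 2M -pix_fmt yuv420p -flags +ilme -bf 2 -vf mp=eq2=1:1:-0.05 -aspect 3:2 -f mpegts -b:v 1M udp://ipaddress:9999 outputfile.ts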
So far the results have been pretty good. I might eventually try to convert directly to mp4, but I'll have to find some way to autocrop.
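For the autocrop part, ffmpeg's cropdetect filter might do the trick; running something like this over the first minute of a capture prints suggested crop= values in the log, which could then be fed back into a crop filter:

ffmpeg -i outputfile.ts -t 60 -vf cropdetect -f null -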
So what ffmpeg options would you use instead of the above?
What do you use to capture video in Linux?