FFmpeg: output multiple videos from multiple video/image inputs

Thank you very much for taking a look at my post. I'm still quite new to ffmpeg, but I'm hooked on experimenting with it, and I've run into the following problem:

ffmpeg -f gdigrab -s 1360x768 -framerate 30 -i desktop 
       -f dshow -i video="video-input-device":audio="audio-input-device" 
       -i image.png 
       -filter_complex "[0:v]format=yuv420p,yadif[v];[1:v]scale=256:-1,overlay=10:10[secondvideo];[v][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2[image];[image][secondvideo]concat=n=2[outer];[outer]split=2[out0][out1]" 
       -map 1:a -c:a aac -b:a 128k -map "[out0]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output0.mp4 
       -map 1:a -c:a aac -b:a 128k -map "[out1]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output1.mp4

Expected output: two videos, each containing the audio, the screen being recorded, and several video streams placed at different positions in the frame; in my case, the webcam in the top-left corner of the video and an image in the bottom-right corner.

Actual output: the following error

Stream mapping:
Stream #0:0 (bmp) -> format (graph 0)
Stream #1:0 (rawvideo) -> scale (graph 0)
Stream #2:0 (png) -> overlay:overlay (graph 0)
Stream #2:0 (png) -> overlay:overlay (graph 0)
Stream #1:1 -> #0:0 (pcm_s16le (native) -> aac (native))
split:output0 (graph 0) -> Stream #0:1 (libx264)
Stream #1:1 -> #1:0 (pcm_s16le (native) -> aac (native))
split:output1 (graph 0) -> Stream #1:1 (libx264)
Press [q] to stop, [?] for help
[dshow @ 0000003601a30ec0] Thread message queue blocking; consider raising 
the thread_queue_size option (current value: 8)
[Parsed_concat_5 @ 000000360bdef840] Input link in1:v0 parameters (size 
256x192, SAR 0:1) do not match the corresponding output link in0:v0 
parameters (1360x768, SAR 0:1)
[Parsed_concat_5 @ 000000360bdef840] Failed to configure output pad on 
Parsed_concat_5
Error reinitializing filters!
Failed to inject frame into filter network: Invalid argument
Error while processing the decoded data for stream #2:0
[aac @ 0000003601a9ef00] Qavg: 198.729
[aac @ 0000003601a9ef00] 2 frames left in the queue on closing
[aac @ 000000360a253800] Qavg: 198.729
[aac @ 000000360a253800] 2 frames left in the queue on closing
Conversion failed!
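
As a side note, the "Thread message queue blocking" warning near the top of the log is unrelated to the filter failure. If it keeps appearing once the filtergraph is fixed, the log's own suggestion of raising the thread_queue_size option on the real-time capture inputs is the usual remedy. A minimal sketch of just the input half of the command, assuming the same placeholder device names as above (the rest of the command stays unchanged):

ffmpeg -thread_queue_size 512 -f gdigrab -s 1360x768 -framerate 30 -i desktop 
       -thread_queue_size 512 -f dshow -i video="video-input-device":audio="audio-input-device" 
       -i image.png 
       ...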

I know this is a filter_complex problem, but I can't figure out exactly where it is. Any help would be greatly appreciated!

Answer 1

Use

ffmpeg -f gdigrab -s 1360x768 -framerate 30 -i desktop 
       -f dshow -i video="video-input-device":audio="audio-input-device" 
       -i image.png 
       -filter_complex "[1:v]scale=256:-1[secondvideo];[0:v][secondvideo]overlay=10:10[v1];[v1][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2,split=2[out0][out1]" 
       -map 1:a -c:a aac -b:a 128k -map "[out0]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output0.mp4 
       -map 1:a -c:a aac -b:a 128k -map "[out1]" -c:v libx264 -b:v 2M -preset ultrafast -s 1280x720 -f mp4 output1.mp4

There is no need to deinterlace the input coming from the GDI buffer, nor to run it through the format filter. The concat error in the log arose because the scaled 256x192 webcam stream was fed into concat alongside the 1360x768 screen capture, and concat requires all of its segments to share the same parameters; concat also joins clips in time rather than compositing them, which is not what is wanted here. Instead, the overlays should be applied one after the other, chained through labelled pads.
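
For readability, here is the same graph split at the chain separators (';'), one chain per line; the pad labels [secondvideo], [v1], [out0] and [out1] are just arbitrary names:

[1:v]scale=256:-1[secondvideo];
[0:v][secondvideo]overlay=10:10[v1];
[v1][2]overlay=main_w-overlay_w-10/2:main_h-overlay_h-10/2,split=2[out0][out1]

The first chain shrinks the webcam to 256 px wide (height follows from the aspect ratio); the second overlays it on the screen capture at (10,10), i.e. the top-left corner; the third overlays the PNG near the bottom-right corner of that result and then splits the final stream into two identical copies, one mapped to each output file. If the graph grows further, it can also be kept in a separate file and passed with -filter_complex_script instead of -filter_complex.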
