Merging two videos with overlay causes sync problems

I am using the following ffmpeg command to merge two MKV inputs with the overlay parameter. The result should be a single output with input 1 placed on top of input 2, encoded as WebM. Both inputs are the same length (within one second of each other).

An illustration:

-----------------
|               |
|               |
|   input1.mkv  |
|               |
|---------------|
|               |
|               |
|   input2.mkv  |
|               |
----------------- 

The command and the trimmed output:

ffmpeg -i input1.mkv -i input2.mkv -y -filter_complex \
"[0:v] select=1, setpts=PTS-STARTPTS, scale=400:300, pad=400:600 [top]; \
 [1:v] select=1, setpts=PTS-STARTPTS, scale=400:300 [bottom]; \
 [top][bottom] overlay=0:300 [out]; \
 [0:a:0][1:a:0] amerge=inputs=2 [a]; \
 [a] asetpts=PTS-STARTPTS [a]" \
-map "[a]" -c:v libvpx -crf 10 -b:v 360K -q:v 7 -c:a libvorbis -b:a 32k \
-map "[out]" output.webm

ffmpeg version 2.4.git Copyright (c) 2000-2014 the FFmpeg developers
  built on Oct 30 2014 14:00:21 with gcc 4.6 (Ubuntu/Linaro 4.6.3-1ubuntu5)
  configuration: --prefix=/home/bla/ffmpeg_build --extra-cflags=-I/home/bla/ffmpeg_build/include --extra-ldflags=-L/home/bla/ffmpeg_build/lib --bindir=/home/bla/bin --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-openssl
  libavutil      54. 11.100 / 54. 11.100
  libavcodec     56. 10.100 / 56. 10.100
  libavformat    56. 11.100 / 56. 11.100
  libavdevice    56.  2.100 / 56.  2.100
  libavfilter     5.  2.100 /  5.  2.100
  libswscale      3.  1.101 /  3.  1.101
  libswresample   1.  1.100 /  1.  1.100
  libpostproc    53.  3.100 / 53.  3.100
Guessed Channel Layout for  Input Stream #0.1 : mono
Input #0, matroska,webm, from '/tmp/input1.mkv':
  Metadata:
    ENCODER         : Lavf54.20.4
  Duration: 00:02:50.45, start: 0.000000, bitrate: 174 kb/s
    Stream #0:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
    Stream #0:1: Audio: pcm_mulaw ([7][0][0][0] / 0x0007), 8000 Hz, 1 channels, s16, 64 kb/s (default)
Guessed Channel Layout for  Input Stream #1.1 : mono
Input #1, matroska,webm, from '/tmp/input2.mkv':
  Metadata:
    ENCODER         : Lavf54.20.4
  Duration: 00:02:50.46, start: 0.013000, bitrate: 1901 kb/s
    Stream #1:0: Video: vp8, yuv420p, 640x480, SAR 1:1 DAR 4:3, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
    Stream #1:1: Audio: pcm_mulaw ([7][0][0][0] / 0x0007), 8000 Hz, 1 channels, s16, 64 kb/s (default)
[Parsed_amerge_8 @ 0x325ada0] No channel layout for input 1
[Parsed_amerge_8 @ 0x325ada0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[libvpx @ 0x3268aa0] v1.3.0
Output #0, webm, to '/tmp/output.webm':
  Metadata:
    encoder         : Lavf56.11.100
    Stream #0:0: Audio: vorbis (libvorbis), 8000 Hz, stereo, fltp, 32 kb/s (default)
    Metadata:
      encoder         : Lavc56.10.100 libvorbis
    Stream #0:1: Video: vp8 (libvpx), yuv420p, 400x600 [SAR 1:1 DAR 2:3], q=-1--1, 360 kb/s, 30 fps, 1k tbn, 30 tbc (default)
    Metadata:
      encoder         : Lavc56.10.100 libvpx
Stream mapping:
  Stream #0:0 (vp8) -> select
  Stream #0:1 (pcm_mulaw) -> amerge:in0
  Stream #1:0 (vp8) -> select
  Stream #1:1 (pcm_mulaw) -> amerge:in1
  asetpts -> Stream #0:0 (libvorbis)
  overlay -> Stream #0:1 (libvpx)
Press [q] to stop, [?] for help
[vp8 @ 0x322af20] Discarding interframe without a prior keyframe!
Error while decoding stream #0:0: Invalid data found when processing input
[vp8 @ 0x322af20] Discarding interframe without a prior keyframe!
Error while decoding stream #0:0: Invalid data found when processing input
frame=  316 fps= 17 q=0.0 size=     753kB time=00:00:13.53 bitrate= 456.0kbits/s dup=0 drop=146    
[vp8 @ 0x322af20] Upscaling is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[vp8 @ 0x322af20] If you want to help, upload a sample of this file to ftp://upload.ffmpeg.org/incoming/ and contact the ffmpeg-devel mailing list. ([email protected])
Input stream #0:0 frame changed from size:320x240 fmt:yuv420p to size:384x288 fmt:yuv420p
Input stream #0:0 frame changed from size:384x288 fmt:yuv420p to size:320x240 fmt:yuv420p
Input stream #0:0 frame changed from size:320x240 fmt:yuv420p to size:384x288 fmt:yuv420p
Input stream #0:0 frame changed from size:384x288 fmt:yuv420p to size:512x384 fmt:yuv420p
Input stream #0:0 frame changed from size:512x384 fmt:yuv420p to size:640x480 fmt:yuv420p
[Parsed_amerge_8 @ 0x33462c0] No channel layout for input 1
[Parsed_amerge_8 @ 0x33462c0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[libvorbis @ 0x3266fc0] Queue input is backward in time
[webm @ 0x3266200] Non-monotonous DTS in output stream 0:0; previous: 13880, current: 3912; changing to 13880. This may result in incorrect timestamps in the output file.
frame= 2730 fps= 21 q=0.0 size=    6030kB time=00:01:39.33 bitrate= 497.3kbits/s dup=0 drop=1036    
Error while decoding stream #0:1: Cannot allocate memory
    Last message repeated 65 times
frame= 2738 fps= 21 q=0.0 size=    6048kB time=00:01:39.66 bitrate= 497.1kbits/s dup=0 drop=1036    
Error while decoding stream #0:1: Cannot allocate memory
    Last message repeated 170 times
frame= 2784 fps= 21 q=0.0 size=    6230kB time=00:01:53.17 bitrate= 450.9kbits/s dup=0 drop=1403    
Error while decoding stream #1:1: Cannot allocate memory
    Last message repeated 133 times
[webm @ 0x3266200] Non-monotonous DTS in output stream 0:0; previous: 113164, current: 3896; changing to 113164. This may result in incorrect timestamps in the output file.
[webm @ 0x3266200] Non-monotonous DTS in output stream 0:0; previous: 113164, current: 3928; changing to 113164. This may result in incorrect timestamps in the output file.
[webm @ 0x3266200] Non-monotonous DTS in output stream 0:0; previous: 113164, current: 3960; changing to 113164. This may result in incorrect timestamps in the output file.
[webm @ 0x3266200] Non-monotonous DTS in output stream 0:0; previous: 113164, current: 3992; changing to 113164. This may result in incorrect timestamps in the output file.
frame= 2784 fps= 21 q=0.0 Lsize=    6295kB time=00:01:53.17 bitrate= 455.6kbits/s dup=0 drop=1456    
video:5595kB audio:643kB subtitle:0kB other streams:0kB global headers:3kB muxing overhead: 0.898592%

This command does what it is supposed to do.

However, the two videos are not perfectly in sync.

Input 1 at the top plays fine, while input 2 at the bottom shows black frames and slows down or speeds up, so that its audio and video drift apart.

To rule out quality problems with the individual inputs, we swapped the positions of the two videos, and the video on top always plays fine.

How can we solve this?

--Update 1-- Running FFmpeg through the Node.js fluent-ffmpeg module hid all warnings and errors. I ran the FFmpeg command in the console, and the output was very verbose.

Here is an untrimmed pastebin with the log -> http://pastebin.com/bHdC2M1V

--Update 2-- A possible clue: the input MKV files are WebRTC streams as they are being received. Correct me if I'm wrong, but in a live stream the quality varies with the connection. If that means frames are sent at different sizes, it would explain why ffmpeg complains about changing frame sizes. So, to rephrase: how can we combine 2 MKV input videos (raw WebRTC streams) without losing the frames that change size?
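One way to sidestep the mid-stream resolution changes (a sketch, assuming the recording is named input1.mkv and that 640x480 is an acceptable target size; neither is confirmed by the logs) is to decode each recording once and force every frame to a fixed geometry before any merging:

```shell
# Sketch: force every decoded frame of a WebRTC recording to one fixed
# size, so a later filter graph never sees a mid-stream size change.
# input1.mkv, the 640x480 target, and normalized1.mkv are assumptions.
ffmpeg -i input1.mkv \
  -vf "scale=640:480,setsar=1" \
  -c:v libx264 -crf 16 -c:a pcm_s16le \
  normalized1.mkv
```

The same command would be repeated for input2.mkv; setsar=1 keeps the sample aspect ratio square after the forced scale.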

Answer 1

Based on trial and error, it seems to work if you first convert the existing inputs to intermediate files, and then overlay those.

For example, you can use HuffYUV, high-quality H.264, lossless H.264, ProRes, etc.:

ffmpeg -i input.mkv -c:v huffyuv -c:a pcm_s16le output.avi
ffmpeg -i input.mkv -c:v libx264 -crf 16 -c:a aac -strict experimental -b:a 320k output.mp4
ffmpeg -i input.mkv -c:v libx264 -crf 0 -c:a aac -strict experimental -b:a 320k output.mp4
ffmpeg -i input.mkv -c:v prores -c:a pcm_s16le output.mov

Then try merging again.
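Putting the two steps together, the full workflow might look like the following sketch (the intermediate names tmp1.mkv/tmp2.mkv and the choice of the lossless H.264 variant are assumptions; the final command is the question's original filter graph applied to the intermediates):

```shell
# Step 1: re-encode each input to a lossless intermediate (assumed names).
ffmpeg -i input1.mkv -c:v libx264 -crf 0 -c:a pcm_s16le tmp1.mkv
ffmpeg -i input2.mkv -c:v libx264 -crf 0 -c:a pcm_s16le tmp2.mkv

# Step 2: run the original overlay/amerge graph on the intermediates.
ffmpeg -i tmp1.mkv -i tmp2.mkv -y -filter_complex \
"[0:v] select=1, setpts=PTS-STARTPTS, scale=400:300, pad=400:600 [top]; \
 [1:v] select=1, setpts=PTS-STARTPTS, scale=400:300 [bottom]; \
 [top][bottom] overlay=0:300 [out]; \
 [0:a:0][1:a:0] amerge=inputs=2, asetpts=PTS-STARTPTS [a]" \
-map "[out]" -map "[a]" \
-c:v libvpx -crf 10 -b:v 360K -q:v 7 -c:a libvorbis -b:a 32k \
output.webm
```

Since the intermediates were decoded in a single pass, every frame they contain has a constant size, which is what the overlay filter graph expects.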

Note that you may need to set -pix_fmt yuv420p or use the video filter format=pix_fmts=yuv420p if your original video does not use the YUV 4:2:0 color space (which can be the case with HuffYUV, lossless H.264, and ProRes).
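For instance, bringing a ProRes intermediate back to 4:2:0 before the overlay step might look like this (intermediate.mov and fixed.mkv are hypothetical names for illustration):

```shell
# Convert an intermediate that is not YUV 4:2:0 (e.g. ProRes is 4:2:2)
# back to yuv420p before feeding it to the overlay graph.
ffmpeg -i intermediate.mov -vf "format=pix_fmts=yuv420p" \
  -c:v libx264 -crf 16 -c:a copy fixed.mkv
```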

The underlying problem was that ffmpeg could not handle the frame rescaling that VP8 implements and that is specified in RFC 6386.
