ffmpeg tutorial: a multimedia file format converter (no transcoding) explained (official docs)

Author: Vghh | Published 2019-10-05 23:07

    Table of Contents

    ffmpeg beginner tutorial: https://www.jianshu.com/p/042c7847bd8a
    How a video player works
    Demuxing Module
    Muxing Module
    Official ffmpeg remuxing example (no transcoding)
    GitHub:https://github.com/AnJiaoDe/FFmpegDemo
    The call flow, step by step (in text form)
    ffmpeg remuxer (no transcoding) explained in detail
    avformat_open_input()
    avformat_find_stream_info()
    av_dump_format()
    avformat_alloc_output_context2()
    unsigned int nb_streams in AVFormatContext
    av_mallocz_array()
    AVOutputFormat
    typedef struct AVCodecParameters
    enum AVMediaType
    avformat_new_stream()
    avcodec_parameters_copy
    Why codecpar->codec_tag = 0?
    int flags in AVOutputFormat
    avio_open()
    AVIOContext
    AVIOContext *pb in AVFormatContext
    avformat_write_header()
    av_read_frame()
    ffmpeg time bases explained
    av_rescale_q_rnd()
    enum AVRounding
    AVRational time_base in AVStream
    int64_t pts in AVPacket
    int64_t dts in AVPacket
    int64_t duration in AVPacket
    av_rescale_q()
    av_interleaved_write_frame()
    Interleaving (interleaved memory access)
    av_write_frame()
    Questions, corrections and criticism are welcome

    ffmpeg beginner tutorial: https://www.jianshu.com/p/042c7847bd8a

    How a video player works

    ———————————————— Copyright notice

    The excerpt below is from an original article by CSDN blogger 雷霄骅, licensed under CC 4.0 BY-SA; reproduction must credit the original link and this notice.
    Original: https://blog.csdn.net/leixiaohua1020/article/details/18893769

    Audio/video technology mainly involves: container (muxing) formats, video compression coding, and audio compression coding. If network transmission is involved, streaming protocols come in as well.

    To play a video file from the internet, a player goes through the following steps: protocol handling, demuxing, audio/video decoding, and audio/video synchronization. Playing a local file skips the protocol step: demuxing, decoding, synchronization. Each step is described below.

    What protocol handling does

    It parses streaming-protocol data into data in the corresponding standard container format. When audio/video travels over a network, various streaming protocols are used, such as HTTP, RTMP or MMS. Besides the audio/video data, these protocols also carry signalling data: playback control (play, pause, stop), descriptions of network state, and so on. Protocol handling strips the signalling data and keeps only the audio/video data. For example, data transported over RTMP yields FLV-format data after this step.

    What demuxing does

    It separates the input container-format data into a compressed audio stream and a compressed video stream. There are many container formats, such as MP4, MKV, RMVB, TS, FLV and AVI; their job is to put already compressed video and audio data together in a defined layout. For example, FLV data, once demuxed, yields an H.264 video bitstream and an AAC audio bitstream.

    What decoding does

    It decodes compressed audio/video data into uncompressed raw audio/video data. Audio compression standards include AAC, MP3 and AC-3; video compression standards include H.264, MPEG2 and VC-1. Decoding is the most important and most complex part of the whole pipeline: compressed video becomes uncompressed color data such as YUV420P or RGB, and compressed audio becomes uncompressed samples such as PCM.

    What audio/video synchronization does

    Using the parameters obtained while demuxing, it synchronizes the decoded video and audio and sends them to the system's graphics card and sound card for playback.

    For format conversion with ffmpeg, the relevant parts are the demuxing and muxing modules of libavformat.


    Demuxing Module

     Demuxers read a media file and split it into chunks of data (packets).
    

    That is, a demuxer reads a media file and splits it into packets.

      A packet contains one or more encoded frames which belongs to a single elementary stream.
      In the lavf API this process is represented by the avformat_open_input() function 
      for opening a file, av_read_frame() for reading a single packet and finally   
      avformat_close_input(), which does the cleanup.
    

    A packet contains one or more encoded frames belonging to a single elementary stream. In the libavformat API this process consists of: avformat_open_input() to open a file, av_read_frame() to read a single packet, and finally avformat_close_input() to clean up.

    Opening a media file

     The minimum information required to open a file is its URL, 
     which is passed to avformat_open_input(), as in the following code:
    

    The minimum information required to open a file is its URL, which is passed to avformat_open_input(), as in the following code:

    int avformat_open_input(AVFormatContext **ps, const char *url, 
    ff_const59 AVInputFormat *fmt, AVDictionary **options);
    
    const char    *url = "file:in.mp3";
    AVFormatContext *s = NULL;
    int ret = avformat_open_input(&s, url, NULL, NULL);
    if (ret < 0)
        abort();
    
    The above code attempts to allocate an AVFormatContext, open the specified file 
    (autodetecting the format) and read the header, exporting the information 
    stored there into s. Some formats do not have a header or do not store enough 
    information there, so it is recommended that you call the avformat_find_stream_info()
    function which tries to read and decode a few frames to find missing information.
    

    The code above allocates an AVFormatContext, opens the specified file (autodetecting the format), reads the header, and exports the information stored there into AVFormatContext *s. Some formats have no header or do not store enough information in it, so it is recommended to call avformat_find_stream_info(), which tries to read and decode a few frames to find the missing information.

    In some cases you might want to preallocate an AVFormatContext yourself with 
    avformat_alloc_context() and do some tweaking on it before passing it to 
    avformat_open_input().One such case is when you want to use custom functions for 
    reading input data instead of lavf internal I/O layer.  To do that, create your 
    own AVIOContext with avio_alloc_context(), passing your reading callbacks to it. 
    Then set the pb field of your AVFormatContext to newly created AVIOContext.
    

    In some cases you may want to preallocate an AVFormatContext yourself with avformat_alloc_context() and tweak it before passing it to avformat_open_input(). One such case is when you want to use custom functions for reading input data instead of libavformat's internal I/O layer. To do that, create your own AVIOContext with avio_alloc_context(), passing it your read callbacks, then set the pb field of your AVFormatContext to the newly created AVIOContext.
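    A minimal sketch of that setup, assuming the input comes from a plain stdio FILE* (error handling omitted; read_cb and open_with_custom_io are illustrative names, not lavf API):

    extern "C" {
    #include <libavformat/avformat.h>
    }
    #include <cstdio>
    
    // Read callback: lavf pulls input bytes through this instead of opening the URL itself.
    static int read_cb(void *opaque, uint8_t *buf, int buf_size) {
        FILE *f = static_cast<FILE *>(opaque);
        size_t n = fread(buf, 1, buf_size, f);
        return n > 0 ? static_cast<int>(n) : AVERROR_EOF;
    }
    
    AVFormatContext *open_with_custom_io(const char *path) {
        FILE *f = fopen(path, "rb");
        unsigned char *buf = static_cast<unsigned char *>(av_malloc(4096));
        AVIOContext *avio = avio_alloc_context(buf, 4096, 0 /* read-only */, f,
                                               read_cb, NULL, NULL);
        AVFormatContext *s = avformat_alloc_context();
        s->pb = avio; // hand our AVIOContext to lavf before opening
        if (avformat_open_input(&s, path, NULL, NULL) < 0)
            return NULL; // note: s is freed by lavf on failure
        return s;
    }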

    Since the format of the opened file is in general not known until after 
    avformat_open_input() has returned, it is not possible to set demuxer private 
    options on a preallocated context. Instead, the options should be passed to 
    avformat_open_input() wrapped in an AVDictionary:
    

    Since the format of the opened file is generally not known until avformat_open_input() returns, demuxer-private options cannot be set on a preallocated context. Instead, they should be wrapped in an AVDictionary and passed to avformat_open_input():

    int avformat_open_input(AVFormatContext **ps, const char *url, 
    ff_const59 AVInputFormat *fmt, AVDictionary **options);
    
    AVDictionary *options = NULL;
    av_dict_set(&options, "video_size", "640x480", 0);
    av_dict_set(&options, "pixel_format", "rgb24", 0);
    
    if (avformat_open_input(&s, url, NULL, &options) < 0)
        abort();
    av_dict_free(&options);
    
    This code passes the private options 'video_size' and 'pixel_format' to the demuxer.  
    They would be necessary for e.g. the rawvideo demuxer, since it cannot know how to 
    interpret raw video data otherwise. If the format turns out to be something different 
    than raw video, those options will not be recognized by the demuxer and therefore will 
    not be applied. Such unrecognized options are then returned in the options dictionary 
    (recognized options are consumed). The calling program can handle such unrecognized 
    options as it wishes, e.g.
    

    This code passes the private options 'video_size' and 'pixel_format' to the demuxer. They are necessary for e.g. the rawvideo demuxer, which otherwise cannot know how to interpret raw video data. If the format turns out to be something other than raw video, these options are not recognized by the demuxer and therefore not applied; unrecognized options are returned in the options dictionary (recognized ones are consumed). The caller can handle unrecognized options as it wishes, e.g.:

    AVDictionaryEntry *e;
    if (e = av_dict_get(options, "", NULL, AV_DICT_IGNORE_SUFFIX)) {
        fprintf(stderr, "Option %s not recognized by the demuxer.\n", e->key);
        abort();
    }
    
    After you have finished reading the file, you must close it with avformat_close_input().
    It will free everything associated with the file.
    

    After you have finished reading the file, you must close it with avformat_close_input(), which frees everything associated with the file.

    Reading from an opened file

    Reading data from an opened AVFormatContext is done by repeatedly calling av_read_frame()
    on it. Each call, if successful, will return an AVPacket containing encoded data for one
    AVStream, identified by AVPacket.stream_index. This packet may be passed straight into 
    the libavcodec decoding functions avcodec_send_packet() or avcodec_decode_subtitle2() 
    if the caller wishes to decode the data.
    

    Reading data from an opened AVFormatContext is done by repeatedly calling av_read_frame() on it. Each successful call returns an AVPacket holding encoded data for one AVStream, identified by AVPacket.stream_index. If the caller wants to decode the data, the packet can be passed straight into the libavcodec decoding functions avcodec_send_packet() or avcodec_decode_subtitle2().

    AVPacket.pts, AVPacket.dts and AVPacket.duration timing information will be set if known. 
    They may also be unset (i.e. AV_NOPTS_VALUE for pts/dts, 0 for duration) if the stream 
    does not provide them. The timing information will be in AVStream.time_base units, i.e. 
    it has to be multiplied by the timebase to convert them to seconds.
    

    AVPacket.pts, AVPacket.dts and AVPacket.duration are set whenever they are known; they may be unset (AV_NOPTS_VALUE for pts/dts, 0 for duration) if the stream does not provide them. The timing information is in AVStream.time_base units, i.e. it must be multiplied by the time base to convert it to seconds.
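    For example, a small sketch (assuming fmt_ctx and pkt come from the read loop described above):

    AVStream *st = fmt_ctx->streams[pkt.stream_index];
    if (pkt.pts != AV_NOPTS_VALUE) {
        double seconds = pkt.pts * av_q2d(st->time_base); // ticks × seconds-per-tick
    }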

    If AVPacket.buf is set on the returned packet, then the packet is allocated dynamically 
    and the user may keep it indefinitely. Otherwise, if AVPacket.buf is NULL, the packet 
    data is backed by a static storage somewhere inside the demuxer and the packet is only 
    valid until the next av_read_frame() call or closing the file. If the caller requires 
    a longer lifetime, av_packet_make_refcounted() will ensure that the data is reference 
    counted, copying the data if necessary. In both cases, the packet must be freed with
    av_packet_unref() when it is no longer needed.
    

    If AVPacket.buf is set on the returned packet, the packet is allocated dynamically and the user may keep it indefinitely. Otherwise, if AVPacket.buf is NULL, the packet data is backed by static storage somewhere inside the demuxer, and the packet is only valid until the next av_read_frame() call or until the file is closed. If the caller requires a longer lifetime, av_packet_make_refcounted() ensures the data is reference counted, copying it if necessary. In both cases, the packet must be freed with av_packet_unref() when it is no longer needed.
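    A sketch of extending a packet's lifetime (ifmt_ctx assumed to be an opened input context):

    AVPacket pkt;
    if (av_read_frame(ifmt_ctx, &pkt) == 0) {
        av_packet_make_refcounted(&pkt); // copies the data if it was demuxer-owned
        // ... the packet may now be kept indefinitely ...
        av_packet_unref(&pkt);           // free it once it is no longer needed
    }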

    Muxing Module

    Muxers take encoded data in the form of AVPackets and write it into files 
    or other output bytestreams in the specified container format.
    

    A muxer takes encoded data in the form of AVPackets and writes it into a file or another output bytestream in the specified container format.

    The main API functions for muxing are avformat_write_header() for writing the 
    file header, av_write_frame() / av_interleaved_write_frame() for writing the 
    packets and av_write_trailer() for finalizing the file.
    

    The main muxing API functions are: avformat_write_header() for writing the file header, av_write_frame() / av_interleaved_write_frame() for writing the packets, and av_write_trailer() for finalizing the file.

    At the beginning of the muxing process, the caller must first call avformat_alloc_context() 
    to create a muxing context. The caller then sets up the muxer by filling the various fields 
    in this context:
     •The oformat field must be set to select the muxer that will be used.
     •Unless the format is of the AVFMT_NOFILE type, the pb field must be set to an opened IO
      context, either returned from avio_open2() or a custom one. 
     •Unless the format is of the AVFMT_NOSTREAMS type, at least one stream must be created 
     with the avformat_new_stream() function. The caller should fill the stream codec 
     parametersinformation, such as the codec type, id and other parameters (e.g. 
     width / height, the pixel or sample format, etc.) as known. The stream timebase should be 
    set to the timebase that the caller desires to use for this stream (note that the timebase
     actually used by the muxer can be different, as will be described later). 
     •It is advised to manually initialize only the relevant fields in AVCodecParameters, 
     rather than using avcodec_parameters_copy() during remuxing: there is no guarantee 
     that the codec context values remain valid for both input and output format contexts. 
     •The caller may fill in additional information, such as global or per-stream metadata,
      chapters, programs, etc. as described in the AVFormatContext documentation. Whether such
     information will actually be stored in the output depends on what the container format 
    and the muxer support.
    

    At the beginning of the muxing process, the caller must first call avformat_alloc_context() to create a muxing context, then set up the muxer by filling the various fields in that context (a compact sketch follows after the list):
    · The oformat field must be set to select the muxer to use.

    · Unless the format is of the AVFMT_NOFILE type, the pb field must be set to an opened IO context, either returned by avio_open2() or a custom one.

    · Unless the format is of the AVFMT_NOSTREAMS type, at least one stream must be created with avformat_new_stream(). The caller should fill in the stream codec parameters as far as they are known: codec type, id and other parameters (e.g. width/height, pixel or sample format, etc.). The stream time base should be set to the time base the caller wants to use for this stream (note that the time base actually used by the muxer may differ, as described later).

    · It is advised to manually initialize only the relevant fields in AVCodecParameters, rather than using avcodec_parameters_copy() during remuxing: there is no guarantee that the codec context values remain valid for both input and output format contexts.

    · The caller may fill in additional information, such as global or per-stream metadata, chapters, programs, etc., as described in the AVFormatContext documentation. Whether such information is actually stored in the output depends on what the container format and the muxer support.
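    A compact sketch of this setup (error checks omitted; "out.mp4" is a placeholder path):

    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, "out.mp4"); // selects the mp4 muxer
    AVStream *st = avformat_new_stream(oc, NULL);
    st->time_base = AVRational{1, 90000}; // a hint; the muxer may override it
    // ... fill st->codecpar: codec_type, codec_id, width/height or sample_rate, ...
    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_open(&oc->pb, "out.mp4", AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);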

    When the muxing context is fully set up, the caller must call avformat_write_header() 
    to initialize the muxer internals and write the file header. Whether anything actually 
    is written to the IO context at this step depends on the muxer, but this function must 
    always be called. Any muxer private options must be passed in the options parameter to 
    this function.
    

    Once the muxing context is fully set up, the caller must call avformat_write_header() to initialize the muxer internals and write the file header. Whether anything is actually written to the IO context at this step depends on the muxer, but the function must always be called. Any muxer-private options must be passed via the options parameter of this function.

    The data is then sent to the muxer by repeatedly calling av_write_frame() or 
    av_interleaved_write_frame() (consult those functions' documentation for discussion 
    on the difference between them; only one of them may be used with a single muxing 
    context, they should not be mixed). Do note that the timing information on the packets 
    sent to the muxer must be in the corresponding AVStream's timebase. That timebase is 
    set by the muxer (in the avformat_write_header() step) and may be different from the 
    timebase requested by the caller.
    

    The data is then sent to the muxer by repeatedly calling av_write_frame() or av_interleaved_write_frame() (consult their documentation for the differences; only one of the two may be used with a single muxing context, they must not be mixed). Note that the timing information on packets sent to the muxer must be in the corresponding AVStream's time base. That time base is set by the muxer (in the avformat_write_header() step) and may differ from the time base requested by the caller.

    Once all the data has been written, the caller must call av_write_trailer() to flush any 
    buffered packets and finalize the output file, then close the IO context (if any) 
    and finally free the muxing context with avformat_free_context().
    

    Once all the data has been written, the caller must call av_write_trailer() to flush any buffered packets and finalize the output file, then close the IO context (if any), and finally free the muxing context with avformat_free_context().
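    Continuing the sketch above:

    av_write_trailer(oc);                     // flush buffered packets, write the trailer
    if (!(oc->oformat->flags & AVFMT_NOFILE))
        avio_closep(&oc->pb);                 // close the IO context
    avformat_free_context(oc);                // free the muxing context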

    Official ffmpeg remuxing example (no transcoding)

    remuxer.h:

    /*
     * Copyright (c) 2013 Stefano Sabatini
     *
     * Permission is hereby granted, free of charge, to any person obtaining a copy
     * of this software and associated documentation files (the "Software"), to deal
     * in the Software without restriction, including without limitation the rights
     * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
     * copies of the Software, and to permit persons to whom the Software is
     * furnished to do so, subject to the following conditions:
     *
     * The above copyright notice and this permission notice shall be included in
     * all copies or substantial portions of the Software.
     *
     * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
     * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
     * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
     * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
     * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
     * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
     * THE SOFTWARE.
     */
    /**
     * @file
     * libavformat/libavcodec demuxing and muxing API example.
     *
     * Remux streams from one container format to another.
     * @example remuxing.c
     */
    extern "C" {
    #include <libavutil/timestamp.h>
    #include <libavformat/avformat.h>
    }
    
    static void log_packet(const AVFormatContext *fmt_ctx, const AVPacket *pkt, const char *tag) {
        AVRational *time_base = &fmt_ctx->streams[pkt->stream_index]->time_base;
    //    printf("%s: pts:%s pts_time:%s dts:%s dts_time:%s duration:%s duration_time:%s stream_index:%d\n",
    //           tag,av_ts2str(pkt->pts), av_ts2timestr(pkt->pts, time_base),
    //           av_ts2str(pkt->dts), av_ts2timestr(pkt->dts, time_base),
    //           av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, time_base),
    //           pkt->stream_index);
    }
    
    int main_remuxer(char *inPath, char *outPath) {
        AVOutputFormat *ofmt = NULL;
        AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
        AVPacket pkt;
        const char *pathIn, *pathOut;
        int ret, i;
    //    int stream_index = 0;
    //    int *stream_mapping = NULL;
    //    int stream_mapping_size = 0;
    //    if (argc < 3) {
    //        printf("usage: %s input output\n"
    //               "API example program to remux a media file with libavformat and libavcodec.\n"
    //               "The output format is guessed according to the file extension.\n"
    //               "\n", argv[0]);
    //        return 1;
    //    }
        pathIn = inPath;
        pathOut = outPath;
    // Open an input stream and read its header. The codecs are not opened.
        if ((ret = avformat_open_input(&ifmt_ctx, pathIn, 0, 0)) < 0) {
            fprintf(stderr, "Could not open input file '%s'", pathIn);
            goto end;
        }
    /** Read the media file's packets to get stream information.
       Some formats (such as MPEG) have no header, or do not store enough
       information in it, so avformat_find_stream_info() is recommended: it
       tries to read and decode a few frames to find the missing information.
     */
        if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            fprintf(stderr, "Failed to retrieve input stream information");
            goto end;
        }
    /** Print detailed information about the input or output format.
        is_output: 0 means input, 1 means output.
     */
        av_dump_format(ifmt_ctx, 0, pathIn, 0);
    /** Allocate an AVFormatContext for the output format.
     */
        avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, pathOut);
        if (!ofmt_ctx) {
            fprintf(stderr, "Could not create output context\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }
    //    stream_mapping_size = ifmt_ctx->nb_streams;
    /** Allocate an int array.
     * @param nmemb Number of elements
     * @param size  Size of a single element
     */
    //    stream_mapping = static_cast<int *>(av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping)));
    //    if (!stream_mapping) {
    //        ret = AVERROR(ENOMEM);
    //        goto end;
    //    }
    /** The output container format. Muxing only; must be set before calling avformat_write_header().
     */
        ofmt = ofmt_ctx->oformat;
    /* The AVFormatContext struct defines an AVStream **streams array;
     * nb_streams is the number of elements in that array.
     */
        for (i = 0; i < ifmt_ctx->nb_streams; i++) {
            AVStream *out_stream;
            AVStream *in_stream = ifmt_ctx->streams[i];
        /** Codec parameters of the current stream:
        allocated by avformat_new_stream(),
        freed by avformat_free_context().
        Demuxing: filled on stream creation or by avformat_find_stream_info();
        muxing: filled manually before avformat_write_header().
         */
            AVCodecParameters *in_codecpar = in_stream->codecpar;
        /** If the media type of the current input stream is not audio, video or subtitle, set stream_mapping[i] to -1 and skip it.
         */
            if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
    //            stream_mapping[i] = -1;
                continue;
            }
        // Record the stream index
    //        stream_mapping[i] = stream_index++;
        /** Create an AVStream for the output.
         */
            out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                fprintf(stderr, "Failed allocating output stream\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }
        /** The allocated fields of the output AVCodecParameters are freed, then the input AVCodecParameters are copied into it.
         */
            ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
            if (ret < 0) {
                fprintf(stderr, "Failed to copy codec parameters\n");
                goto end;
            }
        /**
         * Additional information about the codec (corresponds to the AVI FOURCC).
           uint32_t         codec_tag;
           A tag copied from the input container may be invalid in the output
           container; without this line the output file can be broken. Setting
           it to 0 lets the muxer pick a tag valid for the output format.
         */
            out_stream->codecpar->codec_tag = 0;
        }
    /** Print detailed information about the input or output format.
       is_output: 0 means input, 1 means output.
     */
        av_dump_format(ofmt_ctx, 0, pathOut, 1);
    /** Demuxing: AVFormatContext.pb may be set before calling avformat_open_input(),
                   or it is set by avformat_open_input() itself.
           Muxing: AVFormatContext.pb must be set before avformat_write_header(),
                   and afterwards the caller must close/free it.
           If ofmt->flags has AVFMT_NOFILE set, do not set AVFormatContext.pb at all:
           in that case the (de)muxer handles I/O in some other way and pb stays NULL.
     */
        if (!(ofmt->flags & AVFMT_NOFILE)) {
            /** Create and initialize an AVIOContext for the file at the given url.
             */
            ret = avio_open(&ofmt_ctx->pb, pathOut, AVIO_FLAG_WRITE);
            if (ret < 0) {
                fprintf(stderr, "Could not open output file '%s'", pathOut);
                goto end;
            }
        }
    // Allocate the streams' private data and write the stream header to the output file
        ret = avformat_write_header(ofmt_ctx, NULL);
        if (ret < 0) {
            fprintf(stderr, "Error occurred when opening output file\n");
            goto end;
        }
        while (1) {
            AVStream *in_stream, *out_stream;
        /** Return the next frame of a stream.
      This function reads what is stored in the file into AVPacket *pkt without
      validating that it is a valid frame for the decoder. It splits the file
      contents into frames, returning one packet per call, and does not omit
      invalid data between valid frames, giving the decoder the maximum information.
      Returns 0 on success; a negative value on error or at end of file.
         */
            ret = av_read_frame(ifmt_ctx, &pkt);
            if (ret < 0)
                break;
        // Get the input AVStream; AVPacket.stream_index is the index of its stream
            in_stream = ifmt_ctx->streams[pkt.stream_index];
    //        if (pkt.stream_index >= stream_mapping_size ||stream_mapping[pkt.stream_index] < 0) {
    //            // Free the packet's data
    //            av_packet_unref(&pkt);
    //            continue;
    //        }
    //        pkt.stream_index = stream_mapping[pkt.stream_index];
        // Get the output AVStream
            out_stream = ofmt_ctx->streams[pkt.stream_index];
            log_packet(ifmt_ctx, &pkt, "in");
        /* pkt.pts * in_stream->time_base / out_stream->time_base
        gives pkt's pts in out_stream's time base, i.e. the pts of this packet
        in the output file, keeping the input and output presentation
        timestamps in sync. pkt is one packet read from the input file.
         */
            pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                                       static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        /** pkt.dts * in_stream->time_base / out_stream->time_base
        gives pkt's dts in out_stream's time base, i.e. the dts of this packet
        in the output file, keeping the input and output decompression
        timestamps in sync.
         */
            pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                                       static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        /** pkt.duration * in_stream->time_base / out_stream->time_base
        gives pkt's duration in out_stream's time base, i.e. the duration of
        this packet in the output file.
         */
            pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
    //        pkt.pos = -1;
            log_packet(ofmt_ctx, &pkt, "out");
        /** Write a packet to the output media file, ensuring correct interleaving.
          The function buffers packets internally as needed so that packets in the
          output file are properly interleaved in order of increasing dts. Callers
          doing their own interleaving should call av_write_frame() instead. Using
          this function gives muxers advance knowledge of future packets, improving
          e.g. the mp4 muxer's behaviour for VFR content in fragmenting mode.
         */
            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                fprintf(stderr, "Error muxing packet\n");
                break;
            }
        // Free the packet's data
            av_packet_unref(&pkt);
        }
    // Write the stream trailer to the output media file and free its private data
        av_write_trailer(ofmt_ctx);
        end:
    // Close the opened input AVFormatContext, free everything it owns, and set it to NULL
        avformat_close_input(&ifmt_ctx);
        /* close output */
        if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
    //        Close the resource used by the AVIOContext, free the AVIOContext and set it to NULL
            avio_closep(&ofmt_ctx->pb);
    // Free everything owned by the output AVFormatContext
        avformat_free_context(ofmt_ctx);
    // Free memory
    //    av_freep(&stream_mapping);
        if (ret < 0 && ret != AVERROR_EOF) {
    //        fprintf(stderr, "Error occurred: %s\n", av_err2str(ret));
            return 1;
        }
        return 0;
    }
    
    

    main.cpp:

    #include <iostream>
    #include <remuxer.h>
    int main() {
        main_remuxer("../resources/video.avi","../resources/video.mp4");
        system("pause");
        return 0;
    }
    

    For ease of testing and learning, the files are laid out a bit unconventionally, and some redundant code has been removed.


    GitHub:https://github.com/AnJiaoDe/FFmpegDemo

    As you can see there are quite a few steps. The flow is as follows (as a text list rather than a flow chart):

    avformat_open_input(&ifmt_ctx, pathIn, 0, 0)
    avformat_find_stream_info(ifmt_ctx, 0)
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, pathOut);
    ofmt = ofmt_ctx->oformat;
    out_stream = avformat_new_stream(ofmt_ctx,NULL);
    avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
    out_stream->codecpar->codec_tag = 0;
    avio_open(&ofmt_ctx->pb,pathOut, AVIO_FLAG_WRITE);
    avformat_write_header(ofmt_ctx, NULL);
    av_read_frame(ifmt_ctx, &pkt);
    out_stream =ofmt_ctx->streams[pkt.stream_index];
    pkt.pts =av_rescale_q_rnd(pkt.pts, in_stream->time_base,out_stream->time_base,
    static_cast<AVRounding>(AV_ROUND_NEAR_INF |AV_ROUND_PASS_MINMAX));
    pkt.dts = av_rescale_q_rnd(pkt.dts,in_stream->time_base, out_stream->time_base,
    static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base,out_stream->time_base);
    av_interleaved_write_frame(ofmt_ctx, &pkt);
    av_packet_unref(&pkt);
    av_write_trailer(ofmt_ctx);
    avformat_close_input(&ifmt_ctx);
    avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);

    ffmpeg remuxer (no transcoding) explained in detail

    Click a function in your editor to jump to its declaration in the header file and read its description; the official API documentation is all in the headers.

    avformat_open_input()

    int avformat_open_input(AVFormatContext **ps, const char *url, 
    ff_const59 AVInputFormat *fmt, AVDictionary **options);
    
    
     * Open an input stream and read the header. The codecs are not opened.
     * The stream must be closed with avformat_close_input().
     *
     * @param ps Pointer to user-supplied AVFormatContext (allocated by avformat_alloc_context).
     *           May be a pointer to NULL, in which case an AVFormatContext is allocated by this
     *           function and written into ps.
     *           Note that a user-supplied AVFormatContext will be freed on failure.
     * @param url URL of the stream to open.
     * @param fmt If non-NULL, this parameter forces a specific input format.
     *            Otherwise the format is autodetected.
     * @param options  A dictionary filled with AVFormatContext and demuxer-private options.
     *                 On return this parameter will be destroyed and replaced with a dict containing
     *                 options that were not found. May be NULL.
     *
     * @return 0 on success, a negative AVERROR on failure.
     *
     * @note If you want to use custom IO, preallocate the format context and set its pb field.
    
    
    

    Open an input stream and read the header. The codecs are not opened.
    The stream must afterwards be closed with avformat_close_input().
    ps: pointer to a user-supplied AVFormatContext (allocated by avformat_alloc_context()).
    It may point to NULL, in which case this function allocates an AVFormatContext and writes it into ps.
    Note: a user-supplied AVFormatContext is freed on failure.
    url: the URL of the stream to open.
    fmt: if non-NULL, this parameter forces a specific input format.
    Otherwise the format is autodetected.
    Returns 0 on success, a negative AVERROR on failure.
    Note: to use custom IO, preallocate the format context and set its pb field.

    Example:

        if ((ret = avformat_open_input(&ifmt_ctx, pathIn, 0, 0)) < 0) {
            fprintf(stderr, "Could not open input file '%s'", pathIn);
            goto end;
        }
    

    avformat_find_stream_info()

    int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options);
    
     * Read packets of a media file to get stream information. This
     * is useful for file formats with no headers such as MPEG. This
     * function also computes the real framerate in case of MPEG-2 repeat
     * frame mode.
     * The logical file position is not changed by this function;
     * examined packets may be buffered for later processing.
     *
     * @param ic media file handle
     * @param options  If non-NULL, an ic.nb_streams long array of pointers to
     *                 dictionaries, where i-th member contains options for
     *                 codec corresponding to i-th stream.
     *                 On return each dictionary will be filled with options that were not found.
     * @return >=0 if OK, AVERROR_xxx on error
     *
     * @note this function isn't guaranteed to open all the codecs, so
     *       options being non-empty at return is a perfectly normal behavior.
     *
     * @todo Let the user decide somehow what information is needed so that
     *       we do not waste time getting stuff the user does not need.
    

    Read the packets of a media file to get stream information.
    Some formats (such as MPEG) have no header, or do not store enough information in it, so calling avformat_find_stream_info() is recommended: it tries to read and decode a few frames to find the missing information.

    Example:

     if ((ret = avformat_find_stream_info(ifmt_ctx, 0)) < 0) {
            fprintf(stderr, "Failed to retrieve input stream information");
            goto end;
        }
    

    av_dump_format()

    void av_dump_format(AVFormatContext *ic,int index,const char *url,int is_output);
    
     * Print detailed information about the input or output format, such as
     * duration, bitrate, streams, container, programs, metadata, side data,
     * codec and time base.
     *
     * @param ic        the context to analyze
     * @param index     index of the stream to dump information about
     * @param url       the URL to print, such as source or destination file
     * @param is_output Select whether the specified context is an input(0) or output(1)
    

    Print detailed information about the input or output file. is_output: 0 means input, 1 means output.

    Example:

        av_dump_format(ifmt_ctx, 0, pathIn, 0);
    

    Detailed information of an input file looks like this:

     Input #0, flv, from '../resources/video.flv':
      Metadata:
        metadatacreator : iku
        hasKeyframes    : true
        hasVideo        : true
        hasAudio        : true
        hasMetadata     : true
        canSeekToEnd    : false
        datasize        : 932906
        videosize       : 787866
        audiosize       : 140052
        lasttimestamp   : 34
        lastkeyframetimestamp: 30
        lastkeyframelocation: 886498
        encoder         : Lavf55.19.104
      Duration: 00:00:34.20, start: 0.042000, bitrate: 394 kb/s
        Stream #0:0: Video: h264 (High), yuv420p(progressive), 512x288 [SAR 1:1 DAR 16:9], 15 fps, 15 tbr, 1k tbn, 30 tbc
        Stream #0:1: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
    

    Example:

        av_dump_format(ofmt_ctx, 0, pathOut, 1);
    

    Detailed information of an output file looks like this:

    Output #0, mp4, to '../resources/video.mp4':
        Stream #0:0: Video: h264 (High), yuv420p(progressive), 512x288 [SAR 1:1 DAR 16:9], q=2-31
        Stream #0:1: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
    

    avformat_alloc_output_context2()

    int avformat_alloc_output_context2(AVFormatContext **ctx, 
    ff_const59 AVOutputFormat *oformat,const char *format_name, const char *filename);
    
    * Allocate an AVFormatContext for an output format.
     * avformat_free_context() can be used to free the context and
     * everything allocated by the framework within it.
     *
     * @param *ctx is set to the created format context, or to NULL in
     * case of failure
     * @param oformat format to use for allocating the context, if NULL
     * format_name and filename are used instead
     * @param format_name the name of output format to use for allocating the
     * context, if NULL filename is used instead
     * @param filename the name of the filename to use for allocating the
     * context, may be NULL
     * @return >= 0 in case of success, a negative AVERROR code in case of
     * failure
    

    Allocate an AVFormatContext for an output format.

    Example:

       avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, pathOut);
        if (!ofmt_ctx) {
            fprintf(stderr, "Could not create output context\n");
            ret = AVERROR_UNKNOWN;
            goto end;
        }
    

    unsigned int nb_streams in AVFormatContext

    * Number of elements in AVFormatContext.streams.
         *
         * Set by avformat_new_stream(), must not be modified by any other code.
    

    The AVFormatContext struct defines an AVStream **streams array;
    nb_streams is the number of elements in that array. (As a small C++ aside, `.` accesses a member through an object, `->` through a pointer:)

    
        class A
        {
        public:
            int a = 0;
        };
        int main()
        {
            A b;
            A *p = &b;
            b.a;  // access a class member through an object
            p->a; // access a class member through a pointer
        }
        
    

    Example:

    stream_mapping_size = ifmt_ctx->nb_streams;
    

    av_mallocz_array()

    av_alloc_size(1, 2) void *av_mallocz_array(size_t nmemb, size_t size);
    
    * Allocate a memory block for an array with av_mallocz().
     *
     * The allocated memory will have size `size * nmemb` bytes.
     *
     * @param nmemb Number of elements
     * @param size  Size of the single element
     * @return Pointer to the allocated block, or `NULL` if the block cannot
     *         be allocated
     *
     * @see av_mallocz()
     * @see av_malloc_array()
    

    Allocate an array (here used for an int array).
    * @param nmemb number of elements
    * @param size  size in bytes of each element

    Example:

      stream_mapping = static_cast<int *>(av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping)));
        if (!stream_mapping) {
            ret = AVERROR(ENOMEM);
            goto end;
        }
    

    AVOutputFormat

     ff_const59 struct AVOutputFormat *oformat;
    
         * The output container format.
         *
         * Muxing only, must be set by the caller before avformat_write_header().
    

    The output container format. Muxing only; must be set by the caller before avformat_write_header().

    Example:

        ofmt = ofmt_ctx->oformat;
    

    typedef struct AVCodecParameters

    
     * This struct describes the properties of an encoded stream.
     *
     * sizeof(AVCodecParameters) is not a part of the public ABI, this struct must
     * be allocated with avcodec_parameters_alloc() and freed with
     * avcodec_parameters_free().
    
    

    This struct describes the properties of an encoded stream.

    AVStream defines AVCodecParameters *codecpar;

      /**
         * Codec parameters associated with this stream. Allocated and freed by
         * libavformat in avformat_new_stream() and avformat_free_context()
         * respectively.
         *
         * - demuxing: filled by libavformat on stream creation or in
         *             avformat_find_stream_info()
         * - muxing: filled by the caller before avformat_write_header()
         */
       
    

    The codec parameters of the current stream: allocated by avformat_new_stream() and freed by avformat_free_context().
    Demuxing: filled on stream creation or by avformat_find_stream_info();
    muxing: filled by the caller before avformat_write_header().

    Example:

            AVCodecParameters *in_codecpar = in_stream->codecpar;
    
            ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
    

    enum AVMediaType

    enum AVMediaType {
        AVMEDIA_TYPE_UNKNOWN = -1,  ///< Usually treated as AVMEDIA_TYPE_DATA
        AVMEDIA_TYPE_VIDEO,
        AVMEDIA_TYPE_AUDIO,
        AVMEDIA_TYPE_DATA,          ///< Opaque data information usually continuous
        AVMEDIA_TYPE_SUBTITLE,
        AVMEDIA_TYPE_ATTACHMENT,    ///< Opaque data information usually sparse
        AVMEDIA_TYPE_NB
    };
    

    The media type. AVCodecParameters carries this field to indicate whether the stream is audio, video, subtitle, etc.

    Example:

      if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
                in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
                stream_mapping[i] = -1;
                continue;
            }
    

    avformat_new_stream()

    AVStream *avformat_new_stream(AVFormatContext *s, const AVCodec *c);
    
    * Add a new stream to a media file.
     *
     * When demuxing, it is called by the demuxer in read_header(). If the
     * flag AVFMTCTX_NOHEADER is set in s.ctx_flags, then it may also
     * be called in read_packet().
     *
     * When muxing, should be called by the user before avformat_write_header().
     *
     * User is required to call avcodec_close() and avformat_free_context() to
     * clean up the allocation by avformat_new_stream().
     *
     * @param s media file handle
     * @param c If non-NULL, the AVCodecContext corresponding to the new stream
     * will be initialized to use this codec. This is needed for e.g. codec-specific
     * defaults to be set, so codec should be provided if it is known.
     *
     * @return newly created stream or NULL on error.
    

    Add a new stream to the media file; here it creates the output AVStream.

    Example:

     out_stream = avformat_new_stream(ofmt_ctx, NULL);
            if (!out_stream) {
                fprintf(stderr, "Failed allocating output stream\n");
                ret = AVERROR_UNKNOWN;
                goto end;
            }
    

    avcodec_parameters_copy

    int avcodec_parameters_copy(AVCodecParameters *dst, const AVCodecParameters *src);
    
     * Copy the contents of src to dst. Any allocated fields in dst are freed and
     * replaced with newly allocated duplicates of the corresponding fields in src.
     *
     * @return >= 0 on success, a negative AVERROR code on failure.
    

    Any allocated fields in the destination AVCodecParameters are freed and
    replaced with duplicates of the corresponding fields in the source AVCodecParameters.

    Example:

    ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
            if (ret < 0) {
                fprintf(stderr, "Failed to copy codec parameters\n");
                goto end;
            }
    

    Why codecpar->codec_tag = 0?

    AVCodecParameters defines uint32_t codec_tag;

         * Additional information about the codec (corresponds to the AVI FOURCC).
          uint32_t         codec_tag;
    

    Extra information about the codec (corresponds to the AVI FOURCC). This puzzled the author: without this line, the output video file is broken. A likely explanation: codec tags are container specific, so a tag copied from the input container may be invalid in the output container, and setting it to 0 lets the muxer choose a tag valid for the output format. Readers who know more are welcome to discuss in the comments.

    int flags in AVOutputFormat

    * can use flags: AVFMT_NOFILE, AVFMT_NEEDNUMBER,
         * AVFMT_GLOBALHEADER, AVFMT_NOTIMESTAMPS, AVFMT_VARIABLE_FPS,
         * AVFMT_NODIMENSIONS, AVFMT_NOSTREAMS, AVFMT_ALLOW_FLUSH,
         * AVFMT_TS_NONSTRICT, AVFMT_TS_NEGATIVE
    

    The flags listed above can be used; their definitions:

    /// Demuxer will use avio_open, no opened file should be provided by the caller.
    #define AVFMT_NOFILE        0x0001
    #define AVFMT_NEEDNUMBER    0x0002 /**< Needs '%d' in filename. */
    #define AVFMT_SHOW_IDS      0x0008 /**< Show format stream IDs numbers. */
    #define AVFMT_GLOBALHEADER  0x0040 /**< Format wants global header. */
    #define AVFMT_NOTIMESTAMPS  0x0080 /**< Format does not need / have any timestamps. */
    #define AVFMT_GENERIC_INDEX 0x0100 /**< Use generic index building code. */
    #define AVFMT_TS_DISCONT    0x0200 /**< Format allows timestamp discontinuities. Note, muxers always require valid (monotone) timestamps */
    #define AVFMT_VARIABLE_FPS  0x0400 /**< Format allows variable fps. */
    #define AVFMT_NODIMENSIONS  0x0800 /**< Format does not need width/height */
    #define AVFMT_NOSTREAMS     0x1000 /**< Format does not require any streams */
    #define AVFMT_NOBINSEARCH   0x2000 /**< Format does not allow to fall back on binary search via read_timestamp */
    #define AVFMT_NOGENSEARCH   0x4000 /**< Format does not allow to fall back on generic search */
    #define AVFMT_NO_BYTE_SEEK  0x8000 /**< Format does not allow seeking by bytes */
    #define AVFMT_ALLOW_FLUSH  0x10000 /**< Format allows flushing. If not set, the muxer will not receive a NULL packet in the write_packet function. */
    #define AVFMT_TS_NONSTRICT 0x20000 /**< Format does not require strictly
                                            increasing timestamps, but they must
                                            still be monotonic */
    #define AVFMT_TS_NEGATIVE  0x40000 /**< Format allows muxing negative
                                            timestamps. If not set the timestamp
                                            will be shifted in av_write_frame and
                                            av_interleaved_write_frame so they
                                            start from 0.
                                            The user or muxer can override this through
                                            AVFormatContext.avoid_negative_ts
                                            */
    
    #define AVFMT_SEEK_TO_PTS   0x4000000 /**< Seeking is based on PTS */
    

    avio_open()

    int avio_open(AVIOContext **s, const char *url, int flags);
    
    * Create and initialize a AVIOContext for accessing the
     * resource indicated by url.
     * @note When the resource indicated by url has been opened in
     * read+write mode, the AVIOContext can be used only for writing.
     *
     * @param s Used to return the pointer to the created AVIOContext.
     * In case of failure the pointed to value is set to NULL.
     * @param url resource to access
     * @param flags flags which control how the resource indicated by url
     * is to be opened
     * @return >= 0 in case of success, a negative value corresponding to an
     * AVERROR code in case of failure
    

    Create and initialize an AVIOContext for accessing the resource indicated by url.
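    Example (as used in the remuxer above):

    ret = avio_open(&ofmt_ctx->pb, pathOut, AVIO_FLAG_WRITE);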

    AVIOContext

    typedef struct AVIOContext
    
    
     * Bytestream IO Context.
     * New fields can be added to the end with minor version bumps.
     * Removal, reordering and changes to existing fields require a major
     * version bump.
     * sizeof(AVIOContext) must not be used outside libav*.
     *
     * @note None of the function pointers in AVIOContext should be called
     *       directly, they should only be set by the client application
     *       when implementing custom I/O. Normally these are set to the
     *       function pointers specified in avio_alloc_context()
     * 
    
    unsigned char *buffer;  /**< Start of the buffer. */
        int buffer_size;        /**< Maximum buffer size */
        unsigned char *buf_ptr; /**< Current position in the buffer */
        unsigned char *buf_end; /**< End of the data, may be less than
                                     buffer+buffer_size if the read function returned
                                     less data than requested, e.g. for streams where
                                     no more data has been received yet. */
    

    Bytestream I/O context; it defines the read/write buffer and related fields.

    AVIOContext *pb in AVFormatContext

    * I/O context.
         *
         * - demuxing: either set by the user before avformat_open_input() (then
         *             the user must close it manually) or set by avformat_open_input().
         * - muxing: set by the user before avformat_write_header(). The caller must
         *           take care of closing / freeing the IO context.
         *
         * Do NOT set this field if AVFMT_NOFILE flag is set in
         * iformat/oformat.flags. In such a case, the (de)muxer will handle
         * I/O in some other way and this field will be NULL.
    

    Demuxing: AVFormatContext.pb may either be set before avformat_open_input() (in which case the user must close it manually), or be set by avformat_open_input() itself.
    Muxing: AVFormatContext.pb must be set before avformat_write_header(), and the caller must take care of closing/freeing it afterwards.
    If AVFMT_NOFILE is set in ofmt->flags, do NOT set AVFormatContext.pb:
    in that case the (de)muxer handles I/O in some other way, and pb stays NULL.

    Example:

     if (!(ofmt->flags & AVFMT_NOFILE)) {
            /**
             * Create and initialize an AVIOContext for the file at the given url.
             */
            ret = avio_open(&ofmt_ctx->pb, pathOut, AVIO_FLAG_WRITE);
            if (ret < 0) {
                fprintf(stderr, "Could not open output file '%s'", pathOut);
                goto end;
            }
        }
    

    AVIO_FLAG_WRITE (write mode)

    /**
     * @name URL open modes
     * The flags argument to avio_open must be one of the following
     * constants, optionally ORed with other flags.
     * @{
     */
    #define AVIO_FLAG_READ  1                                      /**< read-only */
    #define AVIO_FLAG_WRITE 2                                      /**< write-only */
    #define AVIO_FLAG_READ_WRITE (AVIO_FLAG_READ|AVIO_FLAG_WRITE)  /**< read-write pseudo flag */
    

    The modes for opening a file: read-only, write-only, read-write.

    avformat_write_header()

    int avformat_write_header(AVFormatContext *s, AVDictionary **options);
    
    * Allocate the stream private data and write the stream header to
     * an output media file.
     *
     * @param s Media file handle, must be allocated with avformat_alloc_context().
     *          Its oformat field must be set to the desired output format;
     *          Its pb field must be set to an already opened AVIOContext.
     * @param options  An AVDictionary filled with AVFormatContext and muxer-private options.
     *                 On return this parameter will be destroyed and replaced with a dict containing
     *                 options that were not found. May be NULL.
     *
     * @return AVSTREAM_INIT_IN_WRITE_HEADER on success if the codec had not already been fully initialized in avformat_init,
     *         AVSTREAM_INIT_IN_INIT_OUTPUT  on success if the codec had already been fully initialized in avformat_init,
     *         negative AVERROR on failure.
     *
     * @see av_opt_find, av_dict_set, avio_open, av_oformat_next, avformat_init_output.
    

    Allocate the streams' private data and write the stream header to the output media file.
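    Example, plus a sketch of passing a muxer-private option via the options dictionary (the mp4 muxer's movflags option is used here as an illustration):

    AVDictionary *opts = NULL;
    av_dict_set(&opts, "movflags", "+faststart", 0);
    ret = avformat_write_header(ofmt_ctx, &opts);
    av_dict_free(&opts); // options the muxer did not recognize remain in opts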

    av_read_frame()

    int av_read_frame(AVFormatContext *s, AVPacket *pkt);
    
     * Return the next frame of a stream.
     * This function returns what is stored in the file, and does not validate
     * that what is there are valid frames for the decoder. It will split what is
     * stored in the file into frames and return one for each call. It will not
     * omit invalid data between valid frames so as to give the decoder the maximum
     * information possible for decoding.
     *
     * If pkt->buf is NULL, then the packet is valid until the next
     * av_read_frame() or until avformat_close_input(). Otherwise the packet
     * is valid indefinitely. In both cases the packet must be freed with
     * av_packet_unref when it is no longer needed. For video, the packet contains
     * exactly one frame. For audio, it contains an integer number of frames if each
     * frame has a known fixed size (e.g. PCM or ADPCM data). If the audio frames
     * have a variable size (e.g. MPEG audio), then it contains one frame.
     *
     * pkt->pts, pkt->dts and pkt->duration are always set to correct
     * values in AVStream.time_base units (and guessed if the format cannot
     * provide them). pkt->pts can be AV_NOPTS_VALUE if the video format
     * has B-frames, so it is better to rely on pkt->dts if you do not
     * decompress the payload.
     *
     * @return 0 if OK, < 0 on error or end of file
    

    Return the next frame of a stream.

    This function reads what is stored in the file into AVPacket pkt without validating that it is a valid frame for the decoder. It splits the file contents into frames and returns one per call; it does not omit invalid data between valid frames, so the decoder gets the maximum information for decoding.

    Returns 0 on success; a negative value on error or at end of file.

    Example:

    ret = av_read_frame(ifmt_ctx, &pkt);
            if (ret < 0)
                break;
    

    ffmpeg time bases explained

    av_rescale_q_rnd()

    int64_t av_rescale_q_rnd(int64_t a, AVRational bq, AVRational cq,
                             enum AVRounding rnd) av_const;
    
     * Rescale a 64-bit integer by 2 rational numbers with specified rounding.
     *
     * The operation is mathematically equivalent to `a * bq / cq`.
     *
     * @see av_rescale(), av_rescale_rnd(), av_rescale_q()
    

    Rescale a 64-bit integer by two rational numbers with the specified rounding.
    Mathematically the operation is equivalent to `a * bq / cq`.
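    A worked example with assumed values: rescaling a = 40 from time base 1/1000 to 1/90000 gives 40 * (1/1000) / (1/90000) = 3600.

    int64_t v = av_rescale_q_rnd(40, AVRational{1, 1000}, AVRational{1, 90000},
                                 static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    // v == 3600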

    enum AVRounding

    enum AVRounding {
        AV_ROUND_ZERO     = 0, ///< Round toward zero.
        AV_ROUND_INF      = 1, ///< Round away from zero.
        AV_ROUND_DOWN     = 2, ///< Round toward -infinity.
        AV_ROUND_UP       = 3, ///< Round toward +infinity.
        AV_ROUND_NEAR_INF = 5, ///< Round to nearest and halfway cases away from zero.
        /**
         * Flag telling rescaling functions to pass `INT64_MIN`/`MAX` through
           unchanged, avoiding special cases for #AV_NOPTS_VALUE.
         *
         * Unlike other values of the enumeration AVRounding, this value is a
         * bitmask that must be used in conjunction with another value of the
         * enumeration through a bitwise OR, in order to set behavior for normal
         * cases.
         *
         * @code{.c}
         * av_rescale_rnd(3, 1, 2, AV_ROUND_UP | AV_ROUND_PASS_MINMAX);
         * // Rescaling 3:
         * //     Calculating 3 * 1 / 2
         * //     3 / 2 is rounded up to 2
         * //     => 2
         *
         * av_rescale_rnd(AV_NOPTS_VALUE, 1, 2, AV_ROUND_UP | AV_ROUND_PASS_MINMAX);
         * // Rescaling AV_NOPTS_VALUE:
         * //     AV_NOPTS_VALUE == INT64_MIN
         * //     AV_NOPTS_VALUE is passed through
         * //     => AV_NOPTS_VALUE
         * @endcode
         */
        AV_ROUND_PASS_MINMAX = 8192,
    };
    

    The rounding modes:
    AV_ROUND_ZERO = 0: round toward zero.
    AV_ROUND_INF = 1: round away from zero.
    AV_ROUND_DOWN = 2: round toward -infinity.
    AV_ROUND_UP = 3: round toward +infinity.
    AV_ROUND_NEAR_INF = 5: round to nearest, with halfway cases away from zero.
    AV_ROUND_PASS_MINMAX = 8192: unlike the other values of the enum, this is a bitmask that must be combined (bitwise OR) with one of the modes above; it passes INT64_MIN/MAX through unchanged, which avoids special-casing AV_NOPTS_VALUE.

    AVRational time_base in AVStream

     * This is the fundamental unit of time (in seconds) in terms
          of which frame timestamps are represented.
         *
         * decoding: set by libavformat
           encoding: May be set by the caller before avformat_write_header() to
                    provide a hint to the muxer about the desired timebase. In
                    avformat_write_header(), the muxer will overwrite this field
                   with the timebase that will actually be used for the timestamps
                  written into the file (which may or may not be related to the
                   user-provided one, depending on the format).
    

    This is the fundamental unit of time (in seconds) in which frame timestamps are expressed.
    Decoding: set by libavformat.
    Encoding: may be set by the caller before avformat_write_header() as a hint to the muxer about the desired time base. In avformat_write_header() the muxer overwrites this field with the time base actually used for the timestamps written to the file (which may or may not be related to the user-provided one, depending on the format).

    On the ruler model, time_base is how many seconds one tick represents.
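    For example (assumed values):

    AVRational tb = {1, 90000};        // one tick = 1/90000 s
    int64_t pts = 180000;              // 180000 ticks
    double seconds = pts * av_q2d(tb); // 2.0 s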

    int64_t pts in AVPacket

      * Presentation timestamp in AVStream->time_base units; the time at which
          the decompressed packet will be presented to the user.
          Can be AV_NOPTS_VALUE if it is not stored in the file.
          pts MUST be larger or equal to dts as presentation cannot happen before
          decompression, unless one wants to view hex dumps. Some formats misuse
          the terms dts and pts/cts to mean something different. Such timestamps
          must be converted to true pts/dts before they are stored in AVPacket.
    

    The presentation timestamp in AVStream->time_base units: the time at which the decompressed packet is presented to the user. It is AV_NOPTS_VALUE if not stored in the file. pts must be greater than or equal to dts, because presentation cannot happen before decompression (unless one wants to look at hex dumps). Some formats misuse the terms dts and pts/cts to mean something different; such timestamps must be converted to true pts/dts before being stored in an AVPacket.

    On the ruler model, with time_base as seconds per tick, pts is the number of ticks occupied.
    Example:

      pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, 
      out_stream->time_base,static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    

    pkt.pts * in_stream->time_base / out_stream->time_base
    gives pkt's pts in out_stream's time base, i.e. the presentation timestamp of this packet in the output file,
    keeping the input and output presentation timestamps in sync.
    pkt is one packet read from the input file.

    int64_t dts in AVPacket

         Decompression timestamp in AVStream->time_base units; the time at which
          the packet is decompressed.
         Can be AV_NOPTS_VALUE if it is not stored in the file.
    

    The decompression timestamp in AVStream->time_base units: the time at which the packet is decompressed. It is AV_NOPTS_VALUE if not stored in the file.

    On the ruler model, dts is likewise a number of ticks.

    Example:

      pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
      static_cast<AVRounding>(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
    

    pkt.dts * in_stream->time_base / out_stream->time_base
    gives pkt's dts in out_stream's time base, i.e. the decompression timestamp of this packet in the output file,
    keeping the input and output decompression timestamps in sync.
    pkt is one packet read from the input file.

    int64_t duration in AVPacket

     Duration of this packet in AVStream->time_base units, 0 if unknown.
      Equals next_pts - this_pts in presentation order.
    

    The duration of this packet (the current frame) in AVStream->time_base units, 0 if unknown. It equals next_pts - this_pts in presentation order, i.e. the interval between the current frame and the next.

    On the ruler model, duration is how many ticks the frame lasts.

    Example:

    pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
    

    pkt.duration * in_stream->time_base / out_stream->time_base
    gives pkt's duration in out_stream's time base, i.e. the duration of this packet in the output file.
    pkt is one packet read from the input file.

    av_rescale_q()

    int64_t av_rescale_q(int64_t a, AVRational bq, AVRational cq) av_const;
    
    * Rescale a 64-bit integer by 2 rational numbers.
     
      The operation is mathematically equivalent to `a * bq / cq`.
    
     This function is equivalent to av_rescale_q_rnd() with #AV_ROUND_NEAR_INF.
     
     * @see av_rescale(), av_rescale_rnd(), av_rescale_q_rnd()
    

    Rescale a 64-bit integer by two rational numbers; mathematically the operation is equivalent to `a * bq / cq`. This function is equivalent to av_rescale_q_rnd() with AV_ROUND_NEAR_INF (round to nearest).

    ————————————————
    Copyright notice
    The excerpt below is from an original article by CSDN blogger 「bixinwei」, licensed under CC 4.0 BY-SA; reproduction must credit the original link and this notice.
    Original: https://blog.csdn.net/bixinwei22/article/details/78770090

    PTS: Presentation Time Stamp. PTS measures when a decoded video frame should be displayed.
    DTS: Decode Time Stamp. DTS marks when the bitstream read into memory should be fed into the decoder.

    In other words, pts says when a frame starts being displayed; dts says when the data starts being decoded.

    How should "when" be understood here? Suppose some frame starts being displayed at the 10-second mark. What is its pts: 10? 10 s? Or neither?

    To answer that, we need FFmpeg's notion of a time base, time_base, which is also a way of measuring time.
    If one second is divided into 25 equal parts (think of it as a ruler), each tick is 1/25 second, and time_base = {1,25}.
    If one second is divided into 90000 parts, each tick is 1/90000 second, and time_base = {1,90000}.
    The time base is simply how many seconds one tick represents.
    The pts value is the number of ticks occupied. Its unit is not seconds but ticks; only pts and time_base together express an actual time.
    It is like being told that an object spans 20 ticks on some ruler: unless you also know the ruler's scale, you cannot compute how many centimeters each tick is, so you cannot know the object's length.
    pts = 20 ticks
    time_base = {1,10}: each tick is 1/10 cm
    so the length = pts * time_base = 20 * 1/10 = 2 cm

    In FFmpeg, av_q2d(time_base) = seconds per tick.
    So pts * av_q2d(time_base) is the frame's presentation timestamp in seconds.

    Now to time-base conversion, and why it is needed.
    First, different container formats use different time bases. Moreover, across a transcoding pipeline, data in different states uses different time bases. Take the mpegts container at 25 fps (video only; audio is similar with small differences): uncompressed data (YUV or similar), represented by AVFrame, uses AVCodecContext's time_base, AVRational{1,25};
    compressed data (represented by AVPacket) uses AVStream's time_base, AVRational{1,90000}.
    Because the data states differ and their time bases differ, we must convert: what occupies 10 ticks at 1/25 occupies how many ticks at 1/90000? That is the pts conversion.
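    A concrete sketch of that conversion:

    // 10 ticks at time base {1,25} are 10 * (1/25) = 0.4 s,
    // which is 0.4 / (1/90000) = 36000 ticks at {1,90000}:
    int64_t out_pts = av_rescale_q(10, AVRational{1, 25}, AVRational{1, 90000}); // 36000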

    Computing a frame's position in the video from its pts:
    timestamp(seconds) = pts * av_q2d(st->time_base)

    duration has the same unit as pts: it says how many ticks the current frame lasts, or equivalently how many ticks separate two consecutive frames. Keep the units straight:
    pts: number of ticks
    av_q2d(st->time_base): seconds per tick

    Computing the video length:
    time(seconds) = st->duration * av_q2d(st->time_base)

    Converting between ffmpeg's internal time and standard time:
    internal timestamp = AV_TIME_BASE * time(seconds)
    AV_TIME_BASE_Q = 1/AV_TIME_BASE

    The function av_rescale_q(int64_t a, AVRational bq, AVRational cq)
    computes a * bq / cq to move a timestamp from one time base to another. Prefer this function for time-base conversion, since it protects against overflow.
    It answers: a ticks under bq correspond to how many ticks under cq?

    Computing audio pts:
    The audio sample_rate is samples per second, i.e. how many samples are captured each second.
    At 44100 Hz, 44100 samples are captured per second,
    so each sample lasts 1/44100 second.

    An audio AVFrame holds nb_samples samples, so one AVFrame lasts
    duration_s = nb_samples * (1/44100) seconds in standard time.
    Converted to the AVStream time base:
    duration = duration_s / av_q2d(st->time_base)
    Since the denominator of st->time_base usually equals the sample rate, duration = nb_samples,
    and pts = n * duration = n * nb_samples.

    Additionally:
    next_pts - current_pts = current_duration, so by the arithmetic-series formula a_n = a_1 + (n-1)d we get pts = n * d.
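    A worked audio example with assumed values (44100 Hz, 1024 samples per frame, st->time_base = {1, 44100}):

    int nb_samples = 1024;
    AVRational tb = {1, 44100};          // the denominator equals the sample rate
    int64_t duration = nb_samples;       // 1024 ticks = 1024/44100 ≈ 0.023 s
    int64_t pts_frame3 = 3 * duration;   // the frame with n = 3 starts at 3072 ticks
    double start_seconds = pts_frame3 * av_q2d(tb); // ≈ 0.070 s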

    av_interleaved_write_frame()

    int av_interleaved_write_frame(AVFormatContext *s, AVPacket *pkt);
    
    * Write a packet to an output media file ensuring correct interleaving.
     *
     This function will buffer the packets internally as needed to make sure the
     packets in the output file are properly interleaved in the order of
     increasing dts. Callers doing their own interleaving should call
      av_write_frame() instead of this function.
     
     Using this function instead of av_write_frame() can give muxers advance
      knowledge of future packets, improving e.g. the behaviour of the mp4
      muxer for VFR content in fragmenting mode.
     *
     * @param s media file handle
     * @param pkt The packet containing the data to be written.
     *            <br>
     *            If the packet is reference-counted, this function will take
     *            ownership of this reference and unreference it later when it sees
     *            fit.
     *            The caller must not access the data through this reference after
     *            this function returns. If the packet is not reference-counted,
     *            libavformat will make a copy.
     *            <br>
     *            This parameter can be NULL (at any time, not just at the end), to
     *            flush the interleaving queues.
     *            <br>
     *            Packet's @ref AVPacket.stream_index "stream_index" field must be
     *            set to the index of the corresponding stream in @ref
     *            AVFormatContext.streams "s->streams".
     *            <br>
     *            The timestamps (@ref AVPacket.pts "pts", @ref AVPacket.dts "dts")
     *            must be set to correct values in the stream's timebase (unless the
     *            output format is flagged with the AVFMT_NOTIMESTAMPS flag, then
     *            they can be set to AV_NOPTS_VALUE).
     *            The dts for subsequent packets in one stream must be strictly
     *            increasing (unless the output format is flagged with the
     *            AVFMT_TS_NONSTRICT, then they merely have to be nondecreasing).
     *            @ref AVPacket.duration "duration") should also be set if known.
     *
     * @return 0 on success, a negative AVERROR on error. Libavformat will always
     *         take care of freeing the packet, even if this function fails.
     *
     * @see av_write_frame(), AVFormatContext.max_interleave_delta
    

    Write a packet to the output media file, ensuring correct interleaving.
    This function buffers packets internally as needed so that packets in the output file are properly interleaved in order of increasing dts. Callers doing their own interleaving should call av_write_frame() instead. Using this function rather than av_write_frame() gives muxers advance knowledge of future packets, improving e.g. the behaviour of the mp4 muxer for VFR content in fragmenting mode.

    Interleaving (interleaved memory access)

    A technique for speeding up memory access. For example, the odd and even addresses of memory are placed in separate banks, so that while the current byte is being refreshed, access to the next byte is not blocked.
    Interleaving mainly compensates for the relatively slow read/write speed of DRAM-type memory. Every read or write of a memory block must wait for the block's ready signal before the next byte can be accessed; by spreading consecutive data across several blocks, multiple blocks' ready signals can be awaited at once, which increases throughput.

    av_write_frame()

    int av_write_frame(AVFormatContext *s, AVPacket *pkt);
    
    * Write a packet to an output media file.
     
      This function passes the packet directly to the muxer, without any buffering
      or reordering. The caller is responsible for correctly interleaving the
      packets if the format requires it. Callers that want libavformat to handle
      the interleaving should call av_interleaved_write_frame() instead of this
      function.
     *
     * @param s media file handle
     * @param pkt The packet containing the data to be written. Note that unlike
     *            av_interleaved_write_frame(), this function does not take
     *            ownership of the packet passed to it (though some muxers may make
     *            an internal reference to the input packet).
     *            <br>
     *            This parameter can be NULL (at any time, not just at the end), in
     *            order to immediately flush data buffered within the muxer, for
     *            muxers that buffer up data internally before writing it to the
     *            output.
     *            <br>
     *            Packet's @ref AVPacket.stream_index "stream_index" field must be
     *            set to the index of the corresponding stream in @ref
     *            AVFormatContext.streams "s->streams".
     *            <br>
     *            The timestamps (@ref AVPacket.pts "pts", @ref AVPacket.dts "dts")
     *            must be set to correct values in the stream's timebase (unless the
     *            output format is flagged with the AVFMT_NOTIMESTAMPS flag, then
     *            they can be set to AV_NOPTS_VALUE).
     *            The dts for subsequent packets passed to this function must be strictly
     *            increasing when compared in their respective timebases (unless the
     *            output format is flagged with the AVFMT_TS_NONSTRICT, then they
     *            merely have to be nondecreasing).  @ref AVPacket.duration
     *            "duration") should also be set if known.
     * @return < 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush
     *
     * @see av_interleaved_write_frame()
    

    Write a packet to the output media file. This function passes the packet directly to the muxer, without any buffering or reordering. The caller is responsible for correctly interleaving the packets if the format requires it; callers who want libavformat to handle the interleaving should call av_interleaved_write_frame() instead.
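    A sketch of direct writing, only sensible when the caller guarantees dts-ordered, properly interleaved packets (e.g. a single-stream output):

    ret = av_write_frame(ofmt_ctx, &pkt); // passed straight to the muxer, no buffering
    if (ret < 0)
        fprintf(stderr, "Error writing packet\n");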

    Questions, corrections and criticism are always welcome.

    Github:https://github.com/AnJiaoDe

    Jianshu: https://www.jianshu.com/u/b8159d455c69

    CSDN:https://blog.csdn.net/confusing_awakening

    ffmpeg beginner tutorial: https://www.jianshu.com/p/042c7847bd8a
