C++ ffmpeg library framerate incorrect when muxing



I am trying to create a function which will combine an audio file and a video file and output them to an mp4. I've managed to do so successfully, except that the output is not at the correct framerate. It's a very slight difference from the original: 30.13 fps where it should be exactly 30. When I combine the same files with the ffmpeg command-line program, the result is exactly 30 as it should be.

I'm fairly sure it has something to do with the dts/pts correction applied when receiving out-of-order data, but the ffmpeg program does this too in a similar manner, so I'm not sure where to go from here. I've looked at the ffmpeg source code and copied some of its dts correction logic, and still no luck. What am I doing wrong here?



bool mux_audio_video(const char* audio_filename, const char* video_filename, const char* output_filename)
{
    av_register_all();

    AVOutputFormat* out_format = NULL;
    AVFormatContext* audio_context = NULL, *video_context = NULL, *output_context = NULL;
    int video_index_in = -1, audio_index_in = -1;
    int video_index_out = -1, audio_index_out = -1;

    if(avformat_open_input(&audio_context, audio_filename, 0, 0) < 0)
        return false;

    if(avformat_find_stream_info(audio_context, 0) < 0)
    {
        avformat_close_input(&audio_context);
        return false;
    }

    if(avformat_open_input(&video_context, video_filename, 0, 0) < 0)
    {
        avformat_close_input(&audio_context);
        return false;
    }

    if(avformat_find_stream_info(video_context, 0) < 0)
    {
        avformat_close_input(&audio_context);
        avformat_close_input(&video_context);
        return false;
    }

    if(avformat_alloc_output_context2(&output_context, av_guess_format("mp4", NULL, NULL), NULL, output_filename) < 0)
    {
        avformat_close_input(&audio_context);
        avformat_close_input(&video_context);
        return false;
    }

    out_format = output_context->oformat;

    //find first audio stream in the audio file input
    for(size_t i = 0; i < audio_context->nb_streams; ++i)
    {
        if(audio_context->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO)
        {
            audio_index_in = i;

            AVStream* in_stream = audio_context->streams[i];
            AVCodec* codec = avcodec_find_encoder(in_stream->codecpar->codec_id);
            AVCodecContext* tmp = avcodec_alloc_context3(codec);
            avcodec_parameters_to_context(tmp, in_stream->codecpar);
            AVStream* out_stream = avformat_new_stream(output_context, codec);
            audio_index_out = out_stream->index;
            if(output_context->oformat->flags & AVFMT_GLOBALHEADER)
                tmp->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

            //copy the codec parameters to the new output stream (stream copy, no re-encoding)
            tmp->codec_tag = 0;
            avcodec_parameters_from_context(out_stream->codecpar, tmp);
            avcodec_free_context(&tmp);

            break;
        }
    }

    //find first video stream in the video file input
    for(size_t i = 0; i < video_context->nb_streams; ++i)
    {
        if(video_context->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            video_index_in = i;

            AVStream* in_stream = video_context->streams[i];
            AVCodec* codec = avcodec_find_encoder(in_stream->codecpar->codec_id);
            AVCodecContext* tmp = avcodec_alloc_context3(codec);
            avcodec_parameters_to_context(tmp, in_stream->codecpar);
            AVStream* out_stream = avformat_new_stream(output_context, codec);
            video_index_out = out_stream->index;
            if(output_context->oformat->flags & AVFMT_GLOBALHEADER)
                tmp->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;

            //copy the codec parameters to the new output stream (stream copy, no re-encoding)
            tmp->codec_tag = 0;
            avcodec_parameters_from_context(out_stream->codecpar, tmp);
            avcodec_free_context(&tmp);

            break;
        }
    }

    //setup output
    if(!(out_format->flags & AVFMT_NOFILE))
    {
        if(avio_open(&output_context->pb, output_filename, AVIO_FLAG_WRITE) < 0)
        {
            avformat_free_context(output_context);
            avformat_close_input(&audio_context);
            avformat_close_input(&video_context);
            return false;
        }
    }

    if(avformat_write_header(output_context, NULL) < 0)
    {
        if(!(out_format->flags & AVFMT_NOFILE))
            avio_close(output_context->pb);

        avformat_free_context(output_context);
        avformat_close_input(&audio_context);
        avformat_close_input(&video_context);
        return false;
    }

    int64_t video_pts = 0, audio_pts = 0;
    int64_t last_video_dts = 0, last_audio_dts = 0;

    while(true)
    {
        AVPacket packet;
        av_init_packet(&packet);
        packet.data = NULL;
        packet.size = 0;
        int64_t* last_dts;
        AVFormatContext* in_context;
        int stream_index = 0;
        AVStream* in_stream, *out_stream;

        //Read in a frame from the next stream (whichever is currently behind in time)
        if(av_compare_ts(video_pts, video_context->streams[video_index_in]->time_base,
                         audio_pts, audio_context->streams[audio_index_in]->time_base) <= 0)
        {
            //video
            last_dts = &last_video_dts;
            in_context = video_context;
            stream_index = video_index_out;

            if(av_read_frame(in_context, &packet) >= 0)
            {
                do
                {
                    if(packet.stream_index == video_index_in)
                    {
                        video_pts = packet.pts;
                        break;
                    }
                    av_packet_unref(&packet);
                } while(av_read_frame(in_context, &packet) >= 0);
            }
            else
            {
                break;
            }
        }
        else
        {
            //audio
            last_dts = &last_audio_dts;
            in_context = audio_context;
            stream_index = audio_index_out;

            if(av_read_frame(in_context, &packet) >= 0)
            {
                do
                {
                    if(packet.stream_index == audio_index_in)
                    {
                        audio_pts = packet.pts;
                        break;
                    }
                    av_packet_unref(&packet);
                } while(av_read_frame(in_context, &packet) >= 0);
            }
            else
            {
                break;
            }
        }

        in_stream = in_context->streams[packet.stream_index];
        out_stream = output_context->streams[stream_index];

        av_packet_rescale_ts(&packet, in_stream->time_base, out_stream->time_base);

        //if dts is out of order, ffmpeg throws an error. So manually fix. Similar to what ffmpeg does in ffmpeg.c
        if(packet.dts < (*last_dts + !(output_context->oformat->flags & AVFMT_TS_NONSTRICT)) && packet.dts != AV_NOPTS_VALUE && (*last_dts) != AV_NOPTS_VALUE)
        {
            int64_t next_dts = (*last_dts) + 1;
            if(packet.pts >= packet.dts && packet.pts != AV_NOPTS_VALUE)
                packet.pts = FFMAX(packet.pts, next_dts);

            if(packet.pts == AV_NOPTS_VALUE)
                packet.pts = next_dts;

            packet.dts = next_dts;
        }

        (*last_dts) = packet.dts;

        packet.pos = -1;
        packet.stream_index = stream_index;

        //output packet
        if(av_interleaved_write_frame(output_context, &packet) < 0)
            break;

        av_packet_unref(&packet);
    }

    av_write_trailer(output_context);

    //cleanup
    if(!(out_format->flags & AVFMT_NOFILE))
        avio_close(output_context->pb);

    avformat_free_context(output_context);
    avformat_close_input(&audio_context);
    avformat_close_input(&video_context);
    return true;
}
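
One way to measure the reported frame rate of the muxed file is to read it back with libavformat and print each video stream's avg_frame_rate. The sketch below is a minimal, self-contained check, not part of the muxing function above; check_output_frame_rate is just an illustrative name, and it assumes avg_frame_rate is the field the "30.13" figure refers to.

// Minimal sketch: read the muxed file back and print the frame rate that
// libavformat reports for its video stream(s).
extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/rational.h>   // av_q2d
}
#include <cstdio>

static void check_output_frame_rate(const char* filename)
{
    // On older FFmpeg versions av_register_all() must have been called
    // already (as it is in mux_audio_video above).
    AVFormatContext* ctx = NULL;
    if (avformat_open_input(&ctx, filename, NULL, NULL) < 0)
        return;
    if (avformat_find_stream_info(ctx, NULL) < 0) {
        avformat_close_input(&ctx);
        return;
    }
    for (unsigned i = 0; i < ctx->nb_streams; ++i) {
        AVStream* st = ctx->streams[i];
        if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
            std::printf("stream %u: avg_frame_rate = %.2f fps\n",
                        i, av_q2d(st->avg_frame_rate));
    }
    avformat_close_input(&ctx);
}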










c++ ffmpeg libav






edited Mar 9 at 0:16
asked Mar 8 at 23:35
therex
1 Answer
































I found the issue. I just needed to initialize last_video_dts and last_audio_dts to the minimum value for int64_t instead of 0.

#include <limits>  // needed for std::numeric_limits
int64_t last_video_dts, last_audio_dts;
last_video_dts = last_audio_dts = std::numeric_limits<int64_t>::lowest();

Now the output is basically identical to that of the ffmpeg program.

Edit:

As mentioned by the kamilz, it is better and more portable to use AV_NOPTS_VALUE:

int64_t last_video_dts, last_audio_dts;
last_video_dts = last_audio_dts = AV_NOPTS_VALUE;
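
For what it's worth, the likely mechanism: with last_video_dts/last_audio_dts starting at 0, the very first packet of each stream (dts 0 after rescaling) already fails the packet.dts < (*last_dts + 1) check, so its dts and pts get bumped by one tick, which appears to be enough to skew the average frame rate the muxer reports. With AV_NOPTS_VALUE, the (*last_dts) != AV_NOPTS_VALUE part of the guard lets the first packet pass through untouched. Below is a minimal sketch of just that guard; corrected_dts is a hypothetical helper (not a libav function), and it assumes the strict case where the output format does not set AVFMT_TS_NONSTRICT, as with the mp4 muxer.

// Minimal sketch: why a zero-initialized last_dts shifts the first packet
// while AV_NOPTS_VALUE leaves it alone. corrected_dts mirrors the guard in
// the muxing loop from the question (strict monotonicity assumed).
extern "C" {
#include <libavutil/avutil.h>   // AV_NOPTS_VALUE
}
#include <cstdint>
#include <cstdio>

// Returns the dts that would actually be written for dts_in, given the dts
// of the previously written packet of the same stream.
static int64_t corrected_dts(int64_t dts_in, int64_t last_dts)
{
    if (dts_in != AV_NOPTS_VALUE && last_dts != AV_NOPTS_VALUE &&
        dts_in < last_dts + 1)
        return last_dts + 1;   // the "manual fix" branch fires
    return dts_in;             // packet passes through untouched
}

int main()
{
    // The first packet of a stream typically has dts == 0 after rescaling.
    std::printf("last_dts = 0              -> first dts becomes %lld\n",
                (long long)corrected_dts(0, 0));              // prints 1
    std::printf("last_dts = AV_NOPTS_VALUE -> first dts stays %lld\n",
                (long long)corrected_dts(0, AV_NOPTS_VALUE)); // prints 0
    return 0;
}

Only the libavutil header is needed here; AV_NOPTS_VALUE is a macro, so nothing has to be linked.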





edited Mar 16 at 18:26
answered Mar 10 at 14:12
therex
• This doesn't look portable; perhaps what you need is AV_NOPTS_VALUE.
  – the kamilz
  Mar 13 at 7:52

• @thekamilz thanks for the heads up. Edited the answer to reflect this new information.
  – therex
  Mar 16 at 18:29
















