【FFmpeg】The av_write_frame function

Contents

  • 1. av_write_frame
    • 1.1 Writing the pkt (write_packets_common)
      • 1.1.1 Checking the pkt (check_packet)
      • 1.1.2 Preparing the input pkt (prepare_input_packet)
      • 1.1.3 Checking the bitstream (check_bitstream)
      • 1.1.4 Writing the pkt
        • 1.1.4.1 Writing the pkt via write_packets_from_bsfs
        • 1.1.4.2 Writing the pkt directly (write_packet_common)
  • 2. Summary

Related FFmpeg notes:

Example projects:
【FFmpeg】Calling the ffmpeg libraries for H.264 software encoding
【FFmpeg】Calling the ffmpeg libraries for H.264 software decoding
【FFmpeg】Calling the ffmpeg libraries for RTMP streaming (push and pull)
【FFmpeg】Calling the ffmpeg libraries to decode and render with SDL2

Flow analysis:
【FFmpeg】A brief analysis of the main functions on the encoding path
【FFmpeg】A brief analysis of the main functions on the decoding path

Structure analysis:
【FFmpeg】The AVCodec structure
【FFmpeg】The AVCodecContext structure
【FFmpeg】The AVStream structure
【FFmpeg】The AVFormatContext structure
【FFmpeg】The AVIOContext structure
【FFmpeg】The AVPacket structure

Function analysis:
【FFmpeg】The avformat_open_input function
【FFmpeg】The avformat_find_stream_info function
【FFmpeg】The avformat_alloc_output_context2 function
【FFmpeg】The avio_open2 function
【FFmpeg】The avformat_write_header function

While reading Lei Xiaohua's articles, I noticed that his function analysis covers av_write_frame, yet his actual example projects use av_interleaved_write_frame. Why the difference? What do the two functions have in common, how do they differ, and which one is the better choice for writing frames in a given format? This post first looks at how av_write_frame is implemented; note that the av_write_frame in FFmpeg 7.0 differs somewhat from older versions.

The internal call chain of av_write_frame is shown below; its core is the call to write_packet.
(figure: internal call graph of av_write_frame)

1.av_write_frame

The function is declared in libavformat/avformat.h, as shown below:

/**
 * Write a packet to an output media file.
 *
 * This function passes the packet directly to the muxer, without any buffering
 * or reordering. The caller is responsible for correctly interleaving the
 * packets if the format requires it. Callers that want libavformat to handle
 * the interleaving should call av_interleaved_write_frame() instead of this
 * function.
 *
 * @param s media file handle
 * @param pkt The packet containing the data to be written. Note that unlike
 *            av_interleaved_write_frame(), this function does not take
 *            ownership of the packet passed to it (though some muxers may make
 *            an internal reference to the input packet).
 *            <br>
 *            This parameter can be NULL (at any time, not just at the end), in
 *            order to immediately flush data buffered within the muxer, for
 *            muxers that buffer up data internally before writing it to the
 *            output.
 *            <br>
 *            Packet's @ref AVPacket.stream_index "stream_index" field must be
 *            set to the index of the corresponding stream in @ref
 *            AVFormatContext.streams "s->streams".
 *            <br>
 *            The timestamps (@ref AVPacket.pts "pts", @ref AVPacket.dts "dts")
 *            must be set to correct values in the stream's timebase (unless the
 *            output format is flagged with the AVFMT_NOTIMESTAMPS flag, then
 *            they can be set to AV_NOPTS_VALUE).
 *            The dts for subsequent packets passed to this function must be strictly
 *            increasing when compared in their respective timebases (unless the
 *            output format is flagged with the AVFMT_TS_NONSTRICT, then they
 *            merely have to be nondecreasing).  @ref AVPacket.duration
 *            "duration") should also be set if known.
 * @return < 0 on error, = 0 if OK, 1 if flushed and there is no more data to flush
 *
 * @see av_interleaved_write_frame()
 */
// Writes a packet to the output media file.
// The packet is passed straight to the muxer with no buffering or reordering;
//		if the format requires interleaving, the caller must interleave the packets
//		itself, or call av_interleaved_write_frame() instead
// @param :
// 1. Unlike av_interleaved_write_frame(), this function does not take ownership of pkt
// 2. pkt may be NULL (at any time, not just at the end) to immediately flush data
//		buffered inside muxers that buffer internally before writing to the output
// 3. AVPacket.stream_index must be set to the index of the corresponding stream
//		in AVFormatContext.streams
// 4. The pkt's pts and dts must be set to correct values in the stream's timebase
// 5. The dts of successive packets passed to this function must be strictly
//		increasing when compared in their respective timebases
// @return :
// < 0 on error; 0 on success; 1 if flushed and there is no more data to flush
int av_write_frame(AVFormatContext *s, AVPacket *pkt);

av_write_frame is defined in libavformat/mux.c:

int av_write_frame(AVFormatContext *s, AVPacket *in)
{
    FFFormatContext *const si = ffformatcontext(s);
    AVPacket *pkt = si->parse_pkt;
    int ret;

    // If there is no input pkt and the muxer allows flushing, pass a NULL pkt
    // down to flush it; otherwise return 1 immediately
    if (!in) {
        if (ffofmt(s->oformat)->flags_internal & FF_OFMT_FLAG_ALLOW_FLUSH) {
            ret = ffofmt(s->oformat)->write_packet(s, NULL);
            flush_if_needed(s);
            if (ret >= 0 && s->pb && s->pb->error < 0)
                ret = s->pb->error;
            return ret;
        }
        return 1;
    }

    // AV_PKT_FLAG_UNCODED_FRAME means the packet carries an uncoded frame;
    // in that case use it directly
    if (in->flags & AV_PKT_FLAG_UNCODED_FRAME) {
        pkt = in;
    } else {
        /* We don't own in, so we have to make sure not to modify it.
         * (ff_write_chained() relies on this fact.)
         * The following avoids copying in's data unnecessarily.
         * Copying side data is unavoidable as a bitstream filter
         * may change it, e.g. free it on errors. */
        pkt->data = in->data;
        pkt->size = in->size;
        // Copy properties such as pts, dts, duration and side data from in to pkt
        ret = av_packet_copy_props(pkt, in);
        if (ret < 0)
            return ret;
        if (in->buf) {
            // Create a new reference to in's buffer for pkt->buf
            pkt->buf = av_buffer_ref(in->buf);
            if (!pkt->buf) {
                ret = AVERROR(ENOMEM);
                goto fail;
            }
        }
    }

    // Write the pkt
    ret = write_packets_common(s, pkt, 0/*non-interleaved*/);
fail:
    // Uncoded frames using the noninterleaved codepath are also freed here
    av_packet_unref(pkt);
    return ret;
}

1.1 Writing the pkt (write_packets_common)

write_packets_common is defined in libavformat/mux.c:

static int write_packets_common(AVFormatContext *s, AVPacket *pkt, int interleaved)
{
    AVStream *st;
    FFStream *sti;
    // 1. Check the pkt
    int ret = check_packet(s, pkt);
    if (ret < 0)
        return ret;
    st  = s->streams[pkt->stream_index];
    sti = ffstream(st);
    // 2. Prepare the input pkt
    ret = prepare_input_packet(s, st, pkt);
    if (ret < 0)
        return ret;
    // 3. Check the bitstream
    ret = check_bitstream(s, sti, pkt);
    if (ret < 0)
        return ret;
    // 4. Write the packet
    // FFStream's bsfc field is the bitstream filter context;
    // if a bitstream filter is present, write the pkt through
    // write_packets_from_bsfs, otherwise through write_packet_common
    if (sti->bsfc) {
        return write_packets_from_bsfs(s, st, pkt, interleaved);
    } else {
        return write_packet_common(s, st, pkt, interleaved);
    }
}

The main steps of the function are:
(1) Check the pkt (check_packet)
(2) Prepare the input pkt (prepare_input_packet)
(3) Check the bitstream (check_bitstream)
(4) Write the pkt, depending on the situation:
  (a) if a bitstream filter context (bsfc) is present, use write_packets_from_bsfs, which writes the packets after they pass through the bitstream filter
  (b) otherwise, use write_packet_common

1.1.1 Checking the pkt (check_packet)

static int check_packet(AVFormatContext *s, AVPacket *pkt)
{
    // Check the stream_index
    if (pkt->stream_index < 0 || pkt->stream_index >= s->nb_streams) {
        av_log(s, AV_LOG_ERROR, "Invalid packet stream index: %d\n",
               pkt->stream_index);
        return AVERROR(EINVAL);
    }
    // Check the codec_type
    if (s->streams[pkt->stream_index]->codecpar->codec_type == AVMEDIA_TYPE_ATTACHMENT) {
        av_log(s, AV_LOG_ERROR, "Received a packet for an attachment stream.\n");
        return AVERROR(EINVAL);
    }
    return 0;
}

1.1.2 Preparing the input pkt (prepare_input_packet)

static int prepare_input_packet(AVFormatContext *s, AVStream *st, AVPacket *pkt)
{
    FFStream *const sti = ffstream(st);
#if !FF_API_COMPUTE_PKT_FIELDS2 // not taken by default
    /* sanitize the timestamps */
    if (!(s->oformat->flags & AVFMT_NOTIMESTAMPS)) {
        /* when there is no reordering (so dts is equal to pts), but
         * only one of them is set, set the other as well */
        if (!sti->reorder) {
            if (pkt->pts == AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE)
                pkt->pts = pkt->dts;
            if (pkt->dts == AV_NOPTS_VALUE && pkt->pts != AV_NOPTS_VALUE)
                pkt->dts = pkt->pts;
        }

        /* check that the timestamps are set */
        if (pkt->pts == AV_NOPTS_VALUE || pkt->dts == AV_NOPTS_VALUE) {
            av_log(s, AV_LOG_ERROR,
                   "Timestamps are unset in a packet for stream %d\n", st->index);
            return AVERROR(EINVAL);
        }

        /* check that the dts are increasing (or at least non-decreasing,
         * if the format allows it */
        if (sti->cur_dts != AV_NOPTS_VALUE &&
            ((!(s->oformat->flags & AVFMT_TS_NONSTRICT) && sti->cur_dts >= pkt->dts) ||
             sti->cur_dts > pkt->dts)) {
            av_log(s, AV_LOG_ERROR,
                   "Application provided invalid, non monotonically increasing "
                   "dts to muxer in stream %d: %" PRId64 " >= %" PRId64 "\n",
                   st->index, sti->cur_dts, pkt->dts);
            return AVERROR(EINVAL);
        }

        if (pkt->pts < pkt->dts) {
            av_log(s, AV_LOG_ERROR, "pts %" PRId64 " < dts %" PRId64 " in stream %d\n",
                   pkt->pts, pkt->dts, st->index);
            return AVERROR(EINVAL);
        }
    }
#endif
    /* update flags */
    if (sti->is_intra_only)
        pkt->flags |= AV_PKT_FLAG_KEY;

    if (!pkt->data && !pkt->side_data_elems) {
        /* Such empty packets signal EOS for the BSF API; so sanitize
         * the packet by allocating data of size 0 (+ padding). */
        av_buffer_unref(&pkt->buf);
        return av_packet_make_refcounted(pkt);
    }
    return 0;
}

1.1.3 Checking the bitstream (check_bitstream)

The function is located in libavformat/mux.c:

static int check_bitstream(AVFormatContext *s, FFStream *sti, AVPacket *pkt)
{
    int ret;

    if (!(s->flags & AVFMT_FLAG_AUTO_BSF))
        return 1;

    if (ffofmt(s->oformat)->check_bitstream) {
        if (!sti->bitstream_checked) { // if the bitstream has not been checked yet, check it
            if ((ret = ffofmt(s->oformat)->check_bitstream(s, &sti->pub, pkt)) < 0)
                return ret;
            else if (ret == 1)
                sti->bitstream_checked = 1;
        }
    }
    return 1;
}

The bitstream check here depends on the concrete output format; taking FLV as an example:

const FFOutputFormat ff_flv_muxer = {
    .p.name         = "flv",
    .p.long_name    = NULL_IF_CONFIG_SMALL("FLV (Flash Video)"),
    .p.mime_type    = "video/x-flv",
    .p.extensions   = "flv",
    .priv_data_size = sizeof(FLVContext),
    .p.audio_codec  = CONFIG_LIBMP3LAME ? AV_CODEC_ID_MP3 : AV_CODEC_ID_ADPCM_SWF,
    .p.video_codec  = AV_CODEC_ID_FLV1,
    .init           = flv_init,
    .write_header   = flv_write_header,
    .write_packet   = flv_write_packet,
    .write_trailer  = flv_write_trailer,
    .deinit         = flv_deinit,
    .check_bitstream= flv_check_bitstream,
    .p.codec_tag    = (const AVCodecTag* const []) {
                          flv_video_codec_ids, flv_audio_codec_ids, 0
                      },
    .p.flags        = AVFMT_GLOBALHEADER | AVFMT_VARIABLE_FPS |
                      AVFMT_TS_NONSTRICT,
    .p.priv_class   = &flv_muxer_class,
};

flv_check_bitstream is then called to check the bitstream, as shown below:

static int flv_check_bitstream(AVFormatContext *s, AVStream *st,
                               const AVPacket *pkt)
{
    // AAC: ADTS framing needs to be converted
    if (st->codecpar->codec_id == AV_CODEC_ID_AAC) {
        if (pkt->size > 2 && (AV_RB16(pkt->data) & 0xfff0) == 0xfff0)
            return ff_stream_add_bitstream_filter(st, "aac_adtstoasc", NULL);
    }
    // H.264 / HEVC / AV1 / MPEG-4: add the matching bitstream filter
    // when the stream has no extradata yet
    if (!st->codecpar->extradata_size &&
        (st->codecpar->codec_id == AV_CODEC_ID_H264 ||
         st->codecpar->codec_id == AV_CODEC_ID_HEVC ||
         st->codecpar->codec_id == AV_CODEC_ID_AV1 ||
         st->codecpar->codec_id == AV_CODEC_ID_MPEG4))
        return ff_stream_add_bitstream_filter(st, "extract_extradata", NULL);
    return 1;
}

The main job of ff_stream_add_bitstream_filter is to attach a bitstream filter to the FFStream. A bitstream filter operates on an already-encoded bitstream without decoding it; it is typically used to convert or adjust the encoded data so that downstream consumers can handle it correctly. ff_stream_add_bitstream_filter is defined as follows:

int ff_stream_add_bitstream_filter(AVStream *st, const char *name, const char *args)
{
    int ret;
    const AVBitStreamFilter *bsf;
    FFStream *const sti = ffstream(st);
    AVBSFContext *bsfc;

    av_assert0(!sti->bsfc);

    if (!(bsf = av_bsf_get_by_name(name))) {
        av_log(NULL, AV_LOG_ERROR, "Unknown bitstream filter '%s'\n", name);
        return AVERROR_BSF_NOT_FOUND;
    }

    if ((ret = av_bsf_alloc(bsf, &bsfc)) < 0)
        return ret;

    bsfc->time_base_in = st->time_base;
    if ((ret = avcodec_parameters_copy(bsfc->par_in, st->codecpar)) < 0) {
        av_bsf_free(&bsfc);
        return ret;
    }

    if (args && bsfc->filter->priv_class) {
        if ((ret = av_set_options_string(bsfc->priv_data, args, "=", ":")) < 0) {
            av_bsf_free(&bsfc);
            return ret;
        }
    }

    if ((ret = av_bsf_init(bsfc)) < 0) {
        av_bsf_free(&bsfc);
        return ret;
    }

    sti->bsfc = bsfc;

    av_log(NULL, AV_LOG_VERBOSE,
           "Automatically inserted bitstream filter '%s'; args='%s'\n",
           name, args ? args : "");
    return 1;
}

1.1.4 Writing the pkt

When writing the pkt there are two cases: if a bitstream filter context is in use, write_packets_from_bsfs is used; otherwise write_packet_common writes the pkt to the output URL.

1.1.4.1 Writing the pkt via write_packets_from_bsfs

static int write_packets_from_bsfs(AVFormatContext *s, AVStream *st, AVPacket *pkt, int interleaved)
{
    FFStream *const sti = ffstream(st);
    AVBSFContext *const bsfc = sti->bsfc;
    int ret;

    // Send the input packet to the bitstream filter for processing
    if ((ret = av_bsf_send_packet(bsfc, pkt)) < 0) {
        av_log(s, AV_LOG_ERROR,
               "Failed to send packet to filter %s for stream %d\n",
               bsfc->filter->name, st->index);
        return ret;
    }

    do {
        // Receive the processed packet back from the bitstream filter
        ret = av_bsf_receive_packet(bsfc, pkt);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                return 0;
            av_log(s, AV_LOG_ERROR, "Error applying bitstream filters to an output "
                   "packet for stream #%d: %s\n", st->index, av_err2str(ret));
            if (!(s->error_recognition & AV_EF_EXPLODE) && ret != AVERROR(ENOMEM))
                continue;
            return ret;
        }
        av_packet_rescale_ts(pkt, bsfc->time_base_out, st->time_base);
        // Write the pkt
        ret = write_packet_common(s, st, pkt, interleaved);
        if (ret >= 0 && !interleaved) // a successful write_packet_common already unrefed pkt for interleaved
            av_packet_unref(pkt);
    } while (ret >= 0);

    return ret;
}
1.1.4.2 Writing the pkt directly (write_packet_common)

The function writes a single packet and is defined in libavformat/mux.c:

static int write_packet_common(AVFormatContext *s, AVStream *st, AVPacket *pkt, int interleaved)
{
    int ret;

    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG, "%s size:%d dts:%s pts:%s\n", __func__,
               pkt->size, av_ts2str(pkt->dts), av_ts2str(pkt->pts));

    // 1. Guess the duration of the pkt
    guess_pkt_duration(s, st, pkt);

    // 2. Fill in some AVPacket fields
#if FF_API_COMPUTE_PKT_FIELDS2
    if ((ret = compute_muxer_pkt_fields(s, st, pkt)) < 0 && !(s->oformat->flags & AVFMT_NOTIMESTAMPS))
        return ret;
#endif

    // 3. Write the packet
    if (interleaved) {
        if (pkt->dts == AV_NOPTS_VALUE && !(s->oformat->flags & AVFMT_NOTIMESTAMPS))
            return AVERROR(EINVAL);
        return interleaved_write_packet(s, pkt, 0, 1);
    } else {
        return write_packet(s, pkt);
    }
}

The function proceeds in several steps:
(1) Guess the duration of the pkt (guess_pkt_duration)
(2) Fill in some AVPacket fields (compute_muxer_pkt_fields)
(3) Write the pkt

guess_pkt_duration is defined as:

static void guess_pkt_duration(AVFormatContext *s, AVStream *st, AVPacket *pkt)
{
    if (pkt->duration < 0 && st->codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE) {
        av_log(s, AV_LOG_WARNING, "Packet with invalid duration %"PRId64" in stream %d\n",
               pkt->duration, pkt->stream_index);
        pkt->duration = 0;
    }

    if (pkt->duration)
        return;

    switch (st->codecpar->codec_type) {
    case AVMEDIA_TYPE_VIDEO: // compute the duration for video
        if (st->avg_frame_rate.num > 0 && st->avg_frame_rate.den > 0) {
            pkt->duration = av_rescale_q(1, av_inv_q(st->avg_frame_rate),
                                         st->time_base);
        } else if (st->time_base.num * 1000LL > st->time_base.den)
            pkt->duration = 1;
        break;
    case AVMEDIA_TYPE_AUDIO: {
        int frame_size = av_get_audio_frame_duration2(st->codecpar, pkt->size);
        if (frame_size && st->codecpar->sample_rate) {
            pkt->duration = av_rescale_q(frame_size,
                                         (AVRational){1, st->codecpar->sample_rate},
                                         st->time_base);
        }
        break;
    }
    }
}

compute_muxer_pkt_fields is defined below; its main task is computing and validating dts and pts:

#if FF_API_COMPUTE_PKT_FIELDS2
FF_DISABLE_DEPRECATION_WARNINGS
//FIXME merge with compute_pkt_fields
static int compute_muxer_pkt_fields(AVFormatContext *s, AVStream *st, AVPacket *pkt)
{
    FFFormatContext *const si = ffformatcontext(s);
    FFStream *const sti = ffstream(st);
    int delay = st->codecpar->video_delay;
    int frame_size;

    if (!si->missing_ts_warning &&
        !(s->oformat->flags & AVFMT_NOTIMESTAMPS) &&
        (!(st->disposition & AV_DISPOSITION_ATTACHED_PIC) || (st->disposition & AV_DISPOSITION_TIMED_THUMBNAILS)) &&
        (pkt->pts == AV_NOPTS_VALUE || pkt->dts == AV_NOPTS_VALUE)) {
        av_log(s, AV_LOG_WARNING,
               "Timestamps are unset in a packet for stream %d. "
               "This is deprecated and will stop working in the future. "
               "Fix your code to set the timestamps properly\n", st->index);
        si->missing_ts_warning = 1;
    }

    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG, "compute_muxer_pkt_fields: pts:%s dts:%s cur_dts:%s b:%d size:%d st:%d\n",
               av_ts2str(pkt->pts), av_ts2str(pkt->dts), av_ts2str(sti->cur_dts), delay, pkt->size, pkt->stream_index);

    if (pkt->pts == AV_NOPTS_VALUE && pkt->dts != AV_NOPTS_VALUE && delay == 0)
        pkt->pts = pkt->dts;

    //XXX/FIXME this is a temporary hack until all encoders output pts
    if ((pkt->pts == 0 || pkt->pts == AV_NOPTS_VALUE) && pkt->dts == AV_NOPTS_VALUE && !delay) {
        static int warned;
        if (!warned) {
            av_log(s, AV_LOG_WARNING, "Encoder did not produce proper pts, making some up.\n");
            warned = 1;
        }
        pkt->dts =
//        pkt->pts = st->cur_dts;
            pkt->pts = sti->priv_pts.val;
    }

    //calculate dts from pts
    if (pkt->pts != AV_NOPTS_VALUE && pkt->dts == AV_NOPTS_VALUE && delay <= MAX_REORDER_DELAY) {
        sti->pts_buffer[0] = pkt->pts;
        for (int i = 1; i < delay + 1 && sti->pts_buffer[i] == AV_NOPTS_VALUE; i++)
            sti->pts_buffer[i] = pkt->pts + (i - delay - 1) * pkt->duration;
        for (int i = 0; i < delay && sti->pts_buffer[i] > sti->pts_buffer[i + 1]; i++)
            FFSWAP(int64_t, sti->pts_buffer[i], sti->pts_buffer[i + 1]);

        pkt->dts = sti->pts_buffer[0];
    }

    if (sti->cur_dts && sti->cur_dts != AV_NOPTS_VALUE &&
        ((!(s->oformat->flags & AVFMT_TS_NONSTRICT) &&
          st->codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE &&
          st->codecpar->codec_type != AVMEDIA_TYPE_DATA &&
          sti->cur_dts >= pkt->dts) || sti->cur_dts > pkt->dts)) {
        av_log(s, AV_LOG_ERROR,
               "Application provided invalid, non monotonically increasing dts to muxer in stream %d: %s >= %s\n",
               st->index, av_ts2str(sti->cur_dts), av_ts2str(pkt->dts));
        return AVERROR(EINVAL);
    }
    if (pkt->dts != AV_NOPTS_VALUE && pkt->pts != AV_NOPTS_VALUE && pkt->pts < pkt->dts) {
        av_log(s, AV_LOG_ERROR,
               "pts (%s) < dts (%s) in stream %d\n",
               av_ts2str(pkt->pts), av_ts2str(pkt->dts),
               st->index);
        return AVERROR(EINVAL);
    }

    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG, "av_write_frame: pts2:%s dts2:%s\n",
               av_ts2str(pkt->pts), av_ts2str(pkt->dts));

    sti->cur_dts      = pkt->dts;
    sti->priv_pts.val = pkt->dts;

    /* update pts */
    switch (st->codecpar->codec_type) {
    case AVMEDIA_TYPE_AUDIO:
        frame_size = (pkt->flags & AV_PKT_FLAG_UNCODED_FRAME) ?
                     (*(AVFrame **)pkt->data)->nb_samples :
                     av_get_audio_frame_duration2(st->codecpar, pkt->size);

        /* HACK/FIXME, we skip the initial 0 size packets as they are most
         * likely equal to the encoder delay, but it would be better if we
         * had the real timestamps from the encoder */
        if (frame_size >= 0 && (pkt->size || sti->priv_pts.num != sti->priv_pts.den >> 1 || sti->priv_pts.val)) {
            frac_add(&sti->priv_pts, (int64_t)st->time_base.den * frame_size);
        }
        break;
    case AVMEDIA_TYPE_VIDEO:
        frac_add(&sti->priv_pts, (int64_t)st->time_base.den * st->time_base.num);
        break;
    }
    return 0;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif

If interleaving is used, interleaved_write_packet writes the packet; otherwise write_packet does. interleaved_write_packet is defined as follows:

static int interleaved_write_packet(AVFormatContext *s, AVPacket *pkt,
                                    int flush, int has_packet)
{
    FFFormatContext *const si = ffformatcontext(s);
    for (;; ) {
        // interleave_packet queues the incoming packet, interleaves the
        // buffered video and audio packets, and pops the next one to write
        int ret = si->interleave_packet(s, pkt, flush, has_packet);
        if (ret <= 0)
            return ret;

        has_packet = 0;

        // Write the pkt
        ret = write_packet(s, pkt);
        av_packet_unref(pkt);
        if (ret < 0)
            return ret;
    }
}

Searching the FFmpeg 7.0 code for muxers that define their own interleave_packet callback, I only found the gxf and mxf muxers (I may have missed some), as shown below:

const FFOutputFormat ff_mxf_muxer = {
    .p.name            = "mxf",
    .p.long_name       = NULL_IF_CONFIG_SMALL("MXF (Material eXchange Format)"),
    .p.mime_type       = "application/mxf",
    .p.extensions      = "mxf",
    .priv_data_size    = sizeof(MXFContext),
    .p.audio_codec     = AV_CODEC_ID_PCM_S16LE,
    .p.video_codec     = AV_CODEC_ID_MPEG2VIDEO,
    .init              = mxf_init,
    .write_packet      = mxf_write_packet,
    .write_trailer     = mxf_write_footer,
    .deinit            = mxf_deinit,
    .p.flags           = AVFMT_NOTIMESTAMPS,
    .interleave_packet = mxf_interleave,
    .p.priv_class      = &mxf_muxer_class,
    .check_bitstream   = mxf_check_bitstream,
};

const FFOutputFormat ff_gxf_muxer = {
    .p.name            = "gxf",
    .p.long_name       = NULL_IF_CONFIG_SMALL("GXF (General eXchange Format)"),
    .p.extensions      = "gxf",
    .priv_data_size    = sizeof(GXFContext),
    .p.audio_codec     = AV_CODEC_ID_PCM_S16LE,
    .p.video_codec     = AV_CODEC_ID_MPEG2VIDEO,
    .write_header      = gxf_write_header,
    .write_packet      = gxf_write_packet,
    .write_trailer     = gxf_write_trailer,
    .deinit            = gxf_deinit,
    .interleave_packet = gxf_interleave_packet,
};

Since interleaving is fairly involved, it is not covered here; consider write_packet first. When interleaving is not used, the checks above are skipped and write_packet is called directly. write_packet is defined as follows:

/**
 * Shift timestamps and call muxer; the original pts/dts are not kept.
 *
 * FIXME: this function should NEVER get undefined pts/dts beside when the
 * AVFMT_NOTIMESTAMPS is set.
 * Those additional safety checks should be dropped once the correct checks
 * are set in the callers.
 */
static int write_packet(AVFormatContext *s, AVPacket *pkt)
{
    FFFormatContext *const si = ffformatcontext(s);
    AVStream *const st = s->streams[pkt->stream_index];
    FFStream *const sti = ffstream(st);
    int ret;

    // If the timestamp offsetting below is adjusted, adjust
    // ff_interleaved_peek similarly.
    if (s->output_ts_offset) {
        int64_t offset = av_rescale_q(s->output_ts_offset, AV_TIME_BASE_Q, st->time_base);

        if (pkt->dts != AV_NOPTS_VALUE)
            pkt->dts += offset;
        if (pkt->pts != AV_NOPTS_VALUE)
            pkt->pts += offset;
    }
    handle_avoid_negative_ts(si, sti, pkt);

    // AV_PKT_FLAG_UNCODED_FRAME means the packet carries an uncoded frame
    if ((pkt->flags & AV_PKT_FLAG_UNCODED_FRAME)) {
        AVFrame **frame = (AVFrame **)pkt->data;
        av_assert0(pkt->size == sizeof(*frame));
        // Write the uncoded frame
        ret = ffofmt(s->oformat)->write_uncoded_frame(s, pkt->stream_index, frame, 0);
    } else {
        ret = ffofmt(s->oformat)->write_packet(s, pkt);
    }

    if (s->pb && ret >= 0) {
        flush_if_needed(s);
        if (s->pb->error < 0)
            ret = s->pb->error;
    }

    if (ret >= 0)
        st->nb_frames++;

    return ret;
}

The core of the function above is the write_packet callback; taking FLV as an example, flv_write_packet is called:

static int flv_write_packet(AVFormatContext *s, AVPacket *pkt)
{
    AVIOContext *pb      = s->pb;
    AVCodecParameters *par = s->streams[pkt->stream_index]->codecpar;
    FLVContext *flv      = s->priv_data;
    unsigned ts;
    int size = pkt->size;
    uint8_t *data = NULL;
    uint8_t frametype = pkt->flags & AV_PKT_FLAG_KEY ? FLV_FRAME_KEY : FLV_FRAME_INTER;
    int flags = -1, flags_size, ret = 0;
    int64_t cur_offset = avio_tell(pb);

    if (par->codec_type == AVMEDIA_TYPE_AUDIO && !pkt->size) {
        av_log(s, AV_LOG_WARNING, "Empty audio Packet\n");
        return AVERROR(EINVAL);
    }

    // Determine flags_size
    if (par->codec_id == AV_CODEC_ID_VP6F || par->codec_id == AV_CODEC_ID_VP6A ||
        par->codec_id == AV_CODEC_ID_VP6  || par->codec_id == AV_CODEC_ID_AAC)
        flags_size = 2;
    else if (par->codec_id == AV_CODEC_ID_H264 || par->codec_id == AV_CODEC_ID_MPEG4 ||
             par->codec_id == AV_CODEC_ID_HEVC || par->codec_id == AV_CODEC_ID_AV1 ||
             par->codec_id == AV_CODEC_ID_VP9)
        flags_size = 5;
    else
        flags_size = 1;

    if (par->codec_id == AV_CODEC_ID_HEVC && pkt->pts != pkt->dts)
        flags_size += 3;

    // Handle side data
    if (par->codec_id == AV_CODEC_ID_AAC || par->codec_id == AV_CODEC_ID_H264
            || par->codec_id == AV_CODEC_ID_MPEG4 || par->codec_id == AV_CODEC_ID_HEVC
            || par->codec_id == AV_CODEC_ID_AV1 || par->codec_id == AV_CODEC_ID_VP9) {
        size_t side_size;
        uint8_t *side = av_packet_get_side_data(pkt, AV_PKT_DATA_NEW_EXTRADATA, &side_size);
        if (side && side_size > 0 && (side_size != par->extradata_size || memcmp(side, par->extradata, side_size))) {
            ret = ff_alloc_extradata(par, side_size);
            if (ret < 0)
                return ret;
            memcpy(par->extradata, side, side_size);
            flv_write_codec_header(s, par, pkt->dts);
        }
        flv_write_metadata_packet(s, par, pkt->dts);
    }

    if (flv->delay == AV_NOPTS_VALUE)
        flv->delay = -pkt->dts;

    if (pkt->dts < -flv->delay) {
        av_log(s, AV_LOG_WARNING,
               "Packets are not in the proper order with respect to DTS\n");
        return AVERROR(EINVAL);
    }
    if (par->codec_id == AV_CODEC_ID_H264 || par->codec_id == AV_CODEC_ID_MPEG4 ||
        par->codec_id == AV_CODEC_ID_HEVC ||  par->codec_id == AV_CODEC_ID_AV1 ||
        par->codec_id == AV_CODEC_ID_VP9) {
        if (pkt->pts == AV_NOPTS_VALUE) {
            av_log(s, AV_LOG_ERROR, "Packet is missing PTS\n");
            return AVERROR(EINVAL);
        }
    }

    ts = pkt->dts;

    if (s->event_flags & AVSTREAM_EVENT_FLAG_METADATA_UPDATED) {
        write_metadata(s, ts);
        s->event_flags &= ~AVSTREAM_EVENT_FLAG_METADATA_UPDATED;
    }

    avio_write_marker(pb, av_rescale(ts, AV_TIME_BASE, 1000),
                      pkt->flags & AV_PKT_FLAG_KEY && (flv->video_par ? par->codec_type == AVMEDIA_TYPE_VIDEO : 1) ?
                      AVIO_DATA_MARKER_SYNC_POINT : AVIO_DATA_MARKER_BOUNDARY_POINT);

    switch (par->codec_type) {
    case AVMEDIA_TYPE_VIDEO:
        // Video tag
        // 1. Write the Type field of the Tag header
        avio_w8(pb, FLV_TAG_TYPE_VIDEO);
        flags = ff_codec_get_tag(flv_video_codec_ids, par->codec_id);
        // frame type: whether this is a key frame
        flags |= frametype;
        break;
    case AVMEDIA_TYPE_AUDIO:
        flags = get_audio_flags(s, par);
        av_assert0(size);
        avio_w8(pb, FLV_TAG_TYPE_AUDIO);
        break;
    case AVMEDIA_TYPE_SUBTITLE:
    case AVMEDIA_TYPE_DATA:
        avio_w8(pb, FLV_TAG_TYPE_META);
        break;
    default:
        return AVERROR(EINVAL);
    }

    if (par->codec_id == AV_CODEC_ID_H264 || par->codec_id == AV_CODEC_ID_MPEG4) {
        /* check if extradata looks like mp4 formatted */
        if (par->extradata_size > 0 && *(uint8_t*)par->extradata != 1)
            if ((ret = ff_avc_parse_nal_units_buf(pkt->data, &data, &size)) < 0)
                return ret;
    } else if (par->codec_id == AV_CODEC_ID_HEVC) {
        if (par->extradata_size > 0 && *(uint8_t*)par->extradata != 1)
            if ((ret = ff_hevc_annexb2mp4_buf(pkt->data, &data, &size, 0, NULL)) < 0)
                return ret;
    } else if (par->codec_id == AV_CODEC_ID_AAC && pkt->size > 2 &&
               (AV_RB16(pkt->data) & 0xfff0) == 0xfff0) {
        if (!s->streams[pkt->stream_index]->nb_frames) {
            av_log(s, AV_LOG_ERROR, "Malformed AAC bitstream detected: "
                   "use the audio bitstream filter 'aac_adtstoasc' to fix it "
                   "('-bsf:a aac_adtstoasc' option with ffmpeg)\n");
            return AVERROR_INVALIDDATA;
        }
        av_log(s, AV_LOG_WARNING, "aac bitstream error\n");
    }

    /* check Speex packet duration */
    if (par->codec_id == AV_CODEC_ID_SPEEX && ts - flv->last_ts[pkt->stream_index] > 160)
        av_log(s, AV_LOG_WARNING, "Warning: Speex stream has more than "
                                  "8 frames per packet. Adobe Flash "
                                  "Player cannot handle this!\n");

    if (flv->last_ts[pkt->stream_index] < ts)
        flv->last_ts[pkt->stream_index] = ts;

    if (size + flags_size >= 1<<24) {
        av_log(s, AV_LOG_ERROR, "Too large packet with size %u >= %u\n",
               size + flags_size, 1<<24);
        ret = AVERROR(EINVAL);
        goto fail;
    }

    // 2. Write the DataSize field of the Tag header
    avio_wb24(pb, size + flags_size);
    /*
    // FLV timestamps are 32 bits signed, RTMP timestamps should be 32-bit unsigned
    static void put_timestamp(AVIOContext *pb, int64_t ts) {
        avio_wb24(pb, ts & 0xFFFFFF);   // 3. Write the Timestamp field of the Tag header
        avio_w8(pb, (ts >> 24) & 0x7F); // 4. Write the TimestampExtended field; timestamps are 32 bits _signed_
    }
    */
    put_timestamp(pb, ts);
    // 5. Write the StreamID field of the Tag header
    avio_wb24(pb, flv->reserved);

    if (par->codec_type == AVMEDIA_TYPE_DATA ||
        par->codec_type == AVMEDIA_TYPE_SUBTITLE ) {
        int data_size;
        int64_t metadata_size_pos = avio_tell(pb);
        if (par->codec_id == AV_CODEC_ID_TEXT) {
            // legacy FFmpeg magic?
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, "onTextData");
            avio_w8(pb, AMF_DATA_TYPE_MIXEDARRAY);
            avio_wb32(pb, 2);
            put_amf_string(pb, "type");
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, "Text");
            put_amf_string(pb, "text");
            avio_w8(pb, AMF_DATA_TYPE_STRING);
            put_amf_string(pb, pkt->data);
            put_amf_string(pb, "");
            avio_w8(pb, AMF_END_OF_OBJECT);
        } else {
            // just pass the metadata through
            avio_write(pb, data ? data : pkt->data, size);
        }
        /* write total size of tag */
        data_size = avio_tell(pb) - metadata_size_pos;
        avio_seek(pb, metadata_size_pos - 10, SEEK_SET);
        avio_wb24(pb, data_size);
        avio_seek(pb, data_size + 10 - 3, SEEK_CUR);
        avio_wb32(pb, data_size + 11);
    } else {
        av_assert1(flags>=0);
        // HEVC
        if (par->codec_id == AV_CODEC_ID_HEVC) {
            int pkttype = (pkt->pts != pkt->dts) ? PacketTypeCodedFrames : PacketTypeCodedFramesX;
            avio_w8(pb, FLV_IS_EX_HEADER | pkttype | frametype); // ExVideoTagHeader mode with PacketTypeCodedFrames(X)
            avio_write(pb, "hvc1", 4);
            if (pkttype == PacketTypeCodedFrames)
                avio_wb24(pb, pkt->pts - pkt->dts);
        } else if (par->codec_id == AV_CODEC_ID_AV1 || par->codec_id == AV_CODEC_ID_VP9) { // AV1 or VP9
            avio_w8(pb, FLV_IS_EX_HEADER | PacketTypeCodedFrames | frametype);
            avio_write(pb, par->codec_id == AV_CODEC_ID_AV1 ? "av01" : "vp09", 4);
        } else {
            avio_w8(pb, flags);
        }
        if (par->codec_id == AV_CODEC_ID_VP6)
            avio_w8(pb, 0);
        if (par->codec_id == AV_CODEC_ID_VP6F || par->codec_id == AV_CODEC_ID_VP6A) {
            if (par->extradata_size)
                avio_w8(pb, par->extradata[0]);
            else
                avio_w8(pb, ((FFALIGN(par->width,  16) - par->width) << 4) |
                            (FFALIGN(par->height, 16) - par->height));
        } else if (par->codec_id == AV_CODEC_ID_AAC)
            avio_w8(pb, 1); // AAC raw
        else if (par->codec_id == AV_CODEC_ID_H264 || par->codec_id == AV_CODEC_ID_MPEG4) {
            avio_w8(pb, 1); // AVC NALU
            avio_wb24(pb, pkt->pts - pkt->dts);
        }

        // 6. Write the Tag data
        avio_write(pb, data ? data : pkt->data, size);
        avio_wb32(pb, size + flags_size + 11); // previous tag size

        flv->duration = FFMAX(flv->duration,
                              pkt->pts + flv->delay + pkt->duration);
    }

    // FLV_ADD_KEYFRAME_INDEX adds a keyframe index to the FLV file
    // to improve seek performance
    if (flv->flags & FLV_ADD_KEYFRAME_INDEX) {
        switch (par->codec_type) {
        case AVMEDIA_TYPE_VIDEO:
            flv->videosize += (avio_tell(pb) - cur_offset);
            flv->lasttimestamp = pkt->dts / 1000.0;
            if (pkt->flags & AV_PKT_FLAG_KEY) {
                flv->lastkeyframetimestamp = flv->lasttimestamp;
                flv->lastkeyframelocation = cur_offset;
                ret = flv_append_keyframe_info(s, flv, flv->lasttimestamp, cur_offset);
                if (ret < 0)
                    goto fail;
            }
            break;
        case AVMEDIA_TYPE_AUDIO:
            flv->audiosize += (avio_tell(pb) - cur_offset);
            break;
        default:
            av_log(s, AV_LOG_WARNING, "par->codec_type is type = [%d]\n", par->codec_type);
            break;
        }
    }
fail:
    av_free(data);
    return ret;
}

The FLV tag layout is:
(figure: FLV tag structure)
From the code, the core work consists of six writes:
(1) Write the Type field of the Tag header

avio_w8(pb, FLV_TAG_TYPE_VIDEO); // write the video tag type

(2) Write the DataSize field of the Tag header

avio_wb24(pb, size + flags_size);

(3 & 4) Write the Timestamp and TimestampExtended fields of the Tag header

// FLV timestamps are 32 bits signed, RTMP timestamps should be 32-bit unsigned
static void put_timestamp(AVIOContext *pb, int64_t ts) {
    avio_wb24(pb, ts & 0xFFFFFF);   // 3. Timestamp (low 24 bits)
    avio_w8(pb, (ts >> 24) & 0x7F); // 4. TimestampExtended; timestamps are 32 bits _signed_
}

(5) Write the StreamID field of the Tag header

avio_wb24(pb, flv->reserved);

(6) Write the Tag data

avio_write(pb, data ? data : pkt->data, size);

This completes writing in the FLV format.

2. Summary

av_write_frame implements writing data to the output: it takes an externally supplied pkt and writes it to the given output URL. The implementation also covers the interleaved case, which is fairly involved and not covered here. Writing a packet involves a fair amount of pts/dts computation, and the actual file writing must follow the specific media container format.

CSDN : https://blog.csdn.net/weixin_42877471
Github : https://github.com/DoFulangChen
