A Brief Analysis of the FFmpeg Source Code - Misc - sws_getContext() in libswscale

References

  • "FFmpeg source code analysis: sws_getContext() in libswscale", Lei Xiaohua's blog on CSDN

sws_getContext() in libswscale

  • The FFmpeg library libswscale is used for image processing (scaling and YUV/RGB pixel-format conversion).
  • libswscale is a library that mainly operates on picture pixel data.
  • It can perform pixel-format conversion, image scaling, and similar tasks.
  • For a usage example, see: "The simplest FFmpeg-based libswscale example (YUV to RGB)", Lei Xiaohua's blog on CSDN.
  • Only a handful of libswscale functions are commonly used; in the typical case, just three:
    • sws_getContext(): initialize an SwsContext.
    • sws_scale(): process the image data.
    • sws_freeContext(): free an SwsContext.
  • sws_getContext() can also be replaced by sws_getCachedContext().
  • sws_getContext() is the function that initializes an SwsContext.
  • Its declaration is in libswscale\swscale.h, shown below.
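Taken together, the three functions above form a simple convert/scale pipeline. A minimal sketch follows; the dimensions, formats, and the assumption that the src/dst plane arrays were filled in by the caller (e.g. via av_image_alloc()) are illustrative, not from the original article:

```c
#include <libswscale/swscale.h>
#include <libavutil/pixfmt.h>

/* Convert one 640x480 YUV420P frame to 320x240 RGB24.
 * src_data/src_linesize are assumed to describe a valid decoded frame,
 * dst_data/dst_linesize an allocated RGB24 buffer. */
static int convert_frame(uint8_t *src_data[4], int src_linesize[4],
                         uint8_t *dst_data[4], int dst_linesize[4])
{
    struct SwsContext *sws = sws_getContext(640, 480, AV_PIX_FMT_YUV420P,
                                            320, 240, AV_PIX_FMT_RGB24,
                                            SWS_BICUBIC, NULL, NULL, NULL);
    if (!sws)
        return -1;

    /* process the whole frame in one slice */
    sws_scale(sws, (const uint8_t *const *)src_data, src_linesize,
              0, 480, dst_data, dst_linesize);

    sws_freeContext(sws);
    return 0;
}
```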
/**
 * Allocate and return an SwsContext. You need it to perform
 * scaling/conversion operations using sws_scale().
 *
 * @param srcW the width of the source image
 * @param srcH the height of the source image
 * @param srcFormat the source image format
 * @param dstW the width of the destination image
 * @param dstH the height of the destination image
 * @param dstFormat the destination image format
 * @param flags specify which algorithm and options to use for rescaling
 * @param param extra parameters to tune the used scaler
 *              For SWS_BICUBIC param[0] and [1] tune the shape of the basis
 *              function, param[0] tunes f(1) and param[1] f'(1)
 *              For SWS_GAUSS param[0] tunes the exponent and thus cutoff
 *              frequency
 *              For SWS_LANCZOS param[0] tunes the width of the window function
 * @return a pointer to an allocated context, or NULL in case of error
 * @note this function is to be removed after a saner alternative is
 *       written
 */
struct SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                                  int dstW, int dstH, enum AVPixelFormat dstFormat,
                                  int flags, SwsFilter *srcFilter,
                                  SwsFilter *dstFilter, const double *param);
  • The function takes the following parameters:
    • srcW: width of the source image
    • srcH: height of the source image
    • srcFormat: pixel format of the source image
    • dstW: width of the destination image
    • dstH: height of the destination image
    • dstFormat: pixel format of the destination image
    • flags: selects the scaling algorithm
  • On success it returns the newly created SwsContext; otherwise it returns NULL.
  • The definition of sws_getContext() is in libswscale\utils.c, shown below.
SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                           int dstW, int dstH, enum AVPixelFormat dstFormat,
                           int flags, SwsFilter *srcFilter,
                           SwsFilter *dstFilter, const double *param)
{
    SwsContext *c;

    c = sws_alloc_set_opts(srcW, srcH, srcFormat,
                           dstW, dstH, dstFormat,
                           flags, param);
    if (!c)
        return NULL;

    if (sws_init_context(c, srcFilter, dstFilter) < 0) {
        sws_freeContext(c);
        return NULL;
    }

    return c;
}
  • As the definition shows, sws_getContext() first calls sws_alloc_set_opts(). This function subsumes the older standalone call to sws_alloc_context(); its purpose is unchanged: allocate memory for the SwsContext.
  • It then copies the source and destination widths, heights, pixel formats, and the flags into the corresponding fields of the SwsContext, and finally calls sws_init_context() to complete the initialization.
  • The following sections look at sws_alloc_set_opts(), sws_alloc_context(), and sws_init_context() in turn.

sws_alloc_set_opts

/**
 * Allocate and return an SwsContext.
 * This is like sws_getContext() but does not perform the init step, allowing
 * the user to set additional AVOptions.
 *
 * @see sws_getContext()
 */
struct SwsContext *sws_alloc_set_opts(int srcW, int srcH, enum AVPixelFormat srcFormat,
                                      int dstW, int dstH, enum AVPixelFormat dstFormat,
                                      int flags, const double *param);

SwsContext *sws_alloc_set_opts(int srcW, int srcH, enum AVPixelFormat srcFormat,
                               int dstW, int dstH, enum AVPixelFormat dstFormat,
                               int flags, const double *param)
{
    SwsContext *c;

    if (!(c = sws_alloc_context()))
        return NULL;

    c->flags     = flags;
    c->srcW      = srcW;
    c->srcH      = srcH;
    c->dstW      = dstW;
    c->dstH      = dstH;
    c->srcFormat = srcFormat;
    c->dstFormat = dstFormat;

    if (param) {
        c->param[0] = param[0];
        c->param[1] = param[1];
    }

    return c;
}

sws_alloc_context()

  • sws_alloc_context() is an FFmpeg API that allocates memory for an SwsContext. Its declaration is shown below.
/**
 * Allocate an empty SwsContext. This must be filled and passed to
 * sws_init_context(). For filling see AVOptions, options.c and
 * sws_setColorspaceDetails().
 */
struct SwsContext *sws_alloc_context(void);
  • The definition of sws_alloc_context() is in libswscale\utils.c, shown below.
  • The code shows that sws_alloc_context() first calls av_mallocz() to allocate a zeroed SwsContext structure;
  • it then sets the structure's AVClass and fills its fields with default values.
SwsContext *sws_alloc_context(void)
{
    SwsContext *c = av_mallocz(sizeof(SwsContext));

    av_assert0(offsetof(SwsContext, redDither) + DITHER32_INT == offsetof(SwsContext, dither32));

    if (c) {
        c->av_class = &ff_sws_context_class;
        av_opt_set_defaults(c);
        atomic_init(&c->stride_unaligned_warned, 0);
        atomic_init(&c->data_unaligned_warned,   0);
    }

    return c;
}

sws_init_context

  • sws_init_context() is an FFmpeg API that initializes an SwsContext.
/**
 * Initialize the swscaler context sws_context.
 *
 * @return zero or positive value on success, a negative value on
 * error
 */
av_warn_unused_result
int sws_init_context(struct SwsContext *sws_context, SwsFilter *srcFilter, SwsFilter *dstFilter);
  • The definition of sws_init_context() is very long; it is located in libswscale\utils.c and shown below.
av_cold int sws_init_context(SwsContext *c, SwsFilter *srcFilter, SwsFilter *dstFilter)
{
    int i;
    int usesVFilter, usesHFilter;
    int unscaled;
    SwsFilter dummyFilter = { NULL, NULL, NULL, NULL };
    int srcW              = c->srcW;
    int srcH              = c->srcH;
    int dstW              = c->dstW;
    int dstH              = c->dstH;
    int dst_stride        = FFALIGN(dstW * sizeof(int16_t) + 66, 16);
    int flags, cpu_flags;
    enum AVPixelFormat srcFormat = c->srcFormat;
    enum AVPixelFormat dstFormat = c->dstFormat;
    const AVPixFmtDescriptor *desc_src;
    const AVPixFmtDescriptor *desc_dst;
    int ret = 0;
    enum AVPixelFormat tmpFmt;
    static const float float_mult = 1.0f / 255.0f;
    static AVOnce rgb2rgb_once = AV_ONCE_INIT;

    if (c->nb_threads != 1) {
        ret = context_init_threaded(c, srcFilter, dstFilter);
        if (ret < 0 || c->nb_threads > 1)
            return ret;
        // threading disabled in this build, init as single-threaded
    }

    cpu_flags = av_get_cpu_flags();
    flags     = c->flags;
    emms_c();
    if (ff_thread_once(&rgb2rgb_once, ff_sws_rgb2rgb_init) != 0)
        return AVERROR_UNKNOWN;

    unscaled = (srcW == dstW && srcH == dstH);

    c->srcRange |= handle_jpeg(&c->srcFormat);
    c->dstRange |= handle_jpeg(&c->dstFormat);

    if (srcFormat != c->srcFormat || dstFormat != c->dstFormat)
        av_log(c, AV_LOG_WARNING, "deprecated pixel format used, make sure you did set range correctly\n");

    if (!c->contrast && !c->saturation && !c->dstFormatBpp)
        sws_setColorspaceDetails(c, ff_yuv2rgb_coeffs[SWS_CS_DEFAULT], c->srcRange,
                                 ff_yuv2rgb_coeffs[SWS_CS_DEFAULT],
                                 c->dstRange, 0, 1 << 16, 1 << 16);

    handle_formats(c);
    srcFormat = c->srcFormat;
    dstFormat = c->dstFormat;
    desc_src = av_pix_fmt_desc_get(srcFormat);
    desc_dst = av_pix_fmt_desc_get(dstFormat);

    // If the source has no alpha then disable alpha blendaway
    if (c->src0Alpha)
        c->alphablend = SWS_ALPHA_BLEND_NONE;

    if (!(unscaled && sws_isSupportedEndiannessConversion(srcFormat) &&
          av_pix_fmt_swap_endianness(srcFormat) == dstFormat)) {
        if (!sws_isSupportedInput(srcFormat)) {
            av_log(c, AV_LOG_ERROR, "%s is not supported as input pixel format\n",
                   av_get_pix_fmt_name(srcFormat));
            return AVERROR(EINVAL);
        }
        if (!sws_isSupportedOutput(dstFormat)) {
            av_log(c, AV_LOG_ERROR, "%s is not supported as output pixel format\n",
                   av_get_pix_fmt_name(dstFormat));
            return AVERROR(EINVAL);
        }
    }
    av_assert2(desc_src && desc_dst);

    i = flags & (SWS_POINT         |
                 SWS_AREA          |
                 SWS_BILINEAR      |
                 SWS_FAST_BILINEAR |
                 SWS_BICUBIC       |
                 SWS_X             |
                 SWS_GAUSS         |
                 SWS_LANCZOS       |
                 SWS_SINC          |
                 SWS_SPLINE        |
                 SWS_BICUBLIN);

    /* provide a default scaler if not set by caller */
    if (!i) {
        if (dstW < srcW && dstH < srcH)
            flags |= SWS_BICUBIC;
        else if (dstW > srcW && dstH > srcH)
            flags |= SWS_BICUBIC;
        else
            flags |= SWS_BICUBIC;
        c->flags = flags;
    } else if (i & (i - 1)) {
        av_log(c, AV_LOG_ERROR,
               "Exactly one scaler algorithm must be chosen, got %X\n", i);
        return AVERROR(EINVAL);
    }

    /* sanity check */
    if (srcW < 1 || srcH < 1 || dstW < 1 || dstH < 1) {
        /* FIXME check if these are enough and try to lower them after
         * fixing the relevant parts of the code */
        av_log(c, AV_LOG_ERROR, "%dx%d -> %dx%d is invalid scaling dimension\n",
               srcW, srcH, dstW, dstH);
        return AVERROR(EINVAL);
    }

    if (flags & SWS_FAST_BILINEAR) {
        if (srcW < 8 || dstW < 8) {
            flags ^= SWS_FAST_BILINEAR | SWS_BILINEAR;
            c->flags = flags;
        }
    }

    if (!dstFilter)
        dstFilter = &dummyFilter;
    if (!srcFilter)
        srcFilter = &dummyFilter;

    c->lumXInc      = (((int64_t)srcW << 16) + (dstW >> 1)) / dstW;
    c->lumYInc      = (((int64_t)srcH << 16) + (dstH >> 1)) / dstH;
    c->dstFormatBpp = av_get_bits_per_pixel(desc_dst);
    c->srcFormatBpp = av_get_bits_per_pixel(desc_src);
    c->vRounder     = 4 * 0x0001000100010001ULL;

    usesVFilter = (srcFilter->lumV && srcFilter->lumV->length > 1) ||
                  (srcFilter->chrV && srcFilter->chrV->length > 1) ||
                  (dstFilter->lumV && dstFilter->lumV->length > 1) ||
                  (dstFilter->chrV && dstFilter->chrV->length > 1);
    usesHFilter = (srcFilter->lumH && srcFilter->lumH->length > 1) ||
                  (srcFilter->chrH && srcFilter->chrH->length > 1) ||
                  (dstFilter->lumH && dstFilter->lumH->length > 1) ||
                  (dstFilter->chrH && dstFilter->chrH->length > 1);

    av_pix_fmt_get_chroma_sub_sample(srcFormat, &c->chrSrcHSubSample, &c->chrSrcVSubSample);
    av_pix_fmt_get_chroma_sub_sample(dstFormat, &c->chrDstHSubSample, &c->chrDstVSubSample);

    c->dst_slice_align = 1 << c->chrDstVSubSample;

    if (isAnyRGB(dstFormat) && !(flags & SWS_FULL_CHR_H_INT)) {
        if (dstW & 1) {
            av_log(c, AV_LOG_DEBUG, "Forcing full internal H chroma due to odd output size\n");
            flags |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
        if (   c->chrSrcHSubSample == 0
            && c->chrSrcVSubSample == 0
            && c->dither != SWS_DITHER_BAYER // SWS_FULL_CHR_H_INT is currently not supported with SWS_DITHER_BAYER
            && !(c->flags & SWS_FAST_BILINEAR)) {
            av_log(c, AV_LOG_DEBUG, "Forcing full internal H chroma due to input having non subsampled chroma\n");
            flags |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
    }

    if (c->dither == SWS_DITHER_AUTO) {
        if (flags & SWS_ERROR_DIFFUSION)
            c->dither = SWS_DITHER_ED;
    }

    if (dstFormat == AV_PIX_FMT_BGR4_BYTE ||
        dstFormat == AV_PIX_FMT_RGB4_BYTE ||
        dstFormat == AV_PIX_FMT_BGR8 ||
        dstFormat == AV_PIX_FMT_RGB8) {
        if (c->dither == SWS_DITHER_AUTO)
            c->dither = (flags & SWS_FULL_CHR_H_INT) ? SWS_DITHER_ED : SWS_DITHER_BAYER;
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            if (c->dither == SWS_DITHER_ED || c->dither == SWS_DITHER_A_DITHER ||
                c->dither == SWS_DITHER_X_DITHER || c->dither == SWS_DITHER_NONE) {
                av_log(c, AV_LOG_DEBUG,
                       "Desired dithering only supported in full chroma interpolation for destination format '%s'\n",
                       av_get_pix_fmt_name(dstFormat));
                flags   |= SWS_FULL_CHR_H_INT;
                c->flags = flags;
            }
        }
        if (flags & SWS_FULL_CHR_H_INT) {
            if (c->dither == SWS_DITHER_BAYER) {
                av_log(c, AV_LOG_DEBUG,
                       "Ordered dither is not supported in full chroma interpolation for destination format '%s'\n",
                       av_get_pix_fmt_name(dstFormat));
                c->dither = SWS_DITHER_ED;
            }
        }
    }

    if (isPlanarRGB(dstFormat)) {
        if (!(flags & SWS_FULL_CHR_H_INT)) {
            av_log(c, AV_LOG_DEBUG,
                   "%s output is not supported with half chroma resolution, switching to full\n",
                   av_get_pix_fmt_name(dstFormat));
            flags   |= SWS_FULL_CHR_H_INT;
            c->flags = flags;
        }
    }

    /* reuse chroma for 2 pixels RGB/BGR unless user wants full
     * chroma interpolation */
    if (flags & SWS_FULL_CHR_H_INT &&
        isAnyRGB(dstFormat)        &&
        !isPlanarRGB(dstFormat)    &&
        dstFormat != AV_PIX_FMT_RGBA64LE &&
        dstFormat != AV_PIX_FMT_RGBA64BE &&
        dstFormat != AV_PIX_FMT_BGRA64LE &&
        dstFormat != AV_PIX_FMT_BGRA64BE &&
        dstFormat != AV_PIX_FMT_RGB48LE  &&
        dstFormat != AV_PIX_FMT_RGB48BE  &&
        dstFormat != AV_PIX_FMT_BGR48LE  &&
        dstFormat != AV_PIX_FMT_BGR48BE  &&
        dstFormat != AV_PIX_FMT_RGBA  &&
        dstFormat != AV_PIX_FMT_ARGB  &&
        dstFormat != AV_PIX_FMT_BGRA  &&
        dstFormat != AV_PIX_FMT_ABGR  &&
        dstFormat != AV_PIX_FMT_RGB24 &&
        dstFormat != AV_PIX_FMT_BGR24 &&
        dstFormat != AV_PIX_FMT_BGR4_BYTE &&
        dstFormat != AV_PIX_FMT_RGB4_BYTE &&
        dstFormat != AV_PIX_FMT_BGR8 &&
        dstFormat != AV_PIX_FMT_RGB8) {
        av_log(c, AV_LOG_WARNING,
               "full chroma interpolation for destination format '%s' not yet implemented\n",
               av_get_pix_fmt_name(dstFormat));
        flags   &= ~SWS_FULL_CHR_H_INT;
        c->flags = flags;
    }

    if (isAnyRGB(dstFormat) && !(flags & SWS_FULL_CHR_H_INT))
        c->chrDstHSubSample = 1;

    // drop some chroma lines if the user wants it
    c->vChrDrop          = (flags & SWS_SRC_V_CHR_DROP_MASK) >> SWS_SRC_V_CHR_DROP_SHIFT;
    c->chrSrcVSubSample += c->vChrDrop;

    /* drop every other pixel for chroma calculation unless user
     * wants full chroma */
    if (isAnyRGB(srcFormat) && !(flags & SWS_FULL_CHR_H_INP)   &&
        srcFormat != AV_PIX_FMT_RGB8 && srcFormat != AV_PIX_FMT_BGR8 &&
        srcFormat != AV_PIX_FMT_RGB4 && srcFormat != AV_PIX_FMT_BGR4 &&
        srcFormat != AV_PIX_FMT_RGB4_BYTE && srcFormat != AV_PIX_FMT_BGR4_BYTE &&
        srcFormat != AV_PIX_FMT_GBRP9BE   && srcFormat != AV_PIX_FMT_GBRP9LE  &&
        srcFormat != AV_PIX_FMT_GBRP10BE  && srcFormat != AV_PIX_FMT_GBRP10LE &&
        srcFormat != AV_PIX_FMT_GBRAP10BE && srcFormat != AV_PIX_FMT_GBRAP10LE &&
        srcFormat != AV_PIX_FMT_GBRP12BE  && srcFormat != AV_PIX_FMT_GBRP12LE &&
        srcFormat != AV_PIX_FMT_GBRAP12BE && srcFormat != AV_PIX_FMT_GBRAP12LE &&
        srcFormat != AV_PIX_FMT_GBRP14BE  && srcFormat != AV_PIX_FMT_GBRP14LE &&
        srcFormat != AV_PIX_FMT_GBRP16BE  && srcFormat != AV_PIX_FMT_GBRP16LE &&
        srcFormat != AV_PIX_FMT_GBRAP16BE  && srcFormat != AV_PIX_FMT_GBRAP16LE &&
        srcFormat != AV_PIX_FMT_GBRPF32BE  && srcFormat != AV_PIX_FMT_GBRPF32LE &&
        srcFormat != AV_PIX_FMT_GBRAPF32BE && srcFormat != AV_PIX_FMT_GBRAPF32LE &&
        ((dstW >> c->chrDstHSubSample) <= (srcW >> 1) ||
         (flags & SWS_FAST_BILINEAR)))
        c->chrSrcHSubSample = 1;

    // Note the AV_CEIL_RSHIFT is so that we always round toward +inf.
    c->chrSrcW = AV_CEIL_RSHIFT(srcW, c->chrSrcHSubSample);
    c->chrSrcH = AV_CEIL_RSHIFT(srcH, c->chrSrcVSubSample);
    c->chrDstW = AV_CEIL_RSHIFT(dstW, c->chrDstHSubSample);
    c->chrDstH = AV_CEIL_RSHIFT(dstH, c->chrDstVSubSample);

    if (!FF_ALLOCZ_TYPED_ARRAY(c->formatConvBuffer, FFALIGN(srcW * 2 + 78, 16) * 2))
        goto nomem;

    c->frame_src = av_frame_alloc();
    c->frame_dst = av_frame_alloc();
    if (!c->frame_src || !c->frame_dst)
        goto nomem;

    c->srcBpc = desc_src->comp[0].depth;
    if (c->srcBpc < 8)
        c->srcBpc = 8;
    c->dstBpc = desc_dst->comp[0].depth;
    if (c->dstBpc < 8)
        c->dstBpc = 8;
    if (isAnyRGB(srcFormat) || srcFormat == AV_PIX_FMT_PAL8)
        c->srcBpc = 16;
    if (c->dstBpc == 16)
        dst_stride <<= 1;

    if (INLINE_MMXEXT(cpu_flags) && c->srcBpc == 8 && c->dstBpc <= 14) {
        c->canMMXEXTBeUsed = dstW >= srcW && (dstW & 31) == 0 &&
                             c->chrDstW >= c->chrSrcW &&
                             (srcW & 15) == 0;
        if (!c->canMMXEXTBeUsed && dstW >= srcW && c->chrDstW >= c->chrSrcW &&
            (srcW & 15) == 0 && (flags & SWS_FAST_BILINEAR)) {
            if (flags & SWS_PRINT_INFO)
                av_log(c, AV_LOG_INFO,
                       "output width is not a multiple of 32 -> no MMXEXT scaler\n");
        }
        if (usesHFilter || isNBPS(c->srcFormat) || is16BPS(c->srcFormat) || isAnyRGB(c->srcFormat))
            c->canMMXEXTBeUsed = 0;
    } else
        c->canMMXEXTBeUsed = 0;

    c->chrXInc = (((int64_t)c->chrSrcW << 16) + (c->chrDstW >> 1)) / c->chrDstW;
    c->chrYInc = (((int64_t)c->chrSrcH << 16) + (c->chrDstH >> 1)) / c->chrDstH;

    /* Match pixel 0 of the src to pixel 0 of dst and match pixel n-2 of src
     * to pixel n-2 of dst, but only for the FAST_BILINEAR mode otherwise do
     * correct scaling.
     * n-2 is the last chrominance sample available.
     * This is not perfect, but no one should notice the difference, the more
     * correct variant would be like the vertical one, but that would require
     * some special code for the first and last pixel */
    if (flags & SWS_FAST_BILINEAR) {
        if (c->canMMXEXTBeUsed) {
            c->lumXInc += 20;
            c->chrXInc += 20;
        }
        // we don't use the x86 asm scaler if MMX is available
        else if (INLINE_MMX(cpu_flags) && c->dstBpc <= 14) {
            c->lumXInc = ((int64_t)(srcW       - 2) << 16) / (dstW       - 2) - 20;
            c->chrXInc = ((int64_t)(c->chrSrcW - 2) << 16) / (c->chrDstW - 2) - 20;
        }
    }

    // hardcoded for now
    c->gamma_value = 2.2;
    tmpFmt = AV_PIX_FMT_RGBA64LE;

    if (!unscaled && c->gamma_flag && (srcFormat != tmpFmt || dstFormat != tmpFmt)) {
        SwsContext *c2;
        c->cascaded_context[0] = NULL;

        ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                             srcW, srcH, tmpFmt, 64);
        if (ret < 0)
            return ret;

        c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                srcW, srcH, tmpFmt,
                                                flags, NULL, NULL, c->param);
        if (!c->cascaded_context[0]) {
            return AVERROR(ENOMEM);
        }

        c->cascaded_context[1] = sws_getContext(srcW, srcH, tmpFmt,
                                                dstW, dstH, tmpFmt,
                                                flags, srcFilter, dstFilter, c->param);
        if (!c->cascaded_context[1])
            return AVERROR(ENOMEM);

        c2 = c->cascaded_context[1];
        c2->is_internal_gamma = 1;
        c2->gamma     = alloc_gamma_tbl(    c->gamma_value);
        c2->inv_gamma = alloc_gamma_tbl(1.f/c->gamma_value);
        if (!c2->gamma || !c2->inv_gamma)
            return AVERROR(ENOMEM);
        // is_internal_flag is set after creating the context
        // to properly create the gamma convert FilterDescriptor
        // we have to re-initialize it
        ff_free_filters(c2);
        if ((ret = ff_init_filters(c2)) < 0) {
            sws_freeContext(c2);
            c->cascaded_context[1] = NULL;
            return ret;
        }

        c->cascaded_context[2] = NULL;
        if (dstFormat != tmpFmt) {
            ret = av_image_alloc(c->cascaded1_tmp, c->cascaded1_tmpStride,
                                 dstW, dstH, tmpFmt, 64);
            if (ret < 0)
                return ret;

            c->cascaded_context[2] = sws_getContext(dstW, dstH, tmpFmt,
                                                    dstW, dstH, dstFormat,
                                                    flags, NULL, NULL, c->param);
            if (!c->cascaded_context[2])
                return AVERROR(ENOMEM);
        }
        return 0;
    }

    if (isBayer(srcFormat)) {
        if (!unscaled ||
            (dstFormat != AV_PIX_FMT_RGB24 && dstFormat != AV_PIX_FMT_YUV420P &&
             dstFormat != AV_PIX_FMT_RGB48)) {
            enum AVPixelFormat tmpFormat = isBayer16BPS(srcFormat) ? AV_PIX_FMT_RGB48 : AV_PIX_FMT_RGB24;

            ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                                 srcW, srcH, tmpFormat, 64);
            if (ret < 0)
                return ret;

            c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                    srcW, srcH, tmpFormat,
                                                    flags, srcFilter, NULL, c->param);
            if (!c->cascaded_context[0])
                return AVERROR(ENOMEM);

            c->cascaded_context[1] = sws_getContext(srcW, srcH, tmpFormat,
                                                    dstW, dstH, dstFormat,
                                                    flags, NULL, dstFilter, c->param);
            if (!c->cascaded_context[1])
                return AVERROR(ENOMEM);
            return 0;
        }
    }

    if (unscaled && c->srcBpc == 8 && dstFormat == AV_PIX_FMT_GRAYF32) {
        for (i = 0; i < 256; ++i) {
            c->uint2float_lut[i] = (float)i * float_mult;
        }
    }

    // float will be converted to uint16_t
    if ((srcFormat == AV_PIX_FMT_GRAYF32BE || srcFormat == AV_PIX_FMT_GRAYF32LE) &&
        (!unscaled || unscaled && dstFormat != srcFormat &&
         (srcFormat != AV_PIX_FMT_GRAYF32 || dstFormat != AV_PIX_FMT_GRAY8))) {
        c->srcBpc = 16;
    }

    if (CONFIG_SWSCALE_ALPHA && isALPHA(srcFormat) && !isALPHA(dstFormat)) {
        enum AVPixelFormat tmpFormat = alphaless_fmt(srcFormat);

        if (tmpFormat != AV_PIX_FMT_NONE && c->alphablend != SWS_ALPHA_BLEND_NONE) {
            if (!unscaled ||
                dstFormat != tmpFormat ||
                usesHFilter || usesVFilter ||
                c->srcRange != c->dstRange) {
                c->cascaded_mainindex = 1;
                ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                                     srcW, srcH, tmpFormat, 64);
                if (ret < 0)
                    return ret;

                c->cascaded_context[0] = sws_alloc_set_opts(srcW, srcH, srcFormat,
                                                            srcW, srcH, tmpFormat,
                                                            flags, c->param);
                if (!c->cascaded_context[0])
                    return AVERROR(EINVAL);
                c->cascaded_context[0]->alphablend = c->alphablend;
                ret = sws_init_context(c->cascaded_context[0], NULL, NULL);
                if (ret < 0)
                    return ret;

                c->cascaded_context[1] = sws_alloc_set_opts(srcW, srcH, tmpFormat,
                                                            dstW, dstH, dstFormat,
                                                            flags, c->param);
                if (!c->cascaded_context[1])
                    return AVERROR(EINVAL);
                c->cascaded_context[1]->srcRange = c->srcRange;
                c->cascaded_context[1]->dstRange = c->dstRange;
                ret = sws_init_context(c->cascaded_context[1], srcFilter, dstFilter);
                if (ret < 0)
                    return ret;
                return 0;
            }
        }
    }

#if HAVE_MMAP && HAVE_MPROTECT && defined(MAP_ANONYMOUS)
#define USE_MMAP 1
#else
#define USE_MMAP 0
#endif

    /* precalculate horizontal scaler filter coefficients */
    {
#if HAVE_MMXEXT_INLINE
        // can't downscale !!!
        if (c->canMMXEXTBeUsed && (flags & SWS_FAST_BILINEAR)) {
            c->lumMmxextFilterCodeSize = ff_init_hscaler_mmxext(dstW, c->lumXInc, NULL,
                                                                NULL, NULL, 8);
            c->chrMmxextFilterCodeSize = ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc,
                                                                NULL, NULL, NULL, 4);

#if USE_MMAP
            c->lumMmxextFilterCode = mmap(NULL, c->lumMmxextFilterCodeSize,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS,
                                          -1, 0);
            c->chrMmxextFilterCode = mmap(NULL, c->chrMmxextFilterCodeSize,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS,
                                          -1, 0);
#elif HAVE_VIRTUALALLOC
            c->lumMmxextFilterCode = VirtualAlloc(NULL,
                                                  c->lumMmxextFilterCodeSize,
                                                  MEM_COMMIT,
                                                  PAGE_EXECUTE_READWRITE);
            c->chrMmxextFilterCode = VirtualAlloc(NULL,
                                                  c->chrMmxextFilterCodeSize,
                                                  MEM_COMMIT,
                                                  PAGE_EXECUTE_READWRITE);
#else
            c->lumMmxextFilterCode = av_malloc(c->lumMmxextFilterCodeSize);
            c->chrMmxextFilterCode = av_malloc(c->chrMmxextFilterCodeSize);
#endif

#ifdef MAP_ANONYMOUS
            if (c->lumMmxextFilterCode == MAP_FAILED || c->chrMmxextFilterCode == MAP_FAILED)
#else
            if (!c->lumMmxextFilterCode || !c->chrMmxextFilterCode)
#endif
            {
                av_log(c, AV_LOG_ERROR, "Failed to allocate MMX2FilterCode\n");
                return AVERROR(ENOMEM);
            }

            if (!FF_ALLOCZ_TYPED_ARRAY(c->hLumFilter,    dstW           / 8 + 8) ||
                !FF_ALLOCZ_TYPED_ARRAY(c->hChrFilter,    c->chrDstW     / 4 + 8) ||
                !FF_ALLOCZ_TYPED_ARRAY(c->hLumFilterPos, dstW       / 2 / 8 + 8) ||
                !FF_ALLOCZ_TYPED_ARRAY(c->hChrFilterPos, c->chrDstW / 2 / 4 + 8))
                goto nomem;

            ff_init_hscaler_mmxext(      dstW, c->lumXInc, c->lumMmxextFilterCode,
                                   c->hLumFilter, (uint32_t *)c->hLumFilterPos, 8);
            ff_init_hscaler_mmxext(c->chrDstW, c->chrXInc, c->chrMmxextFilterCode,
                                   c->hChrFilter, (uint32_t *)c->hChrFilterPos, 4);

#if USE_MMAP
            if (   mprotect(c->lumMmxextFilterCode, c->lumMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1
                || mprotect(c->chrMmxextFilterCode, c->chrMmxextFilterCodeSize, PROT_EXEC | PROT_READ) == -1) {
                av_log(c, AV_LOG_ERROR, "mprotect failed, cannot use fast bilinear scaler\n");
                ret = AVERROR(EINVAL);
                goto fail;
            }
#endif
        } else
#endif /* HAVE_MMXEXT_INLINE */
        {
            const int filterAlign = X86_MMX(cpu_flags)     ? 4 :
                                    PPC_ALTIVEC(cpu_flags) ? 8 :
                                    have_neon(cpu_flags)   ? 4 : 1;

            if ((ret = initFilter(&c->hLumFilter, &c->hLumFilterPos,
                                  &c->hLumFilterSize, c->lumXInc,
                                  srcW, dstW, filterAlign, 1 << 14,
                                  (flags & SWS_BICUBLIN) ? (flags | SWS_BICUBIC) : flags,
                                  cpu_flags, srcFilter->lumH, dstFilter->lumH,
                                  c->param,
                                  get_local_pos(c, 0, 0, 0),
                                  get_local_pos(c, 0, 0, 0))) < 0)
                goto fail;
            if (ff_shuffle_filter_coefficients(c, c->hLumFilterPos, c->hLumFilterSize, c->hLumFilter, dstW) < 0)
                goto nomem;
            if ((ret = initFilter(&c->hChrFilter, &c->hChrFilterPos,
                                  &c->hChrFilterSize, c->chrXInc,
                                  c->chrSrcW, c->chrDstW, filterAlign, 1 << 14,
                                  (flags & SWS_BICUBLIN) ? (flags | SWS_BILINEAR) : flags,
                                  cpu_flags, srcFilter->chrH, dstFilter->chrH,
                                  c->param,
                                  get_local_pos(c, c->chrSrcHSubSample, c->src_h_chr_pos, 0),
                                  get_local_pos(c, c->chrDstHSubSample, c->dst_h_chr_pos, 0))) < 0)
                goto fail;
            if (ff_shuffle_filter_coefficients(c, c->hChrFilterPos, c->hChrFilterSize, c->hChrFilter, c->chrDstW) < 0)
                goto nomem;
        }
    } // initialize horizontal stuff

    /* precalculate vertical scaler filter coefficients */
    {
        const int filterAlign = X86_MMX(cpu_flags)     ? 2 :
                                PPC_ALTIVEC(cpu_flags) ? 8 :
                                have_neon(cpu_flags)   ? 2 : 1;

        if ((ret = initFilter(&c->vLumFilter, &c->vLumFilterPos, &c->vLumFilterSize,
                              c->lumYInc, srcH, dstH, filterAlign, (1 << 12),
                              (flags & SWS_BICUBLIN) ? (flags | SWS_BICUBIC) : flags,
                              cpu_flags, srcFilter->lumV, dstFilter->lumV,
                              c->param,
                              get_local_pos(c, 0, 0, 1),
                              get_local_pos(c, 0, 0, 1))) < 0)
            goto fail;
        if ((ret = initFilter(&c->vChrFilter, &c->vChrFilterPos, &c->vChrFilterSize,
                              c->chrYInc, c->chrSrcH, c->chrDstH,
                              filterAlign, (1 << 12),
                              (flags & SWS_BICUBLIN) ? (flags | SWS_BILINEAR) : flags,
                              cpu_flags, srcFilter->chrV, dstFilter->chrV,
                              c->param,
                              get_local_pos(c, c->chrSrcVSubSample, c->src_v_chr_pos, 1),
                              get_local_pos(c, c->chrDstVSubSample, c->dst_v_chr_pos, 1))) < 0)
            goto fail;

#if HAVE_ALTIVEC
        if (!FF_ALLOC_TYPED_ARRAY(c->vYCoeffsBank, c->vLumFilterSize * c->dstH) ||
            !FF_ALLOC_TYPED_ARRAY(c->vCCoeffsBank, c->vChrFilterSize * c->chrDstH))
            goto nomem;

        for (i = 0; i < c->vLumFilterSize * c->dstH; i++) {
            int j;
            short *p = (short *)&c->vYCoeffsBank[i];
            for (j = 0; j < 8; j++)
                p[j] = c->vLumFilter[i];
        }

        for (i = 0; i < c->vChrFilterSize * c->chrDstH; i++) {
            int j;
            short *p = (short *)&c->vCCoeffsBank[i];
            for (j = 0; j < 8; j++)
                p[j] = c->vChrFilter[i];
        }
#endif
    }

    for (i = 0; i < 4; i++)
        if (!FF_ALLOCZ_TYPED_ARRAY(c->dither_error[i], c->dstW + 2))
            goto nomem;

    c->needAlpha = (CONFIG_SWSCALE_ALPHA && isALPHA(c->srcFormat) && isALPHA(c->dstFormat)) ? 1 : 0;

    // 64 / c->scalingBpp is the same as 16 / sizeof(scaling_intermediate)
    c->uv_off   = (dst_stride >> 1) + 64 / (c->dstBpc & ~7);
    c->uv_offx2 = dst_stride + 16;

    av_assert0(c->chrDstH <= dstH);

    if (flags & SWS_PRINT_INFO) {
        const char *scaler = NULL, *cpucaps;

        for (i = 0; i < FF_ARRAY_ELEMS(scale_algorithms); i++) {
            if (flags & scale_algorithms[i].flag) {
                scaler = scale_algorithms[i].description;
                break;
            }
        }
        if (!scaler)
            scaler = "ehh flags invalid?!";
        av_log(c, AV_LOG_INFO, "%s scaler, from %s to %s%s ",
               scaler,
               av_get_pix_fmt_name(srcFormat),
#ifdef DITHER1XBPP
               dstFormat == AV_PIX_FMT_BGR555   || dstFormat == AV_PIX_FMT_BGR565   ||
               dstFormat == AV_PIX_FMT_RGB444BE || dstFormat == AV_PIX_FMT_RGB444LE ||
               dstFormat == AV_PIX_FMT_BGR444BE || dstFormat == AV_PIX_FMT_BGR444LE ?
               "dithered " : "",
#else
               "",
#endif
               av_get_pix_fmt_name(dstFormat));

        if (INLINE_MMXEXT(cpu_flags))
            cpucaps = "MMXEXT";
        else if (INLINE_AMD3DNOW(cpu_flags))
            cpucaps = "3DNOW";
        else if (INLINE_MMX(cpu_flags))
            cpucaps = "MMX";
        else if (PPC_ALTIVEC(cpu_flags))
            cpucaps = "AltiVec";
        else
            cpucaps = "C";

        av_log(c, AV_LOG_INFO, "using %s\n", cpucaps);
        av_log(c, AV_LOG_VERBOSE, "%dx%d -> %dx%d\n", srcW, srcH, dstW, dstH);
        av_log(c, AV_LOG_DEBUG,
               "lum srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
               c->srcW, c->srcH, c->dstW, c->dstH, c->lumXInc, c->lumYInc);
        av_log(c, AV_LOG_DEBUG,
               "chr srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
               c->chrSrcW, c->chrSrcH, c->chrDstW, c->chrDstH,
               c->chrXInc, c->chrYInc);
    }

    /* alpha blend special case, note this has been split via cascaded contexts if its scaled */
    if (unscaled && !usesHFilter && !usesVFilter &&
        c->alphablend != SWS_ALPHA_BLEND_NONE &&
        isALPHA(srcFormat) &&
        (c->srcRange == c->dstRange || isAnyRGB(dstFormat)) &&
        alphaless_fmt(srcFormat) == dstFormat) {
        c->convert_unscaled = ff_sws_alphablendaway;

        if (flags & SWS_PRINT_INFO)
            av_log(c, AV_LOG_INFO,
                   "using alpha blendaway %s -> %s special converter\n",
                   av_get_pix_fmt_name(srcFormat), av_get_pix_fmt_name(dstFormat));
        return 0;
    }

    /* unscaled special cases */
    if (unscaled && !usesHFilter && !usesVFilter &&
        (c->srcRange == c->dstRange || isAnyRGB(dstFormat) ||
         isFloat(srcFormat) || isFloat(dstFormat))) {
        ff_get_unscaled_swscale(c);

        if (c->convert_unscaled) {
            if (flags & SWS_PRINT_INFO)
                av_log(c, AV_LOG_INFO,
                       "using unscaled %s -> %s special converter\n",
                       av_get_pix_fmt_name(srcFormat), av_get_pix_fmt_name(dstFormat));
            return 0;
        }
    }

    ff_sws_init_scale(c);

    return ff_init_filters(c);
nomem:
    ret = AVERROR(ENOMEM);
fail: // FIXME replace things by appropriate error codes
    if (ret == RETCODE_USE_CASCADE) {
        int tmpW = sqrt(srcW * (int64_t)dstW);
        int tmpH = sqrt(srcH * (int64_t)dstH);
        enum AVPixelFormat tmpFormat = AV_PIX_FMT_YUV420P;

        if (isALPHA(srcFormat))
            tmpFormat = AV_PIX_FMT_YUVA420P;

        if (srcW*(int64_t)srcH <= 4LL*dstW*dstH)
            return AVERROR(EINVAL);

        ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                             tmpW, tmpH, tmpFormat, 64);
        if (ret < 0)
            return ret;

        c->cascaded_context[0] = sws_getContext(srcW, srcH, srcFormat,
                                                tmpW, tmpH, tmpFormat,
                                                flags, srcFilter, NULL, c->param);
        if (!c->cascaded_context[0])
            return AVERROR(ENOMEM);

        c->cascaded_context[1] = sws_getContext(tmpW, tmpH, tmpFormat,
                                                dstW, dstH, dstFormat,
                                                flags, NULL, dstFilter, c->param);
        if (!c->cascaded_context[1])
            return AVERROR(ENOMEM);
        return 0;
    }
    return ret;
}
  • Besides assigning the various fields of the SwsContext, sws_init_context() performs the following steps in order:
  • 1. Initializes the RGB-to-RGB (and YUV-to-YUV) conversion functions via ff_sws_rgb2rgb_init() (note: this does not cover conversion between RGB and YUV). The older sws_rgb2rgb_init() has been replaced by ff_sws_rgb2rgb_init(), but the internal implementation is the same.
  • 2. Compares the input and output dimensions to decide whether scaling is needed. If the dimensions match, i.e. no scaling is required, the unscaled variable is set to 1.
  • 3. Initializes the color space via sws_setColorspaceDetails().
  • 4. Validates some of the input parameters. For example, if no scaling algorithm was selected, SWS_BICUBIC is used by default; if any input or output dimension is less than 1, an error is returned.
  • 5. Initializes the filters. Depending on the chosen scaling algorithm, different filters are set up.
  • 6. If the SWS_PRINT_INFO flag is set, prints information about the conversion.
  • 7. If no scaling is needed, calls ff_get_unscaled_swscale() to assign a specialized pixel-conversion function to the context (the convert_unscaled pointer in current FFmpeg).
  • 8. If scaling is needed, installs the generic scaling path. In older FFmpeg versions this was done by ff_getSwsFunc(), which assigned the generic swscale() function to the context's swscale pointer; that function no longer exists, and current code calls ff_sws_init_scale() instead.
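Two quantities computed early in sws_init_context() are worth unpacking: the 16.16 fixed-point scaling increments (lumXInc, chrXInc, etc.) and the chroma plane dimensions obtained with AV_CEIL_RSHIFT. A self-contained sketch of both computations (the helper names are mine, not FFmpeg's):

```c
#include <assert.h>
#include <stdint.h>

/* 16.16 fixed-point step between source samples per destination sample,
 * mirroring c->lumXInc = (((int64_t)srcW << 16) + (dstW >> 1)) / dstW.
 * The (dstW >> 1) term rounds the division to nearest. */
static int64_t fixed_inc(int srcDim, int dstDim)
{
    return (((int64_t)srcDim << 16) + (dstDim >> 1)) / dstDim;
}

/* Right shift that rounds toward +infinity, same idea as FFmpeg's
 * AV_CEIL_RSHIFT; used so odd dimensions keep their extra chroma sample. */
static int ceil_rshift(int a, int shift)
{
    return -((-a) >> shift);
}
```

For a 1920-wide source scaled to 1280, fixed_inc() yields 98304, i.e. 1.5 in 16.16 fixed point: each destination pixel advances 1.5 source pixels.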

The implementation of each of these steps is described below.

1. Initializing the RGB-to-RGB (and YUV-to-YUV) conversion functions. Note that these do not include conversion between RGB and YUV.

ff_sws_rgb2rgb_init()

  • The definition of ff_sws_rgb2rgb_init() is in libswscale\rgb2rgb.c, shown below.
/*
 * RGB15->RGB16 original by Strepto/Astral
 * ported to gcc & bugfixed : A'rpi
 * MMXEXT, 3DNOW optimization by Nick Kurshev
 * 32-bit C version, and and&add trick by Michael Niedermayer
 */
av_cold void ff_sws_rgb2rgb_init(void)
{
    rgb2rgb_init_c();
    if (ARCH_AARCH64)
        rgb2rgb_init_aarch64();
    if (ARCH_X86)
        rgb2rgb_init_x86();
}
  • The code of ff_sws_rgb2rgb_init() shows the per-architecture initialization functions:
  • rgb2rgb_init_c() installs the C implementations of the RGB-to-RGB (and YUV-to-YUV) conversions,
  • while rgb2rgb_init_x86() installs the x86 assembly implementations (and rgb2rgb_init_aarch64() the AArch64 ones).
  • PS: a naming convention in libswscale is worth noting: many function names carry a suffix such as "_c", indicating the function is written in C. Other suffixes, such as "_mmx" or "_sse2", mark the corresponding assembly variants.

rgb2rgb_init_c()

  • Let us first look at rgb2rgb_init_c(), which installs the C versions of the RGB conversion functions.
  • It is defined in libswscale\rgb2rgb_template.c, shown below.
static av_cold void rgb2rgb_init_c(void)
{
    rgb15to16          = rgb15to16_c;
    rgb15tobgr24       = rgb15tobgr24_c;
    rgb15to32          = rgb15to32_c;
    rgb16tobgr24       = rgb16tobgr24_c;
    rgb16to32          = rgb16to32_c;
    rgb16to15          = rgb16to15_c;
    rgb24tobgr16       = rgb24tobgr16_c;
    rgb24tobgr15       = rgb24tobgr15_c;
    rgb24tobgr32       = rgb24tobgr32_c;
    rgb32to16          = rgb32to16_c;
    rgb32to15          = rgb32to15_c;
    rgb32tobgr24       = rgb32tobgr24_c;
    rgb24to15          = rgb24to15_c;
    rgb24to16          = rgb24to16_c;
    rgb24tobgr24       = rgb24tobgr24_c;
#if HAVE_BIGENDIAN
    shuffle_bytes_0321 = shuffle_bytes_2103_c;
    shuffle_bytes_2103 = shuffle_bytes_0321_c;
#else
    shuffle_bytes_0321 = shuffle_bytes_0321_c;
    shuffle_bytes_2103 = shuffle_bytes_2103_c;
#endif
    shuffle_bytes_1230 = shuffle_bytes_1230_c;
    shuffle_bytes_3012 = shuffle_bytes_3012_c;
    shuffle_bytes_3210 = shuffle_bytes_3210_c;
    rgb32tobgr16       = rgb32tobgr16_c;
    rgb32tobgr15       = rgb32tobgr15_c;
    yv12toyuy2         = yv12toyuy2_c;
    yv12touyvy         = yv12touyvy_c;
    yuv422ptoyuy2      = yuv422ptoyuy2_c;
    yuv422ptouyvy      = yuv422ptouyvy_c;
    yuy2toyv12         = yuy2toyv12_c;
    planar2x           = planar2x_c;
    ff_rgb24toyv12     = ff_rgb24toyv12_c;
    interleaveBytes    = interleaveBytes_c;
    deinterleaveBytes  = deinterleaveBytes_c;
    vu9_to_vu12        = vu9_to_vu12_c;
    yvu9_to_yuy2       = yvu9_to_yuy2_c;
    uyvytoyuv420       = uyvytoyuv420_c;
    uyvytoyuv422       = uyvytoyuv422_c;
    yuyvtoyuv420       = yuyvtoyuv420_c;
    yuyvtoyuv422       = yuyvtoyuv422_c;
}
  • As the code shows, running rgb2rgb_init_c() assigns the C implementations of the format-conversion functions to the library's global function pointers.
  • A few of these conversion functions are examined below.

rgb24tobgr24_c

  • rgb24tobgr24_c() converts RGB24 to BGR24. Its definition is shown below. The code swaps the positions of the "R" and "B" bytes, which is all that the conversion between these two formats requires.
static inline void rgb24tobgr24_c(const uint8_t *src, uint8_t *dst, int src_size)
{
    unsigned i;

    for (i = 0; i < src_size; i += 3) {
        register uint8_t x = src[i + 2];
        dst[i + 1]         = src[i + 1];
        dst[i + 2]         = src[i + 0];
        dst[i + 0]         = x;
    }
}
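A tiny driver for the byte swap; the loop body is restated here (in simplified form, without the `register` temporary) so the snippet is self-contained:

```c
#include <assert.h>
#include <stdint.h>

/* Same R/B swap as rgb24tobgr24_c(): every 3-byte pixel has its
 * first and third bytes exchanged, the middle (G) byte is copied. */
static void rgb24tobgr24(const uint8_t *src, uint8_t *dst, int src_size)
{
    for (int i = 0; i < src_size; i += 3) {
        dst[i + 0] = src[i + 2];
        dst[i + 1] = src[i + 1];
        dst[i + 2] = src[i + 0];
    }
}
```

Feeding it a red pixel (255, 0, 0) in RGB24 yields (0, 0, 255) in BGR24, i.e. the same red, now stored blue-first.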

rgb24to16_c()

  • rgb24to16_c() converts RGB24 to RGB16 (RGB565).
  • Its definition is shown below.
static inline void rgb24to16_c(const uint8_t *src, uint8_t *dst, int src_size)
{
    uint16_t *d        = (uint16_t *)dst;
    const uint8_t *s   = src;
    const uint8_t *end = s + src_size;

    while (s < end) {
        const int r = *s++;
        const int g = *s++;
        const int b = *s++;
        *d++        = (b >> 3) | ((g & 0xFC) << 3) | ((r & 0xF8) << 8);
    }
}
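The packing expression can be checked in isolation. pack_rgb565() below is a hypothetical single-pixel helper that mirrors the expression in rgb24to16_c(): blue keeps its top 5 bits in positions 0-4, green its top 6 bits in positions 5-10, and red its top 5 bits in positions 11-15:

```c
#include <assert.h>
#include <stdint.h>

/* Pack one 8-bit-per-channel pixel into RGB565:
 * rrrrrggg gggbbbbb (little-endian uint16_t). */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)((b >> 3) | ((g & 0xFC) << 3) | ((r & 0xF8) << 8));
}
```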

yuyvtoyuv422_c()

  • yuyvtoyuv422_c() converts packed YUYV to planar YUV422. Its definition is shown below.
static void yuyvtoyuv422_c(uint8_t *ydst, uint8_t *udst, uint8_t *vdst,
                           const uint8_t *src, int width, int height,
                           int lumStride, int chromStride, int srcStride)
{
    int y;
    const int chromWidth = AV_CEIL_RSHIFT(width, 1);

    for (y = 0; y < height; y++) {
        extract_even_c(src, ydst, width);
        extract_odd2_c(src, udst, vdst, chromWidth);

        src  += srcStride;
        ydst += lumStride;
        udst += chromStride;
        vdst += chromStride;
    }
}
  • The function separates the packed YUYV data into planar Y, U, and V data.
  • extract_even_c() collects the even-indexed bytes of a row, i.e. the "Y" samples of the YUYV layout.
  • extract_odd2_c() collects the odd-indexed bytes and then splits them, again by parity, into two arrays,
  • which yields the "U" and "V" samples of the YUYV layout.
  • The definition of extract_even_c() is shown below.
static void extract_even_c(const uint8_t *src, uint8_t *dst, int count)
{
    dst   +=  count;
    src   +=  count * 2;
    count  = -count;
    while (count < 0) {
        dst[count] = src[2 * count];
        count++;
    }
}
  • The definition of extract_odd2_c() is shown below.
static void extract_odd2_c(const uint8_t *src, uint8_t *dst0, uint8_t *dst1,
                           int count)
{
    dst0  +=  count;
    dst1  +=  count;
    src   +=  count * 4;
    count  = -count;
    src++;
    while (count < 0) {
        dst0[count] = src[4 * count + 0];
        dst1[count] = src[4 * count + 2];
        count++;
    }
}
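The net effect of the two extractors on one YUYV row can be reproduced with plain forward indexing, which is easier to follow than the negative-index loops above (yuyv_split_row() is an illustrative helper, not FFmpeg code):

```c
#include <assert.h>
#include <stdint.h>

/* Split one row of packed YUYV (Y0 U0 Y1 V0 | Y2 U1 Y3 V1 | ...)
 * into planar Y, U, V: Y lives at even byte positions, while U and V
 * alternate at the odd ones. width is the number of luma samples. */
static void yuyv_split_row(const uint8_t *src, uint8_t *y,
                           uint8_t *u, uint8_t *v, int width)
{
    for (int i = 0; i < width; i++)
        y[i] = src[2 * i];          /* even bytes: Y, as in extract_even_c() */
    for (int i = 0; i < width / 2; i++) {
        u[i] = src[4 * i + 1];      /* odd bytes of even pairs: U */
        v[i] = src[4 * i + 3];      /* odd bytes of odd pairs:  V */
    }
}
```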

rgb2rgb_init_x86

  • rgb2rgb_init_x86() installs the x86 assembly versions of the RGB conversion functions. A detailed analysis of the assembly is beyond the scope of this article; the code is listed here for comparison with rgb2rgb_init_c().
  • It is located in libswscale\x86\rgb2rgb.c, shown below.
  • PS: all assembly-related code lives in the x86 subdirectory of libswscale.
av_cold void rgb2rgb_init_x86(void)
{
    int cpu_flags = av_get_cpu_flags();

#if HAVE_INLINE_ASM
    if (INLINE_MMX(cpu_flags))
        rgb2rgb_init_mmx();
    if (INLINE_AMD3DNOW(cpu_flags))
        rgb2rgb_init_3dnow();
    if (INLINE_MMXEXT(cpu_flags))
        rgb2rgb_init_mmxext();
    if (INLINE_SSE2(cpu_flags))
        rgb2rgb_init_sse2();
    if (INLINE_AVX(cpu_flags))
        rgb2rgb_init_avx();
#endif /* HAVE_INLINE_ASM */

    if (EXTERNAL_MMXEXT(cpu_flags)) {
        shuffle_bytes_2103 = ff_shuffle_bytes_2103_mmxext;
    }
    if (EXTERNAL_SSE2(cpu_flags)) {
#if ARCH_X86_64
        uyvytoyuv422 = ff_uyvytoyuv422_sse2;
#endif
    }
    if (EXTERNAL_SSSE3(cpu_flags)) {
        shuffle_bytes_0321 = ff_shuffle_bytes_0321_ssse3;
        shuffle_bytes_2103 = ff_shuffle_bytes_2103_ssse3;
        shuffle_bytes_1230 = ff_shuffle_bytes_1230_ssse3;
        shuffle_bytes_3012 = ff_shuffle_bytes_3012_ssse3;
        shuffle_bytes_3210 = ff_shuffle_bytes_3210_ssse3;
    }
#if ARCH_X86_64
    if (EXTERNAL_AVX2_FAST(cpu_flags)) {
        shuffle_bytes_0321 = ff_shuffle_bytes_0321_avx2;
        shuffle_bytes_2103 = ff_shuffle_bytes_2103_avx2;
        shuffle_bytes_1230 = ff_shuffle_bytes_1230_avx2;
        shuffle_bytes_3012 = ff_shuffle_bytes_3012_avx2;
        shuffle_bytes_3210 = ff_shuffle_bytes_3210_avx2;
    }
    if (EXTERNAL_AVX(cpu_flags)) {
        uyvytoyuv422 = ff_uyvytoyuv422_avx;
    }
#endif
}
  • As the code shows, rgb2rgb_init_x86() first calls av_get_cpu_flags() to query the CPU's capabilities, and then, depending on those capabilities, calls rgb2rgb_init_mmx(), rgb2rgb_init_3dnow(), rgb2rgb_init_mmxext(), rgb2rgb_init_sse2(), rgb2rgb_init_avx(), and so on.

2. Checking whether the image needs to be scaled

  • This step simply compares the input and output image dimensions.
  • The code records whether scaling is needed in an unscaled variable, as shown below.
  • unscaled = (srcW == dstW && srcH == dstH);

3. Initializing the colorspace

  • Colorspace initialization is done by the function sws_setColorspaceDetails().
  • sws_setColorspaceDetails() is a public FFmpeg API function; its declaration is shown below:
/**
 * @param dstRange flag indicating the while-black range of the output (1=jpeg / 0=mpeg)
 * @param srcRange flag indicating the while-black range of the input (1=jpeg / 0=mpeg)
 * @param table the yuv2rgb coefficients describing the output yuv space, normally ff_yuv2rgb_coeffs[x]
 * @param inv_table the yuv2rgb coefficients describing the input yuv space, normally ff_yuv2rgb_coeffs[x]
 * @param brightness 16.16 fixed point brightness correction
 * @param contrast 16.16 fixed point contrast correction
 * @param saturation 16.16 fixed point saturation correction
#if LIBSWSCALE_VERSION_MAJOR > 6
 * @return negative error code on error, non negative otherwise
#else
 * @return -1 if not supported
#endif
 */
int sws_setColorspaceDetails(struct SwsContext *c, const int inv_table[4],
                             int srcRange, const int table[4], int dstRange,
                             int brightness, int contrast, int saturation);
  • A brief explanation of the parameters:
    • c: the SwsContext to configure.
    • inv_table: coefficient table describing the input YUV colorspace.
    • srcRange: sample range of the input ("1" means the JPEG standard, full range 0-255; "0" means the MPEG standard, limited range 16-235).
    • table: coefficient table describing the output YUV colorspace.
    • dstRange: sample range of the output.
    • brightness: 16.16 fixed-point brightness correction.
    • contrast: 16.16 fixed-point contrast correction.
    • saturation: 16.16 fixed-point saturation correction.
  • A return value of -1 means the settings are not supported (newer versions return a negative error code).
  • The colorspace coefficient tables can be obtained with sws_getCoefficients().
  • That function is covered in more detail later in this article.
  • sws_setColorspaceDetails() is defined in libswscale\utils.c, as shown below.
int sws_setColorspaceDetails(struct SwsContext *c, const int inv_table[4],
                             int srcRange, const int table[4], int dstRange,
                             int brightness, int contrast, int saturation)
{
    const AVPixFmtDescriptor *desc_dst;
    const AVPixFmtDescriptor *desc_src;
    int need_reinit = 0;

    if (c->nb_slice_ctx) {
        int parent_ret = 0;
        for (int i = 0; i < c->nb_slice_ctx; i++) {
            int ret = sws_setColorspaceDetails(c->slice_ctx[i], inv_table,
                                               srcRange, table, dstRange,
                                               brightness, contrast, saturation);
            if (ret < 0)
                parent_ret = ret;
        }
        return parent_ret;
    }

    handle_formats(c);
    desc_dst = av_pix_fmt_desc_get(c->dstFormat);
    desc_src = av_pix_fmt_desc_get(c->srcFormat);

    if(range_override_needed(c->dstFormat))
        dstRange = 0;
    if(range_override_needed(c->srcFormat))
        srcRange = 0;

    if (c->srcRange != srcRange ||
        c->dstRange != dstRange ||
        c->brightness != brightness ||
        c->contrast   != contrast ||
        c->saturation != saturation ||
        memcmp(c->srcColorspaceTable, inv_table, sizeof(int) * 4) ||
        memcmp(c->dstColorspaceTable,     table, sizeof(int) * 4))
        need_reinit = 1;

    memmove(c->srcColorspaceTable, inv_table, sizeof(int) * 4);
    memmove(c->dstColorspaceTable,     table, sizeof(int) * 4);

    c->brightness = brightness;
    c->contrast   = contrast;
    c->saturation = saturation;
    c->srcRange   = srcRange;
    c->dstRange   = dstRange;

    //The srcBpc check is possibly wrong but we seem to lack a definitive reference to test this
    //and what we have in ticket 2939 looks better with this check
    if (need_reinit && (c->srcBpc == 8 || !isYUV(c->srcFormat)))
        ff_sws_init_range_convert(c);

    c->dstFormatBpp = av_get_bits_per_pixel(desc_dst);
    c->srcFormatBpp = av_get_bits_per_pixel(desc_src);

    if (c->cascaded_context[c->cascaded_mainindex])
        return sws_setColorspaceDetails(c->cascaded_context[c->cascaded_mainindex],
                                        inv_table, srcRange, table, dstRange,
                                        brightness, contrast, saturation);

    if (!need_reinit)
        return 0;

    if ((isYUV(c->dstFormat) || isGray(c->dstFormat)) && (isYUV(c->srcFormat) || isGray(c->srcFormat))) {
        if (!c->cascaded_context[0] &&
            memcmp(c->dstColorspaceTable, c->srcColorspaceTable, sizeof(int) * 4) &&
            c->srcW && c->srcH && c->dstW && c->dstH) {
            enum AVPixelFormat tmp_format;
            int tmp_width, tmp_height;
            int srcW = c->srcW;
            int srcH = c->srcH;
            int dstW = c->dstW;
            int dstH = c->dstH;
            int ret;
            av_log(c, AV_LOG_VERBOSE, "YUV color matrix differs for YUV->YUV, using intermediate RGB to convert\n");

            if (isNBPS(c->dstFormat) || is16BPS(c->dstFormat)) {
                if (isALPHA(c->srcFormat) && isALPHA(c->dstFormat)) {
                    tmp_format = AV_PIX_FMT_BGRA64;
                } else {
                    tmp_format = AV_PIX_FMT_BGR48;
                }
            } else {
                if (isALPHA(c->srcFormat) && isALPHA(c->dstFormat)) {
                    tmp_format = AV_PIX_FMT_BGRA;
                } else {
                    tmp_format = AV_PIX_FMT_BGR24;
                }
            }

            if (srcW*srcH > dstW*dstH) {
                tmp_width  = dstW;
                tmp_height = dstH;
            } else {
                tmp_width  = srcW;
                tmp_height = srcH;
            }

            ret = av_image_alloc(c->cascaded_tmp, c->cascaded_tmpStride,
                                 tmp_width, tmp_height, tmp_format, 64);
            if (ret < 0)
                return ret;

            c->cascaded_context[0] = sws_alloc_set_opts(srcW, srcH, c->srcFormat,
                                                        tmp_width, tmp_height, tmp_format,
                                                        c->flags, c->param);
            if (!c->cascaded_context[0])
                return -1;

            c->cascaded_context[0]->alphablend = c->alphablend;
            ret = sws_init_context(c->cascaded_context[0], NULL , NULL);
            if (ret < 0)
                return ret;
            //we set both src and dst depending on that the RGB side will be ignored
            sws_setColorspaceDetails(c->cascaded_context[0], inv_table,
                                     srcRange, table, dstRange,
                                     brightness, contrast, saturation);

            c->cascaded_context[1] = sws_alloc_set_opts(tmp_width, tmp_height, tmp_format,
                                                        dstW, dstH, c->dstFormat,
                                                        c->flags, c->param);
            if (!c->cascaded_context[1])
                return -1;
            c->cascaded_context[1]->srcRange = srcRange;
            c->cascaded_context[1]->dstRange = dstRange;
            ret = sws_init_context(c->cascaded_context[1], NULL , NULL);
            if (ret < 0)
                return ret;
            sws_setColorspaceDetails(c->cascaded_context[1], inv_table,
                                     srcRange, table, dstRange,
                                     0, 1 << 16, 1 << 16);
            return 0;
        }
        //We do not support this combination currently, we need to cascade more contexts to compensate
        if (c->cascaded_context[0] && memcmp(c->dstColorspaceTable, c->srcColorspaceTable, sizeof(int) * 4))
            return -1; //AVERROR_PATCHWELCOME;
        return 0;
    }

    if (!isYUV(c->dstFormat) && !isGray(c->dstFormat)) {
        ff_yuv2rgb_c_init_tables(c, inv_table, srcRange, brightness,
                                 contrast, saturation);
        // FIXME factorize
        if (ARCH_PPC)
            ff_yuv2rgb_init_tables_ppc(c, inv_table, brightness,
                                       contrast, saturation);
    }

    fill_rgb2yuv_table(c, table, dstRange);

    return 0;
}
  • As the definition shows, the function stores the input parameters into the corresponding fields of the context, and at the end calls fill_rgb2yuv_table().
  • fill_rgb2yuv_table() has not been analyzed yet, so it is not covered here.

sws_getCoefficients()

  • sws_getCoefficients() returns a coefficient table describing a colorspace.
  • Its declaration is shown below.
/**
 * Return a pointer to yuv<->rgb coefficients for the given colorspace
 * suitable for sws_setColorspaceDetails().
 *
 * @param colorspace One of the SWS_CS_* macros. If invalid,
 * SWS_CS_DEFAULT is used.
 */
const int *sws_getCoefficients(int colorspace);
  • colorspace can take any of the following values.
  • The default value SWS_CS_DEFAULT is equivalent to SWS_CS_ITU601 and SWS_CS_SMPTE170M.
#define SWS_CS_ITU709         1
#define SWS_CS_FCC            4
#define SWS_CS_ITU601         5
#define SWS_CS_ITU624         5
#define SWS_CS_SMPTE170M      5
#define SWS_CS_SMPTE240M      7
#define SWS_CS_DEFAULT        5
#define SWS_CS_BT2020         9
  • Now let us look at the definition of sws_getCoefficients(), located in libswscale\yuv2rgb.c, as shown below.
const int *sws_getCoefficients(int colorspace)
{
    if (colorspace > 10 || colorspace < 0 || colorspace == 8)
        colorspace = SWS_CS_DEFAULT;
    return ff_yuv2rgb_coeffs[colorspace];
}
  • It simply returns one row of an array named ff_yuv2rgb_coeffs, which is defined as follows.
/* Color space conversion coefficients for YCbCr -> RGB mapping.
 *
 * Entries are {crv, cbu, cgu, cgv}
 *
 *   crv = (255 / 224) * 65536 * (1 - cr) / 0.5
 *   cbu = (255 / 224) * 65536 * (1 - cb) / 0.5
 *   cgu = (255 / 224) * 65536 * (cb / cg) * (1 - cb) / 0.5
 *   cgv = (255 / 224) * 65536 * (cr / cg) * (1 - cr) / 0.5
 *
 * where Y = cr * R + cg * G + cb * B and cr + cg + cb = 1.
 */
const int32_t ff_yuv2rgb_coeffs[11][4] = {
    { 117489, 138438, 13975, 34925 }, /* no sequence_display_extension */
    { 117489, 138438, 13975, 34925 }, /* ITU-R Rec. 709 (1990) */
    { 104597, 132201, 25675, 53279 }, /* unspecified */
    { 104597, 132201, 25675, 53279 }, /* reserved */
    { 104448, 132798, 24759, 53109 }, /* FCC */
    { 104597, 132201, 25675, 53279 }, /* ITU-R Rec. 624-4 System B, G */
    { 104597, 132201, 25675, 53279 }, /* SMPTE 170M */
    { 117579, 136230, 16907, 35559 }, /* SMPTE 240M (1987) */
    {      0                       }, /* YCgCo */
    { 110013, 140363, 12277, 42626 }, /* Bt-2020-NCL */
    { 110013, 140363, 12277, 42626 }, /* Bt-2020-CL */
};

4. Validation of some input parameters

  • For example: if no scaling algorithm has been set, SWS_BICUBIC is selected by default;
  • and if the input or output width or height is less than 1, an error is returned.
  • There is quite a lot of code of this kind; a short excerpt follows.
    i = flags & (SWS_POINT         |
                 SWS_AREA          |
                 SWS_BILINEAR      |
                 SWS_FAST_BILINEAR |
                 SWS_BICUBIC       |
                 SWS_X             |
                 SWS_GAUSS         |
                 SWS_LANCZOS       |
                 SWS_SINC          |
                 SWS_SPLINE        |
                 SWS_BICUBLIN);

    /* provide a default scaler if not set by caller */
    if (!i) {
        if (dstW < srcW && dstH < srcH)
            flags |= SWS_BICUBIC;
        else if (dstW > srcW && dstH > srcH)
            flags |= SWS_BICUBIC;
        else
            flags |= SWS_BICUBIC;
        c->flags = flags;
    } else if (i & (i - 1)) {
        av_log(c, AV_LOG_ERROR,
               "Exactly one scaler algorithm must be chosen, got %X\n", i);
        return AVERROR(EINVAL);
    }

    /* sanity check */
    if (srcW < 1 || srcH < 1 || dstW < 1 || dstH < 1) {
        /* FIXME check if these are enough and try to lower them after
         * fixing the relevant parts of the code */
        av_log(c, AV_LOG_ERROR, "%dx%d -> %dx%d is invalid scaling dimension\n",
               srcW, srcH, dstW, dstH);
        return AVERROR(EINVAL);
    }

5. Initializing the filter. Depending on the scaling algorithm, a different filter is initialized.

  • This work is done in the function initFilter(), which is not analyzed in detail here.

6. If the "print info" option SWS_PRINT_INFO is set in flags, print information

  • When a SwsContext is initialized, the SWS_PRINT_INFO flag can be set in flags.
  • In that case, some configuration information is printed once initialization completes.
  • The printing-related code is shown below.
if (flags & SWS_PRINT_INFO) {
    const char *scaler = NULL, *cpucaps;

    for (i = 0; i < FF_ARRAY_ELEMS(scale_algorithms); i++) {
        if (flags & scale_algorithms[i].flag) {
            scaler = scale_algorithms[i].description;
            break;
        }
    }
    if (!scaler)
        scaler = "ehh flags invalid?!";
    av_log(c, AV_LOG_INFO, "%s scaler, from %s to %s%s ",
           scaler,
           av_get_pix_fmt_name(srcFormat),
#ifdef DITHER1XBPP
           dstFormat == AV_PIX_FMT_BGR555   || dstFormat == AV_PIX_FMT_BGR565   ||
           dstFormat == AV_PIX_FMT_RGB444BE || dstFormat == AV_PIX_FMT_RGB444LE ||
           dstFormat == AV_PIX_FMT_BGR444BE || dstFormat == AV_PIX_FMT_BGR444LE ?
           "dithered " : "",
#else
           "",
#endif
           av_get_pix_fmt_name(dstFormat));

    if (INLINE_MMXEXT(cpu_flags))
        cpucaps = "MMXEXT";
    else if (INLINE_AMD3DNOW(cpu_flags))
        cpucaps = "3DNOW";
    else if (INLINE_MMX(cpu_flags))
        cpucaps = "MMX";
    else if (PPC_ALTIVEC(cpu_flags))
        cpucaps = "AltiVec";
    else
        cpucaps = "C";

    av_log(c, AV_LOG_INFO, "using %s\n", cpucaps);

    av_log(c, AV_LOG_VERBOSE, "%dx%d -> %dx%d\n", srcW, srcH, dstW, dstH);
    av_log(c, AV_LOG_DEBUG,
           "lum srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
           c->srcW, c->srcH, c->dstW, c->dstH, c->lumXInc, c->lumYInc);
    av_log(c, AV_LOG_DEBUG,
           "chr srcW=%d srcH=%d dstW=%d dstH=%d xInc=%d yInc=%d\n",
           c->chrSrcW, c->chrSrcH, c->chrDstW, c->chrDstH,
           c->chrXInc, c->chrYInc);
}

7. If no scaling is needed, ff_get_unscaled_swscale() is called, which stores a pointer to a format-specific conversion function in the SwsContext (the convert_unscaled field in current versions)

ff_get_unscaled_swscale()

  • The definition of ff_get_unscaled_swscale() is shown below.
  • Based on the input and output pixel formats, it selects one of many specialized conversion functions.
void ff_get_unscaled_swscale(SwsContext *c)
{
    const enum AVPixelFormat srcFormat = c->srcFormat;
    const enum AVPixelFormat dstFormat = c->dstFormat;
    const int flags = c->flags;
    const int dstH = c->dstH;
    const int dstW = c->dstW;
    int needsDither;

    needsDither = isAnyRGB(dstFormat) &&
                  c->dstFormatBpp < 24 &&
                  (c->dstFormatBpp < c->srcFormatBpp || (!isAnyRGB(srcFormat)));

    /* yv12_to_nv12 */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
        (dstFormat == AV_PIX_FMT_NV12 || dstFormat == AV_PIX_FMT_NV21)) {
        c->convert_unscaled = planarToNv12Wrapper;
    }
    /* yv24_to_nv24 */
    if ((srcFormat == AV_PIX_FMT_YUV444P || srcFormat == AV_PIX_FMT_YUVA444P) &&
        (dstFormat == AV_PIX_FMT_NV24 || dstFormat == AV_PIX_FMT_NV42)) {
        c->convert_unscaled = planarToNv24Wrapper;
    }
    /* nv12_to_yv12 */
    if (dstFormat == AV_PIX_FMT_YUV420P &&
        (srcFormat == AV_PIX_FMT_NV12 || srcFormat == AV_PIX_FMT_NV21)) {
        c->convert_unscaled = nv12ToPlanarWrapper;
    }
    /* nv24_to_yv24 */
    if (dstFormat == AV_PIX_FMT_YUV444P &&
        (srcFormat == AV_PIX_FMT_NV24 || srcFormat == AV_PIX_FMT_NV42)) {
        c->convert_unscaled = nv24ToPlanarWrapper;
    }
    /* yuv2bgr */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUV422P ||
         srcFormat == AV_PIX_FMT_YUVA420P) && isAnyRGB(dstFormat) &&
        !(flags & SWS_ACCURATE_RND) &&
        (c->dither == SWS_DITHER_BAYER || c->dither == SWS_DITHER_AUTO) && !(dstH & 1)) {
        c->convert_unscaled = ff_yuv2rgb_get_func_ptr(c);
        c->dst_slice_align = 2;
    }
    /* yuv420p1x_to_p01x */
    if ((srcFormat == AV_PIX_FMT_YUV420P10 || srcFormat == AV_PIX_FMT_YUVA420P10 ||
         srcFormat == AV_PIX_FMT_YUV420P12 ||
         srcFormat == AV_PIX_FMT_YUV420P14 ||
         srcFormat == AV_PIX_FMT_YUV420P16 || srcFormat == AV_PIX_FMT_YUVA420P16) &&
        (dstFormat == AV_PIX_FMT_P010 || dstFormat == AV_PIX_FMT_P016)) {
        c->convert_unscaled = planarToP01xWrapper;
    }
    /* yuv420p_to_p01xle */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
        (dstFormat == AV_PIX_FMT_P010LE || dstFormat == AV_PIX_FMT_P016LE)) {
        c->convert_unscaled = planar8ToP01xleWrapper;
    }

    if (srcFormat == AV_PIX_FMT_YUV410P && !(dstH & 3) &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
        !(flags & SWS_BITEXACT)) {
        c->convert_unscaled = yvu9ToYv12Wrapper;
        c->dst_slice_align = 4;
    }

    /* bgr24toYV12 */
    if (srcFormat == AV_PIX_FMT_BGR24 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
        !(flags & SWS_ACCURATE_RND) && !(dstW&1))
        c->convert_unscaled = bgr24ToYv12Wrapper;

    /* RGB/BGR -> RGB/BGR (no dither needed forms) */
    if (isAnyRGB(srcFormat) && isAnyRGB(dstFormat) && findRgbConvFn(c)
        && (!needsDither || (c->flags&(SWS_FAST_BILINEAR|SWS_POINT))))
        c->convert_unscaled = rgbToRgbWrapper;

    /* RGB to planar RGB */
    if ((srcFormat == AV_PIX_FMT_GBRP && dstFormat == AV_PIX_FMT_GBRAP) ||
        (srcFormat == AV_PIX_FMT_GBRAP && dstFormat == AV_PIX_FMT_GBRP))
        c->convert_unscaled = planarRgbToplanarRgbWrapper;

#define isByteRGB(f) (             \
        f == AV_PIX_FMT_RGB32   || \
        f == AV_PIX_FMT_RGB32_1 || \
        f == AV_PIX_FMT_RGB24   || \
        f == AV_PIX_FMT_BGR32   || \
        f == AV_PIX_FMT_BGR32_1 || \
        f == AV_PIX_FMT_BGR24)

    if (srcFormat == AV_PIX_FMT_GBRP && isPlanar(srcFormat) && isByteRGB(dstFormat))
        c->convert_unscaled = planarRgbToRgbWrapper;

    if (srcFormat == AV_PIX_FMT_GBRAP && isByteRGB(dstFormat))
        c->convert_unscaled = planarRgbaToRgbWrapper;

    if ((srcFormat == AV_PIX_FMT_RGB48LE  || srcFormat == AV_PIX_FMT_RGB48BE  ||
         srcFormat == AV_PIX_FMT_BGR48LE  || srcFormat == AV_PIX_FMT_BGR48BE  ||
         srcFormat == AV_PIX_FMT_RGBA64LE || srcFormat == AV_PIX_FMT_RGBA64BE ||
         srcFormat == AV_PIX_FMT_BGRA64LE || srcFormat == AV_PIX_FMT_BGRA64BE) &&
        (dstFormat == AV_PIX_FMT_GBRP9LE  || dstFormat == AV_PIX_FMT_GBRP9BE  ||
         dstFormat == AV_PIX_FMT_GBRP10LE || dstFormat == AV_PIX_FMT_GBRP10BE ||
         dstFormat == AV_PIX_FMT_GBRP12LE || dstFormat == AV_PIX_FMT_GBRP12BE ||
         dstFormat == AV_PIX_FMT_GBRP14LE || dstFormat == AV_PIX_FMT_GBRP14BE ||
         dstFormat == AV_PIX_FMT_GBRP16LE || dstFormat == AV_PIX_FMT_GBRP16BE ||
         dstFormat == AV_PIX_FMT_GBRAP10LE || dstFormat == AV_PIX_FMT_GBRAP10BE ||
         dstFormat == AV_PIX_FMT_GBRAP12LE || dstFormat == AV_PIX_FMT_GBRAP12BE ||
         dstFormat == AV_PIX_FMT_GBRAP16LE || dstFormat == AV_PIX_FMT_GBRAP16BE ))
        c->convert_unscaled = Rgb16ToPlanarRgb16Wrapper;

    if ((srcFormat == AV_PIX_FMT_GBRP9LE  || srcFormat == AV_PIX_FMT_GBRP9BE  ||
         srcFormat == AV_PIX_FMT_GBRP16LE || srcFormat == AV_PIX_FMT_GBRP16BE ||
         srcFormat == AV_PIX_FMT_GBRP10LE || srcFormat == AV_PIX_FMT_GBRP10BE ||
         srcFormat == AV_PIX_FMT_GBRP12LE || srcFormat == AV_PIX_FMT_GBRP12BE ||
         srcFormat == AV_PIX_FMT_GBRP14LE || srcFormat == AV_PIX_FMT_GBRP14BE ||
         srcFormat == AV_PIX_FMT_GBRAP10LE || srcFormat == AV_PIX_FMT_GBRAP10BE ||
         srcFormat == AV_PIX_FMT_GBRAP12LE || srcFormat == AV_PIX_FMT_GBRAP12BE ||
         srcFormat == AV_PIX_FMT_GBRAP16LE || srcFormat == AV_PIX_FMT_GBRAP16BE) &&
        (dstFormat == AV_PIX_FMT_RGB48LE  || dstFormat == AV_PIX_FMT_RGB48BE  ||
         dstFormat == AV_PIX_FMT_BGR48LE  || dstFormat == AV_PIX_FMT_BGR48BE  ||
         dstFormat == AV_PIX_FMT_RGBA64LE || dstFormat == AV_PIX_FMT_RGBA64BE ||
         dstFormat == AV_PIX_FMT_BGRA64LE || dstFormat == AV_PIX_FMT_BGRA64BE))
        c->convert_unscaled = planarRgb16ToRgb16Wrapper;

    if (av_pix_fmt_desc_get(srcFormat)->comp[0].depth == 8 &&
        isPackedRGB(srcFormat) && dstFormat == AV_PIX_FMT_GBRP)
        c->convert_unscaled = rgbToPlanarRgbWrapper;

    if (isBayer(srcFormat)) {
        if (dstFormat == AV_PIX_FMT_RGB24)
            c->convert_unscaled = bayer_to_rgb24_wrapper;
        else if (dstFormat == AV_PIX_FMT_RGB48)
            c->convert_unscaled = bayer_to_rgb48_wrapper;
        else if (dstFormat == AV_PIX_FMT_YUV420P)
            c->convert_unscaled = bayer_to_yv12_wrapper;
        else if (!isBayer(dstFormat)) {
            av_log(c, AV_LOG_ERROR, "unsupported bayer conversion\n");
            av_assert0(0);
        }
    }

    /* bswap 16 bits per pixel/component packed formats */
    if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_BGGR16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_RGGB16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GBRG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GRBG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGRA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YA16)   ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_AYUV64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGBA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_XYZ12)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P16))
        c->convert_unscaled = bswap_16bpc;

    /* bswap 32 bits per pixel/component formats */
    if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRPF32) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAPF32))
        c->convert_unscaled = bswap_32bpc;

    if (usePal(srcFormat) && isByteRGB(dstFormat))
        c->convert_unscaled = palToRgbWrapper;

    if (srcFormat == AV_PIX_FMT_YUV422P) {
        if (dstFormat == AV_PIX_FMT_YUYV422)
            c->convert_unscaled = yuv422pToYuy2Wrapper;
        else if (dstFormat == AV_PIX_FMT_UYVY422)
            c->convert_unscaled = yuv422pToUyvyWrapper;
    }

    /* uint Y to float Y */
    if (srcFormat == AV_PIX_FMT_GRAY8 && dstFormat == AV_PIX_FMT_GRAYF32){
        c->convert_unscaled = uint_y_to_float_y_wrapper;
    }

    /* float Y to uint Y */
    if (srcFormat == AV_PIX_FMT_GRAYF32 && dstFormat == AV_PIX_FMT_GRAY8){
        c->convert_unscaled = float_y_to_uint_y_wrapper;
    }

    /* LQ converters if -sws 0 or -sws 4*/
    if (c->flags&(SWS_FAST_BILINEAR|SWS_POINT)) {
        /* yv12_to_yuy2 */
        if (srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) {
            if (dstFormat == AV_PIX_FMT_YUYV422)
                c->convert_unscaled = planarToYuy2Wrapper;
            else if (dstFormat == AV_PIX_FMT_UYVY422)
                c->convert_unscaled = planarToUyvyWrapper;
        }
    }
    if (srcFormat == AV_PIX_FMT_YUYV422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->convert_unscaled = yuyvToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->convert_unscaled = uyvyToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_YUYV422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->convert_unscaled = yuyvToYuv422Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->convert_unscaled = uyvyToYuv422Wrapper;

#define isPlanarGray(x) (isGray(x) && (x) != AV_PIX_FMT_YA8 && (x) != AV_PIX_FMT_YA16LE && (x) != AV_PIX_FMT_YA16BE)
    /* simple copy */
    if ( srcFormat == dstFormat ||
        (srcFormat == AV_PIX_FMT_YUVA420P && dstFormat == AV_PIX_FMT_YUV420P) ||
        (srcFormat == AV_PIX_FMT_YUV420P && dstFormat == AV_PIX_FMT_YUVA420P) ||
        (isFloat(srcFormat) == isFloat(dstFormat)) && ((isPlanarYUV(srcFormat) && isPlanarGray(dstFormat)) ||
        (isPlanarYUV(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarGray(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarYUV(srcFormat) && isPlanarYUV(dstFormat) &&
         c->chrDstHSubSample == c->chrSrcHSubSample &&
         c->chrDstVSubSample == c->chrSrcVSubSample &&
         !isSemiPlanarYUV(srcFormat) && !isSemiPlanarYUV(dstFormat))))
    {
        if (isPacked(c->srcFormat))
            c->convert_unscaled = packedCopyWrapper;
        else /* Planar YUV or gray */
            c->convert_unscaled = planarCopyWrapper;
    }

    if (ARCH_PPC)
        ff_get_unscaled_swscale_ppc(c);
    if (ARCH_ARM)
        ff_get_unscaled_swscale_arm(c);
    if (ARCH_AARCH64)
        ff_get_unscaled_swscale_aarch64(c);
}
  • As the source of ff_get_unscaled_swscale() shows, most of the functions assigned to the convert_unscaled pointer of the SwsContext are named XXXWrapper(). These functions wrap the basic pixel-format conversion routines.
  • For example, yuyvToYuv422Wrapper() is defined as follows.
static int yuyvToYuv422Wrapper(SwsContext *c, const uint8_t *src[],
                               int srcStride[], int srcSliceY, int srcSliceH,
                               uint8_t *dstParam[], int dstStride[])
{
    uint8_t *ydst = dstParam[0] + dstStride[0] * srcSliceY;
    uint8_t *udst = dstParam[1] + dstStride[1] * srcSliceY;
    uint8_t *vdst = dstParam[2] + dstStride[2] * srcSliceY;

    yuyvtoyuv422(ydst, udst, vdst, src[0], c->srcW, srcSliceH, dstStride[0],
                 dstStride[1], srcStride[0]);

    return srcSliceH;
}
  • As its definition shows, yuyvToYuv422Wrapper() calls yuyvtoyuv422().
  • yuyvtoyuv422() is a function in rgb2rgb.c that converts YUYV to YUV422 (it was covered earlier in this article).

8. If scaling is needed, ff_getSwsFunc() is called to assign the generic swscale() function to the swscale pointer of the SwsContext, and sws_getContext() then returns

  • The previous step (no scaling) is actually the less common case; most of the time this step is executed.
  • Here ff_getSwsFunc() is called to obtain the image scaling function.

ff_getSwsFunc()

  • ff_getSwsFunc() returns the generic swscale() function.
  • Note that ff_getSwsFunc() has since been deprecated; in current versions its role is played by ff_sws_init_scale().
SwsFunc ff_getSwsFunc(SwsContext *c)
{
    sws_init_swscale(c);

    if (ARCH_PPC)
        ff_sws_init_swscale_ppc(c);
    if (ARCH_X86)
        ff_sws_init_swscale_x86(c);

    return swscale;
}
  • ff_sws_init_scale() follows the same internal logic as ff_getSwsFunc().
void ff_sws_init_scale(SwsContext *c)
{
    sws_init_swscale(c);

    if (ARCH_PPC)
        ff_sws_init_swscale_ppc(c);
    if (ARCH_X86)
        ff_sws_init_swscale_x86(c);
    if (ARCH_AARCH64)
        ff_sws_init_swscale_aarch64(c);
    if (ARCH_ARM)
        ff_sws_init_swscale_arm(c);
}
  • As the source shows, ff_getSwsFunc() calls the function sws_init_swscale().
  • If the system supports x86 assembly, it also calls ff_sws_init_swscale_x86().

sws_init_swscale()

  • sws_init_swscale() is defined in libswscale\swscale.c, as shown below.
static av_cold void sws_init_swscale(SwsContext *c)
{
    enum AVPixelFormat srcFormat = c->srcFormat;

    ff_sws_init_output_funcs(c, &c->yuv2plane1, &c->yuv2planeX,
                             &c->yuv2nv12cX, &c->yuv2packed1,
                             &c->yuv2packed2, &c->yuv2packedX, &c->yuv2anyX);

    ff_sws_init_input_funcs(c);

    if (c->srcBpc == 8) {
        if (c->dstBpc <= 14) {
            c->hyScale = c->hcScale = hScale8To15_c;
            if (c->flags & SWS_FAST_BILINEAR) {
                c->hyscale_fast = ff_hyscale_fast_c;
                c->hcscale_fast = ff_hcscale_fast_c;
            }
        } else {
            c->hyScale = c->hcScale = hScale8To19_c;
        }
    } else {
        c->hyScale = c->hcScale = c->dstBpc > 14 ? hScale16To19_c
                                                 : hScale16To15_c;
    }

    ff_sws_init_range_convert(c);

    if (!(isGray(srcFormat) || isGray(c->dstFormat) ||
          srcFormat == AV_PIX_FMT_MONOBLACK || srcFormat == AV_PIX_FMT_MONOWHITE))
        c->needs_hcscale = 1;
}
  • As the code shows, sws_init_swscale() mainly calls three functions: ff_sws_init_output_funcs(), ff_sws_init_input_funcs(), and ff_sws_init_range_convert().
  • ff_sws_init_output_funcs() initializes the output functions, ff_sws_init_input_funcs() initializes the input functions, and ff_sws_init_range_convert() initializes the sample-range conversion functions.

ff_sws_init_output_funcs()

  • ff_sws_init_output_funcs() initializes the "output functions". In libswscale, an output function writes out one processed line of pixels.
  • ff_sws_init_output_funcs() is defined in libswscale\output.c, as shown below.
av_cold void ff_sws_init_output_funcs(SwsContext *c,
                                      yuv2planar1_fn *yuv2plane1,
                                      yuv2planarX_fn *yuv2planeX,
                                      yuv2interleavedX_fn *yuv2nv12cX,
                                      yuv2packed1_fn *yuv2packed1,
                                      yuv2packed2_fn *yuv2packed2,
                                      yuv2packedX_fn *yuv2packedX,
                                      yuv2anyX_fn *yuv2anyX)
{
    enum AVPixelFormat dstFormat = c->dstFormat;
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);

    if (isSemiPlanarYUV(dstFormat) && isDataInHighBits(dstFormat)) {
        av_assert0(desc->comp[0].depth == 10);
        *yuv2plane1 = isBE(dstFormat) ? yuv2p010l1_BE_c : yuv2p010l1_LE_c;
        *yuv2planeX = isBE(dstFormat) ? yuv2p010lX_BE_c : yuv2p010lX_LE_c;
        *yuv2nv12cX = isBE(dstFormat) ? yuv2p010cX_BE_c : yuv2p010cX_LE_c;
    } else if (is16BPS(dstFormat)) {
        *yuv2planeX = isBE(dstFormat) ? yuv2planeX_16BE_c  : yuv2planeX_16LE_c;
        *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_16BE_c  : yuv2plane1_16LE_c;
        if (isSemiPlanarYUV(dstFormat)) {
            *yuv2nv12cX = isBE(dstFormat) ? yuv2nv12cX_16BE_c : yuv2nv12cX_16LE_c;
        }
    } else if (isNBPS(dstFormat)) {
        if (desc->comp[0].depth == 9) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_9BE_c  : yuv2planeX_9LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_9BE_c  : yuv2plane1_9LE_c;
        } else if (desc->comp[0].depth == 10) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_10BE_c  : yuv2planeX_10LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_10BE_c  : yuv2plane1_10LE_c;
        } else if (desc->comp[0].depth == 12) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_12BE_c  : yuv2planeX_12LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_12BE_c  : yuv2plane1_12LE_c;
        } else if (desc->comp[0].depth == 14) {
            *yuv2planeX = isBE(dstFormat) ? yuv2planeX_14BE_c  : yuv2planeX_14LE_c;
            *yuv2plane1 = isBE(dstFormat) ? yuv2plane1_14BE_c  : yuv2plane1_14LE_c;
        } else
            av_assert0(0);
    } else if (dstFormat == AV_PIX_FMT_GRAYF32BE) {
        *yuv2planeX = yuv2planeX_floatBE_c;
        *yuv2plane1 = yuv2plane1_floatBE_c;
    } else if (dstFormat == AV_PIX_FMT_GRAYF32LE) {
        *yuv2planeX = yuv2planeX_floatLE_c;
        *yuv2plane1 = yuv2plane1_floatLE_c;
    } else {
        *yuv2plane1 = yuv2plane1_8_c;
        *yuv2planeX = yuv2planeX_8_c;
        if (isSemiPlanarYUV(dstFormat))
            *yuv2nv12cX = yuv2nv12cX_c;
    }

    if(c->flags & SWS_FULL_CHR_H_INT) {
        switch (dstFormat) {
        case AV_PIX_FMT_RGBA:
#if CONFIG_SMALL
            *yuv2packedX = yuv2rgba32_full_X_c;
            *yuv2packed2 = yuv2rgba32_full_2_c;
            *yuv2packed1 = yuv2rgba32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2rgba32_full_X_c;
                *yuv2packed2 = yuv2rgba32_full_2_c;
                *yuv2packed1 = yuv2rgba32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2rgbx32_full_X_c;
                *yuv2packed2 = yuv2rgbx32_full_2_c;
                *yuv2packed1 = yuv2rgbx32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_ARGB:
#if CONFIG_SMALL
            *yuv2packedX = yuv2argb32_full_X_c;
            *yuv2packed2 = yuv2argb32_full_2_c;
            *yuv2packed1 = yuv2argb32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2argb32_full_X_c;
                *yuv2packed2 = yuv2argb32_full_2_c;
                *yuv2packed1 = yuv2argb32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2xrgb32_full_X_c;
                *yuv2packed2 = yuv2xrgb32_full_2_c;
                *yuv2packed1 = yuv2xrgb32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_BGRA:
#if CONFIG_SMALL
            *yuv2packedX = yuv2bgra32_full_X_c;
            *yuv2packed2 = yuv2bgra32_full_2_c;
            *yuv2packed1 = yuv2bgra32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2bgra32_full_X_c;
                *yuv2packed2 = yuv2bgra32_full_2_c;
                *yuv2packed1 = yuv2bgra32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2bgrx32_full_X_c;
                *yuv2packed2 = yuv2bgrx32_full_2_c;
                *yuv2packed1 = yuv2bgrx32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_ABGR:
#if CONFIG_SMALL
            *yuv2packedX = yuv2abgr32_full_X_c;
            *yuv2packed2 = yuv2abgr32_full_2_c;
            *yuv2packed1 = yuv2abgr32_full_1_c;
#else
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2abgr32_full_X_c;
                *yuv2packed2 = yuv2abgr32_full_2_c;
                *yuv2packed1 = yuv2abgr32_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2xbgr32_full_X_c;
                *yuv2packed2 = yuv2xbgr32_full_2_c;
                *yuv2packed1 = yuv2xbgr32_full_1_c;
            }
#endif /* !CONFIG_SMALL */
            break;
        case AV_PIX_FMT_RGBA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2rgba64le_full_X_c;
                *yuv2packed2 = yuv2rgba64le_full_2_c;
                *yuv2packed1 = yuv2rgba64le_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2rgbx64le_full_X_c;
                *yuv2packed2 = yuv2rgbx64le_full_2_c;
                *yuv2packed1 = yuv2rgbx64le_full_1_c;
            }
            break;
        case AV_PIX_FMT_RGBA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2rgba64be_full_X_c;
                *yuv2packed2 = yuv2rgba64be_full_2_c;
                *yuv2packed1 = yuv2rgba64be_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2rgbx64be_full_X_c;
                *yuv2packed2 = yuv2rgbx64be_full_2_c;
                *yuv2packed1 = yuv2rgbx64be_full_1_c;
            }
            break;
        case AV_PIX_FMT_BGRA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2bgra64le_full_X_c;
                *yuv2packed2 = yuv2bgra64le_full_2_c;
                *yuv2packed1 = yuv2bgra64le_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2bgrx64le_full_X_c;
                *yuv2packed2 = yuv2bgrx64le_full_2_c;
                *yuv2packed1 = yuv2bgrx64le_full_1_c;
            }
            break;
        case AV_PIX_FMT_BGRA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packedX = yuv2bgra64be_full_X_c;
                *yuv2packed2 = yuv2bgra64be_full_2_c;
                *yuv2packed1 = yuv2bgra64be_full_1_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packedX = yuv2bgrx64be_full_X_c;
                *yuv2packed2 = yuv2bgrx64be_full_2_c;
                *yuv2packed1 = yuv2bgrx64be_full_1_c;
            }
            break;
        case AV_PIX_FMT_RGB24:
            *yuv2packedX = yuv2rgb24_full_X_c;
            *yuv2packed2 = yuv2rgb24_full_2_c;
            *yuv2packed1 = yuv2rgb24_full_1_c;
            break;
        case AV_PIX_FMT_BGR24:
            *yuv2packedX = yuv2bgr24_full_X_c;
            *yuv2packed2 = yuv2bgr24_full_2_c;
            *yuv2packed1 = yuv2bgr24_full_1_c;
            break;
        case AV_PIX_FMT_RGB48LE:
            *yuv2packedX = yuv2rgb48le_full_X_c;
            *yuv2packed2 = yuv2rgb48le_full_2_c;
            *yuv2packed1 = yuv2rgb48le_full_1_c;
            break;
        case AV_PIX_FMT_BGR48LE:
            *yuv2packedX = yuv2bgr48le_full_X_c;
            *yuv2packed2 = yuv2bgr48le_full_2_c;
            *yuv2packed1 = yuv2bgr48le_full_1_c;
            break;
        case AV_PIX_FMT_RGB48BE:
            *yuv2packedX = yuv2rgb48be_full_X_c;
            *yuv2packed2 = yuv2rgb48be_full_2_c;
            *yuv2packed1 = yuv2rgb48be_full_1_c;
            break;
        case AV_PIX_FMT_BGR48BE:
            *yuv2packedX = yuv2bgr48be_full_X_c;
            *yuv2packed2 = yuv2bgr48be_full_2_c;
            *yuv2packed1 = yuv2bgr48be_full_1_c;
            break;
        case AV_PIX_FMT_BGR4_BYTE:
            *yuv2packedX = yuv2bgr4_byte_full_X_c;
            *yuv2packed2 = yuv2bgr4_byte_full_2_c;
            *yuv2packed1 = yuv2bgr4_byte_full_1_c;
            break;
        case AV_PIX_FMT_RGB4_BYTE:
            *yuv2packedX = yuv2rgb4_byte_full_X_c;
            *yuv2packed2 = yuv2rgb4_byte_full_2_c;
            *yuv2packed1 = yuv2rgb4_byte_full_1_c;
            break;
        case AV_PIX_FMT_BGR8:
            *yuv2packedX = yuv2bgr8_full_X_c;
            *yuv2packed2 = yuv2bgr8_full_2_c;
            *yuv2packed1 = yuv2bgr8_full_1_c;
            break;
        case AV_PIX_FMT_RGB8:
            *yuv2packedX = yuv2rgb8_full_X_c;
            *yuv2packed2 = yuv2rgb8_full_2_c;
            *yuv2packed1 = yuv2rgb8_full_1_c;
            break;
        case AV_PIX_FMT_GBRP:
        case AV_PIX_FMT_GBRP9BE:
        case AV_PIX_FMT_GBRP9LE:
        case AV_PIX_FMT_GBRP10BE:
        case AV_PIX_FMT_GBRP10LE:
        case AV_PIX_FMT_GBRP12BE:
        case AV_PIX_FMT_GBRP12LE:
        case AV_PIX_FMT_GBRP14BE:
        case AV_PIX_FMT_GBRP14LE:
        case AV_PIX_FMT_GBRAP:
        case AV_PIX_FMT_GBRAP10BE:
        case AV_PIX_FMT_GBRAP10LE:
        case AV_PIX_FMT_GBRAP12BE:
        case AV_PIX_FMT_GBRAP12LE:
            *yuv2anyX = yuv2gbrp_full_X_c;
            break;
        case AV_PIX_FMT_GBRP16BE:
        case AV_PIX_FMT_GBRP16LE:
        case AV_PIX_FMT_GBRAP16BE:
        case AV_PIX_FMT_GBRAP16LE:
            *yuv2anyX = yuv2gbrp16_full_X_c;
            break;
        case AV_PIX_FMT_GBRPF32BE:
        case AV_PIX_FMT_GBRPF32LE:
        case AV_PIX_FMT_GBRAPF32BE:
        case AV_PIX_FMT_GBRAPF32LE:
            *yuv2anyX = yuv2gbrpf32_full_X_c;
            break;
        }
        if (!*yuv2packedX && !*yuv2anyX)
            goto YUV_PACKED;
    } else {
        YUV_PACKED:
        switch (dstFormat) {
        case AV_PIX_FMT_RGBA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packed1 = yuv2rgba64le_1_c;
                *yuv2packed2 = yuv2rgba64le_2_c;
                *yuv2packedX = yuv2rgba64le_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx64le_1_c;
                *yuv2packed2 = yuv2rgbx64le_2_c;
                *yuv2packedX = yuv2rgbx64le_X_c;
            }
            break;
        case AV_PIX_FMT_RGBA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packed1 = yuv2rgba64be_1_c;
                *yuv2packed2 = yuv2rgba64be_2_c;
                *yuv2packedX = yuv2rgba64be_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2rgbx64be_1_c;
                *yuv2packed2 = yuv2rgbx64be_2_c;
                *yuv2packedX = yuv2rgbx64be_X_c;
            }
            break;
        case AV_PIX_FMT_BGRA64LE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packed1 = yuv2bgra64le_1_c;
                *yuv2packed2 = yuv2bgra64le_2_c;
                *yuv2packedX = yuv2bgra64le_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2bgrx64le_1_c;
                *yuv2packed2 = yuv2bgrx64le_2_c;
                *yuv2packedX = yuv2bgrx64le_X_c;
            }
            break;
        case AV_PIX_FMT_BGRA64BE:
#if CONFIG_SWSCALE_ALPHA
            if (c->needAlpha) {
                *yuv2packed1 = yuv2bgra64be_1_c;
                *yuv2packed2 = yuv2bgra64be_2_c;
                *yuv2packedX = yuv2bgra64be_X_c;
            } else
#endif /* CONFIG_SWSCALE_ALPHA */
            {
                *yuv2packed1 = yuv2bgrx64be_1_c;
                *yuv2packed2 = yuv2bgrx64be_2_c;
                *yuv2packedX = yuv2bgrx64be_X_c;
            }
            break;
        case AV_PIX_FMT_RGB48LE:
            *yuv2packed1 = yuv2rgb48le_1_c;
            *yuv2packed2 = yuv2rgb48le_2_c;
            *yuv2packedX = yuv2rgb48le_X_c;
            break;
        case AV_PIX_FMT_RGB48BE:
            *yuv2packed1 = yuv2rgb48be_1_c;
            *yuv2packed2 = yuv2rgb48be_2_c;
            *yuv2packedX = yuv2rgb48be_X_c;
            break;
        case AV_PIX_FMT_BGR48LE:
            *yuv2packed1 = yuv2bgr48le_1_c;
            *yuv2packed2 = yuv2bgr48le_2_c;
            *yuv2packedX = yuv2bgr48le_X_c;
            break;
        case AV_PIX_FMT_BGR48BE:
            *yuv2packed1 = yuv2bgr48be_1_c;
            *yuv2packed2 = yuv2bgr48be_2_c;
            *yuv2packedX = yuv2bgr48be_X_c;
            break;
        case AV_PIX_FMT_RGB32:
        case AV_PIX_FMT_BGR32:
#if CONFIG_SMALL*yuv2packed1 = yuv2rgb32_1_c;*yuv2packed2 = yuv2rgb32_2_c;*yuv2packedX = yuv2rgb32_X_c;
#else
#if CONFIG_SWSCALE_ALPHAif (c->needAlpha) {*yuv2packed1 = yuv2rgba32_1_c;*yuv2packed2 = yuv2rgba32_2_c;*yuv2packedX = yuv2rgba32_X_c;} else
#endif /* CONFIG_SWSCALE_ALPHA */{*yuv2packed1 = yuv2rgbx32_1_c;*yuv2packed2 = yuv2rgbx32_2_c;*yuv2packedX = yuv2rgbx32_X_c;}
#endif /* !CONFIG_SMALL */break;case AV_PIX_FMT_RGB32_1:case AV_PIX_FMT_BGR32_1:
#if CONFIG_SMALL*yuv2packed1 = yuv2rgb32_1_1_c;*yuv2packed2 = yuv2rgb32_1_2_c;*yuv2packedX = yuv2rgb32_1_X_c;
#else
#if CONFIG_SWSCALE_ALPHAif (c->needAlpha) {*yuv2packed1 = yuv2rgba32_1_1_c;*yuv2packed2 = yuv2rgba32_1_2_c;*yuv2packedX = yuv2rgba32_1_X_c;} else
#endif /* CONFIG_SWSCALE_ALPHA */{*yuv2packed1 = yuv2rgbx32_1_1_c;*yuv2packed2 = yuv2rgbx32_1_2_c;*yuv2packedX = yuv2rgbx32_1_X_c;}
#endif /* !CONFIG_SMALL */break;case AV_PIX_FMT_RGB24:*yuv2packed1 = yuv2rgb24_1_c;*yuv2packed2 = yuv2rgb24_2_c;*yuv2packedX = yuv2rgb24_X_c;break;case AV_PIX_FMT_BGR24:*yuv2packed1 = yuv2bgr24_1_c;*yuv2packed2 = yuv2bgr24_2_c;*yuv2packedX = yuv2bgr24_X_c;break;case AV_PIX_FMT_RGB565LE:case AV_PIX_FMT_RGB565BE:case AV_PIX_FMT_BGR565LE:case AV_PIX_FMT_BGR565BE:*yuv2packed1 = yuv2rgb16_1_c;*yuv2packed2 = yuv2rgb16_2_c;*yuv2packedX = yuv2rgb16_X_c;break;case AV_PIX_FMT_RGB555LE:case AV_PIX_FMT_RGB555BE:case AV_PIX_FMT_BGR555LE:case AV_PIX_FMT_BGR555BE:*yuv2packed1 = yuv2rgb15_1_c;*yuv2packed2 = yuv2rgb15_2_c;*yuv2packedX = yuv2rgb15_X_c;break;case AV_PIX_FMT_RGB444LE:case AV_PIX_FMT_RGB444BE:case AV_PIX_FMT_BGR444LE:case AV_PIX_FMT_BGR444BE:*yuv2packed1 = yuv2rgb12_1_c;*yuv2packed2 = yuv2rgb12_2_c;*yuv2packedX = yuv2rgb12_X_c;break;case AV_PIX_FMT_RGB8:case AV_PIX_FMT_BGR8:*yuv2packed1 = yuv2rgb8_1_c;*yuv2packed2 = yuv2rgb8_2_c;*yuv2packedX = yuv2rgb8_X_c;break;case AV_PIX_FMT_RGB4:case AV_PIX_FMT_BGR4:*yuv2packed1 = yuv2rgb4_1_c;*yuv2packed2 = yuv2rgb4_2_c;*yuv2packedX = yuv2rgb4_X_c;break;case AV_PIX_FMT_RGB4_BYTE:case AV_PIX_FMT_BGR4_BYTE:*yuv2packed1 = yuv2rgb4b_1_c;*yuv2packed2 = yuv2rgb4b_2_c;*yuv2packedX = yuv2rgb4b_X_c;break;case AV_PIX_FMT_X2RGB10LE:case AV_PIX_FMT_X2RGB10BE:*yuv2packed1 = yuv2x2rgb10_1_c;*yuv2packed2 = yuv2x2rgb10_2_c;*yuv2packedX = yuv2x2rgb10_X_c;break;case AV_PIX_FMT_X2BGR10LE:case AV_PIX_FMT_X2BGR10BE:*yuv2packed1 = yuv2x2bgr10_1_c;*yuv2packed2 = yuv2x2bgr10_2_c;*yuv2packedX = yuv2x2bgr10_X_c;break;}}switch (dstFormat) {case AV_PIX_FMT_MONOWHITE:*yuv2packed1 = yuv2monowhite_1_c;*yuv2packed2 = yuv2monowhite_2_c;*yuv2packedX = yuv2monowhite_X_c;break;case AV_PIX_FMT_MONOBLACK:*yuv2packed1 = yuv2monoblack_1_c;*yuv2packed2 = yuv2monoblack_2_c;*yuv2packedX = yuv2monoblack_X_c;break;case AV_PIX_FMT_YUYV422:*yuv2packed1 = yuv2yuyv422_1_c;*yuv2packed2 = yuv2yuyv422_2_c;*yuv2packedX = yuv2yuyv422_X_c;break;case AV_PIX_FMT_YVYU422:*yuv2packed1 = 
yuv2yvyu422_1_c;*yuv2packed2 = yuv2yvyu422_2_c;*yuv2packedX = yuv2yvyu422_X_c;break;case AV_PIX_FMT_UYVY422:*yuv2packed1 = yuv2uyvy422_1_c;*yuv2packed2 = yuv2uyvy422_2_c;*yuv2packedX = yuv2uyvy422_X_c;break;case AV_PIX_FMT_YA8:*yuv2packed1 = yuv2ya8_1_c;*yuv2packed2 = yuv2ya8_2_c;*yuv2packedX = yuv2ya8_X_c;break;case AV_PIX_FMT_YA16LE:*yuv2packed1 = yuv2ya16le_1_c;*yuv2packed2 = yuv2ya16le_2_c;*yuv2packedX = yuv2ya16le_X_c;break;case AV_PIX_FMT_YA16BE:*yuv2packed1 = yuv2ya16be_1_c;*yuv2packed2 = yuv2ya16be_2_c;*yuv2packedX = yuv2ya16be_X_c;break;case AV_PIX_FMT_AYUV64LE:*yuv2packedX = yuv2ayuv64le_X_c;break;}
}
void ff_sws_init_output_funcs(SwsContext *c,
                              yuv2planar1_fn *yuv2plane1,
                              yuv2planarX_fn *yuv2planeX,
                              yuv2interleavedX_fn *yuv2nv12cX,
                              yuv2packed1_fn *yuv2packed1,
                              yuv2packed2_fn *yuv2packed2,
                              yuv2packedX_fn *yuv2packedX,
                              yuv2anyX_fn *yuv2anyX);
  • ff_sws_init_output_funcs() assigns the following function pointers according to the output pixel format:
    • yuv2plane1: a yuv2planar1_fn function pointer that outputs one line of horizontally scaled planar data, without vertical scaling.
    • yuv2planeX: a yuv2planarX_fn function pointer that outputs one line of horizontally scaled planar data, with vertical scaling.
    • yuv2packed1: a yuv2packed1_fn function pointer that outputs one line of horizontally scaled packed data, without vertical scaling.
    • yuv2packed2: a yuv2packed2_fn function pointer that outputs one line of horizontally scaled packed data, vertically scaled from two input lines.
    • yuv2packedX: a yuv2packedX_fn function pointer that outputs one line of horizontally scaled packed data, with vertical scaling.
    • yuv2nv12cX: a yuv2interleavedX_fn function pointer; not examined further here.
    • yuv2anyX: a yuv2anyX_fn function pointer; not examined further here.

ff_sws_init_input_funcs()

  • ff_sws_init_input_funcs() initializes the "input functions".
  • In libswscale, the job of an "input function" is to convert pixels of an arbitrary format to YUV for subsequent processing.
  • ff_sws_init_input_funcs() is defined in libswscale\input.c, as shown below.
av_cold void ff_sws_init_input_funcs(SwsContext *c)
{enum AVPixelFormat srcFormat = c->srcFormat;c->chrToYV12 = NULL;switch (srcFormat) {case AV_PIX_FMT_YUYV422:c->chrToYV12 = yuy2ToUV_c;break;case AV_PIX_FMT_YVYU422:c->chrToYV12 = yvy2ToUV_c;break;case AV_PIX_FMT_UYVY422:c->chrToYV12 = uyvyToUV_c;break;case AV_PIX_FMT_NV12:case AV_PIX_FMT_NV24:c->chrToYV12 = nv12ToUV_c;break;case AV_PIX_FMT_NV21:case AV_PIX_FMT_NV42:c->chrToYV12 = nv21ToUV_c;break;case AV_PIX_FMT_RGB8:case AV_PIX_FMT_BGR8:case AV_PIX_FMT_PAL8:case AV_PIX_FMT_BGR4_BYTE:case AV_PIX_FMT_RGB4_BYTE:c->chrToYV12 = palToUV_c;break;case AV_PIX_FMT_GBRP9LE:c->readChrPlanar = planar_rgb9le_to_uv;break;case AV_PIX_FMT_GBRAP10LE:case AV_PIX_FMT_GBRP10LE:c->readChrPlanar = planar_rgb10le_to_uv;break;case AV_PIX_FMT_GBRAP12LE:case AV_PIX_FMT_GBRP12LE:c->readChrPlanar = planar_rgb12le_to_uv;break;case AV_PIX_FMT_GBRP14LE:c->readChrPlanar = planar_rgb14le_to_uv;break;case AV_PIX_FMT_GBRAP16LE:case AV_PIX_FMT_GBRP16LE:c->readChrPlanar = planar_rgb16le_to_uv;break;case AV_PIX_FMT_GBRAPF32LE:case AV_PIX_FMT_GBRPF32LE:c->readChrPlanar = planar_rgbf32le_to_uv;break;case AV_PIX_FMT_GBRP9BE:c->readChrPlanar = planar_rgb9be_to_uv;break;case AV_PIX_FMT_GBRAP10BE:case AV_PIX_FMT_GBRP10BE:c->readChrPlanar = planar_rgb10be_to_uv;break;case AV_PIX_FMT_GBRAP12BE:case AV_PIX_FMT_GBRP12BE:c->readChrPlanar = planar_rgb12be_to_uv;break;case AV_PIX_FMT_GBRP14BE:c->readChrPlanar = planar_rgb14be_to_uv;break;case AV_PIX_FMT_GBRAP16BE:case AV_PIX_FMT_GBRP16BE:c->readChrPlanar = planar_rgb16be_to_uv;break;case AV_PIX_FMT_GBRAPF32BE:case AV_PIX_FMT_GBRPF32BE:c->readChrPlanar = planar_rgbf32be_to_uv;break;case AV_PIX_FMT_GBRAP:case AV_PIX_FMT_GBRP:c->readChrPlanar = planar_rgb_to_uv;break;
#if HAVE_BIGENDIANcase AV_PIX_FMT_YUV420P9LE:case AV_PIX_FMT_YUV422P9LE:case AV_PIX_FMT_YUV444P9LE:case AV_PIX_FMT_YUV420P10LE:case AV_PIX_FMT_YUV422P10LE:case AV_PIX_FMT_YUV440P10LE:case AV_PIX_FMT_YUV444P10LE:case AV_PIX_FMT_YUV420P12LE:case AV_PIX_FMT_YUV422P12LE:case AV_PIX_FMT_YUV440P12LE:case AV_PIX_FMT_YUV444P12LE:case AV_PIX_FMT_YUV420P14LE:case AV_PIX_FMT_YUV422P14LE:case AV_PIX_FMT_YUV444P14LE:case AV_PIX_FMT_YUV420P16LE:case AV_PIX_FMT_YUV422P16LE:case AV_PIX_FMT_YUV444P16LE:case AV_PIX_FMT_YUVA420P9LE:case AV_PIX_FMT_YUVA422P9LE:case AV_PIX_FMT_YUVA444P9LE:case AV_PIX_FMT_YUVA420P10LE:case AV_PIX_FMT_YUVA422P10LE:case AV_PIX_FMT_YUVA444P10LE:case AV_PIX_FMT_YUVA422P12LE:case AV_PIX_FMT_YUVA444P12LE:case AV_PIX_FMT_YUVA420P16LE:case AV_PIX_FMT_YUVA422P16LE:case AV_PIX_FMT_YUVA444P16LE:c->chrToYV12 = bswap16UV_c;break;
#elsecase AV_PIX_FMT_YUV420P9BE:case AV_PIX_FMT_YUV422P9BE:case AV_PIX_FMT_YUV444P9BE:case AV_PIX_FMT_YUV420P10BE:case AV_PIX_FMT_YUV422P10BE:case AV_PIX_FMT_YUV440P10BE:case AV_PIX_FMT_YUV444P10BE:case AV_PIX_FMT_YUV420P12BE:case AV_PIX_FMT_YUV422P12BE:case AV_PIX_FMT_YUV440P12BE:case AV_PIX_FMT_YUV444P12BE:case AV_PIX_FMT_YUV420P14BE:case AV_PIX_FMT_YUV422P14BE:case AV_PIX_FMT_YUV444P14BE:case AV_PIX_FMT_YUV420P16BE:case AV_PIX_FMT_YUV422P16BE:case AV_PIX_FMT_YUV444P16BE:case AV_PIX_FMT_YUVA420P9BE:case AV_PIX_FMT_YUVA422P9BE:case AV_PIX_FMT_YUVA444P9BE:case AV_PIX_FMT_YUVA420P10BE:case AV_PIX_FMT_YUVA422P10BE:case AV_PIX_FMT_YUVA444P10BE:case AV_PIX_FMT_YUVA422P12BE:case AV_PIX_FMT_YUVA444P12BE:case AV_PIX_FMT_YUVA420P16BE:case AV_PIX_FMT_YUVA422P16BE:case AV_PIX_FMT_YUVA444P16BE:c->chrToYV12 = bswap16UV_c;break;
#endifcase AV_PIX_FMT_AYUV64LE:c->chrToYV12 = read_ayuv64le_UV_c;break;case AV_PIX_FMT_P010LE:case AV_PIX_FMT_P210LE:case AV_PIX_FMT_P410LE:c->chrToYV12 = p010LEToUV_c;break;case AV_PIX_FMT_P010BE:case AV_PIX_FMT_P210BE:case AV_PIX_FMT_P410BE:c->chrToYV12 = p010BEToUV_c;break;case AV_PIX_FMT_P016LE:case AV_PIX_FMT_P216LE:case AV_PIX_FMT_P416LE:c->chrToYV12 = p016LEToUV_c;break;case AV_PIX_FMT_P016BE:case AV_PIX_FMT_P216BE:case AV_PIX_FMT_P416BE:c->chrToYV12 = p016BEToUV_c;break;case AV_PIX_FMT_Y210LE:c->chrToYV12 = y210le_UV_c;break;}if (c->chrSrcHSubSample) {switch (srcFormat) {case AV_PIX_FMT_RGBA64BE:c->chrToYV12 = rgb64BEToUV_half_c;break;case AV_PIX_FMT_RGBA64LE:c->chrToYV12 = rgb64LEToUV_half_c;break;case AV_PIX_FMT_BGRA64BE:c->chrToYV12 = bgr64BEToUV_half_c;break;case AV_PIX_FMT_BGRA64LE:c->chrToYV12 = bgr64LEToUV_half_c;break;case AV_PIX_FMT_RGB48BE:c->chrToYV12 = rgb48BEToUV_half_c;break;case AV_PIX_FMT_RGB48LE:c->chrToYV12 = rgb48LEToUV_half_c;break;case AV_PIX_FMT_BGR48BE:c->chrToYV12 = bgr48BEToUV_half_c;break;case AV_PIX_FMT_BGR48LE:c->chrToYV12 = bgr48LEToUV_half_c;break;case AV_PIX_FMT_RGB32:c->chrToYV12 = bgr32ToUV_half_c;break;case AV_PIX_FMT_RGB32_1:c->chrToYV12 = bgr321ToUV_half_c;break;case AV_PIX_FMT_BGR24:c->chrToYV12 = bgr24ToUV_half_c;break;case AV_PIX_FMT_BGR565LE:c->chrToYV12 = bgr16leToUV_half_c;break;case AV_PIX_FMT_BGR565BE:c->chrToYV12 = bgr16beToUV_half_c;break;case AV_PIX_FMT_BGR555LE:c->chrToYV12 = bgr15leToUV_half_c;break;case AV_PIX_FMT_BGR555BE:c->chrToYV12 = bgr15beToUV_half_c;break;case AV_PIX_FMT_GBRAP:case AV_PIX_FMT_GBRP:c->chrToYV12 = gbr24pToUV_half_c;break;case AV_PIX_FMT_BGR444LE:c->chrToYV12 = bgr12leToUV_half_c;break;case AV_PIX_FMT_BGR444BE:c->chrToYV12 = bgr12beToUV_half_c;break;case AV_PIX_FMT_BGR32:c->chrToYV12 = rgb32ToUV_half_c;break;case AV_PIX_FMT_BGR32_1:c->chrToYV12 = rgb321ToUV_half_c;break;case AV_PIX_FMT_RGB24:c->chrToYV12 = rgb24ToUV_half_c;break;case AV_PIX_FMT_RGB565LE:c->chrToYV12 = 
rgb16leToUV_half_c;break;case AV_PIX_FMT_RGB565BE:c->chrToYV12 = rgb16beToUV_half_c;break;case AV_PIX_FMT_RGB555LE:c->chrToYV12 = rgb15leToUV_half_c;break;case AV_PIX_FMT_RGB555BE:c->chrToYV12 = rgb15beToUV_half_c;break;case AV_PIX_FMT_RGB444LE:c->chrToYV12 = rgb12leToUV_half_c;break;case AV_PIX_FMT_RGB444BE:c->chrToYV12 = rgb12beToUV_half_c;break;case AV_PIX_FMT_X2RGB10LE:c->chrToYV12 = rgb30leToUV_half_c;break;case AV_PIX_FMT_X2BGR10LE:c->chrToYV12 = bgr30leToUV_half_c;break;}} else {switch (srcFormat) {case AV_PIX_FMT_RGBA64BE:c->chrToYV12 = rgb64BEToUV_c;break;case AV_PIX_FMT_RGBA64LE:c->chrToYV12 = rgb64LEToUV_c;break;case AV_PIX_FMT_BGRA64BE:c->chrToYV12 = bgr64BEToUV_c;break;case AV_PIX_FMT_BGRA64LE:c->chrToYV12 = bgr64LEToUV_c;break;case AV_PIX_FMT_RGB48BE:c->chrToYV12 = rgb48BEToUV_c;break;case AV_PIX_FMT_RGB48LE:c->chrToYV12 = rgb48LEToUV_c;break;case AV_PIX_FMT_BGR48BE:c->chrToYV12 = bgr48BEToUV_c;break;case AV_PIX_FMT_BGR48LE:c->chrToYV12 = bgr48LEToUV_c;break;case AV_PIX_FMT_RGB32:c->chrToYV12 = bgr32ToUV_c;break;case AV_PIX_FMT_RGB32_1:c->chrToYV12 = bgr321ToUV_c;break;case AV_PIX_FMT_BGR24:c->chrToYV12 = bgr24ToUV_c;break;case AV_PIX_FMT_BGR565LE:c->chrToYV12 = bgr16leToUV_c;break;case AV_PIX_FMT_BGR565BE:c->chrToYV12 = bgr16beToUV_c;break;case AV_PIX_FMT_BGR555LE:c->chrToYV12 = bgr15leToUV_c;break;case AV_PIX_FMT_BGR555BE:c->chrToYV12 = bgr15beToUV_c;break;case AV_PIX_FMT_BGR444LE:c->chrToYV12 = bgr12leToUV_c;break;case AV_PIX_FMT_BGR444BE:c->chrToYV12 = bgr12beToUV_c;break;case AV_PIX_FMT_BGR32:c->chrToYV12 = rgb32ToUV_c;break;case AV_PIX_FMT_BGR32_1:c->chrToYV12 = rgb321ToUV_c;break;case AV_PIX_FMT_RGB24:c->chrToYV12 = rgb24ToUV_c;break;case AV_PIX_FMT_RGB565LE:c->chrToYV12 = rgb16leToUV_c;break;case AV_PIX_FMT_RGB565BE:c->chrToYV12 = rgb16beToUV_c;break;case AV_PIX_FMT_RGB555LE:c->chrToYV12 = rgb15leToUV_c;break;case AV_PIX_FMT_RGB555BE:c->chrToYV12 = rgb15beToUV_c;break;case AV_PIX_FMT_RGB444LE:c->chrToYV12 = rgb12leToUV_c;break;case 
AV_PIX_FMT_RGB444BE:c->chrToYV12 = rgb12beToUV_c;break;case AV_PIX_FMT_X2RGB10LE:c->chrToYV12 = rgb30leToUV_c;break;case AV_PIX_FMT_X2BGR10LE:c->chrToYV12 = bgr30leToUV_c;break;}}c->lumToYV12 = NULL;c->alpToYV12 = NULL;switch (srcFormat) {case AV_PIX_FMT_GBRP9LE:c->readLumPlanar = planar_rgb9le_to_y;break;case AV_PIX_FMT_GBRAP10LE:c->readAlpPlanar = planar_rgb10le_to_a;case AV_PIX_FMT_GBRP10LE:c->readLumPlanar = planar_rgb10le_to_y;break;case AV_PIX_FMT_GBRAP12LE:c->readAlpPlanar = planar_rgb12le_to_a;case AV_PIX_FMT_GBRP12LE:c->readLumPlanar = planar_rgb12le_to_y;break;case AV_PIX_FMT_GBRP14LE:c->readLumPlanar = planar_rgb14le_to_y;break;case AV_PIX_FMT_GBRAP16LE:c->readAlpPlanar = planar_rgb16le_to_a;case AV_PIX_FMT_GBRP16LE:c->readLumPlanar = planar_rgb16le_to_y;break;case AV_PIX_FMT_GBRAPF32LE:c->readAlpPlanar = planar_rgbf32le_to_a;case AV_PIX_FMT_GBRPF32LE:c->readLumPlanar = planar_rgbf32le_to_y;break;case AV_PIX_FMT_GBRP9BE:c->readLumPlanar = planar_rgb9be_to_y;break;case AV_PIX_FMT_GBRAP10BE:c->readAlpPlanar = planar_rgb10be_to_a;case AV_PIX_FMT_GBRP10BE:c->readLumPlanar = planar_rgb10be_to_y;break;case AV_PIX_FMT_GBRAP12BE:c->readAlpPlanar = planar_rgb12be_to_a;case AV_PIX_FMT_GBRP12BE:c->readLumPlanar = planar_rgb12be_to_y;break;case AV_PIX_FMT_GBRP14BE:c->readLumPlanar = planar_rgb14be_to_y;break;case AV_PIX_FMT_GBRAP16BE:c->readAlpPlanar = planar_rgb16be_to_a;case AV_PIX_FMT_GBRP16BE:c->readLumPlanar = planar_rgb16be_to_y;break;case AV_PIX_FMT_GBRAPF32BE:c->readAlpPlanar = planar_rgbf32be_to_a;case AV_PIX_FMT_GBRPF32BE:c->readLumPlanar = planar_rgbf32be_to_y;break;case AV_PIX_FMT_GBRAP:c->readAlpPlanar = planar_rgb_to_a;case AV_PIX_FMT_GBRP:c->readLumPlanar = planar_rgb_to_y;break;
#if HAVE_BIGENDIANcase AV_PIX_FMT_YUV420P9LE:case AV_PIX_FMT_YUV422P9LE:case AV_PIX_FMT_YUV444P9LE:case AV_PIX_FMT_YUV420P10LE:case AV_PIX_FMT_YUV422P10LE:case AV_PIX_FMT_YUV440P10LE:case AV_PIX_FMT_YUV444P10LE:case AV_PIX_FMT_YUV420P12LE:case AV_PIX_FMT_YUV422P12LE:case AV_PIX_FMT_YUV440P12LE:case AV_PIX_FMT_YUV444P12LE:case AV_PIX_FMT_YUV420P14LE:case AV_PIX_FMT_YUV422P14LE:case AV_PIX_FMT_YUV444P14LE:case AV_PIX_FMT_YUV420P16LE:case AV_PIX_FMT_YUV422P16LE:case AV_PIX_FMT_YUV444P16LE:case AV_PIX_FMT_GRAY9LE:case AV_PIX_FMT_GRAY10LE:case AV_PIX_FMT_GRAY12LE:case AV_PIX_FMT_GRAY14LE:case AV_PIX_FMT_GRAY16LE:case AV_PIX_FMT_P016LE:case AV_PIX_FMT_P216LE:case AV_PIX_FMT_P416LE:c->lumToYV12 = bswap16Y_c;break;case AV_PIX_FMT_YUVA420P9LE:case AV_PIX_FMT_YUVA422P9LE:case AV_PIX_FMT_YUVA444P9LE:case AV_PIX_FMT_YUVA420P10LE:case AV_PIX_FMT_YUVA422P10LE:case AV_PIX_FMT_YUVA444P10LE:case AV_PIX_FMT_YUVA422P12LE:case AV_PIX_FMT_YUVA444P12LE:case AV_PIX_FMT_YUVA420P16LE:case AV_PIX_FMT_YUVA422P16LE:case AV_PIX_FMT_YUVA444P16LE:c->lumToYV12 = bswap16Y_c;c->alpToYV12 = bswap16Y_c;break;
#elsecase AV_PIX_FMT_YUV420P9BE:case AV_PIX_FMT_YUV422P9BE:case AV_PIX_FMT_YUV444P9BE:case AV_PIX_FMT_YUV420P10BE:case AV_PIX_FMT_YUV422P10BE:case AV_PIX_FMT_YUV440P10BE:case AV_PIX_FMT_YUV444P10BE:case AV_PIX_FMT_YUV420P12BE:case AV_PIX_FMT_YUV422P12BE:case AV_PIX_FMT_YUV440P12BE:case AV_PIX_FMT_YUV444P12BE:case AV_PIX_FMT_YUV420P14BE:case AV_PIX_FMT_YUV422P14BE:case AV_PIX_FMT_YUV444P14BE:case AV_PIX_FMT_YUV420P16BE:case AV_PIX_FMT_YUV422P16BE:case AV_PIX_FMT_YUV444P16BE:case AV_PIX_FMT_GRAY9BE:case AV_PIX_FMT_GRAY10BE:case AV_PIX_FMT_GRAY12BE:case AV_PIX_FMT_GRAY14BE:case AV_PIX_FMT_GRAY16BE:case AV_PIX_FMT_P016BE:case AV_PIX_FMT_P216BE:case AV_PIX_FMT_P416BE:c->lumToYV12 = bswap16Y_c;break;case AV_PIX_FMT_YUVA420P9BE:case AV_PIX_FMT_YUVA422P9BE:case AV_PIX_FMT_YUVA444P9BE:case AV_PIX_FMT_YUVA420P10BE:case AV_PIX_FMT_YUVA422P10BE:case AV_PIX_FMT_YUVA444P10BE:case AV_PIX_FMT_YUVA422P12BE:case AV_PIX_FMT_YUVA444P12BE:case AV_PIX_FMT_YUVA420P16BE:case AV_PIX_FMT_YUVA422P16BE:case AV_PIX_FMT_YUVA444P16BE:c->lumToYV12 = bswap16Y_c;c->alpToYV12 = bswap16Y_c;break;
#endifcase AV_PIX_FMT_YA16LE:c->lumToYV12 = read_ya16le_gray_c;break;case AV_PIX_FMT_YA16BE:c->lumToYV12 = read_ya16be_gray_c;break;case AV_PIX_FMT_AYUV64LE:c->lumToYV12 = read_ayuv64le_Y_c;break;case AV_PIX_FMT_YUYV422:case AV_PIX_FMT_YVYU422:case AV_PIX_FMT_YA8:c->lumToYV12 = yuy2ToY_c;break;case AV_PIX_FMT_UYVY422:c->lumToYV12 = uyvyToY_c;break;case AV_PIX_FMT_BGR24:c->lumToYV12 = bgr24ToY_c;break;case AV_PIX_FMT_BGR565LE:c->lumToYV12 = bgr16leToY_c;break;case AV_PIX_FMT_BGR565BE:c->lumToYV12 = bgr16beToY_c;break;case AV_PIX_FMT_BGR555LE:c->lumToYV12 = bgr15leToY_c;break;case AV_PIX_FMT_BGR555BE:c->lumToYV12 = bgr15beToY_c;break;case AV_PIX_FMT_BGR444LE:c->lumToYV12 = bgr12leToY_c;break;case AV_PIX_FMT_BGR444BE:c->lumToYV12 = bgr12beToY_c;break;case AV_PIX_FMT_RGB24:c->lumToYV12 = rgb24ToY_c;break;case AV_PIX_FMT_RGB565LE:c->lumToYV12 = rgb16leToY_c;break;case AV_PIX_FMT_RGB565BE:c->lumToYV12 = rgb16beToY_c;break;case AV_PIX_FMT_RGB555LE:c->lumToYV12 = rgb15leToY_c;break;case AV_PIX_FMT_RGB555BE:c->lumToYV12 = rgb15beToY_c;break;case AV_PIX_FMT_RGB444LE:c->lumToYV12 = rgb12leToY_c;break;case AV_PIX_FMT_RGB444BE:c->lumToYV12 = rgb12beToY_c;break;case AV_PIX_FMT_RGB8:case AV_PIX_FMT_BGR8:case AV_PIX_FMT_PAL8:case AV_PIX_FMT_BGR4_BYTE:case AV_PIX_FMT_RGB4_BYTE:c->lumToYV12 = palToY_c;break;case AV_PIX_FMT_MONOBLACK:c->lumToYV12 = monoblack2Y_c;break;case AV_PIX_FMT_MONOWHITE:c->lumToYV12 = monowhite2Y_c;break;case AV_PIX_FMT_RGB32:c->lumToYV12 = bgr32ToY_c;break;case AV_PIX_FMT_RGB32_1:c->lumToYV12 = bgr321ToY_c;break;case AV_PIX_FMT_BGR32:c->lumToYV12 = rgb32ToY_c;break;case AV_PIX_FMT_BGR32_1:c->lumToYV12 = rgb321ToY_c;break;case AV_PIX_FMT_RGB48BE:c->lumToYV12 = rgb48BEToY_c;break;case AV_PIX_FMT_RGB48LE:c->lumToYV12 = rgb48LEToY_c;break;case AV_PIX_FMT_BGR48BE:c->lumToYV12 = bgr48BEToY_c;break;case AV_PIX_FMT_BGR48LE:c->lumToYV12 = bgr48LEToY_c;break;case AV_PIX_FMT_RGBA64BE:c->lumToYV12 = rgb64BEToY_c;break;case AV_PIX_FMT_RGBA64LE:c->lumToYV12 = 
rgb64LEToY_c;break;case AV_PIX_FMT_BGRA64BE:c->lumToYV12 = bgr64BEToY_c;break;case AV_PIX_FMT_BGRA64LE:c->lumToYV12 = bgr64LEToY_c;break;case AV_PIX_FMT_P010LE:case AV_PIX_FMT_P210LE:case AV_PIX_FMT_P410LE:c->lumToYV12 = p010LEToY_c;break;case AV_PIX_FMT_P010BE:case AV_PIX_FMT_P210BE:case AV_PIX_FMT_P410BE:c->lumToYV12 = p010BEToY_c;break;case AV_PIX_FMT_GRAYF32LE:c->lumToYV12 = grayf32leToY16_c;break;case AV_PIX_FMT_GRAYF32BE:c->lumToYV12 = grayf32beToY16_c;break;case AV_PIX_FMT_Y210LE:c->lumToYV12 = y210le_Y_c;break;case AV_PIX_FMT_X2RGB10LE:c->lumToYV12 = rgb30leToY_c;break;case AV_PIX_FMT_X2BGR10LE:c->lumToYV12 = bgr30leToY_c;break;}if (c->needAlpha) {if (is16BPS(srcFormat) || isNBPS(srcFormat)) {if (HAVE_BIGENDIAN == !isBE(srcFormat) && !c->readAlpPlanar)c->alpToYV12 = bswap16Y_c;}switch (srcFormat) {case AV_PIX_FMT_BGRA64LE:case AV_PIX_FMT_RGBA64LE:  c->alpToYV12 = rgba64leToA_c; break;case AV_PIX_FMT_BGRA64BE:case AV_PIX_FMT_RGBA64BE:  c->alpToYV12 = rgba64beToA_c; break;case AV_PIX_FMT_BGRA:case AV_PIX_FMT_RGBA:c->alpToYV12 = rgbaToA_c;break;case AV_PIX_FMT_ABGR:case AV_PIX_FMT_ARGB:c->alpToYV12 = abgrToA_c;break;case AV_PIX_FMT_YA8:c->alpToYV12 = uyvyToY_c;break;case AV_PIX_FMT_YA16LE:c->alpToYV12 = read_ya16le_alpha_c;break;case AV_PIX_FMT_YA16BE:c->alpToYV12 = read_ya16be_alpha_c;break;case AV_PIX_FMT_AYUV64LE:c->alpToYV12 = read_ayuv64le_A_c;break;case AV_PIX_FMT_PAL8 :c->alpToYV12 = palToA_c;break;}}
}
  • The function is declared as follows.
void ff_sws_init_input_funcs(SwsContext *c);
  • ff_sws_init_input_funcs() assigns the following function pointers according to the input pixel format:
    • lumToYV12: converts input to the Y (luma) component.
    • chrToYV12: converts input to the U/V (chroma) components.
    • alpToYV12: converts input to the alpha component.
    • readLumPlanar: reads planar data and converts it to Y.
    • readChrPlanar: reads planar data and converts it to U/V.

Let's look at a few examples.

  • When the input pixel format is AV_PIX_FMT_RGB24, the lumToYV12 pointer points to rgb24ToY_c(), as shown below.
    case AV_PIX_FMT_RGB24:c->lumToYV12 = rgb24ToY_c;break;

rgb24ToY_c()

  • rgb24ToY_c() is defined as follows.
static void rgb24ToY_c(uint8_t *_dst, const uint8_t *src, const uint8_t *unused1,
                       const uint8_t *unused2, int width, uint32_t *rgb2yuv)
{
    int16_t *dst = (int16_t *)_dst;
    int32_t ry = rgb2yuv[RY_IDX], gy = rgb2yuv[GY_IDX], by = rgb2yuv[BY_IDX];
    int i;
    for (i = 0; i < width; i++) {
        int r = src[i * 3 + 0];
        int g = src[i * 3 + 1];
        int b = src[i * 3 + 2];
        dst[i] = ((ry*r + gy*g + by*b + (32<<(RGB2YUV_SHIFT-1)) + (1<<(RGB2YUV_SHIFT-7))) >> (RGB2YUV_SHIFT-6));
    }
}
  • As the source code shows, the function performs three main steps:
    • 1. Fetch the coefficients: read the per-component coefficients for R, G and B from the rgb2yuv table.
    • 2. Fetch the pixel values: read the R, G and B component values of each pixel.
    • 3. Compute the luma value Y from the R, G, B coefficients and values.
  • When the input pixel format is AV_PIX_FMT_RGB24, the chrToYV12 pointer points to rgb24ToUV_half_c(), as shown below.
        case AV_PIX_FMT_RGB24:c->chrToYV12 = rgb24ToUV_half_c;break;

rgb24ToUV_half_c()

  • rgb24ToUV_half_c() is defined as follows.
static void rgb24ToUV_half_c(uint8_t *_dstU, uint8_t *_dstV, const uint8_t *unused0,
                             const uint8_t *src1, const uint8_t *src2,
                             int width, uint32_t *rgb2yuv)
{
    int16_t *dstU = (int16_t *)_dstU;
    int16_t *dstV = (int16_t *)_dstV;
    int i;
    int32_t ru = rgb2yuv[RU_IDX], gu = rgb2yuv[GU_IDX], bu = rgb2yuv[BU_IDX];
    int32_t rv = rgb2yuv[RV_IDX], gv = rgb2yuv[GV_IDX], bv = rgb2yuv[BV_IDX];
    av_assert1(src1 == src2);
    for (i = 0; i < width; i++) {
        int r = src1[6 * i + 0] + src1[6 * i + 3];
        int g = src1[6 * i + 1] + src1[6 * i + 4];
        int b = src1[6 * i + 2] + src1[6 * i + 5];
        dstU[i] = (ru*r + gu*g + bu*b + (256<<RGB2YUV_SHIFT) + (1<<(RGB2YUV_SHIFT-6))) >> (RGB2YUV_SHIFT-5);
        dstV[i] = (rv*r + gv*g + bv*b + (256<<RGB2YUV_SHIFT) + (1<<(RGB2YUV_SHIFT-6))) >> (RGB2YUV_SHIFT-5);
    }
}
  • rgb24ToUV_half_c() is slightly more involved than rgb24ToY_c(). This is because there are only half as many U/V samples as Y samples, so each pair of adjacent pixels must first be averaged before the chroma values are computed (the code sums each pair and folds the division by two into the final shift).
  • When the input pixel format is AV_PIX_FMT_GBRP (note that this is a planar format whose three planes are G, B and R respectively), readLumPlanar points to planar_rgb_to_y(), as shown below.
    case AV_PIX_FMT_GBRP:c->readLumPlanar = planar_rgb_to_y;break;

planar_rgb_to_y()

  • planar_rgb_to_y() is defined as follows.
static void planar_rgb_to_y(uint8_t *_dst, const uint8_t *src[4], int width, int32_t *rgb2yuv)
{
    uint16_t *dst = (uint16_t *)_dst;
    int32_t ry = rgb2yuv[RY_IDX], gy = rgb2yuv[GY_IDX], by = rgb2yuv[BY_IDX];
    int i;
    for (i = 0; i < width; i++) {
        int g = src[0][i];
        int b = src[1][i];
        int r = src[2][i];
        dst[i] = (ry*r + gy*g + by*b + (0x801<<(RGB2YUV_SHIFT-7))) >> (RGB2YUV_SHIFT-6);
    }
}

ff_sws_init_range_convert()

  • ff_sws_init_range_convert() initializes the functions that convert pixel value ranges. It is defined in libswscale\swscale.c, as shown below.
av_cold void ff_sws_init_range_convert(SwsContext *c)
{
    c->lumConvertRange = NULL;
    c->chrConvertRange = NULL;
    if (c->srcRange != c->dstRange && !isAnyRGB(c->dstFormat)) {
        if (c->dstBpc <= 14) {
            if (c->srcRange) {
                c->lumConvertRange = lumRangeFromJpeg_c;
                c->chrConvertRange = chrRangeFromJpeg_c;
            } else {
                c->lumConvertRange = lumRangeToJpeg_c;
                c->chrConvertRange = chrRangeToJpeg_c;
            }
        } else {
            if (c->srcRange) {
                c->lumConvertRange = lumRangeFromJpeg16_c;
                c->chrConvertRange = chrRangeFromJpeg16_c;
            } else {
                c->lumConvertRange = lumRangeToJpeg16_c;
                c->chrConvertRange = chrRangeToJpeg16_c;
            }
        }
    }
}
  • ff_sws_init_range_convert() covers two kinds of value-range conversion:
    • lumConvertRange: range conversion for the luma component.
    • chrConvertRange: range conversion for the chroma components.
  • The functions converting from the JPEG (full) range to the MPEG (limited) range are lumRangeFromJpeg_c() and chrRangeFromJpeg_c().

lumRangeFromJpeg_c()

  • The luma conversion function lumRangeFromJpeg_c() (mapping 0-255 to 16-235) is shown below.
static void lumRangeFromJpeg_c(int16_t *dst, int width)
{
    int i;
    for (i = 0; i < width; i++)
        dst[i] = (dst[i] * 14071 + 33561947) >> 14;
}
  • We can verify the function by plugging in a number. It maps luma "0" to "16" and "255" to "235", so substitute "255" and check that the result is "235". Note that dst holds 15-bit luma values, so the 8-bit value "255" must be shifted left by 7 bits first: 255 << 7 = 32640; the formula then yields 30080, and shifting right by 7 bits gives the 8-bit luma value 235.
  • The functions that follow can all be verified the same way, so the check is not repeated for each.

chrRangeFromJpeg_c()

  • The chroma conversion function chrRangeFromJpeg_c() (mapping 0-255 to 16-240) is shown below.
static void chrRangeFromJpeg_c(int16_t *dstU, int16_t *dstV, int width)
{
    int i;
    for (i = 0; i < width; i++) {
        dstU[i] = (dstU[i] * 1799 + 4081085) >> 11; // 1469
        dstV[i] = (dstV[i] * 1799 + 4081085) >> 11; // 1469
    }
}
  • The functions converting from the MPEG range to the JPEG range are lumRangeToJpeg_c() and chrRangeToJpeg_c().

lumRangeToJpeg_c()

  • The luma conversion function lumRangeToJpeg_c() (mapping 16-235 to 0-255) is defined as follows.
static void lumRangeToJpeg_c(int16_t *dst, int width)
{
    int i;
    for (i = 0; i < width; i++)
        dst[i] = (FFMIN(dst[i], 30189) * 19077 - 39057361) >> 14;
}

chrRangeToJpeg_c()

  • The chroma conversion function chrRangeToJpeg_c() (mapping 16-240 to 0-255) is defined as follows.
// FIXME all pal and rgb srcFormats could do this conversion as well
// FIXME all scalers more complex than bilinear could do half of this transform
static void chrRangeToJpeg_c(int16_t *dstU, int16_t *dstV, int width)
{
    int i;
    for (i = 0; i < width; i++) {
        dstU[i] = (FFMIN(dstU[i], 30775) * 4663 - 9289992) >> 12; // -264
        dstV[i] = (FFMIN(dstV[i], 30775) * 4663 - 9289992) >> 12; // -264
    }
}

Function call structure

  • The libswscale call graph obtained from this analysis is shown in the figure below.

libswscale data processing flow

  • The flow in which libswscale processes pixel data can be summarized by the figure below.


  • As the figure shows, libswscale has two main processing paths: unscaled and scaled.
  • The unscaled path handles pixel data that does not need resizing (a relatively special case); the scaled path handles pixel data that does.
  • The unscaled path only converts the pixel format, while the scaled path converts the pixel format and also resizes the image.
  • The scaled path can be divided into the following steps:
    • XXX to YUV converter: first convert the input pixel data to 8-bit YUV;
    • Horizontal scaler: scale the image horizontally and convert it to 15-bit YUV;
    • Vertical scaler: scale the image vertically;
    • Output converter: convert to the output pixel format.

SwsContext

  • SwsContext is the structure used throughout any libswscale session.
  • However, when developing against the FFmpeg libraries, we cannot see its internals.
  • libswscale\swscale.h contains only a single forward declaration: struct SwsContext;
  • Seeing a structure declared in one line, one might guess that its internals are very simple. A look at the FFmpeg source code shows that this guess is completely wrong: the definition of SwsContext is extremely complex.
  • It is defined in libswscale\swscale_internal.h, as shown below.

/* This struct should be aligned on at least a 32-byte boundary. */
typedef struct SwsContext {/*** info on struct for av_log*/const AVClass *av_class;struct SwsContext *parent;AVSliceThread      *slicethread;struct SwsContext **slice_ctx;int                *slice_err;int              nb_slice_ctx;// values passed to current sws_receive_slice() callint dst_slice_start;int dst_slice_height;/*** Note that src, dst, srcStride, dstStride will be copied in the* sws_scale() wrapper so they can be freely modified here.*/SwsFunc convert_unscaled;int srcW;                     ///< Width  of source      luma/alpha planes.int srcH;                     ///< Height of source      luma/alpha planes.int dstH;                     ///< Height of destination luma/alpha planes.int chrSrcW;                  ///< Width  of source      chroma     planes.int chrSrcH;                  ///< Height of source      chroma     planes.int chrDstW;                  ///< Width  of destination chroma     planes.int chrDstH;                  ///< Height of destination chroma     planes.int lumXInc, chrXInc;int lumYInc, chrYInc;enum AVPixelFormat dstFormat; ///< Destination pixel format.enum AVPixelFormat srcFormat; ///< Source      pixel format.int dstFormatBpp;             ///< Number of bits per pixel of the destination pixel format.int srcFormatBpp;             ///< Number of bits per pixel of the source      pixel format.int dstBpc, srcBpc;int chrSrcHSubSample;         ///< Binary logarithm of horizontal subsampling factor between luma/alpha and chroma planes in source      image.int chrSrcVSubSample;         ///< Binary logarithm of vertical   subsampling factor between luma/alpha and chroma planes in source      image.int chrDstHSubSample;         ///< Binary logarithm of horizontal subsampling factor between luma/alpha and chroma planes in destination image.int chrDstVSubSample;         ///< Binary logarithm of vertical   subsampling factor between luma/alpha and chroma planes in destination image.int vChrDrop;                 ///< Binary logarithm of extra 
vertical subsampling factor in source image chroma planes specified by user.int sliceDir;                 ///< Direction that slices are fed to the scaler (1 = top-to-bottom, -1 = bottom-to-top).int nb_threads;               ///< Number of threads used for scalingdouble param[2];              ///< Input parameters for scaling algorithms that need them.AVFrame *frame_src;AVFrame *frame_dst;RangeList src_ranges;/* The cascaded_* fields allow spliting a scaler task into multiple* sequential steps, this is for example used to limit the maximum* downscaling factor that needs to be supported in one scaler.*/struct SwsContext *cascaded_context[3];int cascaded_tmpStride[4];uint8_t *cascaded_tmp[4];int cascaded1_tmpStride[4];uint8_t *cascaded1_tmp[4];int cascaded_mainindex;double gamma_value;int gamma_flag;int is_internal_gamma;uint16_t *gamma;uint16_t *inv_gamma;int numDesc;int descIndex[2];int numSlice;struct SwsSlice *slice;struct SwsFilterDescriptor *desc;uint32_t pal_yuv[256];uint32_t pal_rgb[256];float uint2float_lut[256];/*** @name Scaled horizontal lines ring buffer.* The horizontal scaler keeps just enough scaled lines in a ring buffer* so they may be passed to the vertical scaler. The pointers to the* allocated buffers for each line are duplicated in sequence in the ring* buffer to simplify indexing and avoid wrapping around between lines* inside the vertical scaler code. 
The wrapping is done before the
 * vertical scaler is called.
 */
//@{
int lastInLumBuf;             ///< Last scaled horizontal luma/alpha line from source in the ring buffer.
int lastInChrBuf;             ///< Last scaled horizontal chroma     line from source in the ring buffer.
//@}

uint8_t *formatConvBuffer;
int needAlpha;

/**
 * @name Horizontal and vertical filters.
 * To better understand the following fields, here is a pseudo-code of
 * their usage in filtering a horizontal line:
 * @code
 * for (i = 0; i < width; i++) {
 *     dst[i] = 0;
 *     for (j = 0; j < filterSize; j++)
 *         dst[i] += src[ filterPos[i] + j ] * filter[ filterSize * i + j ];
 *     dst[i] >>= FRAC_BITS; // The actual implementation is fixed-point.
 * }
 * @endcode
 */
//@{
int16_t *hLumFilter;          ///< Array of horizontal filter coefficients for luma/alpha planes.
int16_t *hChrFilter;          ///< Array of horizontal filter coefficients for chroma     planes.
int16_t *vLumFilter;          ///< Array of vertical   filter coefficients for luma/alpha planes.
int16_t *vChrFilter;          ///< Array of vertical   filter coefficients for chroma     planes.
int32_t *hLumFilterPos;       ///< Array of horizontal filter starting positions for each dst[i] for luma/alpha planes.
int32_t *hChrFilterPos;       ///< Array of horizontal filter starting positions for each dst[i] for chroma     planes.
int32_t *vLumFilterPos;       ///< Array of vertical   filter starting positions for each dst[i] for luma/alpha planes.
int32_t *vChrFilterPos;       ///< Array of vertical   filter starting positions for each dst[i] for chroma     planes.
int hLumFilterSize;           ///< Horizontal filter size for luma/alpha pixels.
int hChrFilterSize;           ///< Horizontal filter size for chroma     pixels.
int vLumFilterSize;           ///< Vertical   filter size for luma/alpha pixels.
int vChrFilterSize;           ///< Vertical   filter size for chroma     pixels.
//@}

int lumMmxextFilterCodeSize;  ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code size for luma/alpha planes.
int chrMmxextFilterCodeSize;  ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code size for chroma planes.
uint8_t *lumMmxextFilterCode; ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code for luma/alpha planes.
uint8_t *chrMmxextFilterCode; ///< Runtime-generated MMXEXT horizontal fast bilinear scaler code for chroma planes.

int canMMXEXTBeUsed;
int warned_unuseable_bilinear;

int dstY;                     ///< Last destination vertical line output from last slice.
int flags;                    ///< Flags passed by the user to select scaler algorithm, optimizations, subsampling, etc...
void *yuvTable;               // pointer to the yuv->rgb table start so it can be freed()
// alignment ensures the offset can be added in a single
// instruction on e.g. ARM
DECLARE_ALIGNED(16, int, table_gV)[256 + 2*YUVRGB_TABLE_HEADROOM];
uint8_t *table_rV[256 + 2*YUVRGB_TABLE_HEADROOM];
uint8_t *table_gU[256 + 2*YUVRGB_TABLE_HEADROOM];
uint8_t *table_bU[256 + 2*YUVRGB_TABLE_HEADROOM];
DECLARE_ALIGNED(16, int32_t, input_rgb2yuv_table)[16+40*4]; // This table can contain both C and SIMD formatted values, the C vales are always at the XY_IDX points
#define RY_IDX 0
#define GY_IDX 1
#define BY_IDX 2
#define RU_IDX 3
#define GU_IDX 4
#define BU_IDX 5
#define RV_IDX 6
#define GV_IDX 7
#define BV_IDX 8
#define RGB2YUV_SHIFT 15

int *dither_error[4];

//Colorspace stuff
int contrast, brightness, saturation;    // for sws_getColorspaceDetails
int srcColorspaceTable[4];
int dstColorspaceTable[4];
int srcRange;                 ///< 0 = MPG YUV range, 1 = JPG YUV range (source      image).
int dstRange;                 ///< 0 = MPG YUV range, 1 = JPG YUV range (destination image).
int src0Alpha;
int dst0Alpha;
int srcXYZ;
int dstXYZ;
int src_h_chr_pos;
int dst_h_chr_pos;
int src_v_chr_pos;
int dst_v_chr_pos;
int yuv2rgb_y_offset;
int yuv2rgb_y_coeff;
int yuv2rgb_v2r_coeff;
int yuv2rgb_v2g_coeff;
int yuv2rgb_u2g_coeff;
int yuv2rgb_u2b_coeff;

#define RED_DITHER            "0*8"
#define GREEN_DITHER          "1*8"
#define BLUE_DITHER           "2*8"
#define Y_COEFF               "3*8"
#define VR_COEFF              "4*8"
#define UB_COEFF              "5*8"
#define VG_COEFF              "6*8"
#define UG_COEFF              "7*8"
#define Y_OFFSET              "8*8"
#define U_OFFSET              "9*8"
#define V_OFFSET              "10*8"
#define LUM_MMX_FILTER_OFFSET "11*8"
#define CHR_MMX_FILTER_OFFSET "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)
#define DSTW_OFFSET           "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2"
#define ESP_OFFSET            "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+8"
#define VROUNDER_OFFSET       "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+16"
#define U_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+24"
#define V_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+32"
#define Y_TEMP                "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+40"
#define ALP_MMX_FILTER_OFFSET "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*2+48"
#define UV_OFF_PX             "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+48"
#define UV_OFF_BYTE           "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+56"
#define DITHER16              "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+64"
#define DITHER32              "11*8+4*4*"AV_STRINGIFY(MAX_FILTER_SIZE)"*3+80"
#define DITHER32_INT          (11*8+4*4*MAX_FILTER_SIZE*3+80) // value equal to above, used for checking that the struct hasn't been changed by mistake

DECLARE_ALIGNED(8, uint64_t, redDither);
DECLARE_ALIGNED(8, uint64_t, greenDither);
DECLARE_ALIGNED(8, uint64_t, blueDither);

DECLARE_ALIGNED(8, uint64_t, yCoeff);
DECLARE_ALIGNED(8, uint64_t, vrCoeff);
DECLARE_ALIGNED(8, uint64_t, ubCoeff);
DECLARE_ALIGNED(8, uint64_t, vgCoeff);
DECLARE_ALIGNED(8, uint64_t, ugCoeff);
DECLARE_ALIGNED(8, uint64_t, yOffset);
DECLARE_ALIGNED(8, uint64_t, uOffset);
DECLARE_ALIGNED(8, uint64_t, vOffset);
int32_t lumMmxFilter[4 * MAX_FILTER_SIZE];
int32_t chrMmxFilter[4 * MAX_FILTER_SIZE];
int dstW;                     ///< Width  of destination luma/alpha planes.
DECLARE_ALIGNED(8, uint64_t, esp);
DECLARE_ALIGNED(8, uint64_t, vRounder);
DECLARE_ALIGNED(8, uint64_t, u_temp);
DECLARE_ALIGNED(8, uint64_t, v_temp);
DECLARE_ALIGNED(8, uint64_t, y_temp);
int32_t alpMmxFilter[4 * MAX_FILTER_SIZE];
// alignment of these values is not necessary, but merely here
// to maintain the same offset across x8632 and x86-64. Once we
// use proper offset macros in the asm, they can be removed.
DECLARE_ALIGNED(8, ptrdiff_t, uv_off);   ///< offset (in pixels) between u and v planes
DECLARE_ALIGNED(8, ptrdiff_t, uv_offx2); ///< offset (in bytes) between u and v planes
DECLARE_ALIGNED(8, uint16_t, dither16)[8];
DECLARE_ALIGNED(8, uint32_t, dither32)[8];

const uint8_t *chrDither8, *lumDither8;

#if HAVE_ALTIVEC
vector signed short   CY;
vector signed short   CRV;
vector signed short   CBU;
vector signed short   CGU;
vector signed short   CGV;
vector signed short   OY;
vector unsigned short CSHIFT;
vector signed short  *vYCoeffsBank, *vCCoeffsBank;
#endif

int use_mmx_vfilter;

/* pre defined color-spaces gamma */
#define XYZ_GAMMA (2.6f)
#define RGB_GAMMA (2.2f)
int16_t *xyzgamma;
int16_t *rgbgamma;
int16_t *xyzgammainv;
int16_t *rgbgammainv;
int16_t xyz2rgb_matrix[3][4];
int16_t rgb2xyz_matrix[3][4];

/* function pointers for swscale() */
yuv2planar1_fn yuv2plane1;
yuv2planarX_fn yuv2planeX;
yuv2interleavedX_fn yuv2nv12cX;
yuv2packed1_fn yuv2packed1;
yuv2packed2_fn yuv2packed2;
yuv2packedX_fn yuv2packedX;
yuv2anyX_fn yuv2anyX;

/// Unscaled conversion of luma plane to YV12 for horizontal scaler.
void (*lumToYV12)(uint8_t *dst, const uint8_t *src, const uint8_t *src2, const uint8_t *src3,
                  int width, uint32_t *pal);
/// Unscaled conversion of alpha plane to YV12 for horizontal scaler.
void (*alpToYV12)(uint8_t *dst, const uint8_t *src, const uint8_t *src2, const uint8_t *src3,
                  int width, uint32_t *pal);
/// Unscaled conversion of chroma planes to YV12 for horizontal scaler.
void (*chrToYV12)(uint8_t *dstU, uint8_t *dstV,
                  const uint8_t *src1, const uint8_t *src2, const uint8_t *src3,
                  int width, uint32_t *pal);

/**
 * Functions to read planar input, such as planar RGB, and convert
 * internally to Y/UV/A.
 */
/** @{ */
void (*readLumPlanar)(uint8_t *dst, const uint8_t *src[4], int width, int32_t *rgb2yuv);
void (*readChrPlanar)(uint8_t *dstU, uint8_t *dstV, const uint8_t *src[4],
                      int width, int32_t *rgb2yuv);
void (*readAlpPlanar)(uint8_t *dst, const uint8_t *src[4], int width, int32_t *rgb2yuv);
/** @} */

/**
 * Scale one horizontal line of input data using a bilinear filter
 * to produce one line of output data. Compared to SwsContext->hScale(),
 * please take note of the following caveats when using these:
 * - Scaling is done using only 7 bits instead of 14-bit coefficients.
 * - You can use no more than 5 input pixels to produce 4 output
 *   pixels. Therefore, this filter should not be used for downscaling
 *   by more than ~20% in width (because that equals more than 5/4th
 *   downscaling and thus more than 5 pixels input per 4 pixels output).
 * - In general, bilinear filters create artifacts during downscaling
 *   (even when <20%), because one output pixel will span more than one
 *   input pixel, and thus some pixels will need edges of both neighbor
 *   pixels to interpolate the output pixel. Since you can use at most
 *   two input pixels per output pixel in bilinear scaling, this is
 *   impossible and thus downscaling by any size will create artifacts.
 * To enable this type of scaling, set SWS_FLAG_FAST_BILINEAR
 * in SwsContext->flags.
 */
/** @{ */
void (*hyscale_fast)(struct SwsContext *c,
                     int16_t *dst, int dstWidth,
                     const uint8_t *src, int srcW, int xInc);
void (*hcscale_fast)(struct SwsContext *c,
                     int16_t *dst1, int16_t *dst2, int dstWidth,
                     const uint8_t *src1, const uint8_t *src2,
                     int srcW, int xInc);
/** @} */

/**
 * Scale one horizontal line of input data using a filter over the input
 * lines, to produce one (differently sized) line of output data.
 *
 * @param dst        pointer to destination buffer for horizontally scaled
 *                   data. If the number of bits per component of one
 *                   destination pixel (SwsContext->dstBpc) is <= 10, data
 *                   will be 15 bpc in 16 bits (int16_t) width. Else (i.e.
 *                   SwsContext->dstBpc == 16), data will be 19bpc in
 *                   32 bits (int32_t) width.
 * @param dstW       width of destination image
 * @param src        pointer to source data to be scaled. If the number of
 *                   bits per component of a source pixel (SwsContext->srcBpc)
 *                   is 8, this is 8bpc in 8 bits (uint8_t) width. Else
 *                   (i.e. SwsContext->dstBpc > 8), this is native depth
 *                   in 16 bits (uint16_t) width. In other words, for 9-bit
 *                   YUV input, this is 9bpc, for 10-bit YUV input, this is
 *                   10bpc, and for 16-bit RGB or YUV, this is 16bpc.
 * @param filter     filter coefficients to be used per output pixel for
 *                   scaling. This contains 14bpp filtering coefficients.
 *                   Guaranteed to contain dstW * filterSize entries.
 * @param filterPos  position of the first input pixel to be used for
 *                   each output pixel during scaling. Guaranteed to
 *                   contain dstW entries.
 * @param filterSize the number of input coefficients to be used (and
 *                   thus the number of input pixels to be used) for
 *                   creating a single output pixel. Is aligned to 4
 *                   (and input coefficients thus padded with zeroes)
 *                   to simplify creating SIMD code.
 */
/** @{ */
void (*hyScale)(struct SwsContext *c, int16_t *dst, int dstW,
                const uint8_t *src, const int16_t *filter,
                const int32_t *filterPos, int filterSize);
void (*hcScale)(struct SwsContext *c, int16_t *dst, int dstW,
                const uint8_t *src, const int16_t *filter,
                const int32_t *filterPos, int filterSize);
/** @} */

/// Color range conversion function for luma plane if needed.
void (*lumConvertRange)(int16_t *dst, int width);
/// Color range conversion function for chroma planes if needed.
void (*chrConvertRange)(int16_t *dst1, int16_t *dst2, int width);

int needs_hcscale; ///< Set if there are chroma planes to be converted.

SwsDither dither;

SwsAlphaBlend alphablend;

// scratch buffer for converting packed rgb0 sources
// filled with a copy of the input frame + fully opaque alpha,
// then passed as input to further conversion
uint8_t     *rgb0_scratch;
unsigned int rgb0_scratch_allocated;

// scratch buffer for converting XYZ sources
// filled with the input converted to rgb48
// then passed as input to further conversion
uint8_t     *xyz_scratch;
unsigned int xyz_scratch_allocated;

unsigned int dst_slice_align;
atomic_int   stride_unaligned_warned;
atomic_int   data_unaligned_warned;
} SwsContext;
//FIXME check init (where 0)
  • The definition of this structure is admittedly complex: it holds every variable that libswscale needs. Analyzing each field one by one is not realistic, so the following sections will briefly examine a few of them. — In memory of the master, 雷霄驊.
  • With that, the analysis of the sws_getContext() source code is essentially complete.
