A Brief Analysis of the FFmpeg Source Code: sws_scale() of libswscale

References

  • "A Brief Analysis of the FFmpeg Source Code: sws_scale() of libswscale", Lei Xiaohua (雷霄驊)'s blog on CSDN

sws_scale() of libswscale

  • sws_scale() is the image-processing function (scaling and YUV/RGB pixel-format conversion) of FFmpeg's libswscale library.
  • libswscale is a library mainly used to process picture pixel data. It can convert between pixel formats and resize (stretch) pictures.
  • Only a few of the library's functions are commonly used; in general, just three:
    • sws_getContext(): initialize an SwsContext.
    • sws_scale(): process the image data.
    • sws_freeContext(): free an SwsContext.

How libswscale processes data

  • The flow libswscale uses to process pixel data can be summarized by the diagram below (figure not reproduced here).

  • As the diagram shows, libswscale has two main processing paths: unscaled and scaled.
  • The unscaled path handles pixel data that does not need to be resized (a relatively special case), while the scaled path handles pixel data that does.
  • The unscaled path only converts the pixel format; the scaled path converts the pixel format and also resizes the image.
  • The scaled path can be broken into the following steps:
    • XXX to YUV converter: first convert the input pixel data to 8-bit YUV;
    • Horizontal scaler: scale the image horizontally, converting to 15-bit YUV;
    • Vertical scaler: scale the image vertically;
    • Output converter: convert to the output pixel format.

sws_scale()

  • sws_scale() is the function that converts pixels. Its declaration is in libswscale\swscale.h, as shown below.
/**
 * Scale the image slice in srcSlice and put the resulting scaled
 * slice in the image in dst. A slice is a sequence of consecutive
 * rows in an image.
 *
 * Slices have to be provided in sequential order, either in
 * top-bottom or bottom-top order. If slices are provided in
 * non-sequential order the behavior of the function is undefined.
 *
 * @param c         the scaling context previously created with
 *                  sws_getContext()
 * @param srcSlice  the array containing the pointers to the planes of
 *                  the source slice
 * @param srcStride the array containing the strides for each plane of
 *                  the source image
 * @param srcSliceY the position in the source image of the slice to
 *                  process, that is the number (counted starting from
 *                  zero) in the image of the first row of the slice
 * @param srcSliceH the height of the source slice, that is the number
 *                  of rows in the slice
 * @param dst       the array containing the pointers to the planes of
 *                  the destination image
 * @param dstStride the array containing the strides for each plane of
 *                  the destination image
 * @return          the height of the output slice
 */
int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],
              const int srcStride[], int srcSliceY, int srcSliceH,
              uint8_t *const dst[], const int dstStride[]);
  • The definition of sws_scale() is in libswscale\swscale.c, as shown below.
/**
 * swscale wrapper, so we don't need to export the SwsContext.
 * Assumes planar YUV to be in YUV order instead of YVU.
 */
int attribute_align_arg sws_scale(struct SwsContext *c,
                                  const uint8_t * const srcSlice[],
                                  const int srcStride[], int srcSliceY,
                                  int srcSliceH, uint8_t *const dst[],
                                  const int dstStride[])
{
    if (c->nb_slice_ctx)
        c = c->slice_ctx[0];

    return scale_internal(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                          dst, dstStride, 0, c->dstH);
}
  • sws_scale() internally calls scale_internal(), which contains most of the logic behind sws_scale().
static int scale_internal(SwsContext *c,
                          const uint8_t * const srcSlice[], const int srcStride[],
                          int srcSliceY, int srcSliceH,
                          uint8_t *const dstSlice[], const int dstStride[],
                          int dstSliceY, int dstSliceH)
{
    const int scale_dst = dstSliceY > 0 || dstSliceH < c->dstH;
    const int frame_start = scale_dst || !c->sliceDir;
    int i, ret;
    const uint8_t *src2[4];
    uint8_t *dst2[4];
    int macro_height_src = isBayer(c->srcFormat) ? 2 : (1 << c->chrSrcVSubSample);
    int macro_height_dst = isBayer(c->dstFormat) ? 2 : (1 << c->chrDstVSubSample);
    // copy strides, so they can safely be modified
    int srcStride2[4];
    int dstStride2[4];
    int srcSliceY_internal = srcSliceY;

    if (!srcStride || !dstStride || !dstSlice || !srcSlice) {
        av_log(c, AV_LOG_ERROR, "One of the input parameters to sws_scale() is NULL, please check the calling code\n");
        return AVERROR(EINVAL);
    }

    if ((srcSliceY  & (macro_height_src - 1)) ||
        ((srcSliceH & (macro_height_src - 1)) && srcSliceY + srcSliceH != c->srcH) ||
        srcSliceY + srcSliceH > c->srcH) {
        av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", srcSliceY, srcSliceH);
        return AVERROR(EINVAL);
    }

    if ((dstSliceY  & (macro_height_dst - 1)) ||
        ((dstSliceH & (macro_height_dst - 1)) && dstSliceY + dstSliceH != c->dstH) ||
        dstSliceY + dstSliceH > c->dstH) {
        av_log(c, AV_LOG_ERROR, "Slice parameters %d, %d are invalid\n", dstSliceY, dstSliceH);
        return AVERROR(EINVAL);
    }

    if (!check_image_pointers(srcSlice, c->srcFormat, srcStride)) {
        av_log(c, AV_LOG_ERROR, "bad src image pointers\n");
        return AVERROR(EINVAL);
    }
    if (!check_image_pointers((const uint8_t* const*)dstSlice, c->dstFormat, dstStride)) {
        av_log(c, AV_LOG_ERROR, "bad dst image pointers\n");
        return AVERROR(EINVAL);
    }

    // do not mess up sliceDir if we have a "trailing" 0-size slice
    if (srcSliceH == 0)
        return 0;

    if (c->gamma_flag && c->cascaded_context[0])
        return scale_gamma(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                           dstSlice, dstStride, dstSliceY, dstSliceH);

    if (c->cascaded_context[0] && srcSliceY == 0 && srcSliceH == c->cascaded_context[0]->srcH)
        return scale_cascaded(c, srcSlice, srcStride, srcSliceY, srcSliceH,
                              dstSlice, dstStride, dstSliceY, dstSliceH);

    if (!srcSliceY && (c->flags & SWS_BITEXACT) && c->dither == SWS_DITHER_ED && c->dither_error[0])
        for (i = 0; i < 4; i++)
            memset(c->dither_error[i], 0, sizeof(c->dither_error[0][0]) * (c->dstW+2));

    if (usePal(c->srcFormat))
        update_palette(c, (const uint32_t *)srcSlice[1]);

    memcpy(src2,       srcSlice,  sizeof(src2));
    memcpy(dst2,       dstSlice,  sizeof(dst2));
    memcpy(srcStride2, srcStride, sizeof(srcStride2));
    memcpy(dstStride2, dstStride, sizeof(dstStride2));

    if (frame_start && !scale_dst) {
        if (srcSliceY != 0 && srcSliceY + srcSliceH != c->srcH) {
            av_log(c, AV_LOG_ERROR, "Slices start in the middle!\n");
            return AVERROR(EINVAL);
        }
        c->sliceDir = (srcSliceY == 0) ? 1 : -1;
    } else if (scale_dst)
        c->sliceDir = 1;

    if (c->src0Alpha && !c->dst0Alpha && isALPHA(c->dstFormat)) {
        uint8_t *base;
        int x,y;

        av_fast_malloc(&c->rgb0_scratch, &c->rgb0_scratch_allocated,
                       FFABS(srcStride[0]) * srcSliceH + 32);
        if (!c->rgb0_scratch)
            return AVERROR(ENOMEM);

        base = srcStride[0] < 0 ? c->rgb0_scratch - srcStride[0] * (srcSliceH-1) :
                                  c->rgb0_scratch;
        for (y=0; y<srcSliceH; y++){
            memcpy(base + srcStride[0]*y, src2[0] + srcStride[0]*y, 4*c->srcW);
            for (x=c->src0Alpha-1; x<4*c->srcW; x+=4) {
                base[ srcStride[0]*y + x] = 0xFF;
            }
        }
        src2[0] = base;
    }

    if (c->srcXYZ && !(c->dstXYZ && c->srcW==c->dstW && c->srcH==c->dstH)) {
        uint8_t *base;

        av_fast_malloc(&c->xyz_scratch, &c->xyz_scratch_allocated,
                       FFABS(srcStride[0]) * srcSliceH + 32);
        if (!c->xyz_scratch)
            return AVERROR(ENOMEM);

        base = srcStride[0] < 0 ? c->xyz_scratch - srcStride[0] * (srcSliceH-1) :
                                  c->xyz_scratch;

        xyz12Torgb48(c, (uint16_t*)base, (const uint16_t*)src2[0], srcStride[0]/2, srcSliceH);
        src2[0] = base;
    }

    if (c->sliceDir != 1) {
        // slices go from bottom to top => we flip the image internally
        for (i=0; i<4; i++) {
            srcStride2[i] *= -1;
            dstStride2[i] *= -1;
        }

        src2[0] += (srcSliceH - 1) * srcStride[0];
        if (!usePal(c->srcFormat))
            src2[1] += ((srcSliceH >> c->chrSrcVSubSample) - 1) * srcStride[1];
        src2[2] += ((srcSliceH >> c->chrSrcVSubSample) - 1) * srcStride[2];
        src2[3] += (srcSliceH - 1) * srcStride[3];
        dst2[0] += ( c->dstH                         - 1) * dstStride[0];
        dst2[1] += ((c->dstH >> c->chrDstVSubSample) - 1) * dstStride[1];
        dst2[2] += ((c->dstH >> c->chrDstVSubSample) - 1) * dstStride[2];
        dst2[3] += ( c->dstH                         - 1) * dstStride[3];

        srcSliceY_internal = c->srcH-srcSliceY-srcSliceH;
    }
    reset_ptr(src2, c->srcFormat);
    reset_ptr((void*)dst2, c->dstFormat);

    if (c->convert_unscaled) {
        int offset  = srcSliceY_internal;
        int slice_h = srcSliceH;

        // for dst slice scaling, offset the pointers to match the unscaled API
        if (scale_dst) {
            av_assert0(offset == 0);
            for (i = 0; i < 4 && src2[i]; i++) {
                if (!src2[i] || (i > 0 && usePal(c->srcFormat)))
                    break;
                src2[i] += (dstSliceY >> ((i == 1 || i == 2) ? c->chrSrcVSubSample : 0)) * srcStride2[i];
            }

            for (i = 0; i < 4 && dst2[i]; i++) {
                if (!dst2[i] || (i > 0 && usePal(c->dstFormat)))
                    break;
                dst2[i] -= (dstSliceY >> ((i == 1 || i == 2) ? c->chrDstVSubSample : 0)) * dstStride2[i];
            }
            offset  = dstSliceY;
            slice_h = dstSliceH;
        }

        ret = c->convert_unscaled(c, src2, srcStride2, offset, slice_h,
                                  dst2, dstStride2);
        if (scale_dst)
            dst2[0] += dstSliceY * dstStride2[0];
    } else {
        ret = swscale(c, src2, srcStride2, srcSliceY_internal, srcSliceH,
                      dst2, dstStride2, dstSliceY, dstSliceH);
    }

    if (c->dstXYZ && !(c->srcXYZ && c->srcW==c->dstW && c->srcH==c->dstH)) {
        uint16_t *dst16;

        if (scale_dst) {
            dst16 = (uint16_t *)dst2[0];
        } else {
            int dstY = c->dstY ? c->dstY : srcSliceY + srcSliceH;

            av_assert0(dstY >= ret);
            av_assert0(ret >= 0);
            av_assert0(c->dstH >= dstY);
            dst16 = (uint16_t*)(dst2[0] + (dstY - ret) * dstStride2[0]);
        }

        /* replace on the same data */
        rgb48Toxyz12(c, dst16, dst16, dstStride2[0]/2, ret);
    }

    /* reset slice direction at end of frame */
    if ((srcSliceY_internal + srcSliceH == c->srcH) || scale_dst)
        c->sliceDir = 0;

    return ret;
}

  • From the definitions above you can see that sws_scale() delegates to scale_internal(), whose most important call is either to the unscaled converter (c->convert_unscaled) or to the internal swscale() function (note: that name has no underscore).
  • Beyond that, the function performs a number of compatibility-related fixups.
  • Its main steps are as follows.

1. Validate the input image parameters.

  • This step first checks that the input and output parameters are not NULL, and then calls check_image_pointers() to verify that the input and output image buffers are properly allocated.
  • The definition of check_image_pointers() is shown below.
static int check_image_pointers(const uint8_t * const data[4], enum AVPixelFormat pix_fmt,
                                const int linesizes[4])
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(pix_fmt);
    int i;

    av_assert2(desc);

    for (i = 0; i < 4; i++) {
        int plane = desc->comp[i].plane;
        if (!data[plane] || !linesizes[plane])
            return 0;
    }

    return 1;
}
  • As its definition shows, for the given pixel format, check_image_pointers() returns 0 if any plane that the format requires has a NULL data pointer or a zero linesize; otherwise it returns 1.

2. If the input pixel data uses a palette, perform the corresponding processing.

  • Whether this step applies is decided by the function usePal().
  • The definition of usePal() is shown below.
static av_always_inline int usePal(enum AVPixelFormat pix_fmt)
{
    switch (pix_fmt) {
    case AV_PIX_FMT_PAL8:
    case AV_PIX_FMT_BGR4_BYTE:
    case AV_PIX_FMT_BGR8:
    case AV_PIX_FMT_GRAY8:
    case AV_PIX_FMT_RGB4_BYTE:
    case AV_PIX_FMT_RGB8:
        return 1;
    default:
        return 0;
    }
}

3. Handle some other special formats, such as alpha and XYZ (not examined in detail here).
4. If the input image is scanned bottom-to-top (normally it is top-to-bottom), flip the image.
5. Call the swscale() stored in the SwsContext.
swscale() inside SwsContext

  • The swscale field has type SwsFunc, which is simply a function pointer. It is the core of the whole library. When we call the external sws_scale() function, what ultimately runs is the function this swscale field in the SwsContext points to (note that the external API function and this internal pointer share the same name, but they are not the same thing).
  • Here is the definition of the SwsFunc type:
typedef int (*SwsFunc)(struct SwsContext *context, const uint8_t *src[],
                       int srcStride[], int srcSliceY, int srcSliceH,
                       uint8_t *dst[], int dstStride[]);
  • As you can see, the parameter types in SwsFunc's definition match those of libswscale's external interface function sws_scale().
  • In libswscale, this pointer can point to one of two things:
    • 1. When the image is not resized, a dedicated pixel-conversion function.
    • 2. When the image is resized, the swscale() function.
  • When sws_getContext() initializes the SwsContext, its helper sws_init_context() assigns this pointer. If the image is not resized, ff_get_unscaled_swscale() performs the assignment; if it is resized, ff_getSwsFunc() does.
  • Let's look at each of these two cases.

No resizing: dedicated pixel-conversion functions

  • If the image is not resized, ff_get_unscaled_swscale() is called to set up the conversion function in the SwsContext.
  • The previous article covered this function; here is a quick review.

ff_get_unscaled_swscale()

  • The definition of ff_get_unscaled_swscale() is shown below.
void ff_get_unscaled_swscale(SwsContext *c)
{
    const enum AVPixelFormat srcFormat = c->srcFormat;
    const enum AVPixelFormat dstFormat = c->dstFormat;
    const int flags = c->flags;
    const int dstH = c->dstH;
    const int dstW = c->dstW;
    int needsDither;

    needsDither = isAnyRGB(dstFormat) &&
                  c->dstFormatBpp < 24 &&
                  (c->dstFormatBpp < c->srcFormatBpp || (!isAnyRGB(srcFormat)));

    /* yv12_to_nv12 */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
        (dstFormat == AV_PIX_FMT_NV12 || dstFormat == AV_PIX_FMT_NV21)) {
        c->convert_unscaled = planarToNv12Wrapper;
    }
    /* yv24_to_nv24 */
    if ((srcFormat == AV_PIX_FMT_YUV444P || srcFormat == AV_PIX_FMT_YUVA444P) &&
        (dstFormat == AV_PIX_FMT_NV24 || dstFormat == AV_PIX_FMT_NV42)) {
        c->convert_unscaled = planarToNv24Wrapper;
    }
    /* nv12_to_yv12 */
    if (dstFormat == AV_PIX_FMT_YUV420P &&
        (srcFormat == AV_PIX_FMT_NV12 || srcFormat == AV_PIX_FMT_NV21)) {
        c->convert_unscaled = nv12ToPlanarWrapper;
    }
    /* nv24_to_yv24 */
    if (dstFormat == AV_PIX_FMT_YUV444P &&
        (srcFormat == AV_PIX_FMT_NV24 || srcFormat == AV_PIX_FMT_NV42)) {
        c->convert_unscaled = nv24ToPlanarWrapper;
    }
    /* yuv2bgr */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUV422P ||
         srcFormat == AV_PIX_FMT_YUVA420P) && isAnyRGB(dstFormat) &&
        !(flags & SWS_ACCURATE_RND) &&
        (c->dither == SWS_DITHER_BAYER || c->dither == SWS_DITHER_AUTO) && !(dstH & 1)) {
        c->convert_unscaled = ff_yuv2rgb_get_func_ptr(c);
        c->dst_slice_align = 2;
    }
    /* yuv420p1x_to_p01x */
    if ((srcFormat == AV_PIX_FMT_YUV420P10 || srcFormat == AV_PIX_FMT_YUVA420P10 ||
         srcFormat == AV_PIX_FMT_YUV420P12 ||
         srcFormat == AV_PIX_FMT_YUV420P14 ||
         srcFormat == AV_PIX_FMT_YUV420P16 || srcFormat == AV_PIX_FMT_YUVA420P16) &&
        (dstFormat == AV_PIX_FMT_P010 || dstFormat == AV_PIX_FMT_P016)) {
        c->convert_unscaled = planarToP01xWrapper;
    }
    /* yuv420p_to_p01xle */
    if ((srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) &&
        (dstFormat == AV_PIX_FMT_P010LE || dstFormat == AV_PIX_FMT_P016LE)) {
        c->convert_unscaled = planar8ToP01xleWrapper;
    }

    if (srcFormat == AV_PIX_FMT_YUV410P && !(dstH & 3) &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
        !(flags & SWS_BITEXACT)) {
        c->convert_unscaled = yvu9ToYv12Wrapper;
        c->dst_slice_align = 4;
    }

    /* bgr24toYV12 */
    if (srcFormat == AV_PIX_FMT_BGR24 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P) &&
        !(flags & SWS_ACCURATE_RND) && !(dstW&1))
        c->convert_unscaled = bgr24ToYv12Wrapper;

    /* RGB/BGR -> RGB/BGR (no dither needed forms) */
    if (isAnyRGB(srcFormat) && isAnyRGB(dstFormat) && findRgbConvFn(c)
        && (!needsDither || (c->flags&(SWS_FAST_BILINEAR|SWS_POINT))))
        c->convert_unscaled = rgbToRgbWrapper;

    /* RGB to planar RGB */
    if ((srcFormat == AV_PIX_FMT_GBRP && dstFormat == AV_PIX_FMT_GBRAP) ||
        (srcFormat == AV_PIX_FMT_GBRAP && dstFormat == AV_PIX_FMT_GBRP))
        c->convert_unscaled = planarRgbToplanarRgbWrapper;

#define isByteRGB(f) (             \
        f == AV_PIX_FMT_RGB32   || \
        f == AV_PIX_FMT_RGB32_1 || \
        f == AV_PIX_FMT_RGB24   || \
        f == AV_PIX_FMT_BGR32   || \
        f == AV_PIX_FMT_BGR32_1 || \
        f == AV_PIX_FMT_BGR24)

    if (srcFormat == AV_PIX_FMT_GBRP && isPlanar(srcFormat) && isByteRGB(dstFormat))
        c->convert_unscaled = planarRgbToRgbWrapper;

    if (srcFormat == AV_PIX_FMT_GBRAP && isByteRGB(dstFormat))
        c->convert_unscaled = planarRgbaToRgbWrapper;

    if ((srcFormat == AV_PIX_FMT_RGB48LE  || srcFormat == AV_PIX_FMT_RGB48BE  ||
         srcFormat == AV_PIX_FMT_BGR48LE  || srcFormat == AV_PIX_FMT_BGR48BE  ||
         srcFormat == AV_PIX_FMT_RGBA64LE || srcFormat == AV_PIX_FMT_RGBA64BE ||
         srcFormat == AV_PIX_FMT_BGRA64LE || srcFormat == AV_PIX_FMT_BGRA64BE) &&
        (dstFormat == AV_PIX_FMT_GBRP9LE  || dstFormat == AV_PIX_FMT_GBRP9BE  ||
         dstFormat == AV_PIX_FMT_GBRP10LE || dstFormat == AV_PIX_FMT_GBRP10BE ||
         dstFormat == AV_PIX_FMT_GBRP12LE || dstFormat == AV_PIX_FMT_GBRP12BE ||
         dstFormat == AV_PIX_FMT_GBRP14LE || dstFormat == AV_PIX_FMT_GBRP14BE ||
         dstFormat == AV_PIX_FMT_GBRP16LE || dstFormat == AV_PIX_FMT_GBRP16BE ||
         dstFormat == AV_PIX_FMT_GBRAP10LE || dstFormat == AV_PIX_FMT_GBRAP10BE ||
         dstFormat == AV_PIX_FMT_GBRAP12LE || dstFormat == AV_PIX_FMT_GBRAP12BE ||
         dstFormat == AV_PIX_FMT_GBRAP16LE || dstFormat == AV_PIX_FMT_GBRAP16BE ))
        c->convert_unscaled = Rgb16ToPlanarRgb16Wrapper;

    if ((srcFormat == AV_PIX_FMT_GBRP9LE  || srcFormat == AV_PIX_FMT_GBRP9BE  ||
         srcFormat == AV_PIX_FMT_GBRP16LE || srcFormat == AV_PIX_FMT_GBRP16BE ||
         srcFormat == AV_PIX_FMT_GBRP10LE || srcFormat == AV_PIX_FMT_GBRP10BE ||
         srcFormat == AV_PIX_FMT_GBRP12LE || srcFormat == AV_PIX_FMT_GBRP12BE ||
         srcFormat == AV_PIX_FMT_GBRP14LE || srcFormat == AV_PIX_FMT_GBRP14BE ||
         srcFormat == AV_PIX_FMT_GBRAP10LE || srcFormat == AV_PIX_FMT_GBRAP10BE ||
         srcFormat == AV_PIX_FMT_GBRAP12LE || srcFormat == AV_PIX_FMT_GBRAP12BE ||
         srcFormat == AV_PIX_FMT_GBRAP16LE || srcFormat == AV_PIX_FMT_GBRAP16BE) &&
        (dstFormat == AV_PIX_FMT_RGB48LE  || dstFormat == AV_PIX_FMT_RGB48BE  ||
         dstFormat == AV_PIX_FMT_BGR48LE  || dstFormat == AV_PIX_FMT_BGR48BE  ||
         dstFormat == AV_PIX_FMT_RGBA64LE || dstFormat == AV_PIX_FMT_RGBA64BE ||
         dstFormat == AV_PIX_FMT_BGRA64LE || dstFormat == AV_PIX_FMT_BGRA64BE))
        c->convert_unscaled = planarRgb16ToRgb16Wrapper;

    if (av_pix_fmt_desc_get(srcFormat)->comp[0].depth == 8 &&
        isPackedRGB(srcFormat) && dstFormat == AV_PIX_FMT_GBRP)
        c->convert_unscaled = rgbToPlanarRgbWrapper;

    if (isBayer(srcFormat)) {
        if (dstFormat == AV_PIX_FMT_RGB24)
            c->convert_unscaled = bayer_to_rgb24_wrapper;
        else if (dstFormat == AV_PIX_FMT_RGB48)
            c->convert_unscaled = bayer_to_rgb48_wrapper;
        else if (dstFormat == AV_PIX_FMT_YUV420P)
            c->convert_unscaled = bayer_to_yv12_wrapper;
        else if (!isBayer(dstFormat)) {
            av_log(c, AV_LOG_ERROR, "unsupported bayer conversion\n");
            av_assert0(0);
        }
    }

    /* bswap 16 bits per pixel/component packed formats */
    if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_BGGR16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_RGGB16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GBRG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BAYER_GRBG16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGR565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_BGRA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GRAY16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YA16)   ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_AYUV64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAP16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB444) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB48)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB555) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGB565) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_RGBA64) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_XYZ12)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV420P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV422P16) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV440P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P9)  ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P10) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P12) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P14) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_YUV444P16))
        c->convert_unscaled = bswap_16bpc;

    /* bswap 32 bits per pixel/component formats */
    if (IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRPF32) ||
        IS_DIFFERENT_ENDIANESS(srcFormat, dstFormat, AV_PIX_FMT_GBRAPF32))
        c->convert_unscaled = bswap_32bpc;

    if (usePal(srcFormat) && isByteRGB(dstFormat))
        c->convert_unscaled = palToRgbWrapper;

    if (srcFormat == AV_PIX_FMT_YUV422P) {
        if (dstFormat == AV_PIX_FMT_YUYV422)
            c->convert_unscaled = yuv422pToYuy2Wrapper;
        else if (dstFormat == AV_PIX_FMT_UYVY422)
            c->convert_unscaled = yuv422pToUyvyWrapper;
    }

    /* uint Y to float Y */
    if (srcFormat == AV_PIX_FMT_GRAY8 && dstFormat == AV_PIX_FMT_GRAYF32){
        c->convert_unscaled = uint_y_to_float_y_wrapper;
    }

    /* float Y to uint Y */
    if (srcFormat == AV_PIX_FMT_GRAYF32 && dstFormat == AV_PIX_FMT_GRAY8){
        c->convert_unscaled = float_y_to_uint_y_wrapper;
    }

    /* LQ converters if -sws 0 or -sws 4*/
    if (c->flags&(SWS_FAST_BILINEAR|SWS_POINT)) {
        /* yv12_to_yuy2 */
        if (srcFormat == AV_PIX_FMT_YUV420P || srcFormat == AV_PIX_FMT_YUVA420P) {
            if (dstFormat == AV_PIX_FMT_YUYV422)
                c->convert_unscaled = planarToYuy2Wrapper;
            else if (dstFormat == AV_PIX_FMT_UYVY422)
                c->convert_unscaled = planarToUyvyWrapper;
        }
    }
    if (srcFormat == AV_PIX_FMT_YUYV422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->convert_unscaled = yuyvToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 &&
        (dstFormat == AV_PIX_FMT_YUV420P || dstFormat == AV_PIX_FMT_YUVA420P))
        c->convert_unscaled = uyvyToYuv420Wrapper;
    if (srcFormat == AV_PIX_FMT_YUYV422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->convert_unscaled = yuyvToYuv422Wrapper;
    if (srcFormat == AV_PIX_FMT_UYVY422 && dstFormat == AV_PIX_FMT_YUV422P)
        c->convert_unscaled = uyvyToYuv422Wrapper;

#define isPlanarGray(x) (isGray(x) && (x) != AV_PIX_FMT_YA8 && (x) != AV_PIX_FMT_YA16LE && (x) != AV_PIX_FMT_YA16BE)
    /* simple copy */
    if ( srcFormat == dstFormat ||
        (srcFormat == AV_PIX_FMT_YUVA420P && dstFormat == AV_PIX_FMT_YUV420P) ||
        (srcFormat == AV_PIX_FMT_YUV420P && dstFormat == AV_PIX_FMT_YUVA420P) ||
        (isFloat(srcFormat) == isFloat(dstFormat)) && ((isPlanarYUV(srcFormat) && isPlanarGray(dstFormat)) ||
        (isPlanarYUV(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarGray(dstFormat) && isPlanarGray(srcFormat)) ||
        (isPlanarYUV(srcFormat) && isPlanarYUV(dstFormat) &&
         c->chrDstHSubSample == c->chrSrcHSubSample &&
         c->chrDstVSubSample == c->chrSrcVSubSample &&
         !isSemiPlanarYUV(srcFormat) && !isSemiPlanarYUV(dstFormat))))
    {
        if (isPacked(c->srcFormat))
            c->convert_unscaled = packedCopyWrapper;
        else /* Planar YUV or gray */
            c->convert_unscaled = planarCopyWrapper;
    }

    if (ARCH_PPC)
        ff_get_unscaled_swscale_ppc(c);
    if (ARCH_ARM)
        ff_get_unscaled_swscale_arm(c);
    if (ARCH_AARCH64)
        ff_get_unscaled_swscale_aarch64(c);
}
  • As the code shows, it selects a different conversion function depending on the input and output pixel formats.
  • For example, when converting YUV420P to NV12, planarToNv12Wrapper() is assigned to the SwsContext's convert_unscaled pointer.

Resizing: swscale()

  • If the image is resized, ff_getSwsFunc() used to be called to set up swscale in the SwsContext.
  • The ff_getSwsFunc() function has since been removed from the source tree.
  • Reference: "A Brief Analysis of the FFmpeg Source Code: sws_getContext() of libswscale", MY CUP OF TEA's blog on CSDN.
  • The internal logic of ff_sws_init_scale() is similar to that of the old ff_getSwsFunc().
void ff_sws_init_scale(SwsContext *c)
{
    sws_init_swscale(c);

    if (ARCH_PPC)
        ff_sws_init_swscale_ppc(c);
    if (ARCH_X86)
        ff_sws_init_swscale_x86(c);
    if (ARCH_AARCH64)
        ff_sws_init_swscale_aarch64(c);
    if (ARCH_ARM)
        ff_sws_init_swscale_arm(c);
}
static av_cold void sws_init_swscale(SwsContext *c)
{
    enum AVPixelFormat srcFormat = c->srcFormat;

    ff_sws_init_output_funcs(c, &c->yuv2plane1, &c->yuv2planeX,
                             &c->yuv2nv12cX, &c->yuv2packed1,
                             &c->yuv2packed2, &c->yuv2packedX, &c->yuv2anyX);

    ff_sws_init_input_funcs(c);

    if (c->srcBpc == 8) {
        if (c->dstBpc <= 14) {
            c->hyScale = c->hcScale = hScale8To15_c;
            if (c->flags & SWS_FAST_BILINEAR) {
                c->hyscale_fast = ff_hyscale_fast_c;
                c->hcscale_fast = ff_hcscale_fast_c;
            }
        } else {
            c->hyScale = c->hcScale = hScale8To19_c;
        }
    } else {
        c->hyScale = c->hcScale = c->dstBpc > 14 ? hScale16To19_c
                                                 : hScale16To15_c;
    }

    ff_sws_init_range_convert(c);

    if (!(isGray(srcFormat) || isGray(c->dstFormat) ||
          srcFormat == AV_PIX_FMT_MONOBLACK || srcFormat == AV_PIX_FMT_MONOWHITE))
        c->needs_hcscale = 1;
}
  • (No corresponding code for the following was found in the current source tree.)
  • Note: in older versions, sws_init_context() assigned the SwsContext's swscale pointer with:
  • c->swscale = ff_getSwsFunc(c);
  • That is, the return value of ff_getSwsFunc() was stored in the swscale pointer, and that return value was a static function named, precisely, "swscale".
  • Let's look at the definition of this static swscale() function.
static int swscale(SwsContext *c, const uint8_t *src[],
                   int srcStride[], int srcSliceY, int srcSliceH,
                   uint8_t *dst[], int dstStride[],
                   int dstSliceY, int dstSliceH)
{
    const int scale_dst = dstSliceY > 0 || dstSliceH < c->dstH;

    /* load a few things into local vars to make the code more readable
     * and faster */
    const int dstW                   = c->dstW;
    int dstH                         = c->dstH;
    const enum AVPixelFormat dstFormat = c->dstFormat;
    const int flags                  = c->flags;
    int32_t *vLumFilterPos           = c->vLumFilterPos;
    int32_t *vChrFilterPos           = c->vChrFilterPos;
    const int vLumFilterSize         = c->vLumFilterSize;
    const int vChrFilterSize         = c->vChrFilterSize;
    yuv2planar1_fn yuv2plane1        = c->yuv2plane1;
    yuv2planarX_fn yuv2planeX        = c->yuv2planeX;
    yuv2interleavedX_fn yuv2nv12cX   = c->yuv2nv12cX;
    yuv2packed1_fn yuv2packed1       = c->yuv2packed1;
    yuv2packed2_fn yuv2packed2       = c->yuv2packed2;
    yuv2packedX_fn yuv2packedX       = c->yuv2packedX;
    yuv2anyX_fn yuv2anyX             = c->yuv2anyX;
    const int chrSrcSliceY           =                srcSliceY >> c->chrSrcVSubSample;
    const int chrSrcSliceH           = AV_CEIL_RSHIFT(srcSliceH,   c->chrSrcVSubSample);
    int should_dither                = isNBPS(c->srcFormat) ||
                                       is16BPS(c->srcFormat);
    int lastDstY;

    /* vars which will change and which we need to store back in the context */
    int dstY         = c->dstY;
    int lastInLumBuf = c->lastInLumBuf;
    int lastInChrBuf = c->lastInChrBuf;

    int lumStart = 0;
    int lumEnd = c->descIndex[0];
    int chrStart = lumEnd;
    int chrEnd = c->descIndex[1];
    int vStart = chrEnd;
    int vEnd = c->numDesc;
    SwsSlice *src_slice = &c->slice[lumStart];
    SwsSlice *hout_slice = &c->slice[c->numSlice-2];
    SwsSlice *vout_slice = &c->slice[c->numSlice-1];
    SwsFilterDescriptor *desc = c->desc;

    int needAlpha = c->needAlpha;

    int hasLumHoles = 1;
    int hasChrHoles = 1;

    if (isPacked(c->srcFormat)) {
        src[1] =
        src[2] =
        src[3] = src[0];
        srcStride[1] =
        srcStride[2] =
        srcStride[3] = srcStride[0];
    }
    srcStride[1] *= 1 << c->vChrDrop;
    srcStride[2] *= 1 << c->vChrDrop;

    DEBUG_BUFFERS("swscale() %p[%d] %p[%d] %p[%d] %p[%d] -> %p[%d] %p[%d] %p[%d] %p[%d]\n",
                  src[0], srcStride[0], src[1], srcStride[1],
                  src[2], srcStride[2], src[3], srcStride[3],
                  dst[0], dstStride[0], dst[1], dstStride[1],
                  dst[2], dstStride[2], dst[3], dstStride[3]);
    DEBUG_BUFFERS("srcSliceY: %d srcSliceH: %d dstY: %d dstH: %d\n",
                  srcSliceY, srcSliceH, dstY, dstH);
    DEBUG_BUFFERS("vLumFilterSize: %d vChrFilterSize: %d\n",
                  vLumFilterSize, vChrFilterSize);

    if (dstStride[0]&15 || dstStride[1]&15 ||
        dstStride[2]&15 || dstStride[3]&15) {
        SwsContext *const ctx = c->parent ? c->parent : c;
        if (flags & SWS_PRINT_INFO &&
            !atomic_exchange_explicit(&ctx->stride_unaligned_warned, 1, memory_order_relaxed)) {
            av_log(c, AV_LOG_WARNING,
                   "Warning: dstStride is not aligned!\n"
                   "         ->cannot do aligned memory accesses anymore\n");
        }
    }

#if ARCH_X86
    if (   (uintptr_t)dst[0]&15 || (uintptr_t)dst[1]&15 || (uintptr_t)dst[2]&15
        || (uintptr_t)src[0]&15 || (uintptr_t)src[1]&15 || (uintptr_t)src[2]&15
        || dstStride[0]&15 || dstStride[1]&15 || dstStride[2]&15 || dstStride[3]&15
        || srcStride[0]&15 || srcStride[1]&15 || srcStride[2]&15 || srcStride[3]&15) {
        SwsContext *const ctx = c->parent ? c->parent : c;
        int cpu_flags = av_get_cpu_flags();
        if (flags & SWS_PRINT_INFO && HAVE_MMXEXT && (cpu_flags & AV_CPU_FLAG_SSE2) &&
            !atomic_exchange_explicit(&ctx->stride_unaligned_warned,1, memory_order_relaxed)) {
            av_log(c, AV_LOG_WARNING, "Warning: data is not aligned! This can lead to a speed loss\n");
        }
    }
#endif

    if (scale_dst) {
        dstY         = dstSliceY;
        dstH         = dstY + dstSliceH;
        lastInLumBuf = -1;
        lastInChrBuf = -1;
    } else if (srcSliceY == 0) {
        /* Note the user might start scaling the picture in the middle so this
         * will not get executed. This is not really intended but works
         * currently, so people might do it. */
        dstY         = 0;
        lastInLumBuf = -1;
        lastInChrBuf = -1;
    }

    if (!should_dither) {
        c->chrDither8 = c->lumDither8 = sws_pb_64;
    }
    lastDstY = dstY;

    ff_init_vscale_pfn(c, yuv2plane1, yuv2planeX, yuv2nv12cX,
                       yuv2packed1, yuv2packed2, yuv2packedX, yuv2anyX, c->use_mmx_vfilter);

    ff_init_slice_from_src(src_slice, (uint8_t**)src, srcStride, c->srcW,
                           srcSliceY, srcSliceH, chrSrcSliceY, chrSrcSliceH, 1);

    ff_init_slice_from_src(vout_slice, (uint8_t**)dst, dstStride, c->dstW,
                           dstY, dstSliceH, dstY >> c->chrDstVSubSample,
                           AV_CEIL_RSHIFT(dstSliceH, c->chrDstVSubSample), scale_dst);
    if (srcSliceY == 0) {
        hout_slice->plane[0].sliceY = lastInLumBuf + 1;
        hout_slice->plane[1].sliceY = lastInChrBuf + 1;
        hout_slice->plane[2].sliceY = lastInChrBuf + 1;
        hout_slice->plane[3].sliceY = lastInLumBuf + 1;
        hout_slice->plane[0].sliceH =
        hout_slice->plane[1].sliceH =
        hout_slice->plane[2].sliceH =
        hout_slice->plane[3].sliceH = 0;
        hout_slice->width = dstW;
    }

    for (; dstY < dstH; dstY++) {
        const int chrDstY = dstY >> c->chrDstVSubSample;
        int use_mmx_vfilter= c->use_mmx_vfilter;

        // First line needed as input
        const int firstLumSrcY  = FFMAX(1 - vLumFilterSize, vLumFilterPos[dstY]);
        const int firstLumSrcY2 = FFMAX(1 - vLumFilterSize, vLumFilterPos[FFMIN(dstY | ((1 << c->chrDstVSubSample) - 1), c->dstH - 1)]);
        // First line needed as input
        const int firstChrSrcY  = FFMAX(1 - vChrFilterSize, vChrFilterPos[chrDstY]);

        // Last line needed as input
        int lastLumSrcY  = FFMIN(c->srcH,    firstLumSrcY  + vLumFilterSize) - 1;
        int lastLumSrcY2 = FFMIN(c->srcH,    firstLumSrcY2 + vLumFilterSize) - 1;
        int lastChrSrcY  = FFMIN(c->chrSrcH, firstChrSrcY  + vChrFilterSize) - 1;
        int enough_lines;

        int i;
        int posY, cPosY, firstPosY, lastPosY, firstCPosY, lastCPosY;

        // handle holes (FAST_BILINEAR & weird filters)
        if (firstLumSrcY > lastInLumBuf) {
            hasLumHoles = lastInLumBuf != firstLumSrcY - 1;
            if (hasLumHoles) {
                hout_slice->plane[0].sliceY = firstLumSrcY;
                hout_slice->plane[3].sliceY = firstLumSrcY;
                hout_slice->plane[0].sliceH =
                hout_slice->plane[3].sliceH = 0;
            }

            lastInLumBuf = firstLumSrcY - 1;
        }
        if (firstChrSrcY > lastInChrBuf) {
            hasChrHoles = lastInChrBuf != firstChrSrcY - 1;
            if (hasChrHoles) {
                hout_slice->plane[1].sliceY = firstChrSrcY;
                hout_slice->plane[2].sliceY = firstChrSrcY;
                hout_slice->plane[1].sliceH =
                hout_slice->plane[2].sliceH = 0;
            }

            lastInChrBuf = firstChrSrcY - 1;
        }

        DEBUG_BUFFERS("dstY: %d\n", dstY);
        DEBUG_BUFFERS("\tfirstLumSrcY: %d lastLumSrcY: %d lastInLumBuf: %d\n",
                      firstLumSrcY, lastLumSrcY, lastInLumBuf);
        DEBUG_BUFFERS("\tfirstChrSrcY: %d lastChrSrcY: %d lastInChrBuf: %d\n",
                      firstChrSrcY, lastChrSrcY, lastInChrBuf);

        // Do we have enough lines in this slice to output the dstY line
        enough_lines = lastLumSrcY2 < srcSliceY + srcSliceH &&
                       lastChrSrcY < AV_CEIL_RSHIFT(srcSliceY + srcSliceH, c->chrSrcVSubSample);

        if (!enough_lines) {
            lastLumSrcY = srcSliceY + srcSliceH - 1;
            lastChrSrcY = chrSrcSliceY + chrSrcSliceH - 1;
            DEBUG_BUFFERS("buffering slice: lastLumSrcY %d lastChrSrcY %d\n",
                          lastLumSrcY, lastChrSrcY);
        }

        av_assert0((lastLumSrcY - firstLumSrcY + 1) <= hout_slice->plane[0].available_lines);
        av_assert0((lastChrSrcY - firstChrSrcY + 1) <= hout_slice->plane[1].available_lines);

        posY = hout_slice->plane[0].sliceY + hout_slice->plane[0].sliceH;
        if (posY <= lastLumSrcY && !hasLumHoles) {
            firstPosY = FFMAX(firstLumSrcY, posY);
            lastPosY = FFMIN(firstLumSrcY + hout_slice->plane[0].available_lines - 1, srcSliceY + srcSliceH - 1);
        } else {
            firstPosY = posY;
            lastPosY = lastLumSrcY;
        }

        cPosY = hout_slice->plane[1].sliceY + hout_slice->plane[1].sliceH;
        if (cPosY <= lastChrSrcY && !hasChrHoles) {
            firstCPosY = FFMAX(firstChrSrcY, cPosY);
            lastCPosY = FFMIN(firstChrSrcY + hout_slice->plane[1].available_lines - 1, AV_CEIL_RSHIFT(srcSliceY + srcSliceH, c->chrSrcVSubSample) - 1);
        } else {
            firstCPosY = cPosY;
            lastCPosY = lastChrSrcY;
        }

        ff_rotate_slice(hout_slice, lastPosY, lastCPosY);

        if (posY < lastLumSrcY + 1) {
            for (i = lumStart; i < lumEnd; ++i)
                desc[i].process(c, &desc[i], firstPosY, lastPosY - firstPosY + 1);
        }

        lastInLumBuf = lastLumSrcY;

        if (cPosY < lastChrSrcY + 1) {
            for (i = chrStart; i < chrEnd; ++i)
                desc[i].process(c, &desc[i], firstCPosY, lastCPosY - firstCPosY + 1);
        }

        lastInChrBuf = lastChrSrcY;

        if (!enough_lines)
            break;  // we can't output a dstY line so let's try with the next slice

#if HAVE_MMX_INLINE
        ff_updateMMXDitherTables(c, dstY);
#endif
        if (should_dither) {
            c->chrDither8 = ff_dither_8x8_128[chrDstY & 7];
            c->lumDither8 = ff_dither_8x8_128[dstY    & 7];
        }

        if (dstY >= c->dstH - 2) {
            /* hmm looks like we can't use MMX here without overwriting
             * this array's tail */
            ff_sws_init_output_funcs(c, &yuv2plane1, &yuv2planeX, &yuv2nv12cX,
                                     &yuv2packed1, &yuv2packed2, &yuv2packedX, &yuv2anyX);
            use_mmx_vfilter= 0;
            ff_init_vscale_pfn(c, yuv2plane1, yuv2planeX, yuv2nv12cX,
                               yuv2packed1, yuv2packed2, yuv2packedX, yuv2anyX, use_mmx_vfilter);
        }

        for (i = vStart; i < vEnd; ++i)
            desc[i].process(c, &desc[i], dstY, 1);
    }

    if (isPlanar(dstFormat) && isALPHA(dstFormat) && !needAlpha) {
        int offset = lastDstY - dstSliceY;
        int length = dstW;
        int height = dstY - lastDstY;

        if (is16BPS(dstFormat) || isNBPS(dstFormat)) {
            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);
            fillPlane16(dst[3], dstStride[3], length, height, offset,
                        1, desc->comp[3].depth,
                        isBE(dstFormat));
        } else if (is32BPS(dstFormat)) {
            const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(dstFormat);
            fillPlane32(dst[3], dstStride[3], length, height, offset,
                        1, desc->comp[3].depth,
                        isBE(dstFormat), desc->flags & AV_PIX_FMT_FLAG_FLOAT);
        } else
            fillPlane(dst[3], dstStride[3], length, height, offset, 255);
    }

#if HAVE_MMXEXT_INLINE
    if (av_get_cpu_flags() & AV_CPU_FLAG_MMXEXT)
        __asm__ volatile ("sfence" ::: "memory");
#endif
    emms_c();

    /* store changed local vars back in the context */
    c->dstY         = dstY;
    c->lastInLumBuf = lastInLumBuf;
    c->lastInChrBuf = lastInChrBuf;

    return dstY - lastDstY;
}
  • As the code shows, swscale() scales the image line by line, and each line of data is processed as "horizontal scaling first, then vertical scaling".
  • The concrete implementation functions are:
  • 1. Horizontal scaling
    • a) Luma horizontal scaling: hyscale()
    • b) Chroma horizontal scaling: hcscale()
  • 2. Vertical scaling
    • a) Planar
      • i. Luma vertical, no scaling: yuv2plane1()
      • ii. Luma vertical, with scaling: yuv2planeX()
      • iii. Chroma vertical, no scaling: yuv2plane1()
      • iv. Chroma vertical, with scaling: yuv2planeX()
    • b) Packed
      • i. Vertical, no scaling: yuv2packed1()
      • ii. Vertical, with scaling: yuv2packedX()
  • The definitions of these functions are examined below.

hyscale()

  • The original article states that the horizontal luma scaling function hyscale() is defined in libswscale\swscale.c — but no standalone hyscale() function actually exists there. What is shown below is the hyScale (and hcScale) function pointer declared inside SwsContext, in libswscale\swscale_internal.h.

```c
    /**
     * Scale one horizontal line of input data using a filter over the input
     * lines, to produce one (differently sized) line of output data.
     *
     * @param dst        pointer to destination buffer for horizontally scaled
     *                   data. If the number of bits per component of one
     *                   destination pixel (SwsContext->dstBpc) is <= 10, data
     *                   will be 15 bpc in 16 bits (int16_t) width. Else (i.e.
     *                   SwsContext->dstBpc == 16), data will be 19bpc in
     *                   32 bits (int32_t) width.
     * @param dstW       width of destination image
     * @param src        pointer to source data to be scaled. If the number of
     *                   bits per component of a source pixel (SwsContext->srcBpc)
     *                   is 8, this is 8bpc in 8 bits (uint8_t) width. Else
     *                   (i.e. SwsContext->dstBpc > 8), this is native depth
     *                   in 16 bits (uint16_t) width. In other words, for 9-bit
     *                   YUV input, this is 9bpc, for 10-bit YUV input, this is
     *                   10bpc, and for 16-bit RGB or YUV, this is 16bpc.
     * @param filter     filter coefficients to be used per output pixel for
     *                   scaling. This contains 14bpp filtering coefficients.
     *                   Guaranteed to contain dstW * filterSize entries.
     * @param filterPos  position of the first input pixel to be used for
     *                   each output pixel during scaling. Guaranteed to
     *                   contain dstW entries.
     * @param filterSize the number of input coefficients to be used (and
     *                   thus the number of input pixels to be used) for
     *                   creating a single output pixel. Is aligned to 4
     *                   (and input coefficients thus padded with zeroes)
     *                   to simplify creating SIMD code.
     */
    /** @{ */
    void (*hyScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);
    void (*hcScale)(struct SwsContext *c, int16_t *dst, int dstW,
                    const uint8_t *src, const int16_t *filter,
                    const int32_t *filterPos, int filterSize);
```

[Missing content — the remainder of the original article is not included in this repost.]

