WebRTC Audio/Video Calls: Implementing a GPUImage Beauty Filter on Video
The GPUImage beauty-filter effect in a WebRTC audio/video call looks like this:
For setting up the ossrs service, see: https://blog.csdn.net/gloryFlow/article/details/132257196
For implementing an iOS client calling ossrs for audio/video calls, see: https://blog.csdn.net/gloryFlow/article/details/132262724
For the issue of high-resolution WebRTC calls not displaying the picture, see: https://blog.csdn.net/gloryFlow/article/details/132262724
For modifying the bitrate in the SDP, see: https://blog.csdn.net/gloryFlow/article/details/132263021
1. What is GPUImage?
GPUImage is an open-source iOS framework for OpenGL-based image processing. It ships with a large number of built-in filters and has a flexible architecture, so a wide range of image-processing features can be built on top of it with little effort.
GPUImage contains a great many filters, but I won't use most of them here; this post only needs GPUImageLookupFilter and GPUImagePicture.
GPUImage has a filter dedicated to lookup-table (LUT) processing, GPUImageLookupFilter; with it you can apply a filter to an image directly. The code is as follows:
/**
 GPUImageLookupFilter is a filter dedicated to lookup-table processing;
 with it you can apply a filter to an image directly.

 @param image the original image you want to filter
 @param lookUpImage the lookup-table image
 @return the processed image
 */
+ (UIImage *)applyLookupFilter:(UIImage *)image lookUpImage:(UIImage *)lookUpImage {
    if (lookUpImage == nil) {
        return image;
    }
    UIImage *inputImage = image;
    UIImage *outputImage = nil;
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
    // Create the lookup filter
    GPUImageLookupFilter *lookUpFilter = [[GPUImageLookupFilter alloc] init];
    // Load the previously saved NewLookupTable.png file
    GPUImagePicture *lookupImg = [[GPUImagePicture alloc] initWithImage:lookUpImage];
    [lookupImg addTarget:lookUpFilter atTextureLocation:1];
    [stillImageSource addTarget:lookUpFilter atTextureLocation:0];
    [lookUpFilter useNextFrameForImageCapture];
    if ([lookupImg processImageWithCompletionHandler:nil] && [stillImageSource processImageWithCompletionHandler:nil]) {
        outputImage = [lookUpFilter imageFromCurrentFramebuffer];
    }
    return outputImage;
}
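A minimal call-site sketch (assuming the method lives on the SDApplyFilter class shown below; the asset names "portrait" and "NewLookupTable" are placeholders, not from the original project):

UIImage *original = [UIImage imageNamed:@"portrait"];   // hypothetical source image
UIImage *lut = [UIImage imageNamed:@"NewLookupTable"];  // hypothetical 512x512 LUT image
UIImage *filtered = [SDApplyFilter applyLookupFilter:original lookUpImage:lut];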
This requires a lookUpImage; the lookup-table images are shown below.
Since I haven't yet organized the demo into a Git repository, there is no repo link for now.
Here I also try applyLomofiFilter to compare the effect; its implementation is not listed in this post, but a sketch of what it likely looks like follows.
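Assuming applyLomofiFilter follows the same lookup-table pattern as applyLookupFilter above, a hedged sketch might be (the asset name "lookup_lomofi" is an assumption):

+ (UIImage *)applyLomofiFilter:(UIImage *)image {
    // Hypothetical: wrap the generic LUT method with a Lomo-fi style lookup image
    UIImage *lookUpImage = [UIImage imageNamed:@"lookup_lomofi"]; // assumed asset name
    return [self applyLookupFilter:image lookUpImage:lookUpImage];
}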
Several methods in SDApplyFilter.m:
+ (UIImage *)applyBeautyFilter:(UIImage *)image {
    GPUImageBeautifyFilter *filter = [[GPUImageBeautifyFilter alloc] init];
    [filter forceProcessingAtSize:image.size];
    GPUImagePicture *pic = [[GPUImagePicture alloc] initWithImage:image];
    [pic addTarget:filter];
    // Request capture before processing, otherwise the framebuffer may already be recycled
    [filter useNextFrameForImageCapture];
    [pic processImage];
    return [filter imageFromCurrentFramebuffer];
}

/**
 Amatorka (Rise-style) filter; flatters skin tones in portraits

 @param image image
 @return the processed image
 */
+ (UIImage *)applyAmatorkaFilter:(UIImage *)image
{
    GPUImageAmatorkaFilter *filter = [[GPUImageAmatorkaFilter alloc] init];
    [filter forceProcessingAtSize:image.size];
    GPUImagePicture *pic = [[GPUImagePicture alloc] initWithImage:image];
    [pic addTarget:filter];
    [filter useNextFrameForImageCapture];
    [pic processImage];
    return [filter imageFromCurrentFramebuffer];
}

/**
 Retro-style filter, reminiscent of old Shanghai

 @param image image
 @return the processed image
 */
+ (UIImage *)applySoftEleganceFilter:(UIImage *)image
{
    GPUImageSoftEleganceFilter *filter = [[GPUImageSoftEleganceFilter alloc] init];
    [filter forceProcessingAtSize:image.size];
    GPUImagePicture *pic = [[GPUImagePicture alloc] initWithImage:image];
    [pic addTarget:filter];
    [filter useNextFrameForImageCapture];
    [pic processImage];
    return [filter imageFromCurrentFramebuffer];
}

/**
 Turns the image black and white, with heavy noise

 @param image the original image
 @return the processed image
 */
+ (UIImage *)applyLocalBinaryPatternFilter:(UIImage *)image
{
    GPUImageLocalBinaryPatternFilter *filter = [[GPUImageLocalBinaryPatternFilter alloc] init];
    [filter forceProcessingAtSize:image.size];
    GPUImagePicture *pic = [[GPUImagePicture alloc] initWithImage:image];
    [pic addTarget:filter];
    [filter useNextFrameForImageCapture];
    [pic processImage];
    return [filter imageFromCurrentFramebuffer];
}

/**
 Monochrome filter

 @param image the original image
 @return the processed image
 */
+ (UIImage *)applyMonochromeFilter:(UIImage *)image
{
    GPUImageMonochromeFilter *filter = [[GPUImageMonochromeFilter alloc] init];
    [filter forceProcessingAtSize:image.size];
    GPUImagePicture *pic = [[GPUImagePicture alloc] initWithImage:image];
    [pic addTarget:filter];
    [filter useNextFrameForImageCapture];
    [pic processImage];
    return [filter imageFromCurrentFramebuffer];
}
Using GPUImageSoftEleganceFilter, the retro, old-Shanghai-style effect is shown below.
Using GPUImageLocalBinaryPatternFilter, the black-and-white effect is shown below.
Using GPUImageMonochromeFilter, the effect is shown below.
2. Filtering Video Frames in a WebRTC Audio/Video Call
For implementing an iOS client calling ossrs for audio/video calls, see: https://blog.csdn.net/gloryFlow/article/details/132262724
That post already contains the complete code; here we only need a few adjustments.
Point the RTCCameraVideoCapturer's delegate at our client:
- (RTCVideoTrack *)createVideoTrack {
    RTCVideoSource *videoSource = [self.factory videoSource];
    self.localVideoSource = videoSource;

    // Use a file-based capturer on the simulator, which has no camera
    if (TARGET_IPHONE_SIMULATOR) {
        if (@available(iOS 10, *)) {
            self.videoCapturer = [[RTCFileVideoCapturer alloc] initWithDelegate:self];
        } else {
            // Fallback on earlier versions
        }
    } else {
        self.videoCapturer = [[RTCCameraVideoCapturer alloc] initWithDelegate:self];
    }

    RTCVideoTrack *videoTrack = [self.factory videoTrackWithSource:videoSource trackId:@"video0"];
    return videoTrack;
}
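The capturer only produces frames once capture has been started. A hedged start-capture sketch (the method name startCaptureLocalVideo and the device/format selection are my assumptions, simplified for illustration):

- (void)startCaptureLocalVideo {
    if (![self.videoCapturer isKindOfClass:[RTCCameraVideoCapturer class]]) {
        return;
    }
    RTCCameraVideoCapturer *capturer = (RTCCameraVideoCapturer *)self.videoCapturer;

    // Pick the front camera
    AVCaptureDevice *frontCamera = nil;
    for (AVCaptureDevice *device in [RTCCameraVideoCapturer captureDevices]) {
        if (device.position == AVCaptureDevicePositionFront) {
            frontCamera = device;
            break;
        }
    }
    if (!frontCamera) {
        return;
    }

    // Pick the first supported format and its highest frame rate
    AVCaptureDeviceFormat *format = [[RTCCameraVideoCapturer supportedFormatsForDevice:frontCamera] firstObject];
    NSInteger fps = (NSInteger)[[[format videoSupportedFrameRateRanges] firstObject] maxFrameRate];
    [capturer startCaptureWithDevice:frontCamera format:format fps:fps];
}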
Implement the RTCVideoCapturerDelegate method didCaptureVideoFrame:

#pragma mark - RTCVideoCapturerDelegate

- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame {
    // DebugLog(@"capturer:%@ didCaptureVideoFrame:%@", capturer, frame);

    // Hand the frame to the delegate, which runs it through SDWebRTCBufferFliter
    RTCVideoFrame *aFilterVideoFrame;
    if (self.delegate && [self.delegate respondsToSelector:@selector(webRTCClient:didCaptureVideoFrame:)]) {
        aFilterVideoFrame = [self.delegate webRTCClient:self didCaptureVideoFrame:frame];
    }

    // Buffers created through the C APIs must be released manually, or memory balloons:
    // CVPixelBufferRelease(_buffer)
    // The raw buffer is available as ((RTCCVPixelBuffer *)frame.buffer).pixelBuffer

    if (!aFilterVideoFrame) {
        aFilterVideoFrame = frame;
    }
    // Forward the (possibly filtered) frame to the local video source
    [self.localVideoSource capturer:capturer didCaptureVideoFrame:aFilterVideoFrame];
}
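For this to compile, the WebRTCClient delegate protocol needs a matching method. A minimal sketch of the declaration (assuming the protocol is named WebRTCClientDelegate to match the WebRTCClient class; the rest of the protocol is as in the earlier post):

@protocol WebRTCClientDelegate <NSObject>
// ... existing delegate methods from the earlier post ...

// Return a filtered frame for the captured frame (nil keeps the original)
- (RTCVideoFrame *)webRTCClient:(WebRTCClient *)client didCaptureVideoFrame:(RTCVideoFrame *)frame;
@end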
Then SDWebRTCBufferFliter is called to apply the filter effect; a wiring sketch is shown below.
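The original code does not show where the filter instance lives. As a hedged sketch, the object acting as WebRTCClient's delegate (a view controller, say) could hold an SDWebRTCBufferFliter and forward frames to it (the property name bufferFilter is my assumption):

@property (nonatomic, strong) SDWebRTCBufferFliter *bufferFilter;

- (RTCVideoFrame *)webRTCClient:(WebRTCClient *)client didCaptureVideoFrame:(RTCVideoFrame *)frame {
    if (!self.bufferFilter) {
        // Lazily create the filter that processes each captured frame
        self.bufferFilter = [[SDWebRTCBufferFliter alloc] init];
    }
    return [self.bufferFilter webRTCClient:client didCaptureVideoFrame:frame];
}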
The filter renders ((RTCCVPixelBuffer *)frame.buffer).pixelBuffer, which involves EAGLContext and CIContext.
EAGLContext is the OpenGL drawing handle, or context; before drawing a view you must specify which context to draw with.
CIContext renders CIImage objects, applying the filter chain attached to a CIImage to the original image data. Here I need to convert the UIImage into a CIImage.
The concrete implementation is as follows.
SDWebRTCBufferFliter.h
#import <Foundation/Foundation.h>
#import "WebRTCClient.h"@interface SDWebRTCBufferFliter : NSObject- (RTCVideoFrame *)webRTCClient:(WebRTCClient *)client didCaptureVideoFrame:(RTCVideoFrame *)frame;@end
SDWebRTCBufferFliter.m
#import "SDWebRTCBufferFliter.h"
#import <VideoToolbox/VideoToolbox.h>
#import "SDApplyFilter.h"@interface SDWebRTCBufferFliter ()
// 濾鏡
@property (nonatomic, strong) EAGLContext *eaglContext;@property (nonatomic, strong) CIContext *coreImageContext;@property (nonatomic, strong) UIImage *lookUpImage;@end@implementation SDWebRTCBufferFliter- (instancetype)init
{self = [super init];if (self) {self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];self.coreImageContext = [CIContext contextWithEAGLContext:self.eaglContext options:nil];self.lookUpImage = [UIImage imageNamed:@"lookup_jiari"];}return self;
}- (RTCVideoFrame *)webRTCClient:(WebRTCClient *)client didCaptureVideoFrame:(RTCVideoFrame *)frame {CVPixelBufferRef pixelBufferRef = ((RTCCVPixelBuffer *)frame.buffer).pixelBuffer;// CFRetain(pixelBufferRef);if (pixelBufferRef) {CIImage *inputImage = [CIImage imageWithCVPixelBuffer:pixelBufferRef];CGImageRef imgRef = [_coreImageContext createCGImage:inputImage fromRect:[inputImage extent]];UIImage *fromImage = nil;if (!fromImage) {fromImage = [UIImage imageWithCGImage:imgRef];}UIImage *toImage;toImage = [SDApplyFilter applyMonochromeFilter:fromImage];
//
// if (toImage == nil) {
// toImage = [SDApplyFilter applyLookupFilter:fromImage lookUpImage:self.lookUpImage];
// } else {
// toImage = [SDApplyFilter applyLookupFilter:fromImage lookUpImage:self.lookUpImage];
// }if (toImage == nil) {toImage = fromImage;}CGImageRef toImgRef = toImage.CGImage;CIImage *ciimage = [CIImage imageWithCGImage:toImgRef];[_coreImageContext render:ciimage toCVPixelBuffer:pixelBufferRef];CGImageRelease(imgRef);//必須釋放fromImage = nil;toImage = nil;ciimage = nil;inputImage = nil;}RTCCVPixelBuffer *rtcPixelBuffer =[[RTCCVPixelBuffer alloc] initWithPixelBuffer:pixelBufferRef];RTCVideoFrame *filteredFrame =[[RTCVideoFrame alloc] initWithBuffer:rtcPixelBufferrotation:frame.rotationtimeStampNs:frame.timeStampNs];return filteredFrame;
}@end
At this point the GPUImage beauty filter can be seen in action in a WebRTC audio/video call.
3. Summary
WebRTC audio/video calls: implementing GPUImage beauty-filter effects. The key is to process each video frame's CVPixelBufferRef with GPUImage, wrap the processed CVPixelBufferRef in an RTCVideoFrame, and hand it on via localVideoSource's capturer:didCaptureVideoFrame: method. This covers a lot of ground and some descriptions may be imprecise; please bear with me.
Original post: https://blog.csdn.net/gloryFlow/article/details/132265842
A learning log: making progress every day.