Loading the data goes through HttpUrlFetcher.loadData(), which is invoked from MultiModelLoader.
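For reference, that hand-off happens inside MultiModelLoader's inner MultiFetcher. Abridged and simplified from the Glide 4.x sources (field and method details may differ between versions):

static class MultiFetcher<Data> implements DataFetcher<Data>, DataFetcher.DataCallback<Data> {
  private final List<DataFetcher<Data>> fetchers;  // for an http/https model, HttpUrlFetcher ends up in this list
  private int currentIndex;
  private Priority priority;
  private DataCallback<? super Data> callback;

  @Override
  public void loadData(@NonNull Priority priority, @NonNull DataCallback<? super Data> callback) {
    this.priority = priority;
    this.callback = callback;
    // Delegate to the current fetcher; this is where HttpUrlFetcher.loadData() gets called.
    fetchers.get(currentIndex).loadData(priority, this);
  }

  @Override
  public void onLoadFailed(@NonNull Exception e) {
    // If one fetcher fails, fall through to the next registered one; otherwise give up.
    if (currentIndex < fetchers.size() - 1) {
      currentIndex++;
      fetchers.get(currentIndex).loadData(priority, this);
    } else {
      callback.onLoadFailed(e);
    }
  }
  // Constructor, onDataReady(), cleanup(), cancel(), getDataClass(), getDataSource() omitted.
}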
The core part is what happens after the InputStream has been loaded.
For video, Glide registers VideoDecoder when it first builds its decoder Registry. The downloaded stream is written to a file in the app's cache directory; that cache File is then read back as a ByteBuffer and handed to the decoder, which extracts the requested frame as a Bitmap. The read-back step goes through ByteBufferFileLoader's inner ByteBufferFetcher:
private static final class ByteBufferFetcher implements DataFetcher<ByteBuffer> {
  private final File file;

  @Synthetic
  @SuppressWarnings("WeakerAccess")
  ByteBufferFetcher(File file) {
    this.file = file;
  }
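The snippet above is cut off before the interesting part. Roughly, per the Glide 4.x sources (abridged), the rest of the fetcher reads the cached file into a ByteBuffer and reports it back:

  @Override
  public void loadData(@NonNull Priority priority, @NonNull DataCallback<? super ByteBuffer> callback) {
    ByteBuffer result;
    try {
      result = ByteBufferUtil.fromFile(file);   // read (memory-map) the cache file written by the download step
    } catch (IOException e) {
      callback.onLoadFailed(e);
      return;
    }
    callback.onDataReady(result);               // the ByteBuffer then flows into VideoDecoder.byteBuffer(...)
  }

  @Override
  public DataSource getDataSource() {
    return DataSource.LOCAL;                    // the data already lives in the local cache
  }
  // cleanup(), cancel() and getDataClass() omitted.
}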
A few core classes. The decoder that actually turns the video data into a Bitmap is com.bumptech.glide.load.resource.bitmap.VideoDecoder (abridged from the Glide 4.x sources):

public class VideoDecoder<T> implements ResourceDecoder<T, Bitmap> {
  /** Target frame time in microseconds; set through RequestOptions#frame(long). */
  public static final Option<Long> TARGET_FRAME = ...;
  /** MediaMetadataRetriever frame option, e.g. OPTION_CLOSEST_SYNC. */
  public static final Option<Integer> FRAME_OPTION = ...;

  private final MediaMetadataRetrieverInitializer<T> initializer;
  private final BitmapPool bitmapPool;
  private final MediaMetadataRetrieverFactory factory;

  public static ResourceDecoder<ParcelFileDescriptor, Bitmap> parcel(BitmapPool bitmapPool) {
    return new VideoDecoder<>(bitmapPool, new ParcelFileDescriptorInitializer());
  }

  public static ResourceDecoder<ByteBuffer, Bitmap> byteBuffer(BitmapPool bitmapPool) {
    return new VideoDecoder<>(bitmapPool, new ByteBufferInitializer());
  }

  @Override
  public Resource<Bitmap> decode(@NonNull T resource, int outWidth, int outHeight,
      @NonNull Options options) throws IOException {
    long frameTimeMicros = options.get(TARGET_FRAME);
    Integer frameOption = options.get(FRAME_OPTION);
    DownsampleStrategy downsampleStrategy = options.get(DownsampleStrategy.OPTION);

    MediaMetadataRetriever mediaMetadataRetriever = factory.build();
    try {
      // Point the retriever at the data (ParcelFileDescriptor, ByteBuffer, ...) and extract one frame.
      initializer.initialize(mediaMetadataRetriever, resource);
      Bitmap result = decodeFrame(mediaMetadataRetriever, frameTimeMicros, frameOption,
          outWidth, outHeight, downsampleStrategy);
      return BitmapResource.obtain(result, bitmapPool);
    } finally {
      mediaMetadataRetriever.release();
    }
  }
  // Constructors, handles(), decodeFrame(), asset(), the initializers, etc. omitted.
}
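These decoders get appended to Glide's Registry when the Glide singleton is built. A simplified sketch of that wiring (the real Glide.java registers more data types, and the exact append() overloads vary by version):

registry
    .append(Registry.BUCKET_BITMAP, ParcelFileDescriptor.class, Bitmap.class,
        VideoDecoder.parcel(bitmapPool))
    .append(Registry.BUCKET_BITMAP, ByteBuffer.class, Bitmap.class,
        VideoDecoder.byteBuffer(bitmapPool));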
And the (now deprecated) VideoBitmapDecoder, the older ParcelFileDescriptor-based entry point:
@Deprecated
public class VideoBitmapDecoder extends VideoDecoder<ParcelFileDescriptor> {
  @SuppressWarnings("unused")
  public VideoBitmapDecoder(Context context) {
    this(Glide.get(context).getBitmapPool());
  }

  // Public API
  @SuppressWarnings("WeakerAccess")
  public VideoBitmapDecoder(BitmapPool bitmapPool) {
    super(bitmapPool, new ParcelFileDescriptorInitializer());
  }
}
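Since VideoBitmapDecoder is deprecated, the equivalent non-deprecated decoder is obtained through the static factory shown earlier:

ResourceDecoder<ParcelFileDescriptor, Bitmap> decoder = VideoDecoder.parcel(bitmapPool);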
That covers the core of the video-loading path. Putting a breakpoint in VideoDecoder.decode() makes it easy to trace the transform step that runs once the frame has been decoded.
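A quick way to hit that breakpoint is a request that pins a specific frame and adds a bitmap transformation (the URL, context and imageView below are placeholders):

Glide.with(context)
    .asBitmap()
    .load("https://example.com/sample.mp4")            // placeholder video URL
    .apply(RequestOptions.frameOf(1_000_000L)          // frame time in microseconds (VideoDecoder.TARGET_FRAME)
        .transform(new CenterCrop()))                  // the transform runs after VideoDecoder.decode()
    .into(imageView);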
Part two: the network-request side, including custom headers / tokens and a custom ModelLoaderFactory.
Set a breakpoint in … and step up the stack: it is called from here, MultiModelLoader.
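For per-request headers such as an auth token, the simplest route is GlideUrl plus LazyHeaders; a minimal sketch, where token, context, imageView and the URL are placeholders. For a project-wide solution, a custom ModelLoader / ModelLoaderFactory registered in an AppGlideModule does the same thing for every request.

LazyHeaders headers = new LazyHeaders.Builder()
    .addHeader("Authorization", "Bearer " + token)     // attach the token to this request's model
    .build();

Glide.with(context)
    .load(new GlideUrl("https://example.com/sample.mp4", headers))
    .into(imageView);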