cs231n Assignment 2: Two-Layer Neural Network

Two-Layer Neural Network

The network stacks an affine layer, a ReLU nonlinearity, and a second affine layer; in the row-vector convention used in the code below:

scores = max(0, X·W1 + b1)·W2 + b2
We use the ReLU activation and the softmax loss:

ReLU: f(x) = max(0, x)
Softmax loss: L_i = −log( e^{s_{y_i}} / Σ_j e^{s_j} ) = −s_{y_i} + log Σ_j e^{s_j}
Steps:
1. Loss computation (forward pass) and gradient computation (backward pass)

Forward: compute the scores, then compute the loss from the scores.
Backward: compute the gradients with respect to W2, b2, W1, and b1.

def loss(self, X, y=None, reg=0.0):
    # Unpack variables from the params dictionary
    W1, b1 = self.params['W1'], self.params['b1']
    W2, b2 = self.params['W2'], self.params['b2']
    N, D = X.shape

    # Compute the forward pass
    h1 = np.maximum(0, np.dot(X, W1) + b1)   # (5, 10)
    scores = np.dot(h1, W2) + b2             # (5, 3)
    if y is None:
        return scores

    # Compute the loss
    exp_S = np.exp(scores)                            # (5, 3)
    sum_exp_S = np.sum(exp_S, axis=1).reshape(-1, 1)  # (5, 1)
    loss = np.sum(-scores[range(N), list(y)]) + np.sum(np.log(sum_exp_S))
    loss = loss / N + 0.5 * reg * np.sum(W1 * W1) + 0.5 * reg * np.sum(W2 * W2)

    # Backward pass: compute gradients
    grads = {}
    dscores = np.zeros(scores.shape)
    dscores[range(N), list(y)] = -1
    dscores += exp_S / sum_exp_S   # (5, 3)
    dscores /= N
    grads['W2'] = np.dot(h1.T, dscores) + reg * W2
    grads['b2'] = np.sum(dscores, axis=0)

    dh1 = np.dot(dscores, W2.T)    # (5, 10)
    dh1_ReLU = (h1 > 0) * dh1      # gradient flows only through active ReLU units
    grads['W1'] = X.T.dot(dh1_ReLU) + reg * W1
    grads['b1'] = np.sum(dh1_ReLU, axis=0)

    return loss, grads
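
A quick way to validate these analytic gradients is a central-difference numerical check. Below is a self-contained sketch (only numpy is required); `net`, `X`, and `y` are assumed to be a TwoLayerNet instance and a small batch:

import numpy as np

def num_grad(f, w, h=1e-5):
    # Central-difference numerical gradient of the scalar function f at w
    grad = np.zeros_like(w)
    it = np.nditer(w, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        old = w[ix]
        w[ix] = old + h
        fp = f()
        w[ix] = old - h
        fm = f()
        w[ix] = old  # restore the original value
        grad[ix] = (fp - fm) / (2 * h)
        it.iternext()
    return grad

# Example usage (names assumed): compare against the analytic gradient for W1
# f = lambda: net.loss(X, y, reg=0.05)[0]
# _, grads = net.loss(X, y, reg=0.05)
# print(np.abs(num_grad(f, net.params['W1']) - grads['W1']).max())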

2. Training function (the iteration loop: forward → backward → update → forward → backward → update ...)

def train(self, X, y, X_val, y_val,
          learning_rate=1e-3, learning_rate_decay=0.95,
          reg=5e-6, num_iters=100,
          batch_size=200, verbose=False):
    num_train = X.shape[0]
    iterations_per_epoch = max(num_train // batch_size, 1)

    # Use SGD to optimize the parameters in self.model
    loss_history = []
    train_acc_history = []
    val_acc_history = []

    for it in range(num_iters):
        # Sample a minibatch (with replacement)
        mask = np.random.choice(num_train, batch_size, replace=True)
        X_batch = X[mask]
        y_batch = y[mask]

        # Compute loss and gradients using the current minibatch
        loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
        loss_history.append(loss)

        self.params['W1'] += -learning_rate * grads['W1']
        self.params['b1'] += -learning_rate * grads['b1']
        self.params['W2'] += -learning_rate * grads['W2']
        self.params['b2'] += -learning_rate * grads['b2']

        if verbose and it % 100 == 0:
            print('iteration %d / %d: loss %f' % (it, num_iters, loss))

        # Every epoch, check train and val accuracy and decay the learning rate
        if it % iterations_per_epoch == 0:
            train_acc = (self.predict(X_batch) == y_batch).mean()
            val_acc = (self.predict(X_val) == y_val).mean()
            train_acc_history.append(train_acc)
            val_acc_history.append(val_acc)
            learning_rate *= learning_rate_decay  # shrink the learning rate

    return {
        'loss_history': loss_history,
        'train_acc_history': train_acc_history,
        'val_acc_history': val_acc_history,
    }

3. Prediction function (a minimal sketch follows below)
4. Parameter training
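
For the prediction function, here is a minimal sketch consistent with the forward pass in loss above (not necessarily the assignment's exact code):

def predict(self, X):
    # Forward pass with the trained parameters, then take the arg-max class
    h1 = np.maximum(0, np.dot(X, self.params['W1']) + self.params['b1'])
    scores = np.dot(h1, self.params['W2']) + self.params['b2']
    return np.argmax(scores, axis=1)

Parameter training then amounts to sweeping hyperparameters (hidden size, learning rate, regularization strength) with train() and keeping the model with the best validation accuracy.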

Convolutional Neural Networks for Visual Recognition

Multi-Layer Fully Connected Neural Networks

The two basic layers, affine and ReLU:

def affine_forward(x, w, b):
    N = x.shape[0]
    x_new = x.reshape(N, -1)  # flatten each example into a row vector
    out = np.dot(x_new, w) + b
    cache = (x, w, b)  # no need to store out
    return out, cache

def affine_backward(dout, cache):
    x, w, b = cache
    dx = np.dot(dout, w.T)
    dx = np.reshape(dx, x.shape)  # restore the original input shape
    x_new = x.reshape(x.shape[0], -1)
    dw = np.dot(x_new.T, dout)
    db = np.sum(dout, axis=0, keepdims=True)
    return dx, dw, db

def relu_forward(x):
    out = np.maximum(0, x)
    cache = x
    return out, cache

def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)  # pass the gradient only where the input was positive
    return dx

Composing the two into a "sandwich" layer:

def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    cache = (fc_cache, relu_cache)
    return out, cache

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db

FullyConnectedNet:

class FullyConnectedNet(object):
    def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                 dropout=0, use_batchnorm=False, reg=0.0,
                 weight_scale=1e-2, dtype=np.float32, seed=None):
        self.use_batchnorm = use_batchnorm
        self.use_dropout = dropout > 0
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}

        layers_dims = [input_dim] + hidden_dims + [num_classes]  # the size of each layer
        for i in range(self.num_layers):
            self.params['W' + str(i + 1)] = weight_scale * np.random.randn(layers_dims[i], layers_dims[i + 1])
            self.params['b' + str(i + 1)] = np.zeros((1, layers_dims[i + 1]))
            if self.use_batchnorm and i < len(hidden_dims):  # the last layer does not need batchnorm
                self.params['gamma' + str(i + 1)] = np.ones((1, layers_dims[i + 1]))
                self.params['beta' + str(i + 1)] = np.zeros((1, layers_dims[i + 1]))

        self.dropout_param = {}
        if self.use_dropout:
            self.dropout_param = {'mode': 'train', 'p': dropout}
            if seed is not None:
                self.dropout_param['seed'] = seed

        self.bn_params = []
        if self.use_batchnorm:
            self.bn_params = [{'mode': 'train'} for i in range(self.num_layers - 1)]

        # Cast all parameters to the correct datatype
        for k, v in self.params.items():
            self.params[k] = v.astype(dtype)

    def loss(self, X, y=None):
        X = X.astype(self.dtype)
        mode = 'test' if y is None else 'train'
        if self.dropout_param is not None:
            self.dropout_param['mode'] = mode
        if self.use_batchnorm:
            for bn_param in self.bn_params:
                bn_param['mode'] = mode

        h, cache1, cache2, cache3, cache4, bn, out = {}, {}, {}, {}, {}, {}, {}
        out[0] = X  # store each layer's output; logically, X is out[0]

        # Forward pass: compute scores
        for i in range(self.num_layers - 1):
            # Fetch this layer's parameters
            w, b = self.params['W' + str(i + 1)], self.params['b' + str(i + 1)]
            if self.use_batchnorm:
                gamma, beta = self.params['gamma' + str(i + 1)], self.params['beta' + str(i + 1)]
                h[i], cache1[i] = affine_forward(out[i], w, b)
                bn[i], cache2[i] = batchnorm_forward(h[i], gamma, beta, self.bn_params[i])
                out[i + 1], cache3[i] = relu_forward(bn[i])
                if self.use_dropout:
                    out[i + 1], cache4[i] = dropout_forward(out[i + 1], self.dropout_param)
            else:
                out[i + 1], cache3[i] = affine_relu_forward(out[i], w, b)
                if self.use_dropout:
                    out[i + 1], cache4[i] = dropout_forward(out[i + 1], self.dropout_param)

        W, b = self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)]
        scores, cache = affine_forward(out[self.num_layers - 1], W, b)  # the final affine layer

        if mode == 'test':
            return scores

        loss, grads = 0.0, {}
        data_loss, dscores = softmax_loss(scores, y)
        reg_loss = 0
        for i in range(self.num_layers):
            reg_loss += 0.5 * self.reg * np.sum(self.params['W' + str(i + 1)] * self.params['W' + str(i + 1)])
        loss = data_loss + reg_loss

        # Backward pass: compute gradients
        dout, dbn, dh, ddrop = {}, {}, {}, {}
        t = self.num_layers - 1
        # cache here is the one returned by the final affine_forward above
        dout[t], grads['W' + str(t + 1)], grads['b' + str(t + 1)] = affine_backward(dscores, cache)
        for i in range(t):
            if self.use_batchnorm:
                if self.use_dropout:
                    dout[t - i] = dropout_backward(dout[t - i], cache4[t - 1 - i])
                dbn[t - 1 - i] = relu_backward(dout[t - i], cache3[t - 1 - i])
                dh[t - 1 - i], grads['gamma' + str(t - i)], grads['beta' + str(t - i)] = batchnorm_backward(dbn[t - 1 - i], cache2[t - 1 - i])
                dout[t - 1 - i], grads['W' + str(t - i)], grads['b' + str(t - i)] = affine_backward(dh[t - 1 - i], cache1[t - 1 - i])
            else:
                if self.use_dropout:
                    dout[t - i] = dropout_backward(dout[t - i], cache4[t - 1 - i])
                dout[t - 1 - i], grads['W' + str(t - i)], grads['b' + str(t - i)] = affine_relu_backward(dout[t - i], cache3[t - 1 - i])

        # Add the regularization gradient contribution
        for i in range(self.num_layers):
            grads['W' + str(i + 1)] += self.reg * self.params['W' + str(i + 1)]
        return loss, grads

Use a Solver to optimize and train the network.
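
A hedged sketch of how this looks (the constructor and argument names below follow the assignment's solver.py as I recall them, so treat them as assumptions):

model = FullyConnectedNet([100, 100], dropout=0.25, use_batchnorm=True, reg=1e-2)
solver = Solver(model, data,  # data: dict with X_train / y_train / X_val / y_val
                update_rule='adam',
                optim_config={'learning_rate': 1e-3},
                lr_decay=0.95,
                num_epochs=10,
                batch_size=100,
                print_every=100)
solver.train()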

Afterwards, update the parameters with one of the following rules (see the sketch after this list):

  1. SGD
  2. Momentum
  3. Nesterov momentum
  4. RMSProp and Adam
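
As one example, a minimal SGD-with-momentum update in the style of the assignment's optim.py (the function name and config keys here are assumptions):

import numpy as np

def sgd_momentum(w, dw, config=None):
    # Classic momentum: v = mu * v - lr * dw;  w += v
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))
    v = config['momentum'] * v - config['learning_rate'] * dw
    next_w = w + v
    config['velocity'] = v
    return next_w, config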

Batch Normalization


BN layer forward pass (per feature, over a batch of N examples):

μ = (1/N) Σ_i x_i
σ² = (1/N) Σ_i (x_i − μ)²
x̂ = (x − μ) / √(σ² + ε)
y = γ·x̂ + β

BN layer backward pass (matching the code below):

dβ = Σ_i dout_i
dγ = Σ_i dout_i · x̂_i
dx̂ = dout · γ
dσ² = −(1/2) Σ_i dx̂_i (x_i − μ) (σ² + ε)^(−3/2)
dμ = −Σ_i dx̂_i / √(σ² + ε) − 2·dσ²·mean(x_i − μ)
dx = dx̂ / √(σ² + ε) + (2/N)·dσ²·(x − μ) + dμ/N

def batchnorm_forward(x, gamma, beta, bn_param):
    mode = bn_param['mode']  # train and test use different statistics
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)
    N, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == 'train':
        sample_mean = np.mean(x, axis=0, keepdims=True)  # [1,D]
        sample_var = np.var(x, axis=0, keepdims=True)    # [1,D]
        x_normalized = (x - sample_mean) / np.sqrt(sample_var + eps)  # [N,D]
        out = gamma * x_normalized + beta
        cache = (x_normalized, gamma, beta, sample_mean, sample_var, x, eps)
        # Update the running statistics with momentum
        running_mean = momentum * running_mean + (1 - momentum) * sample_mean
        running_var = momentum * running_var + (1 - momentum) * sample_var
    elif mode == 'test':
        # At test time, normalize with the running statistics
        x_normalized = (x - running_mean) / np.sqrt(running_var + eps)
        out = gamma * x_normalized + beta
    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Store the updated running means back into bn_param
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var
    return out, cache

def batchnorm_backward(dout, cache):
    x_normalized, gamma, beta, sample_mean, sample_var, x, eps = cache
    N, D = x.shape
    dx_normalized = dout * gamma                      # [N,D]
    x_mu = x - sample_mean                            # [N,D]
    sample_std_inv = 1.0 / np.sqrt(sample_var + eps)  # [1,D]
    dsample_var = -0.5 * np.sum(dx_normalized * x_mu, axis=0, keepdims=True) * sample_std_inv**3
    dsample_mean = -1.0 * np.sum(dx_normalized * sample_std_inv, axis=0, keepdims=True) - \
                   2.0 * dsample_var * np.mean(x_mu, axis=0, keepdims=True)
    dx1 = dx_normalized * sample_std_inv
    dx2 = 2.0 / N * dsample_var * x_mu
    dx = dx1 + dx2 + 1.0 / N * dsample_mean
    dgamma = np.sum(dout * x_normalized, axis=0, keepdims=True)
    dbeta = np.sum(dout, axis=0, keepdims=True)
    return dx, dgamma, dbeta
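
As a quick sanity check (a hypothetical snippet, assuming the functions above are in scope), the train-mode output with γ = 1 and β = 0 should have roughly zero mean and unit variance per feature:

x = np.random.randn(100, 5) * 4.0 + 12.0  # deliberately shifted and scaled input
gamma, beta = np.ones((1, 5)), np.zeros((1, 5))
out, _ = batchnorm_forward(x, gamma, beta, {'mode': 'train'})
print(out.mean(axis=0))  # ~ 0
print(out.std(axis=0))   # ~ 1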

An important problem Batch Normalization addresses is gradient saturation: by normalizing each feature to zero mean and unit variance before the nonlinearity, it keeps activations in the region where gradients are well behaved.

Dropout

During training, each layer's neurons are dropped with some probability.
This helps prevent overfitting. Dropout can also be understood as a form of regularization: every training pass forcibly zeroes some features, which improves the network's capacity for sparse representations.

def dropout_forward(x, dropout_param):
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])

    mask = None
    out = None
    if mode == 'train':
        # Note the division by p (inverted dropout): the expected output matches
        # the input, so at test time we can pass x through unchanged
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    elif mode == 'test':
        out = x

    cache = (dropout_param, mask)
    out = out.astype(x.dtype, copy=False)
    return out, cache

def dropout_backward(dout, cache):
    dropout_param, mask = cache
    mode = dropout_param['mode']
    dx = None
    if mode == 'train':
        dx = dout * mask
    elif mode == 'test':
        dx = dout
    return dx
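
Because of that 1/p scaling, the expected value of the output equals the input. A hypothetical check (assuming the functions above are in scope):

x = np.ones((1, 100000))
out, _ = dropout_forward(x, {'p': 0.5, 'mode': 'train'})
print(out.mean())  # ~ 1.0: about half the units survive, each scaled by 1/0.5
out, _ = dropout_forward(x, {'p': 0.5, 'mode': 'test'})
print(out.mean())  # exactly 1.0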

Convolutional Neural Networks

Forward and backward passes of the convolution layer
For an N×C×H×W input, F filters of size HH×WW, zero padding pad, and stride s, the output has spatial size:

H_new = 1 + (H + 2·pad − HH) / s
W_new = 1 + (W + 2·pad − WW) / s

def conv_forward_naive(x, w, b, conv_param):
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, C, HH, WW = w.shape
    x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')  # zero padding
    H_new = 1 + (H + 2 * pad - HH) // stride
    W_new = 1 + (W + 2 * pad - WW) // stride
    s = stride
    out = np.zeros((N, F, H_new, W_new))

    for i in range(N):           # ith image
        for f in range(F):       # fth filter
            for j in range(H_new):
                for k in range(W_new):
                    # Element-wise multiply the window with the filter, then sum
                    out[i, f, j, k] = np.sum(x_padded[i, :, j*s:HH+j*s, k*s:WW+k*s] * w[f]) + b[f]

    cache = (x, w, b, conv_param)
    return out, cache

def conv_backward_naive(dout, cache):
    x, w, b, conv_param = cache
    pad = conv_param['pad']
    stride = conv_param['stride']
    F, C, HH, WW = w.shape
    N, C, H, W = x.shape
    H_new = 1 + (H + 2 * pad - HH) // stride
    W_new = 1 + (W + 2 * pad - WW) // stride

    dx = np.zeros_like(x)
    dw = np.zeros_like(w)
    db = np.zeros_like(b)

    s = stride
    x_padded = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), 'constant')
    dx_padded = np.pad(dx, ((0, 0), (0, 0), (pad, pad), (pad, pad)), 'constant')

    for i in range(N):           # ith image
        for f in range(F):       # fth filter
            for j in range(H_new):
                for k in range(W_new):
                    window = x_padded[i, :, j*s:HH+j*s, k*s:WW+k*s]
                    db[f] += dout[i, f, j, k]
                    dw[f] += window * dout[i, f, j, k]
                    # The key point is the accumulation (+=): windows overlap
                    dx_padded[i, :, j*s:HH+j*s, k*s:WW+k*s] += w[f] * dout[i, f, j, k]

    # Unpad
    dx = dx_padded[:, :, pad:pad+H, pad:pad+W]
    return dx, dw, db
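
For example (a hypothetical call, assuming the functions above are in scope), with pad = 1 and stride = 1 a 3×3 filter preserves the spatial size:

x = np.random.randn(2, 3, 8, 8)  # N=2 images, C=3 channels, 8x8
w = np.random.randn(4, 3, 3, 3)  # F=4 filters of size 3x3
b = np.zeros(4)
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
print(out.shape)  # (2, 4, 8, 8): H_new = 1 + (8 + 2*1 - 3) // 1 = 8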

Pooling layer

def max_pool_forward_naive(x, pool_param):
    HH, WW = pool_param['pool_height'], pool_param['pool_width']
    s = pool_param['stride']
    N, C, H, W = x.shape
    H_new = 1 + (H - HH) // s
    W_new = 1 + (W - WW) // s
    out = np.zeros((N, C, H_new, W_new))

    for i in range(N):
        for j in range(C):
            for k in range(H_new):
                for l in range(W_new):
                    window = x[i, j, k*s:HH+k*s, l*s:WW+l*s]
                    out[i, j, k, l] = np.max(window)

    cache = (x, pool_param)
    return out, cache

def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    HH, WW = pool_param['pool_height'], pool_param['pool_width']
    s = pool_param['stride']
    N, C, H, W = x.shape
    H_new = 1 + (H - HH) // s
    W_new = 1 + (W - WW) // s
    dx = np.zeros_like(x)

    for i in range(N):
        for j in range(C):
            for k in range(H_new):
                for l in range(W_new):
                    window = x[i, j, k*s:HH+k*s, l*s:WW+l*s]
                    m = np.max(window)
                    # Keep the max value, so (window == m) marks the position(s)
                    # that receive the gradient
                    dx[i, j, k*s:HH+k*s, l*s:WW+l*s] = (window == m) * dout[i, j, k, l]

    return dx
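
A hypothetical call (assuming the functions above are in scope): 2×2 max pooling with stride 2 halves each spatial dimension:

x = np.random.randn(2, 3, 8, 8)
out, _ = max_pool_forward_naive(x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
print(out.shape)  # (2, 3, 4, 4)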
