MXNet Tutorial

This is based on the official tutorial, which is decent; here I walk through it with my own examples: how to design your own network and write your own data iterator.

1: Import modules:

import mxnet as mx
import numpy as np
import cv2
import matplotlib.pyplot as plt
import logging
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

2: Create the network:

# Variables are place holders for input arrays. We give each variable a unique name.
data = mx.symbol.Variable('data')
# The input is fed to a fully connected layer that computes Y=WX+b.
# This is the main computation module in the network.
# Each layer also needs an unique name. We'll talk more about naming in the next section.
fc1  = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
# Activation layers apply a non-linear function on the previous layer's output.
# Here we use Rectified Linear Unit (ReLU) that computes Y = max(X, 0).
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2  = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3  = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
# Finally we have a loss layer that compares the network's output with label and generates gradient signals.
mlp  = mx.symbol.SoftmaxOutput(data = fc3, name = 'softmax')
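
Before training, you can sanity-check the graph by letting MXNet infer every array's shape from the input shape alone. This snippet is my own sketch (not in the original post); the 784-dimensional input and batch size of 100 are assumptions for flattened 28x28 MNIST images:

# Hypothetical check: infer argument/output shapes from an assumed input shape.
arg_shapes, out_shapes, aux_shapes = mlp.infer_shape(data=(100, 784))
print dict(zip(mlp.list_arguments(), arg_shapes))
print out_shapes   # expect [(100, 10)]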

3: Visualize the network:

mx.viz.plot_network(mlp)

However, this does not display in Spyder, so I use the following instead, which renders the graph to an image file in the working directory and opens it:

mx.viz.plot_network(mlp).view()  
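
If you want the file without popping up a viewer, plot_network returns a graphviz Digraph, so something like the following should also work (a sketch; 'mlp_graph' is just an example file name):

# Save the graph to disk instead of opening a viewer (hypothetical file name).
graph = mx.viz.plot_network(mlp)
graph.render('mlp_graph')   # writes the dot source plus a rendered file, e.g. mlp_graph.pdf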

4: Load the data:

Since the official MXNet examples only use the MNIST dataset for testing:

And since the dataset is hard to download automatically, create a data folder under the example directory, create an mldata folder inside it, and place the original_mnist.mat file downloaded from GitHub into it.

from sklearn.datasets import fetch_mldata
import os,sys
curr_path = sys.path[0]
sys.path = [os.path.join("/home/hu/mxnet-master/example/autoencoder")] + sys.path
import data
X, Y = data.get_mnist()
for i in range(10):
    plt.subplot(1, 10, i+1)
    plt.imshow(X[i].reshape((28, 28)), cmap='Greys_r')
    plt.axis('off')
plt.show()
X = X.astype(np.float32)/255
X_train = X[:60000]
X_test = X[60000:]
Y_train = Y[:60000]
Y_test = Y[60000:]

5: Set up the data iterators:

You can write your own function to create an MXNet data iterator (examples are easy to find online); at its core, MXNet simply iterates over the data batch by batch. A minimal custom-iterator sketch follows the code below.

batch_size = 100
train_iter = mx.io.NDArrayIter(X_train, Y_train, batch_size=batch_size)
test_iter = mx.io.NDArrayIter(X_test, Y_test, batch_size=batch_size)
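
For reference, here is a minimal custom-iterator sketch (my own illustration modeled on the official data-iterator tutorial; the name SimpleIter and its details are assumptions, not part of this post's code). An iterator only has to expose provide_data/provide_label and yield DataBatch objects:

# A minimal custom iterator sketch (illustrative; SimpleIter is a hypothetical name).
class SimpleIter(mx.io.DataIter):
    def __init__(self, data_arr, label_arr, batch_size):
        super(SimpleIter, self).__init__()
        self.data_arr = data_arr
        self.label_arr = label_arr
        self.batch_size = batch_size
        self.cursor = 0

    @property
    def provide_data(self):
        # (name, shape) pairs that bind() and fit() read
        return [('data', (self.batch_size,) + self.data_arr.shape[1:])]

    @property
    def provide_label(self):
        return [('softmax_label', (self.batch_size,))]

    def reset(self):
        self.cursor = 0

    def next(self):
        # Return one batch at a time; StopIteration ends the epoch.
        if self.cursor + self.batch_size > self.data_arr.shape[0]:
            raise StopIteration
        i = self.cursor
        self.cursor += self.batch_size
        return mx.io.DataBatch(data=[mx.nd.array(self.data_arr[i:i + self.batch_size])],
                               label=[mx.nd.array(self.label_arr[i:i + self.batch_size])])

A SimpleIter(X_train, Y_train, batch_size) instance could then stand in for the NDArrayIter above.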

6: Training:

From what I've seen online, it seems you should not train this way, because it leaves you with fewer things you can tune and debug:

model = mx.model.FeedForward(
    ctx = mx.gpu(0),      # Run on GPU 0
    symbol = mlp,         # Use the network we just defined
    num_epoch = 10,       # Train for 10 epochs
    learning_rate = 0.1,  # Learning rate
    momentum = 0.9,       # Momentum for SGD with momentum
    wd = 0.00001)         # Weight decay for regularization
model.fit(
    X=train_iter,         # Training data set
    eval_data=test_iter,  # Testing data set. MXNet computes scores on the test set every epoch
    batch_end_callback = mx.callback.Speedometer(batch_size, 200))  # Logging callback to print out progress

The second approach:

First load the data into GPU memory and initialize the parameters, and only then train (it seems this approach gives higher accuracy?):

data = mx.symbol.Variable('data')
fc1  = mx.symbol.FullyConnected(data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(fc1, name='relu1', act_type="relu")
fc2  = mx.symbol.FullyConnected(act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(fc2, name='relu2', act_type="relu")
fc3  = mx.symbol.FullyConnected(act2, name='fc3', num_hidden=10)
out  = mx.symbol.SoftmaxOutput(fc3, name = 'softmax')
# construct the module
mod = mx.mod.Module(out, context=mx.gpu())
mod.bind(data_shapes=train_iter.provide_data, label_shapes=train_iter.provide_label)
mod.init_params()
mod.fit(train_iter, eval_data=test_iter,
        optimizer_params={'learning_rate': 0.01, 'momentum': 0.9},
        num_epoch=10)
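
After fitting, the Module API can also report metrics directly; a minimal sketch (assuming the built-in 'acc' accuracy metric):

# Hypothetical follow-up: score the trained module on the test iterator.
print mod.score(test_iter, ['acc'])   # e.g. [('accuracy', 0.97...)]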


7: Predict with the trained model:

plt.imshow((X_test[0].reshape((28,28))*255).astype(np.uint8), cmap='Greys_r')
plt.show()
print 'Result:', model.predict(X_test[0:1])[0].argmax()

8: There is also a built-in model-evaluation function:

print 'Accuracy:', model.score(test_iter)*100, '%'

9: Turn it into a function callable from a web page:

# run hand drawing test
from IPython.display import HTMLdef classify(img):img = img[len('data:image/png;base64,'):].decode('base64')img = cv2.imdecode(np.fromstring(img, np.uint8), -1)img = cv2.resize(img[:,:,3], (28,28))img = img.astype(np.float32).reshape((1, 784))/255.0return model.predict(img)[0].argmax()html = """<style type="text/css">canvas { border: 1px solid black; }</style><div id="board"><canvas id="myCanvas" width="100px" height="100px">Sorry, your browser doesn't support canvas technology.</canvas><p><button id="classify" οnclick="classify()">Classify</button><button id="clear" οnclick="myClear()">Clear</button>Result: <input type="text" id="result_output" size="5" value=""></p></div>"""
script = """<script type="text/JavaScript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js?ver=1.4.2"></script><script type="text/javascript">function init() {var myCanvas = document.getElementById("myCanvas");var curColor = $('#selectColor option:selected').val();if(myCanvas){var isDown = false;var ctx = myCanvas.getContext("2d");var canvasX, canvasY;ctx.lineWidth = 5;$(myCanvas).mousedown(function(e){isDown = true;ctx.beginPath();var parentOffset = $(this).parent().offset(); canvasX = e.pageX - parentOffset.left;canvasY = e.pageY - parentOffset.top;ctx.moveTo(canvasX, canvasY);}).mousemove(function(e){if(isDown != false) {var parentOffset = $(this).parent().offset(); canvasX = e.pageX - parentOffset.left;canvasY = e.pageY - parentOffset.top;ctx.lineTo(canvasX, canvasY);ctx.strokeStyle = curColor;ctx.stroke();}}).mouseup(function(e){isDown = false;ctx.closePath();});}$('#selectColor').change(function () {curColor = $('#selectColor option:selected').val();});}init();function handle_output(out) {document.getElementById("result_output").value = out.content.data["text/plain"];}function classify() {var kernel = IPython.notebook.kernel;var myCanvas = document.getElementById("myCanvas");data = myCanvas.toDataURL('image/png');document.getElementById("result_output").value = "";kernel.execute("classify('" + data +"')",  { 'iopub' : {'output' : handle_output}}, {silent:false});}function myClear() {var myCanvas = document.getElementById("myCanvas");myCanvas.getContext("2d").clearRect(0, 0, myCanvas.width, myCanvas.height);}</script>"""
HTML(html+script)

10: Output weight statistics:

def norm_stat(d):
    """The statistics you want to see.
    We compute the L2 norm here but you can change it to anything you like."""
    return mx.nd.norm(d)/np.sqrt(d.size)

mon = mx.mon.Monitor(
    100,                 # Print every 100 batches
    norm_stat,           # The statistics function defined above
    pattern='.*weight',  # A regular expression. Only arrays whose name matches this pattern are included.
    sort=True)           # Sort output by name

model = mx.model.FeedForward(ctx = mx.gpu(0), symbol = mlp, num_epoch = 1,
                             learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
model.fit(X=train_iter, eval_data=test_iter,
          monitor=mon,  # Set the monitor here
          batch_end_callback = mx.callback.Speedometer(100, 100))

11: As mentioned earlier, you can write your own training loop over the data iterator

To be honest, though, how would a hand-written loop make use of the GPU? Even the author's own hand-written example doesn't obviously invoke the GPU, which makes me skeptical.

epoch is the number of passes over the full dataset; iter counts the batches the data is split into.

Generally speaking there is no need to write your own loop, so I don't particularly recommend it.

# ==================Binding=====================
# The symbol we created is only a graph description.
# To run it, we first need to allocate memory and create an executor by 'binding' it.
# In order to bind a symbol, we need at least two pieces of information: context and input shapes.
# Context specifies which device the executor runs on, e.g. cpu, GPU0, GPU1, etc.
# Input shapes define the executor's input array dimensions.
# MXNet then runs automatic shape inference to determine the dimensions of intermediate and output arrays.
# Data iterators define the shapes of their output via the provide_data and provide_label properties.
input_shapes = dict(train_iter.provide_data+train_iter.provide_label)
print 'input_shapes', input_shapes
# We use simple_bind to let MXNet allocate memory for us.
# You can also allocate memory youself and use bind to pass it to MXNet.
exe = mlp.simple_bind(ctx=mx.gpu(0), **input_shapes)
# ===============Initialization=================
# First we get handle to input arrays
arg_arrays = dict(zip(mlp.list_arguments(), exe.arg_arrays))
data = arg_arrays[train_iter.provide_data[0][0]]
label = arg_arrays[train_iter.provide_label[0][0]]
# We initialize the weights with uniform distribution on (-0.01, 0.01).
init = mx.init.Uniform(scale=0.01)
for name, arr in arg_arrays.items():
    if name not in input_shapes:
        init(name, arr)
# We also need to create an optimizer for updating weights
opt = mx.optimizer.SGD(learning_rate=0.1, momentum=0.9, wd=0.00001,
                       rescale_grad=1.0/train_iter.batch_size)
updater = mx.optimizer.get_updater(opt)
# Finally we need a metric to print out training progress
metric = mx.metric.Accuracy()
# Training loop begins
for epoch in range(10):
    train_iter.reset()
    metric.reset()
    t = 0
    for batch in train_iter:
        # Copy data to executor input. Note the [:].
        data[:] = batch.data[0]
        label[:] = batch.label[0]
        # Forward
        exe.forward(is_train=True)
        # You can perform operations on exe.outputs here if you need to.
        # For example, you can stack a CRF on top of a neural network.
        # Backward
        exe.backward()
        # Update
        for i, pair in enumerate(zip(exe.arg_arrays, exe.grad_arrays)):
            weight, grad = pair
            updater(i, grad, weight)
        metric.update(batch.label, exe.outputs)
        t += 1
        if t % 100 == 0:
            print 'epoch:', epoch, 'iter:', t, 'metric:', metric.get()

12: A new layer

Both the input data and the number of outputs must be declared carefully.

# Define custom softmax operator
class NumpySoftmax(mx.operator.NumpyOp):
    def __init__(self):
        # Call the parent class constructor.
        # Because NumpySoftmax is a loss layer, it doesn't need gradient input from layers above.
        super(NumpySoftmax, self).__init__(need_top_grad=False)

    def list_arguments(self):
        # Define the input to NumpySoftmax.
        return ['data', 'label']

    def list_outputs(self):
        # Define the output.
        return ['output']

    def infer_shape(self, in_shape):
        # Calculate the dimensions of the output (and missing inputs) from (some) input shapes.
        data_shape = in_shape[0]         # shape of first argument 'data'
        label_shape = (in_shape[0][0],)  # 'label' should be one-dimensional with batch_size instances.
        output_shape = in_shape[0]       # 'output' dimension is the same as the input.
        return [data_shape, label_shape], [output_shape]

    def forward(self, in_data, out_data):
        x = in_data[0]   # 'data'
        y = out_data[0]  # 'output'
        # Compute softmax
        y[:] = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))
        y /= y.sum(axis=1).reshape((x.shape[0], 1))

    def backward(self, out_grad, in_data, out_data, in_grad):
        l = in_data[1]   # 'label'
        l = l.reshape((l.size,)).astype(np.int)  # cast to int
        y = out_data[0]  # 'output'
        dx = in_grad[0]  # gradient for 'data'
        # Compute gradient
        dx[:] = y
        dx[np.arange(l.shape[0]), l] -= 1.0

numpy_softmax = NumpySoftmax()

data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
# Use the new operator we just defined instead of the standard softmax operator.
mlp = numpy_softmax(data=fc3, name = 'softmax')

model = mx.model.FeedForward(ctx = mx.gpu(0), symbol = mlp, num_epoch = 2,
                             learning_rate = 0.1, momentum = 0.9, wd = 0.00001)
model.fit(X=train_iter, eval_data=test_iter,
          batch_end_callback = mx.callback.Speedometer(100, 100))
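
As a quick sanity check (my own addition, not from the original post), the forward pass of NumpySoftmax can be compared against a plain NumPy softmax:

# Verify the custom op's softmax math in plain NumPy (illustrative only).
x = np.random.randn(4, 10).astype(np.float32)
ref = np.exp(x - x.max(axis=1, keepdims=True))
ref /= ref.sum(axis=1, keepdims=True)
y = np.exp(x - x.max(axis=1).reshape((x.shape[0], 1)))  # same computation as NumpySoftmax.forward
y /= y.sum(axis=1).reshape((x.shape[0], 1))
assert np.allclose(y, ref)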

13: The new layer plus a custom training loop:

I created this under the example/mytest folder.

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Mar 30 15:35:02 2017

@author: root
"""
from __future__ import print_function
import sys
import os
# code to automatically download dataset
curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
sys.path = [os.path.join(curr_path, "../autoencoder")] + sys.path
import mxnet as mx
import numpy as np
import data
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
import model
from autoencoder import AutoEncoderModel
from solver import Solver, Monitor
import logging
import time
global YT
import scipy.io as sio  
import matplotlib.pyplot as plt 
# ==================start setting My-layer=====================
class NumpySoftmax(mx.operator.NumpyOp):
    def __init__(self):
        # Call the parent class constructor.
        # Because NumpySoftmax is a loss layer, it doesn't need gradient input from layers above.
        super(NumpySoftmax, self).__init__(need_top_grad=False)

    def list_arguments(self):
        # Define the input to NumpySoftmax.
        return ['data', 'label']

    def list_outputs(self):
        # Define the output.
        return ['output']

    def infer_shape(self, in_shape):
        # Calculate the dimensions of the output (and missing inputs) from (some) input shapes.
        data_shape = in_shape[0]         # shape of first argument 'data'
        label_shape = (in_shape[0][0],)  # 'label' should be one-dimensional with batch_size instances.
        output_shape = in_shape[0]       # 'output' dimension is the same as the input.
        return [data_shape, label_shape], [output_shape]

    def forward(self, in_data, out_data):
        alpha = 1.0
        z = in_data[0]   # 'data'
        q = out_data[0]  # 'output'
        kmeans = KMeans(n_clusters=10, random_state=170).fit(z)
        mu = kmeans.cluster_centers_
        # Compute the soft assignment
        mask = 1.0/(1.0 + cdist(z, mu)**2/alpha)
        q[:] = mask**((alpha + 1.0)/2.0)
        q[:] = (q.T/q.sum(axis=1)).T

    def backward(self, out_grad, in_data, out_data, in_grad):
        alpha = 1.0
        x = in_data[0]   # 'data'
        y = out_data[0]  # 'output'
        dx = in_grad[0]  # gradient for 'data'
        kmeans = KMeans(n_clusters=10, random_state=170).fit(x)
        mu = kmeans.cluster_centers_
        mask = 1.0/(1.0 + cdist(x, mu)**2/alpha)
        p = mask**((alpha + 1.0)/2.0)
        mask *= (alpha + 1.0)/alpha*(p - y)
        dx[:] = (x.T*mask.sum(axis=1)).T - mask.dot(mu)
#======================end setting==========================
# ==================start of the process of data=====================
X, Y = data.get_mnist()
X_train = X[:60000]
X_test = X[60000:]
Y_train = Y[:60000]
Y_test = Y[60000:]
numpy_softmax = NumpySoftmax()
batch_size = 100
# the official code to create the iterators
train_iter = mx.io.NDArrayIter(X_train, Y_train, batch_size=batch_size)
test_iter = mx.io.NDArrayIter(X_test, Y_test, batch_size=batch_size)
input_shapes = dict(train_iter.provide_data+train_iter.provide_label)
# ==================end of the process=====================
# ==================start of setting the net=====================
data = mx.symbol.Variable('data')
fc1 = mx.symbol.FullyConnected(data = data, name='fc1', num_hidden=128)
act1 = mx.symbol.Activation(data = fc1, name='relu1', act_type="relu")
fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
act2 = mx.symbol.Activation(data = fc2, name='relu2', act_type="relu")
fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=10)
mlp = numpy_softmax(data=fc3, name = 'softmax')
mx.viz.plot_network(mlp).view()  
# ==================end of setting the net=====================
exe = mlp.simple_bind(ctx=mx.gpu(0), **input_shapes)
# ===============Initialization=================
# First we get handle to input arrays
arg_arrays = dict(zip(mlp.list_arguments(), exe.arg_arrays))
data = arg_arrays[train_iter.provide_data[0][0]]
label = arg_arrays[train_iter.provide_label[0][0]]
# We initialize the weights with uniform distribution on (-0.01, 0.01).
init = mx.init.Uniform(scale=0.01)
for name, arr in arg_arrays.items():
    if name not in input_shapes:
        init(name, arr)
# We also need to create an optimizer for updating weights
opt = mx.optimizer.SGD(learning_rate=0.1, momentum=0.9, wd=0.00001,
                       rescale_grad=1.0/train_iter.batch_size)
updater = mx.optimizer.get_updater(opt)
# Finally we need a metric to print out training progress
metric = mx.metric.Accuracy()
# Training loop begins
for epoch in range(10):
    train_iter.reset()
    metric.reset()
    t = 0
    for batch in train_iter:
        # Copy data to executor input. Note the [:].
        data[:] = batch.data[0]
        label[:] = batch.label[0]
        # Forward
        exe.forward(is_train=True)
        # You can perform operations on exe.outputs here if you need to.
        # For example, you can stack a CRF on top of a neural network.
        # Backward
        exe.backward()
        # Update
        for i, pair in enumerate(zip(exe.arg_arrays, exe.grad_arrays)):
            weight, grad = pair
            updater(i, grad, weight)
        metric.update(batch.label, exe.outputs)
        t += 1
        if t % 100 == 0:
            print('epoch:', epoch, 'iter:', t, 'metric:', metric.get())
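
For reference, my reading of the forward pass above (an interpretation of mine, not stated in the original) is the Student's t soft assignment used in Deep Embedded Clustering, with the cluster centres mu_j refit by k-means on every call:

q_{ij} = \frac{(1 + \lVert z_i - \mu_j \rVert^2 / \alpha)^{-(\alpha+1)/2}}{\sum_{j'} (1 + \lVert z_i - \mu_{j'} \rVert^2 / \alpha)^{-(\alpha+1)/2}}

The backward pass then propagates the corresponding gradient with respect to the embedded points z_i.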


Reposted from: https://www.cnblogs.com/kangronghu/p/mxnet.html
