Spark Streaming: Job Creation, Scheduling, and Submission


The previous article traced through the source code how data received by the Receiver is handed over to the BlockManager; the whole data-ingestion flow is now up and running. Let's return to the post that analyzes the JobScheduler.

// JobScheduler.scala line 62
def start(): Unit = synchronized {
  if (eventLoop != null) return // scheduler has already been started

  logDebug("Starting JobScheduler")
  eventLoop = new EventLoop[JobSchedulerEvent]("JobScheduler") {
    override protected def onReceive(event: JobSchedulerEvent): Unit = processEvent(event)

    override protected def onError(e: Throwable): Unit = reportError("Error in job scheduler", e)
  }
  eventLoop.start()

  // attach rate controllers of input streams to receive batch completion updates
  for {
    inputDStream <- ssc.graph.getInputStreams
    rateController <- inputDStream.rateController
  } ssc.addStreamingListener(rateController)

  listenerBus.start(ssc.sparkContext)
  receiverTracker = new ReceiverTracker(ssc)
  inputInfoTracker = new InputInfoTracker(ssc)
  receiverTracker.start()
  jobGenerator.start()
  logInfo("Started JobScheduler")
}

The last several posts all branched out from receiverTracker.start(). With that thread of analysis finished, we move on to the next step.

// JobScheduler.scala line 83
jobGenerator.start()

The instantiation of jobGenerator was analyzed earlier. Digging into its start method, the source shows:

  1. Instantiate eventLoop. This eventLoop is not the same as the one in JobScheduler; it is parameterized with a different event type (JobGeneratorEvent).
  2. Call EventLoop.start.
  3. Since this is the first startup, call startFirstTime.
// JobGenerator.scala line 78
/** Start generation of jobs */
def start(): Unit = synchronized {
  if (eventLoop != null) return // generator has already been started

  // Call checkpointWriter here to initialize it before eventLoop uses it to avoid a deadlock.
  // See SPARK-10125
  checkpointWriter

  eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
    override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

    override protected def onError(e: Throwable): Unit = {
      jobScheduler.reportError("Error in job generator", e)
    }
  }
  eventLoop.start()

  if (ssc.isCheckpointPresent) {
    restart()
  } else {
    startFirstTime()
  }
}
// JobGenerator.scala line 189
/** Starts the generator for the first time */
private def startFirstTime() {
  val startTime = new Time(timer.getStartTime())
  graph.start(startTime - graph.batchDuration)
  timer.start(startTime.milliseconds)
  logInfo("Started JobGenerator at " + startTime)
}

DStreamGraph.start does the following:

  1. Initialize all outputStreams, setting the time of their first execution; the DStreams they depend on are set up as well.
  2. If a remember duration is configured, call remember on all outputStreams; the dependent DStreams are set as well.
  3. Validate before starting, mainly checking that the checkpoint settings do not conflict and that the various Durations are consistent.
  4. Start all inputStreams. The author scanned InputDStream and all its subclasses in the current 1.6.0 version: the start method does nothing. As covered in the previous posts, the inputStreams are already managed by the ReceiverTracker.
// DStreamGraph.scala line 39
def start(time: Time) {
  this.synchronized {
    require(zeroTime == null, "DStream graph computation already started")
    zeroTime = time
    startTime = time
    outputStreams.foreach(_.initialize(zeroTime))
    outputStreams.foreach(_.remember(rememberDuration))
    outputStreams.foreach(_.validateAtStart)
    inputStreams.par.foreach(_.start())
  }
}

So far this is just some simple initialization; no data is actually being processed yet.

Back in JobGenerator, the recurring timer is now started:

// JobGenerator.scala line 193
timer.start(startTime.milliseconds)

The recurring timer starts. Does this look familiar? Haven't we seen this recurring timer somewhere before?

Exactly: BlockGenerator.scala lines 105 and 109 — two threads, one of which is a recurring timer that periodically moves the received data into the queue of blocks waiting to be pushed.
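As a reminder, paraphrased from the 1.6 BlockGenerator source (treat the exact line numbers and member names here as approximate rather than a verbatim quote), those two threads look roughly like this:

// BlockGenerator.scala (1.6), around lines 103-110 -- paraphrased reminder
private val blockIntervalMs = conf.getTimeAsMs("spark.streaming.blockInterval", "200ms")

// thread 1: a RecurringTimer that rolls the current buffer into a block every blockIntervalMs
private val blockIntervalTimer =
  new RecurringTimer(clock, blockIntervalMs, updateCurrentBuffer, "BlockGenerator")

// thread 2: keeps draining blocksForPushing and handing blocks to the BlockManager
private val blockPushingThread = new Thread() { override def run() { keepPushingBlocks() } }

Back to our JobGenerator: timer.start enters RecurringTimer.start.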

// RecurringTimer.scala line 59
def start(startTime: Long): Long = synchronized {
  nextTime = startTime
  thread.start()
  logInfo("Started timer for " + name + " at time " + nextTime)
  nextTime
}

The actual logic lives in the function passed in at construction time: longTime => eventLoop.post(GenerateJobs(new Time(longTime))). Its input is a Long (the tick time), and its body posts a GenerateJobs event to the eventLoop.

// JobGenerator.scala line 58
private val timer = new RecurringTimer(clock, ssc.graph.batchDuration.milliseconds,
  longTime => eventLoop.post(GenerateJobs(new Time(longTime))), "JobGenerator")

As long as the thread has not been stopped, it keeps looping:

  1. The function above is passed in at construction time; callback: (Long) => Unit corresponds to longTime => eventLoop.post(GenerateJobs(new Time(longTime))).
  2. When start is called, the thread starts and its run method executes loop.
  3. loop calls triggerActionForNextInterval.
  4. triggerActionForNextInterval invokes the callback supplied to the constructor, i.e. the longTime => eventLoop.post(GenerateJobs(new Time(longTime))) above.
private[streaming]
class RecurringTimer(clock: Clock, period: Long, callback: (Long) => Unit, name: String)
  extends Logging {

  // RecurringTimer.scala line 27
  private val thread = new Thread("RecurringTimer - " + name) {
    setDaemon(true)
    override def run() { loop }
  }

  // RecurringTimer.scala line 56
  /**
   * Start at the given start time.
   */
  def start(startTime: Long): Long = synchronized {
    nextTime = startTime
    thread.start()
    logInfo("Started timer for " + name + " at time " + nextTime)
    nextTime
  }

  // RecurringTimer.scala line 92
  private def triggerActionForNextInterval(): Unit = {
    clock.waitTillTime(nextTime)
    callback(nextTime)
    prevTime = nextTime
    nextTime += period
    logDebug("Callback for " + name + " called at time " + prevTime)
  }

  // RecurringTimer.scala line 100
  /**
   * Repeatedly call the callback every interval.
   */
  private def loop() {
    try {
      while (!stopped) {
        triggerActionForNextInterval()
      }
      triggerActionForNextInterval()
    } catch {
      case e: InterruptedException =>
    }
  }

  // ... other members omitted
}
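To make the callback mechanism concrete, here is a minimal usage sketch. RecurringTimer and SystemClock are private to Spark's own packages, so this is illustrative only; the period and name are made up.

import org.apache.spark.util.SystemClock
import org.apache.spark.streaming.util.RecurringTimer

// Illustrative only: RecurringTimer is private[streaming] and cannot normally be built from user code.
val demoTimer = new RecurringTimer(
  new SystemClock(),                              // clock used by waitTillTime
  1000L,                                          // period in milliseconds (assumed value)
  tickTime => println(s"tick at $tickTime"),      // callback: (Long) => Unit, receives nextTime
  "demo-timer")

demoTimer.start(System.currentTimeMillis())       // sets nextTime and starts the daemon thread
// ... later:
// demoTimer.stop(interruptTimer = false)         // returns the time of the last completed callback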

The timer thus periodically posts GenerateJobs events; eventLoop.post simply puts each event message into the eventQueue:

// EventLoop.scala line 102
def post(event: E): Unit = {
  eventQueue.put(event)
}

Meanwhile, the EventLoop's other member, eventThread, keeps taking events from the queue and calls onReceive with each event as the argument. onReceive was overridden when the EventLoop was instantiated.
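For reference, the consuming side looks roughly like this (a simplified sketch of org.apache.spark.util.EventLoop's eventThread, with error handling trimmed, not a verbatim quote); the overridden onReceive it dispatches to is shown next.

// Simplified sketch of EventLoop's eventThread
private val eventThread = new Thread(name) {
  setDaemon(true)
  override def run(): Unit = {
    while (!stopped.get) {
      val event = eventQueue.take()   // blocks until post() enqueues something
      onReceive(event)                // dispatched to the subclass override, e.g. processEvent
    }
  }
}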

// JobGenerator.scala line 86
eventLoop = new EventLoop[JobGeneratorEvent]("JobGenerator") {
  override protected def onReceive(event: JobGeneratorEvent): Unit = processEvent(event)

  override protected def onError(e: Throwable): Unit = {
    jobScheduler.reportError("Error in job generator", e)
  }
}
eventLoop.start()

onReceive in turn calls:

// JobGenerator.scala line 177
/** Processes all events */
private def processEvent(event: JobGeneratorEvent) {
  logDebug("Got event " + event)
  event match {
    case GenerateJobs(time) => generateJobs(time)
    // other case classes
  }
}

The GenerateJobs case class is matched and dispatched to generateJobs(time: Time), which:

  1. Fetches all the Blocks that the ReceiverTracker has collected for this batch; if WAL is enabled, the block allocation is written to the WAL.
  2. Has the DStreamGraph generate the jobs.
  3. Submits the job set.
  4. Posts a DoCheckpoint event, so a checkpoint is written if checkpointing is configured.
// JobGenerator.scala line 240
/** Generate jobs and perform checkpoint for the given `time`. */
private def generateJobs(time: Time) {
  // Set the SparkEnv in this thread, so that job generation code can access the environment
  // Example: BlockRDDs are created in this thread, and it needs to access BlockManager
  // Update: This is probably redundant after threadlocal stuff in SparkEnv has been removed.
  SparkEnv.set(ssc.env)
  Try {
    jobScheduler.receiverTracker.allocateBlocksToBatch(time) // allocate received blocks to batch
    graph.generateJobs(time) // generate jobs using allocated block
  } match {
    case Success(jobs) =>
      val streamIdToInputInfos = jobScheduler.inputInfoTracker.getInfo(time)
      jobScheduler.submitJobSet(JobSet(time, jobs, streamIdToInputInfos))
    case Failure(e) =>
      jobScheduler.reportError("Error generating jobs for time " + time, e)
  }
  eventLoop.post(DoCheckpoint(time, clearCheckpointDataLater = false))
}

The code above is not immediately obvious. Break it down: at first glance it looks like try {} catch { case ... }, but a closer look shows it is Try {} match {}.

Tracing the code: Try (capitalized) is a companion object whose apply takes a by-name parameter and returns a Try instance. In scala.util.Try the code is as follows:

// scala.util.Try.scala line 155
object Try {
  /** Constructs a `Try` using the by-name parameter.  This
   * method will ensure any non-fatal exception is caught and a
   * `Failure` object is returned.
   */
  def apply[T](r: => T): Try[T] =
    try Success(r) catch {
      case NonFatal(e) => Failure(e)
    }
}

Try has two subclasses, both case classes: Success and Failure.
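A minimal, self-contained illustration of the Try {} match {} shape used in generateJobs (the parsing example itself is made up):

import scala.util.{Failure, Success, Try}

object TryDemo {
  def main(args: Array[String]): Unit = {
    // The whole block is the by-name argument of Try.apply; any non-fatal exception becomes a Failure.
    Try {
      "42".toInt
    } match {
      case Success(v) => println(s"parsed: $v")
      case Failure(e) => println(s"failed: ${e.getMessage}")
    }
  }
}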

Back at the call site: the last expression inside the Try block is graph.generateJobs(time). Follow it:

It returns the jobs produced by outputStream.generateJob(time) for every output stream.

// DStreamGraph.scala line 111
def generateJobs(time: Time): Seq[Job] = {
  logDebug("Generating jobs for time " + time)
  val jobs = this.synchronized {
    outputStreams.flatMap { outputStream =>
      val jobOption = outputStream.generateJob(time)
      jobOption.foreach(_.setCallSite(outputStream.creationSite))
      jobOption
    }
  }
  logDebug("Generated " + jobs.length + " jobs for time " + time)
  jobs
}

As established earlier, the outputStreams are all ForEachDStream instances. ForEachDStream overrides generateJob:

  1. parent.getOrCompute(time) returns an Option[RDD].
  2. If an RDD exists for this batch, the result is Some(new Job(time, jobFunc)); otherwise None.
// ForEachDStream.scala line 46
override def generateJob(time: Time): Option[Job] = {
  parent.getOrCompute(time) match {
    case Some(rdd) =>
      val jobFunc = () => createRDDWithLocalProperties(time, displayInnerRDDOps) {
        foreachFunc(rdd, time)
      }
      Some(new Job(time, jobFunc))
    case None => None
  }
}

So what is the parent of ForEachDStream? Look at our example:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Durations, StreamingContext}

object StreamingWordCountSelfScala {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setMaster("spark://master:7077").setAppName("StreamingWordCountSelfScala")
    val ssc = new StreamingContext(sparkConf, Durations.seconds(5)) // harvest the data every 5 seconds
    val lines = ssc.socketTextStream("localhost", 9999) // listen on the local socket port 9999
    val words = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _) // flatMap, then reduce
    words.print() // print the results
    ssc.start() // start
    ssc.awaitTermination()
    ssc.stop(true)
  }
}

As described before, the DStream dependency chain in this example is: SocketInputDStream << FlatMappedDStream << MappedDStream << ShuffledDStream << ForEachDStream.
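To make the chain concrete, here is a hedged annotation of the example's transformations with the DStream each one creates (based on the 1.6 implementation, where reduceByKey delegates to combineByKey):

// Inside the main method of StreamingWordCountSelfScala above, where ssc is the StreamingContext
val lines  = ssc.socketTextStream("localhost", 9999)   // SocketInputDStream
val words  = lines.flatMap(_.split(" "))                // FlatMappedDStream
val pairs  = words.map((_, 1))                          // MappedDStream
val counts = pairs.reduceByKey(_ + _)                   // ShuffledDStream, built via combineByKey
counts.print()                                          // registers a ForEachDStream as an outputStream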

The author scanned DStream and all its subclasses: only DStream defines getOrCompute, and no subclass overrides it. So the call resolves to ShuffledDStream.getOrCompute, i.e. the implementation inherited from DStream.

In the common case the RDD for this batch does not exist yet, so the orElse block is executed:

// DStream.scala line 338
/**
 * Get the RDD corresponding to the given time; either retrieve it from cache
 * or compute-and-cache it.
 */
private[streaming] final def getOrCompute(time: Time): Option[RDD[T]] = {
  // If RDD was already generated, then retrieve it from HashMap,
  // or else compute the RDD
  generatedRDDs.get(time).orElse {
    // Compute the RDD if time is valid (e.g. correct time in a sliding window)
    // of RDD generation, else generate nothing.
    if (isTimeValid(time)) {

      val rddOption = createRDDWithLocalProperties(time, displayInnerRDDOps = false) {
        // Disable checks for existing output directories in jobs launched by the streaming
        // scheduler, since we may need to write output to an existing directory during checkpoint
        // recovery; see SPARK-4835 for more details. We need to have this call here because
        // compute() might cause Spark jobs to be launched.
        PairRDDFunctions.disableOutputSpecValidation.withValue(true) {
          compute(time)  // line 352
        }
      }

      rddOption.foreach { case newRDD =>
        // Register the generated RDD for caching and checkpointing
        if (storageLevel != StorageLevel.NONE) {
          newRDD.persist(storageLevel)
          logDebug(s"Persisting RDD ${newRDD.id} for time $time to $storageLevel")
        }
        if (checkpointDuration != null && (time - zeroTime).isMultipleOf(checkpointDuration)) {
          newRDD.checkpoint()
          logInfo(s"Marking RDD ${newRDD.id} for time $time for checkpointing")
        }
        generatedRDDs.put(time, newRDD)
      }
      rddOption
    } else {
      None
    }
  }
}

ShuffledDStream.compute in turn calls parent.getOrCompute:

// ShuffledDStream.scala line 40
override def compute(validTime: Time): Option[RDD[(K, C)]] = {
  parent.getOrCompute(validTime) match {
    case Some(rdd) => Some(rdd.combineByKey[C](
      createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine))
    case None => None
  }
}

MappedDStream's compute again calls the parent's getOrCompute, which in turn calls compute, and so on up the chain.

// MappedDStream.scala line 34
override def compute(validTime: Time): Option[RDD[U]] = {
  parent.getOrCompute(validTime).map(_.map[U](mapFunc))
}

FlatMappedDStream's compute likewise calls the parent's getOrCompute, which again calls compute, continuing the recursion.

// FlatMappedDStream.scala line 34
override def compute(validTime: Time): Option[RDD[U]] = {
  parent.getOrCompute(validTime).map(_.flatMap(flatMapFunc))
}

This continues until the DStream is the SocketInputDStream, i.e. the input stream, whose compute is inherited from its parent class ReceiverInputDStream.

Ignore the logic in the if branch for now and go straight to the else block.

Enter createBlockRDD:

// ReceiverInputDStream.scala line 69
override def compute(validTime: Time): Option[RDD[T]] = {
  val blockRDD = {

    if (validTime < graph.startTime) {
      // If this is called for any time before the start time of the context,
      // then this returns an empty RDD. This may happen when recovering from a
      // driver failure without any write ahead log to recover pre-failure data.
      new BlockRDD[T](ssc.sc, Array.empty)
    } else {
      // Otherwise, ask the tracker for all the blocks that have been allocated to this stream
      // for this batch
      val receiverTracker = ssc.scheduler.receiverTracker
      val blockInfos = receiverTracker.getBlocksOfBatch(validTime).getOrElse(id, Seq.empty)

      // Register the input blocks information into InputInfoTracker
      val inputInfo = StreamInputInfo(id, blockInfos.flatMap(_.numRecords).sum)
      ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

      // Create the BlockRDD
      createBlockRDD(validTime, blockInfos)
    }
  }
  Some(blockRDD)
}
In createBlockRDD below, new BlockRDD[T](ssc.sc, validBlockIds) at line 127 is where the RDD is actually instantiated.
// ReceiverInputDStream.scala line 94
private[streaming] def createBlockRDD(time: Time, blockInfos: Seq[ReceivedBlockInfo]): RDD[T] = {

  if (blockInfos.nonEmpty) {
    val blockIds = blockInfos.map { _.blockId.asInstanceOf[BlockId] }.toArray

    // Are WAL record handles present with all the blocks
    val areWALRecordHandlesPresent = blockInfos.forall { _.walRecordHandleOption.nonEmpty }

    if (areWALRecordHandlesPresent) {
      // If all the blocks have WAL record handle, then create a WALBackedBlockRDD
      val isBlockIdValid = blockInfos.map { _.isBlockIdValid() }.toArray
      val walRecordHandles = blockInfos.map { _.walRecordHandleOption.get }.toArray
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, blockIds, walRecordHandles, isBlockIdValid)
    } else {
      // Else, create a BlockRDD. However, if there are some blocks with WAL info but not
      // others then that is unexpected and log a warning accordingly.
      if (blockInfos.find(_.walRecordHandleOption.nonEmpty).nonEmpty) {
        if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
          logError("Some blocks do not have Write Ahead Log information; " +
            "this is unexpected and data may not be recoverable after driver failures")
        } else {
          logWarning("Some blocks have Write Ahead Log information; this is unexpected")
        }
      }
      val validBlockIds = blockIds.filter { id =>
        ssc.sparkContext.env.blockManager.master.contains(id)
      }
      if (validBlockIds.size != blockIds.size) {
        logWarning("Some blocks could not be recovered as they were not found in memory. " +
          "To prevent such data loss, enabled Write Ahead Log (see programming guide " +
          "for more details.")
      }
      new BlockRDD[T](ssc.sc, validBlockIds) // line 127
    }
  } else {
    // If no block is ready now, creating WriteAheadLogBackedBlockRDD or BlockRDD
    // according to the configuration
    if (WriteAheadLogUtils.enableReceiverLog(ssc.conf)) {
      new WriteAheadLogBackedBlockRDD[T](
        ssc.sparkContext, Array.empty, Array.empty, Array.empty)
    } else {
      new BlockRDD[T](ssc.sc, Array.empty)
    }
  }
}

This BlockRDD is a subclass of Spark Core's RDD with no parent RDDs (its dependency list is Nil). At this point the RDD has been fully instantiated.

// BlockRDD.scala line 30
private[spark]
class BlockRDD[T: ClassTag](sc: SparkContext, @transient val blockIds: Array[BlockId])
  extends RDD[T](sc, Nil)

// RDD.scala line 74
abstract class RDD[T: ClassTag](
    @transient private var _sc: SparkContext,
    @transient private var deps: Seq[Dependency[_]]
  ) extends Serializable with Logging

So, unwinding the recursion (and stripping the Option wrappers for readability), the RDD that is finally assembled is:

new BlockRDD[T](ssc.sc, validBlockIds).flatMap(flatMapFunc).map(mapFunc).combineByKey[C](createCombiner, mergeValue, mergeCombiner, partitioner, mapSideCombine)

In our example this becomes:

new BlockRDD[String](ssc.sc, validBlockIds).flatMap(_.split(" ")).map((_, 1)).combineByKey[Int](v => v, (c, v) => c + v, (c1, c2) => c1 + c2, partitioner, mapSideCombine = true)

And the final job produced by print() is the function:

() => foreachFunc(new BlockRDD[String](ssc.sc, validBlockIds).flatMap(_.split(" ")).map((_, 1)).combineByKey[Int](v => v, (c, v) => c + v, (c1, c2) => c1 + c2, partitioner, mapSideCombine = true), time)

where foreachFunc is the printing function defined at DStream.scala line 766.
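To see what one 5-second batch effectively computes, here is a hedged sketch written with plain RDD operations. BlockRDD is private[spark], so this cannot be compiled as user code; names such as validBlockIds and partitioner are taken from the snippets above, and take(11) only approximates what print() (default num = 10) does.

// Conceptual sketch of a single batch, under the assumptions stated above
val batchRDD: RDD[String] = new BlockRDD[String](ssc.sc, validBlockIds)  // blocks received in this batch

val counts = batchRDD
  .flatMap(_.split(" "))        // FlatMappedDStream's contribution
  .map((_, 1))                  // MappedDStream's contribution
  .combineByKey[Int](v => v, _ + _, _ + _, partitioner, mapSideCombine = true)  // ShuffledDStream's contribution

// Roughly what print()'s foreachFunc does with the batch RDD:
counts.take(11).foreach(println)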

The RDD has now been materialized through the DStream chain. Looking back, it should be clearer why a DStream is described as a template for RDDs.

But we're not done yet: back in ForEachDStream.scala line 46, the function above is passed as a constructor argument to a Job.
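For reference, the Job class itself is tiny (paraphrased from the 1.6 source, with most members trimmed): it holds the batch time plus the zero-argument function, and run() simply wraps the call in a Try.

// Job.scala (1.6), paraphrased -- not a verbatim quote
private[streaming]
class Job(val time: Time, func: () => _) {
  private var _result: Try[_] = null

  def run() {
    _result = Try(func())
  }
  // id, outputOpId, call site, timing fields, etc. omitted
}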


------------- Supplementary material -------------

Supplementary: a flowchart of the Job creation process, from a fellow student's blog in the customized-edition course, slightly modified. (Figure omitted.)


Supplementary: a flowchart of how the RDD DAG is built by tracing back along the lineage from the OutputDStream, from a fellow student's blog in the customized-edition course. (Figure omitted.)


Supplementary: the same lineage trace-back from the OutputDStream, applied to this example's RDD DAG, also from a fellow student's blog. (Figure omitted.)


The next post will analyze Job submission from the source code. Stay tuned.


Reposted from: https://my.oschina.net/corleone/blog/672999

