Analyzing the performance problems caused by too many shards in an Elasticsearch cluster

1. Symptoms

When we came in this morning, the ES logging cluster was in a bad state: the cluster was repeatedly initiating master elections, it could not serve data queries, and log ingestion was significantly delayed.

2. Root cause

Relevant logs

The ES cluster logs are shown below.

Starting at 00:00:51, the nodes began timing out while communicating with the then-current master:

Time          Level  Message
00:00:51.140  WARN   Received response for a request that has timed out, sent [12806ms] ago, timed out [2802ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [864657514]
00:01:24.912  WARN   Received response for a request that has timed out, sent [12205ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [143113108]
00:01:24.912  WARN   Received response for a request that has timed out, sent [12206ms] ago, timed out [2201ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [835936906]
00:01:27.731  WARN   Received response for a request that has timed out, sent [20608ms] ago, timed out [10604ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [137999525]
00:01:44.686  WARN   Received response for a request that has timed out, sent [18809ms] ago, timed out [8804ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [143114372]
00:01:44.686  WARN   Received response for a request that has timed out, sent [18643ms] ago, timed out [8639ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [835938242]
00:01:56.523  WARN   Received response for a request that has timed out, sent [20426ms] ago, timed out [10423ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [137250155]
00:01:56.523  WARN   Received response for a request that has timed out, sent [31430ms] ago, timed out [21426ms] ago, action [internal:coordination/fault_detection/leader_check], node [{hot}{tUvNI22CRAanSsJdircGlA}{crDi96kOQl6J944HZqNB0w}{131}{131:9300}{dim}{xpack.installed=true, box_type=hot}], id [137249119]

These timeouts triggered each node to initiate a new master election (every node logged the same leader_check timeout entries shown above).


A new master was elected, but leadership kept flapping among the three master-eligible nodes, and the cluster never stabilized:

Time          Level  Message
00:52:37.264  DEBUG  executing cluster state update for [elected-as-master ([2] nodes joined)[{hot}{g7zfvt_3QI6cW6ugxIkSRw}{bELGusphTpy6RBeArNo8MA}{129}{129:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, {hot}{GDyoKXPmQyC42JBjNP0tzA}{llkC7-LgQbi4BdcPiX_oOA}{130}{130:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
00:52:37.264  TRACE  will process [elected-as-master ([2] nodes joined)[_FINISH_ELECTION_]]
00:52:37.264  TRACE  will process [elected-as-master ([2] nodes joined)[_BECOME_MASTER_TASK_]]
00:52:37.264  TRACE  will process [elected-as-master ([2] nodes joined)[{hot}{g7zfvt_3QI6cW6ugxIkSRw}{bELGusphTpy6RBeArNo8MA}{129}{129:9300}{dim}{xpack.installed=true, box_type=hot} elect leader]]
00:52:37.264  TRACE  will process [elected-as-master ([2] nodes joined)[{hot}{GDyoKXPmQyC42JBjNP0tzA}{llkC7-LgQbi4BdcPiX_oOA}{130}{130:9300}{dim}{xpack.installed=true, box_type=hot} elect leader]]
00:52:37.584  DEBUG  took [200ms] to compute cluster state update for [elected-as-master ([2] nodes joined)[{hot}{g7zfvt_3QI6cW6ugxIkSRw}{bELGusphTpy6RBeArNo8MA}{129}{129:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, {hot}{GDyoKXPmQyC42JBjNP0tzA}{llkC7-LgQbi4BdcPiX_oOA}{130}{130:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]
00:52:37.828  TRACE  cluster state updated, source [elected-as-master ([2] nodes joined)[{hot}{g7zfvt_3QI6cW6ugxIkSRw}{bELGusphTpy6RBeArNo8MA}{129}{129:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, {hot}{GDyoKXPmQyC42JBjNP0tzA}{llkC7-LgQbi4BdcPiX_oOA}{130}{130:9300}{dim}{xpack.installed=true, box_type=hot} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]

Problem analysis

Putting together the logs, the cluster state, and our recent changes, the trigger became clear. To fix an earlier problem of uneven SSD I/O (some disks were hitting their I/O limits), we had spread each index's shards evenly across every SSD on every node in order to balance I/O, which raised the number of shards allocated per node. This did eliminate the hot-disk problem and balanced disk I/O effectively, but it also made the shard count grow rapidly: the cluster previously held about 20,000 shards in total, and at the time of the incident it was close to 60,000. That pushed us into the following ES bug (fixed in ES 7.6 and later), so operations that normally complete in a short time (freeze index, delete index, create index) could not finish for a long time, the master node became severely overloaded, and large numbers of requests timed out:

  • https://github.com/elastic/elasticsearch/pull/47817

  • https://github.com/elastic/elasticsearch/issues/46941

  • https://github.com/elastic/elasticsearch/pull/48579

These three issues all describe the same problem. To decide whether a shard on a node needs to move, ES has to determine how much disk space relocating shards occupy, which means scanning every shard in the cluster to find those in RELOCATING or INITIALIZING state and summing their sizes. In the unfixed versions this scan is repeated for every single shard, and all of that work is computed on the master node in real time. As the cluster's shard count grows, the master's computation grows steeply, the master slows down, and the following failures cascade:

(1) The overloaded master could not answer other nodes' requests in time. The resulting timeouts triggered a new master election, but the newly elected master could not carry the cluster's workload either, timed out again, and triggered yet another election. This loop repeated until the cluster was effectively down.

(2) The slow master caused a large backlog of pending tasks (freezing indices, creating indices, deleting indices, shard relocations, and so on).

The problem was first discovered and reported to the community by Huawei engineers. The relevant stack trace:

"elasticsearch[iZ2ze1ymtwjqspsn3jco0tZ][masterService#updateTask][T#1]" #39 daemon prio=5 os_prio=0 cpu=150732651.74ms elapsed=258053.43s tid=0x00007f7c98012000 nid=0x3006 runnable [0x00007f7ca28f8000]
   java.lang.Thread.State: RUNNABLE
        at java.util.Collections$UnmodifiableCollection$1.hasNext(java.base@13/Collections.java:1046)
        at org.elasticsearch.cluster.routing.RoutingNode.shardsWithState(RoutingNode.java:148)
        at org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider.sizeOfRelocatingShards(DiskThresholdDecider.java:111)
        at org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider.getDiskUsage(DiskThresholdDecider.java:345)
        at org.elasticsearch.cluster.routing.allocation.decider.DiskThresholdDecider.canRemain(DiskThresholdDecider.java:290)
        at org.elasticsearch.cluster.routing.allocation.decider.AllocationDeciders.canRemain(AllocationDeciders.java:108)
        at org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator$Balancer.decideMove(BalancedShardsAllocator.java:668)
        at org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator$Balancer.moveShards(BalancedShardsAllocator.java:628)
        at org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator.allocate(BalancedShardsAllocator.java:123)
        at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:405)
        at org.elasticsearch.cluster.routing.allocation.AllocationService.reroute(AllocationService.java:370)
        at org.elasticsearch.cluster.metadata.MetaDataIndexStateService$1$1.execute(MetaDataIndexStateService.java:168)
        at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)
        at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:702)
        at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:324)
        at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:219)
        at org.elasticsearch.cluster.service.MasterService.access$000(MasterService.java:73)
        at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151)
        at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)
        at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)
        at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:703)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@13/ThreadPoolExecutor.java:1128)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@13/ThreadPoolExecutor.java:628)
        at java.lang.Thread.run(java.base@13/Thread.java:830)

    /**
     * Determine the shards with a specific state
     * @param states set of states which should be listed
     * @return List of shards
     */
    public List<ShardRouting> shardsWithState(ShardRoutingState... states) {
        List<ShardRouting> shards = new ArrayList<>();
        for (ShardRouting shardEntry : this) {
            for (ShardRoutingState state : states) {
                if (shardEntry.state() == state) {
                    shards.add(shardEntry);
                }
            }
        }
        return shards;
    }

shardsWithState iterates over every shard on the node and returns those matching the requested states. After PR #39499 landed in ES 7.2, even closed indices are included in this accounting, so the amount of traversal work grows sharply with the cluster's shard count and processing slows down.
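To make the cost concrete, here is a minimal, self-contained sketch of that scan. The `Shard` and `State` types below are hypothetical stand-ins, not the real ES classes; the point is the shape of the loop. Since the allocator performs a scan like this once per shard it evaluates, the total work during a reroute grows quadratically with the shard count:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for ES's ShardRouting/ShardRoutingState, for illustration only.
enum State { STARTED, INITIALIZING, RELOCATING }

record Shard(int id, State state) {}

public class ShardScanDemo {
    // Mirrors the shape of the unpatched shardsWithState: a full scan of the node's shards.
    static List<Shard> shardsWithState(List<Shard> allShards, State... states) {
        List<Shard> matches = new ArrayList<>();
        for (Shard s : allShards) {
            for (State wanted : states) {
                if (s.state() == wanted) {
                    matches.add(s);
                }
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<Shard> shards = new ArrayList<>();
        for (int i = 0; i < 60_000; i++) {
            // Mark one shard in a thousand as RELOCATING, the rest STARTED.
            shards.add(new Shard(i, i % 1000 == 0 ? State.RELOCATING : State.STARTED));
        }
        // One scan is cheap; the problem is that the allocator repeats it for every
        // shard it evaluates, so reroute cost is O(shards^2) at the cluster level.
        System.out.println(shardsWithState(shards, State.RELOCATING).size()); // prints 60
    }
}
```

At 60,000 shards that is on the order of 3.6 billion state checks per reroute, all executed on the master.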

The numbers below were published by the Elasticsearch team:

Shards  Nodes  Shards per node  Reroute time without relocations  Reroute time with relocations
60000   10     6000             ~250ms                            ~15000ms
60000   60     1000             ~250ms                            ~4000ms
10000   10     1000             ~60ms                             ~250ms

Even in the healthy case, reroute time climbs quickly as the cluster's shard count grows, which is why this code path needed optimizing.

Code improvement

To fix the problem, newer versions of ES change the structure of RoutingNode: two new LinkedHashSet fields, initializingShards and relocatingShards, store the shards in INITIALIZING and RELOCATING state respectively. The constructor gains the classification logic that places each shard into the matching set:

+   private final LinkedHashSet<ShardRouting> initializingShards;
+   private final LinkedHashSet<ShardRouting> relocatingShards;

    RoutingNode(String nodeId, DiscoveryNode node, LinkedHashMap<ShardId, ShardRouting> shards) {
        this.nodeId = nodeId;
        this.node = node;
        this.shards = shards;
+       this.relocatingShards = new LinkedHashSet<>();
+       this.initializingShards = new LinkedHashSet<>();
+       for (ShardRouting shardRouting : shards.values()) {
+           if (shardRouting.initializing()) {
+               initializingShards.add(shardRouting);
+           } else if (shardRouting.relocating()) {
+               relocatingShards.add(shardRouting);
+           }
+       }
+       assert invariant();
    }

Because RoutingNode now carries initializingShards and relocatingShards, its add, update, remove, numberOfShardsWithState, and shardsWithState methods must keep the two sets in sync, as follows:

    void add(ShardRouting shard) {
+       assert invariant();
        if (shards.containsKey(shard.shardId())) {
            throw new IllegalStateException("Trying to add a shard " + shard.shardId() + " to a node [" + nodeId
                + "] where it already exists. current [" + shards.get(shard.shardId()) + "]. new [" + shard + "]");
        }
        shards.put(shard.shardId(), shard);
+       if (shard.initializing()) {
+           initializingShards.add(shard);
+       } else if (shard.relocating()) {
+           relocatingShards.add(shard);
+       }
+       assert invariant();
    }

    void update(ShardRouting oldShard, ShardRouting newShard) {
+       assert invariant();
        if (shards.containsKey(oldShard.shardId()) == false) {
            // Shard was already removed by routing nodes iterator
            // TODO: change caller logic in RoutingNodes so that this check can go away
            return;
        }
        ShardRouting previousValue = shards.put(newShard.shardId(), newShard);
        assert previousValue == oldShard : "expected shard " + previousValue + " but was " + oldShard;
+       if (oldShard.initializing()) {
+           boolean exist = initializingShards.remove(oldShard);
+           assert exist : "expected shard " + oldShard + " to exist in initializingShards";
+       } else if (oldShard.relocating()) {
+           boolean exist = relocatingShards.remove(oldShard);
+           assert exist : "expected shard " + oldShard + " to exist in relocatingShards";
+       }
+       if (newShard.initializing()) {
+           initializingShards.add(newShard);
+       } else if (newShard.relocating()) {
+           relocatingShards.add(newShard);
+       }
+       assert invariant();
    }

    void remove(ShardRouting shard) {
+       assert invariant();
        ShardRouting previousValue = shards.remove(shard.shardId());
        assert previousValue == shard : "expected shard " + previousValue + " but was " + shard;
+       if (shard.initializing()) {
+           boolean exist = initializingShards.remove(shard);
+           assert exist : "expected shard " + shard + " to exist in initializingShards";
+       } else if (shard.relocating()) {
+           boolean exist = relocatingShards.remove(shard);
+           assert exist : "expected shard " + shard + " to exist in relocatingShards";
+       }
+       assert invariant();
    }

    public int numberOfShardsWithState(ShardRoutingState... states) {
+       if (states.length == 1) {
+           if (states[0] == ShardRoutingState.INITIALIZING) {
+               return initializingShards.size();
+           } else if (states[0] == ShardRoutingState.RELOCATING) {
+               return relocatingShards.size();
+           }
+       }
        int count = 0;
        for (ShardRouting shardEntry : this) {
            for (ShardRoutingState state : states) {
                if (shardEntry.state() == state) {
                    count++;
                }
            }
        }
        return count;
    }
    public List<ShardRouting> shardsWithState(String index, ShardRoutingState... states) {
        List<ShardRouting> shards = new ArrayList<>();
+       if (states.length == 1) {
+           if (states[0] == ShardRoutingState.INITIALIZING) {
+               for (ShardRouting shardEntry : initializingShards) {
+                   if (shardEntry.getIndexName().equals(index) == false) {
+                       continue;
+                   }
+                   shards.add(shardEntry);
+               }
+               return shards;
+           } else if (states[0] == ShardRoutingState.RELOCATING) {
+               for (ShardRouting shardEntry : relocatingShards) {
+                   if (shardEntry.getIndexName().equals(index) == false) {
+                       continue;
+                   }
+                   shards.add(shardEntry);
+               }
+               return shards;
+           }
+       }
        for (ShardRouting shardEntry : this) {
            if (!shardEntry.getIndexName().equals(index)) {
                continue;
            }
            for (ShardRoutingState state : states) {
                if (shardEntry.state() == state) {
                    shards.add(shardEntry);
                }
            }
        }
        return shards;
    }
    public int numberOfOwningShards() {
-       int count = 0;
-       for (ShardRouting shardEntry : this) {
-           if (shardEntry.state() != ShardRoutingState.RELOCATING) {
-               count++;
-           }
-       }
-
-       return count;
+       return shards.size() - relocatingShards.size();
    }

+   private boolean invariant() {
+       // initializingShards must be consistent with the shards map
+       Collection<ShardRouting> shardRoutingsInitializing =
+           shards.values().stream().filter(ShardRouting::initializing).collect(Collectors.toList());
+       assert initializingShards.size() == shardRoutingsInitializing.size();
+       assert initializingShards.containsAll(shardRoutingsInitializing);
+
+       // relocatingShards must be consistent with the shards map
+       Collection<ShardRouting> shardRoutingsRelocating =
+           shards.values().stream().filter(ShardRouting::relocating).collect(Collectors.toList());
+       assert relocatingShards.size() == shardRoutingsRelocating.size();
+       assert relocatingShards.containsAll(shardRoutingsRelocating);
+
+       return true;
+   }
  • assert invariant() is called at the start and end of add, update, and remove, which guarantees that initializingShards and relocatingShards always match the actual set of INITIALIZING and RELOCATING shards. Note, however, that invariant() itself iterates over all shards, so as the shard count grows, each add/update/remove gets more expensive whenever assertions are enabled (Java assertions are disabled by default in production JVMs, so this cost is mostly paid in test builds).

  • The fix stores the INITIALIZING and RELOCATING shards in two LinkedHashSet structures and updates them in step with every shard change, eliminating the need to recount all shards on every call and greatly improving efficiency. The bug affects ES 7.2 through 7.5 and is very likely to be triggered once a cluster exceeds roughly 50,000 shards; it was fixed in ES 7.6.
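The effect of the fix can be illustrated with a small, self-contained sketch. The types below are hypothetical, not the real org.elasticsearch.cluster.routing.RoutingNode; the point is that keeping per-state sets in step with the shard map turns a state count into a constant-time size() lookup instead of a full traversal:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-in for a shard's routing state.
enum ShardState { STARTED, INITIALIZING, RELOCATING }

public class IndexedRoutingNode {
    private final Map<Integer, ShardState> shards = new LinkedHashMap<>();
    private final Set<Integer> initializingShards = new LinkedHashSet<>();
    private final Set<Integer> relocatingShards = new LinkedHashSet<>();

    // Keep the per-state index sets in sync on every mutation, as the ES fix does.
    void add(int shardId, ShardState state) {
        shards.put(shardId, state);
        if (state == ShardState.INITIALIZING) {
            initializingShards.add(shardId);
        } else if (state == ShardState.RELOCATING) {
            relocatingShards.add(shardId);
        }
    }

    void remove(int shardId) {
        shards.remove(shardId);
        initializingShards.remove(shardId); // no-op if the shard was in another state
        relocatingShards.remove(shardId);
    }

    // O(1): no traversal of the full shard map.
    int numberOfInitializing() { return initializingShards.size(); }
    int numberOfRelocating() { return relocatingShards.size(); }

    public static void main(String[] args) {
        IndexedRoutingNode node = new IndexedRoutingNode();
        node.add(1, ShardState.RELOCATING);
        node.add(2, ShardState.STARTED);
        node.add(3, ShardState.INITIALIZING);
        System.out.println(node.numberOfRelocating());   // prints 1
        node.remove(1);
        System.out.println(node.numberOfRelocating());   // prints 0
    }
}
```

With this structure, the per-shard query that used to cost O(shards) becomes O(1), so the total reroute cost drops from quadratic to linear in the shard count.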

3. Remediation

To restore service quickly at the time, we restarted the cluster, but task processing remained slow and the full recovery took a long time. Our follow-up measures were:

  • Temporarily set the cluster parameter "cluster.routing.allocation.disk.include_relocations": "false" (not recommended: the parameter was deprecated as of ES 7.5, and with it disabled the disk-usage calculation is wrong when utilization approaches the high watermark, which can trigger frequent data relocations)

  • Reduce the cluster's shard count by shortening the online query window to the most recent 20 days; the cluster's total shard count is now kept at around 50,000
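For reference, the temporary setting above can be applied through the cluster settings API (shown here in Kibana Dev Tools syntax as a transient setting; use "persistent" instead if it must survive a full-cluster restart):

```
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.include_relocations": "false"
  }
}
```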

The measures above only mitigate the problem rather than fix it at the root. A real fix requires the following:

  • Upgrading ES to a version in which the bug is fixed

  • Keeping the cluster's total shard count within a reasonable range

