Installing BenchmarkSQL on openEuler

BenchmarkSQL is an open-source tool for evaluating database performance. It implements the TPC-C benchmark scenario defined by the Transaction Processing Performance Council (TPC), which measures a database's ability to handle large volumes of concurrent transactions. The TPC-C scenario models the business environment of a typical wholesale distributor, including new orders, payments, stock-level queries, and other operations.

openEuler version: openEuler 22.03 (LTS-SP4)

Dependencies:

JDK, Ant, and the R language.

Installing the JDK is not covered here; there are plenty of tutorials online.

Installing Ant:

Upload the Ant tarball to the server, then extract it:

#tar -xvzf apache-ant-1.10.15-bin.tar.gz

Set the environment variables:

Edit /etc/profile and append the following:

export ANT_HOME=/opt/software/ant/apache-ant-1.10.15
export PATH=$PATH:$ANT_HOME/bin

Run the following to make the new environment variables take effect:

#source /etc/profile

Verify the installation:

#ant -version

Installing the R language:

1. Upload the source tarball (R-4.4.2.tar.gz) to the server.

2. Extract R-4.4.2.tar.gz:

#tar -xvf R-4.4.2.tar.gz

3. Enter the extracted directory and configure, compile, and install (this takes a while, so be patient):

#./configure && make && make install

4. Verify the installation:

#R --version

Three errors commonly come up while configuring and compiling R:

Error 1: configure: error: --with-x=yes (default) and X11 headers/libs are not available

Fix: yum -y install xorg-x11-server-devel libX11-devel libXt-devel

Error 2: configure: error: "liblzma library and headers are required"

Fix: yum install xz-devel.x86_64

Error 3: configure: error: libcurl >= 7.28.0 library and headers are required with support for https

Fix: yum install libcurl-devel
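To avoid hitting these one at a time, you can also install all of the above dependencies in a single command before running configure (same packages as in the three fixes):

#yum -y install xorg-x11-server-devel libX11-devel libXt-devel xz-devel libcurl-devel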

Installing BenchmarkSQL:

Create the database and user:

create user benchmarksql with password 'Benchmark123';

create database benchmarksql owner benchmarksql;

Grant the user the required privileges (creating tables, creating indexes, and so on). For simplicity, this article just promotes the user to administrator:

alter user benchmarksql with SYSADMIN;
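Before moving on, you can optionally confirm that the new account can log in (a sketch assuming openGauss's gsql client and the port 15000 used in the connection string later in this article):

#gsql -d benchmarksql -p 15000 -U benchmarksql -W Benchmark123 -c "select 1;"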

Extract BenchmarkSQL:

Upload the BenchmarkSQL archive to the server, then extract it:

#unzip benchmarksql-5.0.zip

Build BenchmarkSQL:

Enter the extracted BenchmarkSQL directory and run:

#ant

Create the configuration file:

Enter the run directory:

#cd run

#cp props.pg my_postgres.properties

Edit the configuration file:

#vi my_postgres.properties

Set the following values:

db=postgres

driver=org.postgresql.Driver

conn=jdbc:postgresql://localhost:15000/benchmarksql

user=benchmarksql

password=Benchmark123

osCollectorDevices=net_ens33 blk_sda

  • Configuration file reference (a complete example follows this list):
    • db=postgres // database type: postgres or mysql
    • driver=org.postgresql.Driver // JDBC driver; for MySQL 8.0 and later it is com.mysql.cj.jdbc.Driver
    • conn=jdbc:postgresql://localhost:5432/postgres // PostgreSQL connection string; normally change localhost to the PG server's IP, 5432 to the PG service port, and postgres to the name of the test database
    • user=benchmarksql // database user; the default is usually kept, which is why the benchmarksql user is created in the database beforehand
    • password=PWbmsql // password for the user above
    • warehouses=1 // number of warehouses; size it to the server's memory. Each warehouse takes roughly 100 MB and represents an independent business unit (per the TPC-C standard)
    • loadWorkers=4 // number of worker processes used to load the initial data; the default is 4. Load speed improves as workers are added, so adjust it to the actual environment
    • terminals=1 // number of terminals, i.e. concurrent clients; usually set to 2-6 times the total number of CPU threads
    • runTxnsPerTerminal=10 // fixed number of transactions each terminal runs. For example, at 10 transactions per terminal with 32 terminals, the test ends after 320 transactions in total. When this is non-zero, the runMins parameter below must be set to 0
    • runMins=0 // total test duration in minutes; if runMins is 60, the test stops after one hour. When this is non-zero, runTxnsPerTerminal must be 0. The two parameters cannot both be positive: runMins bounds the test by wall-clock time, while runTxnsPerTerminal bounds it by total transaction count
    • limitTxnsPerMin=300 // cap on total transactions per minute. The cap interacts with the terminals parameter: if the terminal count exceeds limitTxnsPerMin, the cap cannot hold — with 1000 concurrent terminals all firing at once, a 300-per-minute limit is meaningless. For the cap to be effective, set it higher than the terminal count (or deliberately let it lapse). The tests here use the default of 300
    • terminalWarehouseFixed=true // binding mode between terminals and warehouses. true runs the 4.x-compatible mode in which every terminal is pinned to a fixed warehouse; false spreads the load evenly across the whole database. TPC-C requires each terminal to be bound to one warehouse, so the default of true is normally used
    • // The following five weights must sum to 100. The defaults — 45, 43, 4, 4 & 4 — match the mix defined by TPC-C; in practice they can be adjusted to model other scenarios.
      newOrderWeight=45
      paymentWeight=43
      orderStatusWeight=4
      deliveryWeight=4
      stockLevelWeight=4
    • resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS // result output directory; no change needed. A folder named my_result_xxxx is created under the run directory
    • osCollectorScript=./misc/os_collector_linux.py // OS statistics collection script; no change needed, but the OS must have a Python environment
    • osCollectorInterval=1 // OS statistics collection interval, default 1 second
    • //osCollectorSSHAddr=user@dbhost // host from which OS statistics are collected. Leave it commented out when testing a local database; to test a remote server, fill in the user and host name
    • osCollectorDevices=net_ens33 blk_sda // NIC and disk device names on the monitored server. For example, if ifconfig shows the NIC used by the test is ens33, set net_ens33 (the net_ prefix is fixed); if df -h shows the database data directory is on /dev/sdb (e.g. "/dev/sdb 33T 18T 16T 54% /hgdata"), set blk_sdb (the blk_ prefix is fixed)
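Putting the settings above together, a complete my_postgres.properties might look like this (a sketch matching this article's environment — the port 15000, password, warehouses=10, and device names are the values used in the run shown below):

db=postgres
driver=org.postgresql.Driver
conn=jdbc:postgresql://localhost:15000/benchmarksql
user=benchmarksql
password=Benchmark123
warehouses=10
loadWorkers=4
terminals=1
runTxnsPerTerminal=10
runMins=0
limitTxnsPerMin=300
terminalWarehouseFixed=true
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
osCollectorScript=./misc/os_collector_linux.py
osCollectorInterval=1
//osCollectorSSHAddr=user@dbhost
osCollectorDevices=net_ens33 blk_sda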

Create the database schema and load the initial data:

In the run directory, run:

./runDatabaseBuild.sh my_postgres.properties

On success, the output looks like the following:

[omm@hostName1 run]$ ./runDatabaseBuild.sh my_postgres.properties
# ------------------------------------------------------------
# Loading SQL file ./sql.common/tableCreates.sql
# ------------------------------------------------------------
create table bmsql_config (
cfg_name    varchar(30) primary key,
cfg_value   varchar(50)
);
create table bmsql_warehouse (
w_id        integer   not null,
w_ytd       decimal(12,2),
w_tax       decimal(4,4),
w_name      varchar(10),
w_street_1  varchar(20),
w_street_2  varchar(20),
w_city      varchar(20),
w_state     char(2),
w_zip       char(9)
);
create table bmsql_district (
d_w_id       integer       not null,
d_id         integer       not null,
d_ytd        decimal(12,2),
d_tax        decimal(4,4),
d_next_o_id  integer,
d_name       varchar(10),
d_street_1   varchar(20),
d_street_2   varchar(20),
d_city       varchar(20),
d_state      char(2),
d_zip        char(9)
);
create table bmsql_customer (
c_w_id         integer        not null,
c_d_id         integer        not null,
c_id           integer        not null,
c_discount     decimal(4,4),
c_credit       char(2),
c_last         varchar(16),
c_first        varchar(16),
c_credit_lim   decimal(12,2),
c_balance      decimal(12,2),
c_ytd_payment  decimal(12,2),
c_payment_cnt  integer,
c_delivery_cnt integer,
c_street_1     varchar(20),
c_street_2     varchar(20),
c_city         varchar(20),
c_state        char(2),
c_zip          char(9),
c_phone        char(16),
c_since        timestamp,
c_middle       char(2),
c_data         varchar(500)
);
create sequence bmsql_hist_id_seq;
create table bmsql_history (
hist_id  integer,
h_c_id   integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id   integer,
h_w_id   integer,
h_date   timestamp,
h_amount decimal(6,2),
h_data   varchar(24)
);
create table bmsql_new_order (
no_w_id  integer   not null,
no_d_id  integer   not null,
no_o_id  integer   not null
);
create table bmsql_oorder (
o_w_id       integer      not null,
o_d_id       integer      not null,
o_id         integer      not null,
o_c_id       integer,
o_carrier_id integer,
o_ol_cnt     integer,
o_all_local  integer,
o_entry_d    timestamp
);
create table bmsql_order_line (
ol_w_id         integer   not null,
ol_d_id         integer   not null,
ol_o_id         integer   not null,
ol_number       integer   not null,
ol_i_id         integer   not null,
ol_delivery_d   timestamp,
ol_amount       decimal(6,2),
ol_supply_w_id  integer,
ol_quantity     integer,
ol_dist_info    char(24)
);
create table bmsql_item (
i_id     integer      not null,
i_name   varchar(24),
i_price  decimal(5,2),
i_data   varchar(50),
i_im_id  integer
);
create table bmsql_stock (
s_w_id       integer       not null,
s_i_id       integer       not null,
s_quantity   integer,
s_ytd        integer,
s_order_cnt  integer,
s_remote_cnt integer,
s_data       varchar(50),
s_dist_01    char(24),
s_dist_02    char(24),
s_dist_03    char(24),
s_dist_04    char(24),
s_dist_05    char(24),
s_dist_06    char(24),
s_dist_07    char(24),
s_dist_08    char(24),
s_dist_09    char(24),
s_dist_10    char(24)
);
Starting BenchmarkSQL LoadData
driver=org.postgresql.Driver
conn=jdbc:postgresql://localhost:15000/benchmarksql
user=benchmarksql
password=***********
warehouses=10
loadWorkers=4
fileLocation (not defined)
csvNullValue (not defined - using default 'NULL')
Worker 000: Loading ITEM
Worker 001: Loading Warehouse      1
Worker 002: Loading Warehouse      2
Worker 003: Loading Warehouse      3
Worker 000: Loading ITEM done
Worker 000: Loading Warehouse      4
Worker 002: Loading Warehouse      2 done
Worker 002: Loading Warehouse      5
Worker 001: Loading Warehouse      1 done
Worker 001: Loading Warehouse      6
Worker 003: Loading Warehouse      3 done
Worker 003: Loading Warehouse      7
Worker 000: Loading Warehouse      4 done
Worker 000: Loading Warehouse      8
Worker 002: Loading Warehouse      5 done
Worker 002: Loading Warehouse      9
Worker 001: Loading Warehouse      6 done
Worker 001: Loading Warehouse     10
Worker 003: Loading Warehouse      7 done
Worker 000: Loading Warehouse      8 done
Worker 002: Loading Warehouse      9 done
Worker 001: Loading Warehouse     10 done
# ------------------------------------------------------------
# Loading SQL file ./sql.common/indexCreates.sql
# ------------------------------------------------------------
alter table bmsql_warehouse add constraint bmsql_warehouse_pkey
primary key (w_id);
alter table bmsql_district add constraint bmsql_district_pkey
primary key (d_w_id, d_id);
alter table bmsql_customer add constraint bmsql_customer_pkey
primary key (c_w_id, c_d_id, c_id);
create index bmsql_customer_idx1
on  bmsql_customer (c_w_id, c_d_id, c_last, c_first);
alter table bmsql_oorder add constraint bmsql_oorder_pkey
primary key (o_w_id, o_d_id, o_id);
create unique index bmsql_oorder_idx1
on  bmsql_oorder (o_w_id, o_d_id, o_carrier_id, o_id);
alter table bmsql_new_order add constraint bmsql_new_order_pkey
primary key (no_w_id, no_d_id, no_o_id);
alter table bmsql_order_line add constraint bmsql_order_line_pkey
primary key (ol_w_id, ol_d_id, ol_o_id, ol_number);
alter table bmsql_stock add constraint bmsql_stock_pkey
primary key (s_w_id, s_i_id);
alter table bmsql_item add constraint bmsql_item_pkey
primary key (i_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.common/foreignKeys.sql
# ------------------------------------------------------------
alter table bmsql_district add constraint d_warehouse_fkey
foreign key (d_w_id)
references bmsql_warehouse (w_id);
alter table bmsql_customer add constraint c_district_fkey
foreign key (c_w_id, c_d_id)
references bmsql_district (d_w_id, d_id);
alter table bmsql_history add constraint h_customer_fkey
foreign key (h_c_w_id, h_c_d_id, h_c_id)
references bmsql_customer (c_w_id, c_d_id, c_id);
alter table bmsql_history add constraint h_district_fkey
foreign key (h_w_id, h_d_id)
references bmsql_district (d_w_id, d_id);
alter table bmsql_new_order add constraint no_order_fkey
foreign key (no_w_id, no_d_id, no_o_id)
references bmsql_oorder (o_w_id, o_d_id, o_id);
alter table bmsql_oorder add constraint o_customer_fkey
foreign key (o_w_id, o_d_id, o_c_id)
references bmsql_customer (c_w_id, c_d_id, c_id);
alter table bmsql_order_line add constraint ol_order_fkey
foreign key (ol_w_id, ol_d_id, ol_o_id)
references bmsql_oorder (o_w_id, o_d_id, o_id);
alter table bmsql_order_line add constraint ol_stock_fkey
foreign key (ol_supply_w_id, ol_i_id)
references bmsql_stock (s_w_id, s_i_id);
alter table bmsql_stock add constraint s_warehouse_fkey
foreign key (s_w_id)
references bmsql_warehouse (w_id);
alter table bmsql_stock add constraint s_item_fkey
foreign key (s_i_id)
references bmsql_item (i_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.postgres/extraHistID.sql
# ------------------------------------------------------------
-- ----
-- Extra Schema objects/definitions for history.hist_id in PostgreSQL
-- ----
-- ----
--      This is an extra column not present in the TPC-C
--      specs. It is useful for replication systems like
--      Bucardo and Slony-I, which like to have a primary
--      key on a table. It is an auto-increment or serial
--      column type. The definition below is compatible
--      with Oracle 11g, using a sequence and a trigger.
-- ----
-- Adjust the sequence above the current max(hist_id)
select setval('bmsql_hist_id_seq', (select max(hist_id) from bmsql_history));
-- Make nextval(seq) the default value of the hist_id column.
alter table bmsql_history
alter column hist_id set default nextval('bmsql_hist_id_seq');
-- Add a primary key history(hist_id)
alter table bmsql_history add primary key (hist_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.postgres/buildFinish.sql
# ------------------------------------------------------------
-- ----
-- Extra commands to run after the tables are created, loaded,
-- indexes built and extra's created.
-- PostgreSQL version.
-- ----
vacuum analyze;

(Figure: the tables created and the volume of initial data loaded.)
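A quick way to sanity-check the load is to count rows in a few of the tables (a sketch; the expected sizes follow the TPC-C sizing rules described near the end of this article):

select count(*) from bmsql_warehouse;  -- equals warehouses (10 in this run)
select count(*) from bmsql_item;       -- fixed at 100000
select count(*) from bmsql_stock;      -- equals warehouses * 100000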

Run the benchmark:

In the run directory, run:

#./runBenchmark.sh my_postgres.properties

If the run fails with: print ",".join([str(x) for x in sysInfo])

This is because BenchmarkSQL ships Python 2 scripts while openEuler uses Python 3, and Python 3's print() function is not compatible with Python 2's print statement. Update the print usage in run/misc/os_collector_linux.py under the BenchmarkSQL installation directory; the corrected os_collector_linux.py is at the end of this article.

If the run fails with: ValueError: can't have unbuffered text I/O

Again a Python 2 vs. Python 3 issue: Python 3's open() does not allow unbuffered text-mode files. Update the open() usage in run/misc/os_collector_linux.py; see the corrected script at the end of this article.

If the run fails with: NameError: name 'lastStatData' is not defined

Another Python 2/3 incompatibility: variable scoping behaves differently under Python 3, so the script's shared state must be declared global explicitly. See the corrected script at the end of this article.
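The gist of the three fixes, excerpted from the corrected script in the appendix (the Python 2 originals are paraphrased in the comments):

# 1) The print statement becomes the print() function:
#    Python 2: print ",".join([str(x) for x in sysInfo])
print(",".join([str(x) for x in sysInfo]))

# 2) Python 3 rejects unbuffered text-mode files ("can't have unbuffered
#    text I/O"), so the /proc files are opened in binary mode and each
#    line is decoded when read:
procStatFD = open("/proc/stat", "rb")
for line in procStatFD:
    line = line.decode().split()

# 3) Every function that assigns to the shared state declares it global,
#    so names like lastStatData resolve under Python 3's scoping:
def initSystemUsage():
    global  procStatFD
    global  procVMStatFD
    global  lastStatData
    global  lastVMStatData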

On success, the output looks like the following:

[root@hostName1 run]# ./runBenchmark.sh my_postgres.properties
03:49:54,400 [main] INFO   jTPCC : Term-00,
03:49:54,411 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
03:49:54,412 [main] INFO   jTPCC : Term-00,      BenchmarkSQL v5.0
03:49:54,413 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
03:49:54,414 [main] INFO   jTPCC : Term-00,  (c) 2003, Raul Barbosa
03:49:54,416 [main] INFO   jTPCC : Term-00,  (c) 2004-2016, Denis Lussier
03:49:54,426 [main] INFO   jTPCC : Term-00,  (c) 2016, Jan Wieck
03:49:54,427 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
03:49:54,428 [main] INFO   jTPCC : Term-00,
03:49:54,468 [main] INFO   jTPCC : Term-00, db=postgres
03:49:54,468 [main] INFO   jTPCC : Term-00, driver=org.postgresql.Driver
03:49:54,468 [main] INFO   jTPCC : Term-00, conn=jdbc:postgresql://localhost:15000/benchmarksql
03:49:54,469 [main] INFO   jTPCC : Term-00, user=benchmarksql
03:49:54,471 [main] INFO   jTPCC : Term-00,
03:49:54,471 [main] INFO   jTPCC : Term-00, warehouses=10
03:49:54,472 [main] INFO   jTPCC : Term-00, terminals=1
03:49:54,481 [main] INFO   jTPCC : Term-00, runTxnsPerTerminal=10
03:49:54,481 [main] INFO   jTPCC : Term-00, limitTxnsPerMin=300
03:49:54,484 [main] INFO   jTPCC : Term-00, terminalWarehouseFixed=true
03:49:54,484 [main] INFO   jTPCC : Term-00,
03:49:54,484 [main] INFO   jTPCC : Term-00, newOrderWeight=45
03:49:54,485 [main] INFO   jTPCC : Term-00, paymentWeight=43
03:49:54,487 [main] INFO   jTPCC : Term-00, orderStatusWeight=4
03:49:54,487 [main] INFO   jTPCC : Term-00, deliveryWeight=4
03:49:54,487 [main] INFO   jTPCC : Term-00, stockLevelWeight=4
03:49:54,487 [main] INFO   jTPCC : Term-00,
03:49:54,488 [main] INFO   jTPCC : Term-00, resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
03:49:54,488 [main] INFO   jTPCC : Term-00, osCollectorScript=./misc/os_collector_linux.py
03:49:54,489 [main] INFO   jTPCC : Term-00,
03:49:54,559 [main] INFO   jTPCC : Term-00, copied my_postgres.properties to my_result_2025-06-19_034954/run.properties
03:49:54,581 [main] INFO   jTPCC : Term-00, created my_result_2025-06-19_034954/data/runInfo.csv for runID 11
03:49:54,582 [main] INFO   jTPCC : Term-00, writing per transaction results to my_result_2025-06-19_034954/data/result.csv
03:49:54,589 [main] INFO   jTPCC : Term-00, osCollectorScript=./misc/os_collector_linux.py
03:49:54,589 [main] INFO   jTPCC : Term-00, osCollectorInterval=1
03:49:54,589 [main] INFO   jTPCC : Term-00, osCollectorSSHAddr=null
03:49:54,590 [main] INFO   jTPCC : Term-00, osCollectorDevices=net_ens33 blk_sda
03:49:54,693 [main] INFO   jTPCC : Term-00,
03:49:55,172 [main] INFO   jTPCC : Term-00, C value for C_LAST during load: 118
03:49:55,173 [main] INFO   jTPCC : Term-00, C value for C_LAST this run:    221
03:49:55,173 [main] INFO   jTPCC : Term-00,
03:49:58,483 [Thread-1] INFO   jTPCC : Term-00, tpmTOTAL: 96    Memory Usage: 12MB / 118MB
03:49:58,490 [Thread-1] INFO   jTPCC : Term-00,
03:49:58,503 [Thread-1] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 20.24
03:49:58,507 [Thread-1] INFO   jTPCC : Term-00, Measured tpmTOTAL = 222.74
03:49:58,510 [Thread-1] INFO   jTPCC : Term-00, Session Start     = 2025-06-19 03:49:55
03:49:58,512 [Thread-1] INFO   jTPCC : Term-00, Session End       = 2025-06-19 03:49:58
03:49:58,514 [Thread-1] INFO   jTPCC : Term-00, Transaction Count = 10
[root@hostName1 run]#

When the test finishes, a new directory is created under the run directory, named in the format my_result_%tY-%tm-%td_%tH%tM%tS.

Use the generateReport.sh my_result_* script to create an HTML report with charts, for example:

#./generateReport.sh my_result_2025-06-19_034954

generateReport.sh requires the R language installed earlier.

Note: my openEuler server has no graphical desktop installed, so the charts cannot be rendered and an error like the following appears: Generating my_result_2025-06-19_034954/tpm_nopm.png ... Error in .External2(C_X11, paste0("png::", filename), g$width, g$height, :
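If you do need the charts on a headless server, one possible workaround (an untested sketch for this setup; the package name follows RHEL-family conventions) is to render under a virtual X server:

#yum -y install xorg-x11-server-Xvfb
#xvfb-run ./generateReport.sh my_result_2025-06-19_034954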

Alternatively, package the my_result_2025-06-19_034954 directory and download it to view locally:

#zip -r my_result_2025-06-19_034954.zip my_result_2025-06-19_034954/

Open the report file report.html; it looks something like the screenshot below (not reproduced here).

How to read the generated report:

1) Overall performance:

Overall tpmC: 124.67

Overall tpmTotal: 280.67

tpmC is the number of core (NEW_ORDER) transactions processed per minute.

tpmTotal is the total number of transactions of all types (NEW_ORDER, Payment, Order Status, Delivery, and Stock Level) per minute.

A tpmC of 124.67 means roughly 124.67 NEW_ORDER transactions were processed per minute; this is the single most important metric in the TPC-C benchmark.
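As a sanity check, the same arithmetic applies to the short run shown earlier: 10 transactions completed in roughly 2.7 seconds works out to 10 / (2.7 / 60) ≈ 222 transactions per minute, which matches the reported Measured tpmTOTAL = 222.74.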

2) Theoretical maximum:

Theoretical maximum: 12.86 NEW_ORDER transactions per minute per warehouse.

The TPC-C specification states that under ideal conditions each warehouse can sustain at most 12.86 NEW_ORDER transactions per minute. Reaching that maximum requires:

a perfect transaction mix (45% of transactions being NEW_ORDER), and

the system running with zero response time.

In practice, latency and load imbalance make this theoretical maximum very hard to reach.

3) Actual vs. theoretical performance:

124.67 tpmC is 969.414% of the theoretical maximum.

In other words, the measured result is about 9.7 times the per-warehouse theoretical maximum (124.67 / 12.86 ≈ 9.69). Relative to that idealized figure, the system performs far beyond expectations, suggesting the database configuration handles this particular load very well. Given real response times and load, results this far above the theoretical maximum generally indicate the system is highly efficient in areas such as hardware configuration, database tuning, and parallel processing.

4) The two generated charts:

transactions per minute, and

transaction latency.

TPC-C table structure

The TPC-C test database consists of nine tables, related as shown in the diagram below (not reproduced here); the number in each table box indicates how many rows that table holds. Adjusting the warehouse count W demonstrates the data scale the database can support:

    Warehouse (warehouse table; W is the number of warehouses and reflects the data scale the database can support)
    District (district table; W*10 — each warehouse supplies 10 sales districts)
    Customer (customer table; W*30k — each district serves 3,000 customers)
    Stock (stock table; W*100k — each warehouse maintains stock records for 100,000 items)
    Order (order table)
    New-Order (new-order table)
    Item (item table; fixed size of 100,000 rows)
    Order-Line (order-line table)
    History (history table)

TPC-C transaction model

TPC-C exercises five main transaction types:

New-Order (45%): a customer enters a new order;
Payment (43%): update a customer's account balance to reflect a payment;
Delivery (4%): deliver orders (simulated as a batch transaction);
Order-Status (4%): query the status of a customer's most recent order;
Stock-Level (4%): query warehouse stock levels to allow timely restocking.


Appendix:

The modified run/misc/os_collector_linux.py script:

#!/usr/bin/env python
# ----------------------------------------------------------------------
# os_collector_linux.py -
#
#   Script used to collect OS level resource utilization data like
#   CPU usage and disk IO.
#
#   This code is used in the jTPCCOSCollect class. It is launched as
#   a separate process, possibly via ssh(1) on the remote database
#   server. The ability of Python to receive a script to execute on
#   stdin allows us to execute this script via ssh(1) on the database
#   server without installing any programs/scripts there.
#
#   The command line arguments for this script are the runID, the
#   interval in seconds at which to collect information and a variable
#   number of devices in the form "blk_<devname>" "net_<devname>",
#   for example "blk_sda" for the first SCSI disk or "net_eth0".
#
#   The output on stdout is one line for CPU/VM info, followed by a
#   line for each of the specified devices in CSV format. The first
#   set of lines are the CSV headers. The output is prefixed with the
#   runID, elapsed_ms and for the devices the blk_ or net_ name that
#   was specified on the command line. This format makes it easy to
#   load the data into a result database where it can be analyzed
#   together with the BenchmarkSQL per transaction results and compared
#   to other benchmark runs.
#
#   It is the caller's responsibility to split the output lines into
#   separate result CSV files.
# ----------------------------------------------------------------------

import errno
import math
import os
import sys
import time


# ----
# main
# ----
def main(argv):
    global  deviceFDs
    global  lastDeviceData

    # ----
    # Get the runID and collection interval from the command line
    # ----
    runID = int(argv[0])
    interval = float(argv[1])

    # ----
    # Our start time is now. Since most of the information is deltas
    # we can only produce the first data after the first interval.
    # ----
    startTime = time.time()
    nextDue = startTime + interval

    # ----
    # Initialize CPU and vmstat collection and output the CSV header.
    # ----
    sysInfo = ['run', 'elapsed', ]
    sysInfo += initSystemUsage()
    print(",".join([str(x) for x in sysInfo]))

    # ----
    # Get all the devices from the command line.
    # ----
    devices = []
    deviceFDs = {}
    lastDeviceData = {}
    for dev in argv[2:]:
        if dev.startswith('blk_'):
            devices.append(dev)
        elif dev.startswith('net_'):
            devices.append(dev)
        else:
            raise Exception("unknown device type '" + dev + "'")

    # ----
    # Initialize usage collection per device depending on the type.
    # Output all the headers in the order, the devices are given.
    # ----
    for dev in devices:
        if dev.startswith('blk_'):
            devInfo = ['run', 'elapsed', 'device', ]
            devInfo += initBlockDevice(dev)
            print(",".join([str(x) for x in devInfo]))
        elif dev.startswith('net_'):
            devInfo = ['run', 'elapsed', 'device', ]
            devInfo += initNetDevice(dev)
            print(",".join([str(x) for x in devInfo]))

    # ----
    # Flush all header lines.
    # ----
    sys.stdout.flush()

    try:
        while True:
            # ----
            # Wait until our next collection interval and calculate the
            # elapsed time in milliseconds.
            # ----
            now = time.time()
            if nextDue > now:
                time.sleep(nextDue - now)
            elapsed = int((nextDue - startTime) * 1000.0)

            sysInfo = [runID, elapsed, ]
            sysInfo += getSystemUsage()
            print(",".join([str(x) for x in sysInfo]))

            # ----
            # Collect all device utilization data.
            # ----
            for dev in devices:
                if dev.startswith('blk_'):
                    devInfo = [runID, elapsed, dev, ]
                    devInfo += getBlockUsage(dev, interval)
                    print(",".join([str(x) for x in devInfo]))
                elif dev.startswith('net_'):
                    devInfo = [runID, elapsed, dev, ]
                    devInfo += getNetUsage(dev, interval)
                    print(",".join([str(x) for x in devInfo]))

            # ----
            # Bump the time when we are next due.
            # ----
            nextDue += interval
            sys.stdout.flush()

    # ----
    # Running on the command line for test purposes?
    # ----
    except KeyboardInterrupt:
        print("")
        return 0

    # ----
    # The OSCollector class will just close our stdout on the other
    # side, so this is expected.
    # ----
    except IOError as e:
        if e.errno == errno.EPIPE:
            return 0
        else:
            raise e


def initSystemUsage():
    global  procStatFD
    global  procVMStatFD
    global  lastStatData
    global  lastVMStatData

    procStatFD = open("/proc/stat", "rb")
    for line in procStatFD:
        line = line.decode().split()
        if line[0] == "cpu":
            lastStatData = [int(x) for x in line[1:]]
            break
    if len(lastStatData) != 10:
        raise Exception("cpu line in /proc/stat too short")

    procVMStatFD = open("/proc/vmstat", "rb")
    lastVMStatData = {}
    for line in procVMStatFD:
        line = line.decode().split()
        if line[0] in ['nr_dirty', ]:
            lastVMStatData['vm_' + line[0]] = int(line[1])
    if len(lastVMStatData.keys()) != 1:
        raise Exception("not all elements found in /proc/vmstat")

    return ['cpu_user', 'cpu_nice', 'cpu_system',
            'cpu_idle', 'cpu_iowait', 'cpu_irq',
            'cpu_softirq', 'cpu_steal',
            'cpu_guest', 'cpu_guest_nice',
            'vm_nr_dirty', ]


def getSystemUsage():
    global  procStatFD
    global  procVMStatFD
    global  lastStatData
    global  lastVMStatData

    procStatFD.seek(0, 0)
    for line in procStatFD:
        line = line.decode().split()
        if line[0] != "cpu":
            continue
        statData = [int(x) for x in line[1:]]
        deltaTotal = float(sum(statData) - sum(lastStatData))
        if deltaTotal == 0:
            result = [0.0 for x in statData]
        else:
            result = []
            for old, new in zip(lastStatData, statData):
                result.append(float(new - old) / deltaTotal)
        lastStatData = statData
        break

    procVMStatFD.seek(0, 0)
    newVMStatData = {}
    for line in procVMStatFD:
        line = line.decode().split()
        if line[0] in ['nr_dirty', ]:
            newVMStatData['vm_' + line[0]] = int(line[1])
    for key in ['vm_nr_dirty', ]:
        result.append(newVMStatData[key])

    return result


def initBlockDevice(dev):
    global  deviceFDs
    global  lastDeviceData

    devPath = os.path.join("/sys/block", dev[4:], "stat")
    deviceFDs[dev] = open(devPath, "rb")
    line = deviceFDs[dev].readline().decode().split()

    newData = []
    for idx, mult in [(0, 1.0), (1, 1.0), (2, 0.5),
                      (4, 1.0), (5, 1.0), (6, 0.5), ]:
        newData.append(int(line[idx]))
    lastDeviceData[dev] = newData

    return ['rdiops', 'rdmerges', 'rdkbps', 'wriops', 'wrmerges', 'wrkbps', ]


def getBlockUsage(dev, interval):
    global  deviceFDs
    global  lastDeviceData

    deviceFDs[dev].seek(0, 0)
    line = deviceFDs[dev].readline().decode().split()
    oldData = lastDeviceData[dev]
    newData = []
    result = []

    ridx = 0
    for idx, mult in [(0, 1.0), (1, 1.0), (2, 0.5),
                      (4, 1.0), (5, 1.0), (6, 0.5), ]:
        newData.append(int(line[idx]))
        result.append(float(newData[ridx] - oldData[ridx]) * mult / interval)
        ridx += 1
    lastDeviceData[dev] = newData

    return result


def initNetDevice(dev):
    global  deviceFDs
    global  lastDeviceData

    devPath = os.path.join("/sys/class/net", dev[4:], "statistics")
    deviceData = []
    for fname in ['rx_packets', 'rx_bytes', 'tx_packets', 'tx_bytes', ]:
        key = dev + "." + fname
        deviceFDs[key] = open(os.path.join(devPath, fname), "rb")
        deviceData.append(int(deviceFDs[key].read()))
    lastDeviceData[dev] = deviceData

    return ['rxpktsps', 'rxkbps', 'txpktsps', 'txkbps', ]


def getNetUsage(dev, interval):
    global  deviceFDs
    global  lastDeviceData

    oldData = lastDeviceData[dev]
    newData = []
    for fname in ['rx_packets', 'rx_bytes', 'tx_packets', 'tx_bytes', ]:
        key = dev + "." + fname
        deviceFDs[key].seek(0, 0)
        newData.append(int(deviceFDs[key].read()))

    result = [
        float(newData[0] - oldData[0]) / interval,
        float(newData[1] - oldData[1]) / interval / 1024.0,
        float(newData[2] - oldData[2]) / interval,
        float(newData[3] - oldData[3]) / interval / 1024.0,
    ]
    lastDeviceData[dev] = newData

    return result


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
