Submitting Flink Jobs in Kubernetes Application Mode with Dinky 1.2.3

Preface

Dinky is an out-of-the-box, easily extensible, one-stop real-time computing platform built on Apache Flink that connects to many frameworks such as OLAP engines and data lakes, dedicated to exploring and practicing unified stream-batch and lakehouse architectures. It aims to simplify Flink job development, strengthen Flink operations, lower the barrier to entry, and provide one-stop capabilities for Flink job development, operations, monitoring, alerting, scheduling, and data management.

This article walks through how to submit a Flink job to a Kubernetes cluster in Application mode from the Dinky data development platform.

Prerequisites

  • A working Dinky 1.2.3 installation
  • A Flink 1.20 image
  • A Kubernetes cluster

Step 1. Build the Flink 1.20 image

Write the Dockerfile:

FROM flink:1.20.0-scala_2.12-java11

# Set the time zone (optional)
ENV TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Set the working directory
WORKDIR /opt/flink

# Add a custom configuration file (optional).
# Note: it does not take effect when Dinky submits a job; it is replaced
# by the Flink configuration registered in the Dinky cluster configuration.
COPY conf/config.yaml ./conf/

# Add custom jar packages (optional)
COPY lib/*.jar ./lib/

# Swap the planner jars so Dinky can replace them
RUN rm -rf ./lib/flink-table-planner-loader-*.jar
RUN mv ./opt/flink-table-planner_2.12-*.jar ./lib/

# Add user-defined code (optional)
COPY plugins/ ./plugins/

# Set environment variables
ENV FLINK_HOME=/opt/flink
ENV PATH=$FLINK_HOME/bin:$PATH

# Expose the required ports
# 8081 - Web UI
# 6123 - TaskManager RPC
EXPOSE 8081 6123

# Container start command (adjust as needed)
CMD ["bash"]

In the original build-context layout, the lib directory contains the required jars, plugins contains plugin jars, and conf contains the Flink configuration file. The jars in lib include the MySQL, SQL Server, MongoDB, Kafka, and Paimon dependencies; add dependency jars according to your own needs.

Dependency jars can be downloaded from your Maven repository service.
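Assuming a build context with the layout described above, it can be prepared like this before building (the directory name flink120-image and the jar path are illustrative, not from the original article):

```shell
# Prepare the Docker build context; copy your actual connector jars
# (mysql, sqlserver, mongodb, kafka, paimon) and config.yaml in place.
mkdir -p flink120-image/lib flink120-image/plugins flink120-image/conf
cd flink120-image
# Example (hypothetical path):
# cp /path/to/flink-sql-connector-mysql-cdc-*.jar lib/
echo "build context ready"
```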

Full path of the image after it is built and pushed (needed later when registering the cluster configuration):

192.168.1.101:5000/bigdata/flink120:latest
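The image path above implies a private registry at 192.168.1.101:5000. A typical build-and-push sequence from the directory containing the Dockerfile would look like this (a sketch; the registry address is taken from the article, everything else is an assumption about your environment):

```shell
# Hypothetical registry host/port; adjust to your environment.
REGISTRY=192.168.1.101:5000
IMAGE=$REGISTRY/bigdata/flink120:latest
echo "building $IMAGE"

# Build and push only when docker and a Dockerfile are actually present.
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "$IMAGE" . && docker push "$IMAGE"
fi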

Step 2. Create a Kubernetes serviceaccount

Create the namespace and serviceaccount (needed later when registering the cluster configuration):

# Create the namespace
kubectl create ns bigdata

# Create the serviceaccount
kubectl create serviceaccount flink-service-account -n bigdata

# Grant permissions
kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=bigdata:flink-service-account

# Verify
kubectl get pods,svc,sa -n bigdata
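Before registering the cluster in Dinky, the binding can be sanity-checked by impersonating the service account with `kubectl auth can-i` (a quick check, assuming kubectl points at the target cluster; Application mode needs the account to manage pods, deployments, and configmaps):

```shell
NS=bigdata
SA=flink-service-account

if command -v kubectl >/dev/null 2>&1; then
  # Each command prints "yes" or "no" for the impersonated service account.
  kubectl auth can-i create pods        -n "$NS" --as="system:serviceaccount:$NS:$SA"
  kubectl auth can-i create deployments -n "$NS" --as="system:serviceaccount:$NS:$SA"
  kubectl auth can-i create configmaps  -n "$NS" --as="system:serviceaccount:$NS:$SA"
else
  echo "kubectl not available; skipping check"
fi
echo "rbac check finished"
```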

Step 3. Register the cluster configuration in Dinky 1.2.3

Register a Kubernetes Application cluster configuration. The Flink configuration file used is shown below:

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
################################################################################

# These parameters are required for Java 17 support.
# They can be safely removed when using Java 8/11.
env:
  java:
    opts:
      all: --add-exports=java.base/sun.net.util=ALL-UNNAMED --add-exports=java.rmi/sun.rmi.registry=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED --add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.text=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.locks=ALL-UNNAMED

#==============================================================================
# Common
#==============================================================================

jobmanager:
  # The host interface the JobManager will bind to. By default, this is localhost,
  # and will prevent the JobManager from communicating outside the machine/container
  # it is running on.
  # On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
  # On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
  #
  # To enable this, set the bind-host address to one that has access to an outside
  # facing network interface, such as 0.0.0.0.
  bind-host: localhost
  rpc:
    # The external address of the host on which the JobManager runs and can be
    # reached by the TaskManagers and any clients which want to connect. This setting
    # is only used in Standalone mode and may be overwritten on the JobManager side
    # by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
    # In high availability mode, if you use the bin/start-cluster.sh script and setup
    # the conf/masters file, this will be taken care of automatically. Yarn
    # automatically configure the host name based on the hostname of the node where the
    # JobManager runs.
    address: localhost
    # The RPC port where the JobManager is reachable.
    port: 6123
  memory:
    process:
      # The total process memory size for the JobManager.
      # Note this accounts for all memory usage within the JobManager process,
      # including JVM metaspace and other overhead.
      size: 1600m
  execution:
    # The failover strategy, i.e., how the job computation recovers from task failures.
    # Only restart tasks that may have been affected by the task failure, which typically includes
    # downstream tasks and potentially upstream tasks if their produced data is no longer
    # available for consumption.
    failover-strategy: region

taskmanager:
  # The host interface the TaskManager will bind to. By default, this is localhost,
  # and will prevent the TaskManager from communicating outside the machine/container
  # it is running on.
  # On YARN this setting will be ignored if it is set to 'localhost', defaulting to 0.0.0.0.
  # On Kubernetes this setting will be ignored, defaulting to 0.0.0.0.
  #
  # To enable this, set the bind-host address to one that has access to an outside
  # facing network interface, such as 0.0.0.0.
  bind-host: localhost
  # The address of the host on which the TaskManager runs and can be reached by the JobManager and
  # other TaskManagers. If not specified, the TaskManager will try different strategies to identify
  # the address.
  #
  # Note this address needs to be reachable by the JobManager and forward traffic to one of
  # the interfaces the TaskManager is bound to (see 'taskmanager.bind-host').
  #
  # Note also that unless all TaskManagers are running on the same machine, this address needs to be
  # configured separately for each TaskManager.
  host: localhost
  # The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
  numberOfTaskSlots: 1
  memory:
    process:
      # The total process memory size for the TaskManager.
      #
      # Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.
      # To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
      # It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
      size: 1728m

parallelism:
  # The parallelism used for programs that did not specify and other parallelism.
  default: 1

# # The default file system scheme and authority.
# # By default file paths without scheme are interpreted relative to the local
# # root file system 'file:///'. Use this to override the default and interpret
# # relative paths relative to a different file system,
# # for example 'hdfs://mynamenode:12345'
# fs:
#   default-scheme: hdfs://mynamenode:12345

#==============================================================================
# High Availability
#==============================================================================

# high-availability:
#   # The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#   type: zookeeper
#   # The path where metadata for master recovery is persisted. While ZooKeeper stores
#   # the small ground truth for checkpoint and leader election, this location stores
#   # the larger objects, like persisted dataflow graphs.
#   #
#   # Must be a durable file system that is accessible from all nodes
#   # (like HDFS, S3, Ceph, nfs, ...)
#   storageDir: hdfs:///flink/ha/
#   zookeeper:
#     # The list of ZooKeeper quorum peers that coordinate the high-availability
#     # setup. This must be a list of the form:
#     # "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#     quorum: localhost:2181
#     client:
#       # ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
#       # It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
#       # The default value is "open" and it can be changed to "creator" if ZK security is enabled
#       acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled. Checkpointing is enabled when execution.checkpointing.interval > 0.

# # Execution checkpointing related parameters. Please refer to CheckpointConfig and CheckpointingOptions for more details.
# execution:
#   checkpointing:
#     interval: 3min
#     externalized-checkpoint-retention: [DELETE_ON_CANCELLATION, RETAIN_ON_CANCELLATION]
#     max-concurrent-checkpoints: 1
#     min-pause: 0
#     mode: [EXACTLY_ONCE, AT_LEAST_ONCE]
#     timeout: 10min
#     tolerable-failed-checkpoints: 0
#     unaligned: false

# state:
#   backend:
#     # Supported backends are 'hashmap', 'rocksdb', or the
#     # <class-name-of-factory>.
#     type: hashmap
#     # Flag to enable/disable incremental checkpoints for backends that
#     # support incremental checkpoints (like the RocksDB state backend).
#     incremental: false
#   checkpoints:
#       # Directory for checkpoints filesystem, when using any of the default bundled
#       # state backends.
#       dir: hdfs://namenode-host:port/flink-checkpoints
#   savepoints:
#       # Default target directory for savepoints, optional.
#       dir: hdfs://namenode-host:port/flink-savepoints

#==============================================================================
# Rest & web frontend
#==============================================================================

rest:
  # The address to which the REST client will connect to
  address: localhost
  # The address that the REST & web server binds to
  # By default, this is localhost, which prevents the REST & web server from
  # being able to communicate outside of the machine/container it is running on.
  #
  # To enable this, set the bind address to one that has access to outside-facing
  # network interface, such as 0.0.0.0.
  bind-address: localhost
  # # The port to which the REST client connects to. If rest.bind-port has
  # # not been specified, then the server will bind to this port as well.
  # port: 8081
  # # Port range for the REST and web server to bind to.
  # bind-port: 8080-8090

# web:
#   submit:
#     # Flag to specify whether job submission is enabled from the web-based
#     # runtime monitor. Uncomment to disable.
#     enable: false
#   cancel:
#     # Flag to specify whether job cancellation is enabled from the web-based
#     # runtime monitor. Uncomment to disable.
#     enable: false

#==============================================================================
# Advanced
#==============================================================================

# io:
#   tmp:
#     # Override the directories for temporary files. If not specified, the
#     # system-specific Java temporary directory (java.io.tmpdir property) is taken.
#     #
#     # For framework setups on Yarn, Flink will automatically pick up the
#     # containers' temp directories without any need for configuration.
#     #
#     # Add a delimited list for multiple directories, using the system directory
#     # delimiter (colon ':' on unix) or a comma, e.g.:
#     # /data1/tmp:/data2/tmp:/data3/tmp
#     #
#     # Note: Each directory entry is read from and written to by a different I/O
#     # thread. You can include the same directory multiple times in order to create
#     # multiple I/O threads against that directory. This is for example relevant for
#     # high-throughput RAIDs.
#     dirs: /tmp

# classloader:
#   resolve:
#     # The classloading resolve order. Possible values are 'child-first' (Flink's default)
#     # and 'parent-first' (Java's default).
#     #
#     # Child first classloading allows users to use different dependency/library
#     # versions in their application than those in the classpath. Switching back
#     # to 'parent-first' may help with debugging dependency issues.
#     order: child-first

# The amount of memory going to the network stack. These numbers usually need
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
#
# taskmanager:
#   memory:
#     network:
#       fraction: 0.1
#       min: 64mb
#       max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# # The below configure how Kerberos credentials are provided. A keytab will be used instead of
# # a ticket cache if the keytab path and principal are set.
# security:
#   kerberos:
#     login:
#       use-ticket-cache: true
#       keytab: /path/to/kerberos/keytab
#       principal: flink-user
#       # The configuration below defines which JAAS login contexts
#       contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# zookeeper:
#   sasl:
#     # Below configurations are applicable if ZK ensemble is configured for security
#     #
#     # Override below configuration to provide custom ZK service name if configured
#     # zookeeper.sasl.service-name: zookeeper
#     #
#     # The configuration below must match one of the values set in "security.kerberos.login.contexts"
#     login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)
#
# jobmanager:
#   archive:
#     fs:
#       # Directory to upload completed jobs to. Add this directory to the list of
#       # monitored directories of the HistoryServer as well (see below).
#       dir: hdfs:///completed-jobs/

# historyserver:
#   web:
#     # The address under which the web-based HistoryServer listens.
#     address: 0.0.0.0
#     # The port under which the web-based HistoryServer listens.
#     port: 8082
#   archive:
#     fs:
#       # Comma separated list of directories to monitor for completed jobs.
#       dir: hdfs:///completed-jobs/
#       # Interval in milliseconds for refreshing the monitored directories.
#       fs.refresh-interval: 10000

# S3 credentials configuration
s3.endpoint: http://192.168.1.102:9000
s3.access-key: xxxxxxxx
s3.secret-key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
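For the s3.* settings above to work at runtime, the Flink pods also need an S3 filesystem plugin on their plugin path. One common approach (an assumption; the image build in Step 1 does not show it) is the built-in plugin mechanism of the official Flink image, added to the Dockerfile:

```dockerfile
# Assumption: enable the S3 (Hadoop) filesystem plugin bundled with Flink 1.20.
# The official entrypoint copies the named jar from /opt/flink/opt into /opt/flink/plugins.
ENV ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-1.20.0.jar
```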
  1. Type: Kubernetes Application
  2. Cluster configuration name: k8s-appliction-test (user-defined)
  3. Exposed port type: NodePort
  4. Kubernetes namespace: bigdata (the namespace created in Step 2)
  5. K8s submit account: flink-service-account (the serviceaccount created in Step 2)
  6. K8s KubeConfig: copy the contents of ~/.kube/config from the Kubernetes server
  7. Flink image address: 192.168.1.101:5000/bigdata/flink120:latest (the private image built in Step 1)
  8. JobManager CPU: 1 (adjust as needed)
  9. TaskManager CPU: 1 (adjust as needed)
  10. Flink configuration file path: /usr/local/flink-1.20.0/conf (copy the Flink configuration onto the Dinky server)
  11. JobManager memory: 1G (adjust as needed)
  12. TaskManager memory: 1G (adjust as needed)
  13. Number of slots: 1 (adjust as needed)
  14. Savepoint path: s3://flink120/flink-savepoints (S3 is used as the unified distributed storage, which is why the Flink configuration above adds the S3 credential settings)
  15. Checkpoint path: s3://flink120/flink-checkpoints
  16. Jar file path: s3://flink120/dinky/dinky-app-1.20-1.2.3-jar-with-dependencies.jar
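Under the hood, these form fields correspond to standard Flink Kubernetes options. Roughly (a sketch of the equivalent flink-conf settings, not an exact dump of what Dinky generates):

```yaml
kubernetes.namespace: bigdata
kubernetes.service-account: flink-service-account
kubernetes.container.image.ref: 192.168.1.101:5000/bigdata/flink120:latest
kubernetes.rest-service.exposed.type: NodePort
kubernetes.jobmanager.cpu: 1
kubernetes.taskmanager.cpu: 1
jobmanager.memory.process.size: 1g
taskmanager.memory.process.size: 1g
taskmanager.numberOfTaskSlots: 1
state.savepoints.dir: s3://flink120/flink-savepoints
state.checkpoints.dir: s3://flink120/flink-checkpoints
```

Knowing this mapping helps when debugging: the effective values show up in the JobManager's configuration page in the Flink Web UI.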

Create a job named ods-mysql-to-doris and select k8s-appliction-test as its Flink cluster.

Flink job running status

Kubernetes automatically creates the Deployment and Pod instances for the job.

Summary

  1. When building the private Flink image, a configuration file is copied into the image, but it does not take effect when Dinky submits a job: it is replaced by the Flink configuration registered in the Dinky cluster configuration, so the configuration actually used is the one at /usr/local/flink-1.20.0/conf.
  2. Submitting Flink jobs in Kubernetes Application mode lets you drop YARN Application mode entirely; the Hadoop stack is heavyweight, and here S3 (MinIO/OSS) replaces HDFS for distributed storage while Kubernetes resource scheduling replaces YARN.

