Spring Boot chunked file upload with PostgreSQL (BLOB storage)

  • Scheme 1 (recommended)

Receive the complete file; the backend chunks it automatically and stores the chunks (multi-threaded, suitable for large files).

```java
/**
 * Receive the complete file; the backend chunks it automatically and stores
 * the chunks (multi-threaded, suitable for large files).
 * @param file the uploaded file
 * @return result message
 * @throws Exception on failure
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size; cannot chunk";
    }
    // 1. Create a temporary directory for the chunks (avoids OOM for large files)
    File tempDir = Files.createTempDirectory("file-chunk-").toFile();
    // Delete the directory automatically when the JVM exits
    tempDir.deleteOnExit();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // 2. Write all chunks to temporary files first (streaming, low memory footprint)
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            File chunkFile = new File(tempDir, uploadId + "-" + chunkIndex);
            try (FileOutputStream fos = new FileOutputStream(chunkFile)) {
                fos.write(buffer, 0, bytesRead); // write only the bytes actually read
            }
            chunkIndex++;
        }
        // 3. Submit all chunk tasks at once and wait for completion via the pool utility
        ThreadPoolUtils.getNewInstance().submitBatchTasks((int) totalChunks, taskIndex -> {
            try {
                // Read the temporary chunk file (each task loads only its own chunk)
                File chunkFile = new File(tempDir, uploadId + "-" + taskIndex);
                byte[] chunkData = Files.readAllBytes(chunkFile.toPath());
                // Persist the chunk to the database
                FileUploadEntity entity = new FileUploadEntity();
                entity.setId(IdGenerator.nextId());
                entity.setUploadId(uploadId);
                entity.setChunkSize((long) chunkData.length);
                entity.setChunkNum(totalChunks);
                entity.setChunkFile(chunkData);
                entity.setChunkIndex(taskIndex);
                fileUploadMapper.insertFile(entity);
            } catch (IOException e) {
                throw new RuntimeException("Failed to store chunk " + taskIndex, e);
            }
        });
    } catch (Exception e) {
        log.error("Chunked file upload failed", e);
        throw new RuntimeException("Chunked file upload failed");
    } finally {
        // 4. Clean up the temporary files
        deleteDir(tempDir);
    }
    return "File stored in chunks, uploadId: " + uploadId;
}

// Recursively delete the temporary directory
private boolean deleteDir(File dir) {
    if (dir.isDirectory()) {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteDir(child);
            }
        }
    }
    return dir.delete();
}
```
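Every scheme in this post relies on a `CHUNK_SIZE` constant that the original never shows. A minimal sketch of the chunk-count arithmetic with an assumed chunk size (the 5 MB value and class name are illustrative, not from the original):

```java
// Sketch only: CHUNK_SIZE is never defined in the original post,
// so the 5 MB value here is an assumption.
public class ChunkConfig {
    // Assumed chunk size: 5 MB per chunk
    public static final long CHUNK_SIZE = 5L * 1024 * 1024;

    // The same chunk-count formula every scheme above uses
    public static long totalChunks(long fileSize) {
        return (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    }

    public static void main(String[] args) {
        // A 12 MB file splits into 3 chunks of at most 5 MB each
        System.out.println(totalChunks(12L * 1024 * 1024)); // prints 3
    }
}
```

Note that a zero-byte file yields `totalChunks == 0`, which is why each method guards with `totalChunks <= 0`.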
  • Scheme 2

Receive the complete file; the backend chunks it automatically and stores the chunks (multi-threaded, small files only). Large files may cause an out-of-memory error.

```java
/**
 * Receive the complete file; the backend chunks it automatically and stores
 * the chunks with multiple threads (small files only).
 * @param file the uploaded file
 * @return result message
 * @throws IOException, InterruptedException
 */
public String uploadChunkFile(MultipartFile file) throws IOException, InterruptedException {
    // Generate a unique upload ID identifying all chunks of the same file
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size; cannot chunk";
    }
    // Read every chunk into memory (fine for small files; use temp files on disk for large ones)
    List<byte[]> chunkDataList = new ArrayList<>();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            chunkDataList.add(chunkData);
        }
    }
    // Obtain the thread-pool utility instance
    ThreadPoolUtils threadPool = ThreadPoolUtils.getNewInstance();
    // Submit the batch of chunk tasks and wait for completion
    threadPool.submitBatchTasks((int) totalChunks, chunkIndex -> {
        byte[] currentChunkData = chunkDataList.get(chunkIndex);
        long currentChunkSize = currentChunkData.length;
        // Persist the chunk to the database
        FileUploadEntity fileUpload = new FileUploadEntity();
        fileUpload.setId(IdGenerator.nextId());
        fileUpload.setUploadId(uploadId);
        fileUpload.setChunkSize(currentChunkSize);
        fileUpload.setChunkNum(totalChunks);
        fileUpload.setChunkFile(currentChunkData);
        fileUpload.setChunkIndex(chunkIndex);
        fileUploadMapper.insertFile(fileUpload);
    });
    return "File stored in chunks, uploadId: " + uploadId;
}
```
  • Scheme 3

Receive the complete file; the backend chunks and stores it (single-threaded). Uploading large files takes too long.

```java
/**
 * Receive the complete file; the backend chunks and stores it (single-threaded).
 * @param file the uploaded file
 * @return result message
 * @throws IOException
 */
public String uploadChunkFileBackup(MultipartFile file) throws IOException {
    // Generate a unique upload ID identifying all chunks of the same file
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    List<FileUploadEntity> list = new ArrayList<>();
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Read the file in a loop and chunk it
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // The last chunk may be smaller than the standard chunk size
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            // Actual size of this chunk in bytes
            long chunkActualSize = bytesRead;
            // Store the current chunk
            FileUploadEntity fileUpload = new FileUploadEntity();
            fileUpload.setId(IdGenerator.nextId());
            fileUpload.setUploadId(uploadId);
            fileUpload.setChunkSize(chunkActualSize);
            fileUpload.setChunkNum(totalChunks);
            fileUpload.setChunkFile(chunkData);
            fileUpload.setChunkIndex(chunkIndex);
            fileUploadMapper.insertFile(fileUpload);
            // list.add(fileUpload);
            chunkIndex++;
        }
    }
    // Batch-insert alternative:
    // int batchSize = 500;
    // for (int i = 0; i < list.size(); i += batchSize) {
    //     int end = Math.min(i + batchSize, list.size());
    //     List<FileUploadEntity> subList = list.subList(i, end);
    //     fileUploadMapper.batchInsert(subList);
    // }
    return "File stored in chunks, uploadId: " + uploadId;
}
```
  • Scheme 4

    Receive the complete file; the backend chunks it and stores the chunks with a raw (unwrapped) thread pool.

```java
/**
 * Receive the complete file; the backend chunks it and stores the chunks with
 * a raw (unwrapped) thread pool.
 * @param file the uploaded file
 * @return result message
 * @throws IOException, InterruptedException
 */
// @Override
public String uploadChunkFile(MultipartFile file) throws IOException, InterruptedException {
    // Generate a unique upload ID identifying all chunks of the same file
    String uploadId = UUID.randomUUID().toString();
    String fileName = file.getOriginalFilename();
    long fileSize = file.getSize();
    // Compute the total number of chunks
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    // Create the thread pool; size it to the server, commonly CPU cores * 2 + 1
    int corePoolSize = Runtime.getRuntime().availableProcessors() * 2 + 1;
    ExecutorService executorService = Executors.newFixedThreadPool(corePoolSize);
    // Wait for all threads with a CountDownLatch
    CountDownLatch countDownLatch = new CountDownLatch((int) totalChunks);
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Read the file in a loop and chunk it
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // The last chunk may be smaller than the standard chunk size
            byte[] chunkData = new byte[bytesRead];
            System.arraycopy(buffer, 0, chunkData, 0, bytesRead);
            long chunkActualSize = bytesRead;
            // Snapshot the loop variables to avoid thread-safety problems
            final int currentChunkIndex = chunkIndex;
            final byte[] currentChunkData = chunkData;
            final long currentChunkSize = chunkActualSize;
            // Submit the chunk-storage task to the pool
            executorService.submit(() -> {
                try {
                    FileUploadEntity fileUpload = new FileUploadEntity();
                    fileUpload.setId(IdGenerator.nextId());
                    fileUpload.setUploadId(uploadId);
                    fileUpload.setChunkSize(currentChunkSize);
                    fileUpload.setChunkNum(totalChunks);
                    fileUpload.setChunkFile(currentChunkData);
                    fileUpload.setChunkIndex(currentChunkIndex);
                    fileUploadMapper.insertFile(fileUpload);
                } finally {
                    // Always decrement the latch, even on exceptions
                    countDownLatch.countDown();
                }
            });
            chunkIndex++;
        }
        // Wait for every chunk to finish
        countDownLatch.await();
    } finally {
        // Shut the pool down
        executorService.shutdown();
    }
    return "File stored in chunks, uploadId: " + uploadId;
}
```
  • Scheme 5

    Large Object scheme

```java
/**
 * PostgreSQL Large Object scheme.
 *
 * PostgreSQL's Large Object mechanism works as follows:
 * binary data is written through a LargeObjectManager, which returns an OID
 * (a numeric object ID); the table stores only this OID, not the binary data
 * itself; reads fetch the data back from the large-object manager by OID.
 * @param file the uploaded file
 * @return result message
 */
@Override
public String uploadLargeObjectFile(MultipartFile file) {
    if (file.isEmpty()) {
        return "Please choose a file";
    }
    try {
        long fileSize = file.getSize();
        String fileName = file.getOriginalFilename();
        long largeObjectId = postgresLargeObjectUtil.createLargeObject(file.getInputStream());
        FileUploadEntity fileUpload = new FileUploadEntity();
        fileUpload.setId(IdGenerator.nextId());
        fileUpload.setUploadId(String.valueOf(largeObjectId));
        fileUpload.setChunkSize(fileSize);
        fileUpload.setChunkNum(fileSize);
        fileUpload.setChunkFile(null);
        fileUpload.setChunkIndex(2);
        fileUploadMapper.insertLargeObjectFile(fileUpload);
        return "Large file uploaded. Name: " + fileName + ", size: " + fileSize + " bytes";
    } catch (Exception e) {
        log.error("Large file upload failed", e);
        return "Upload failed: " + e.getMessage();
    }
}

// Download
@Override
public void downloadFile(Long fileId, HttpServletResponse response) {
    FileUploadEntity fileEntity = fileUploadMapper.getFileById(fileId);
    long oid = Long.valueOf(fileEntity.getUploadId());
    try {
        response.reset();
        response.setContentType("application/octet-stream");
        String filename = "fileName.zip";
        response.addHeader("Content-Disposition",
                "attachment; filename=" + URLEncoder.encode(filename, "UTF-8"));
        ServletOutputStream outputStream = response.getOutputStream();
        postgresLargeObjectUtil.readLargeObject(oid, outputStream);
    } catch (Exception e) {
        log.error("File download failed", e);
    }
}
```
  • Scheme 6

    Upload the file as raw bytes

```java
/**
 * Upload the file as raw bytes.
 * @param file the uploaded file
 * @return result message
 */
@Override
public String uploadFileByte(MultipartFile file) {
    if (file.isEmpty()) {
        return "Please choose a file";
    }
    try {
        // File metadata
        String fileName = file.getOriginalFilename();
        long fileSize = file.getSize();
        byte[] fileData = file.getBytes(); // fine for small files; stream large ones
        // Insert (for large files prefer a stream: file.getInputStream())
        String sql = "INSERT INTO system_upload_test (id, upload_id, chunk_size, chunk_num, chunk_file, chunk_index) VALUES (?, ?, ?, ?, ?, ?)";
        jdbcTemplate.update(sql, 111L, "2222", 222L, 3L, fileData, 33L);
        return "File uploaded successfully!";
    } catch (Exception e) {
        e.printStackTrace();
        return "File upload failed: " + e.getMessage();
    }
}

// Large files: use a stream from file.getInputStream()
public String uploadBigFile(MultipartFile file) throws Exception {
    // 1. Define the SQL (column order must match the placeholders)
    String sql = "INSERT INTO user_qgcgk_app.system_upload_test " +
            "(id, upload_id, chunk_size, chunk_num, chunk_file, chunk_index) " +
            "VALUES (?, ?, ?, ?, ?, ?)";
    // 2. Prepare the parameters (make sure the InputStream stays open)
    Long id = 1795166209435262976L;
    String uploadId = "3333";
    Long chunkSize = 7068L;
    Long chunkNum = 7068L;
    InputStream chunkInputStream = file.getInputStream(); // e.g. FileInputStream, ServletInputStream
    Integer chunkIndex = 2;
    try {
        // 3. Execute the SQL, binding parameters manually via PreparedStatementSetter
        jdbcTemplate.update(sql, new PreparedStatementSetter() {
            @Override
            public void setValues(PreparedStatement ps) throws SQLException {
                // Bind the non-stream parameters in order, with matching types
                ps.setLong(1, id);            // id (Long)
                ps.setString(2, uploadId);    // upload_id (String)
                ps.setLong(3, chunkSize);     // chunk_size (Long)
                ps.setLong(4, chunkNum);      // chunk_num (Long)
                // Key step: bind the InputStream to the bytea column,
                // passing the known stream length
                ps.setBinaryStream(5, chunkInputStream, file.getSize());
                ps.setInt(6, chunkIndex);     // chunk_index (Int)
            }
        });
    } finally {
        // 4. Close the stream when done to release the resource
        if (chunkInputStream != null) {
            chunkInputStream.close();
        }
    }
    return "Upload succeeded!";
}
```
  • Scheme 7

    No temp files + multi-threading (fewer I/O operations)
```java
/**
 * Chunked upload: no temp files + multi-threading + batch inserts.
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    // Generate a unique upload ID
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size; cannot chunk";
    }
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Batch-insert buffer (one batch per 10 chunks)
        List<FileUploadEntity> batchList = new ArrayList<>(10);
        // Latch: wait for all batch tasks to finish
        CountDownLatch latch = new CountDownLatch((int) Math.ceil((double) totalChunks / 10));
        // Stream the file and process chunks
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // Copy the chunk data (the buffer is reused on the next read)
            byte[] chunkData = Arrays.copyOfRange(buffer, 0, bytesRead);
            // Build the chunk entity
            FileUploadEntity entity = new FileUploadEntity();
            entity.setId(IdGenerator.nextId());
            entity.setUploadId(uploadId);
            entity.setChunkSize((long) chunkData.length);
            entity.setChunkNum(totalChunks);
            entity.setChunkFile(chunkData);
            entity.setChunkIndex(chunkIndex);
            batchList.add(entity);
            chunkIndex++;
            // Flush when the batch holds 10 chunks or this was the last chunk
            if (batchList.size() >= 10 || chunkIndex == totalChunks) {
                // Copy the current batch (thread safety)
                List<FileUploadEntity> currentBatch = new ArrayList<>(batchList);
                // Submit the batch-insert task
                ThreadPoolUtils.getNewInstance().executor(() -> {
                    try {
                        fileUploadMapper.batchInsert(currentBatch);
                    } finally {
                        latch.countDown(); // one batch done
                    }
                });
                batchList.clear(); // reset the buffer
            }
        }
        // Wait for every batch task (at most 5 minutes)
        boolean allCompleted = latch.await(5, java.util.concurrent.TimeUnit.MINUTES);
        if (!allCompleted) {
            throw new BusinessException("Chunked upload timed out; please retry");
        }
    } catch (Exception e) {
        log.error("Chunked file upload failed, uploadId: {}", uploadId, e);
        // Optionally clean up chunks already stored:
        // fileUploadMapper.deleteByUploadId(uploadId);
        throw new BusinessException("Chunked file upload failed: " + e.getMessage());
    }
    return "File stored in chunks, uploadId: " + uploadId;
}

/**
 * Chunked upload: no temp files + multi-threading + single-row inserts.
 */
public String uploadChunkFile(MultipartFile file) throws Exception {
    String uploadId = UUID.randomUUID().toString();
    long fileSize = file.getSize();
    long totalChunks = (long) Math.ceil((double) fileSize / CHUNK_SIZE);
    if (totalChunks <= 0) {
        return "Invalid file size; cannot chunk";
    }
    try (InputStream inputStream = file.getInputStream()) {
        byte[] buffer = new byte[(int) CHUNK_SIZE];
        int bytesRead;
        int chunkIndex = 0;
        // Latch: wait for all chunks to finish
        CountDownLatch latch = new CountDownLatch((int) totalChunks);
        // Submit chunk tasks while reading; no temp files needed
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            // Copy the chunk data (the buffer is overwritten by the next read)
            byte[] chunkData = Arrays.copyOfRange(buffer, 0, bytesRead);
            final int currentIndex = chunkIndex;
            // Submit the async task
            ThreadPoolUtils.getNewInstance().executor(() -> {
                try {
                    // Write the in-memory chunk straight to the database
                    FileUploadEntity entity = new FileUploadEntity();
                    entity.setId(IdGenerator.nextId());
                    entity.setUploadId(uploadId);
                    entity.setChunkSize((long) chunkData.length);
                    entity.setChunkNum(totalChunks);
                    entity.setChunkFile(chunkData);
                    entity.setChunkIndex(currentIndex);
                    fileUploadMapper.insertFile(entity);
                } catch (Exception e) {
                    throw new RuntimeException("Failed to store chunk " + currentIndex, e);
                } finally {
                    latch.countDown();
                }
            });
            chunkIndex++;
        }
        // Wait for all chunks
        latch.await();
    } catch (Exception e) {
        log.error("Chunked file upload failed", e);
        throw new BusinessException("Chunked file upload failed");
    }
    return "File stored in chunks, uploadId: " + uploadId;
}
```
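The schemes above only cover storing chunks; serving the file back requires concatenating the rows for an `uploadId` in `chunkIndex` order. A minimal in-memory sketch of that merge step (the `ChunkMerger` class is illustrative; in practice the map would be populated from `fileUploadMapper` query results):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the merge step: given stored chunks keyed by chunkIndex,
// rebuild the original byte stream by concatenating them in index order.
public class ChunkMerger {
    public static byte[] merge(Map<Integer, byte[]> chunksByIndex) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Sort by chunk index so the bytes come back in upload order
        List<Integer> indexes = new ArrayList<>(chunksByIndex.keySet());
        Collections.sort(indexes);
        for (int index : indexes) {
            out.write(chunksByIndex.get(index));
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Map<Integer, byte[]> chunks = new HashMap<>();
        chunks.put(1, "world".getBytes());
        chunks.put(0, "hello ".getBytes());
        System.out.println(new String(merge(chunks))); // prints "hello world"
    }
}
```

For large files, stream each chunk to the response output stream instead of buffering the whole file in memory.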
Utility class: PostgreSQL large-object helper

```java
import lombok.extern.slf4j.Slf4j;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import java.io.InputStream;
import java.io.OutputStream;
import java.sql.Connection;
import java.sql.SQLException;

/**
 * PostgreSQL large-object utility class.
 * @author zrf
 * @date 2025/08/25 16:09
 */
@Slf4j
@Component
public class PostgresLargeObjectUtil {

    private final JdbcTemplate jdbcTemplate;

    public PostgresLargeObjectUtil(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /**
     * Create a large object from an input stream and return its OID.
     */
    @Transactional
    public long createLargeObject(InputStream inputStream) throws SQLException {
        // Get a connection and disable auto-commit (required for large objects)
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            // Get PostgreSQL's large-object manager
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            // Create the large object; returns its OID
            long oid = lobjManager.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
            // Open the large object and write the data
            try (LargeObject largeObject = lobjManager.open(oid, LargeObjectManager.WRITE)) {
                OutputStream outputStream = largeObject.getOutputStream();
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = inputStream.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
            }
            connection.commit();
            return oid;
        } catch (Exception e) {
            connection.rollback();
            throw new SQLException("Failed to create large object", e);
        } finally {
            connection.close();
        }
    }

    /**
     * Read a large object by OID into the given output stream.
     */
    public void readLargeObject(long oid, OutputStream outputStream) throws Exception {
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            try (LargeObject largeObject = lobjManager.open(oid, LargeObjectManager.READ)) {
                InputStream inputStream = largeObject.getInputStream();
                byte[] buffer = new byte[8192];
                int bytesRead;
                while ((bytesRead = inputStream.read(buffer)) != -1) {
                    outputStream.write(buffer, 0, bytesRead);
                }
            }
            connection.commit();
        } catch (Exception e) {
            log.error("Failed to read large object", e);
        } finally {
            connection.close();
        }
    }

    /**
     * Delete a large object (frees disk space).
     */
    @Transactional
    public void deleteLargeObject(long oid) throws SQLException {
        Connection connection = jdbcTemplate.getDataSource().getConnection();
        connection.setAutoCommit(false);
        try {
            LargeObjectManager lobjManager =
                    connection.unwrap(org.postgresql.PGConnection.class).getLargeObjectAPI();
            lobjManager.delete(oid);
            connection.commit();
        } catch (Exception e) {
            connection.rollback();
            throw new SQLException("Failed to delete large object", e);
        } finally {
            connection.close();
        }
    }
}
```

Thread-pool utility class

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Consumer;

/**
 * Thread-pool utility class.
 * @author zrf
 * @date 2023/8/14 10:05
 */
public class ThreadPoolUtils {
    /** Available processors */
    private static final int CPU_COUNT = Runtime.getRuntime().availableProcessors();
    /** Core pool size */
    private static final int CORE_POOL_SIZE = Math.max(2, Math.min(CPU_COUNT - 1, 4));
    /** Maximum pool size */
    private static final int MAXIMUM_POOL_SIZE = CPU_COUNT * 2 + 1;
    /** Idle-thread keep-alive time (seconds) */
    private static final int KEEP_ALIVE_SECONDS = 30;
    /** Work queue */
    private static final BlockingQueue<Runnable> POOL_WORK_QUEUE = new LinkedBlockingQueue<>(128);
    /** Thread factory */
    private static final MyThreadFactory MY_THREAD_FACTORY = new MyThreadFactory();
    /** Saturation (rejection) policy */
    private static final ThreadRejectedExecutionHandler THREAD_REJECTED_EXECUTION_HANDLER =
            new ThreadRejectedExecutionHandler.CallerRunsPolicy();
    /** The pool itself */
    private static final ThreadPoolExecutor THREAD_POOL_EXECUTOR;
    /** Singleton instance, visible to all threads */
    private static volatile ThreadPoolUtils threadPoolUtils = null;

    /** Initialize the pool */
    static {
        THREAD_POOL_EXECUTOR = new ThreadPoolExecutor(
                CORE_POOL_SIZE,        // core pool size
                MAXIMUM_POOL_SIZE,     // maximum pool size
                KEEP_ALIVE_SECONDS,    // idle keep-alive time
                TimeUnit.SECONDS,      // keep-alive time unit
                POOL_WORK_QUEUE,       // work (blocking) queue
                MY_THREAD_FACTORY,     // thread factory
                THREAD_REJECTED_EXECUTION_HANDLER); // saturation policy
    }

    /** Private constructor */
    private ThreadPoolUtils() {
    }

    /**
     * Get the singleton instance (double-checked locking).
     */
    public static ThreadPoolUtils getNewInstance() {
        if (threadPoolUtils == null) {
            synchronized (ThreadPoolUtils.class) {
                if (threadPoolUtils == null) {
                    threadPoolUtils = new ThreadPoolUtils();
                }
            }
        }
        return threadPoolUtils;
    }

    /**
     * Execute a task.
     * @param runnable the task
     */
    public void executor(Runnable runnable) {
        THREAD_POOL_EXECUTOR.execute(runnable);
    }

    /**
     * Submit a task that returns a value.
     * @param callable the task
     */
    public <T> Future<T> submit(Callable<T> callable) {
        return THREAD_POOL_EXECUTOR.submit(callable);
    }

    /**
     * Submit a batch of tasks and wait until all of them finish.
     * @param totalTasks   total number of tasks
     * @param taskConsumer task consumer (receives the task index and runs the task body)
     * @throws InterruptedException if the wait is interrupted
     */
    public void submitBatchTasks(int totalTasks, Consumer<Integer> taskConsumer) throws InterruptedException {
        CountDownLatch countDownLatch = new CountDownLatch(totalTasks);
        for (int i = 0; i < totalTasks; i++) {
            final int taskIndex = i;
            // Submit the task on the shared pool
            THREAD_POOL_EXECUTOR.submit(() -> {
                try {
                    taskConsumer.accept(taskIndex); // run the task body
                } finally {
                    countDownLatch.countDown(); // task done
                }
            });
        }
        countDownLatch.await(); // wait for all tasks
    }

    /**
     * @return whether the pool has been shut down
     */
    public boolean isShutDown() {
        return THREAD_POOL_EXECUTOR.isShutdown();
    }

    /**
     * Stop running tasks immediately.
     * @return the list of tasks that were awaiting execution
     */
    public List<Runnable> shutDownNow() {
        return THREAD_POOL_EXECUTOR.shutdownNow();
    }

    /** Shut the pool down gracefully. */
    public void showDown() {
        THREAD_POOL_EXECUTOR.shutdown();
    }

    /**
     * @return whether all tasks completed after shutdown
     */
    public boolean isTerminated() {
        return THREAD_POOL_EXECUTOR.isTerminated();
    }
}
```
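The core pattern `submitBatchTasks` wraps is a pool plus a `CountDownLatch`: submit N indexed tasks, decrement the latch in a `finally` block, and block until the count reaches zero. A self-contained sketch of that pattern using a plain `ExecutorService` (the `BatchTaskDemo` class is illustrative, not part of the original utility):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the submit-and-wait pattern behind submitBatchTasks().
public class BatchTaskDemo {
    public static int runBatch(int totalTasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CountDownLatch latch = new CountDownLatch(totalTasks);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < totalTasks; i++) {
            final int taskIndex = i; // snapshot for the lambda
            pool.submit(() -> {
                try {
                    // Stand-in for the real work, e.g. storing chunk taskIndex
                    completed.incrementAndGet();
                } finally {
                    latch.countDown(); // always decrement, even on failure
                }
            });
        }
        latch.await(); // block until every task has counted down
        pool.shutdown();
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatch(10)); // prints 10
    }
}
```

Decrementing the latch in `finally` is what keeps a failed task from hanging the caller forever; a production version would also prefer `latch.await(timeout, unit)`, as Scheme 7 does.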
