Implementing Face Feature Extraction and Comparison in Java

Feature Extraction

1. Install the required libraries

Make sure the JPEG, BLAS, and LAPACK libraries are installed. On Ubuntu or Debian, they can be installed with:

sudo apt-get update
sudo apt-get install build-essential cmake
sudo apt-get install libgtk-3-dev
sudo apt-get install libboost-all-dev
sudo apt-get install libopenblas-dev liblapack-dev
sudo apt-get install libx11-dev libatlas-base-dev
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev

On CentOS or Fedora, use:

sudo yum update
sudo yum groupinstall "Development Tools"
sudo yum install gtk3-devel
sudo yum install boost-devel
sudo yum install openblas-devel lapack-devel
sudo yum install xorg-x11-devel
sudo yum install atlas-devel
sudo yum install libjpeg-devel libpng-devel libtiff-devel
sudo yum install ffmpeg-devel  # ffmpeg-devel replaces libavcodec-dev and friends

# dlib needs CMake 3.8.0 or newer. If the system CMake is too old, remove it first (if necessary):
sudo yum remove cmake

# Download and install a newer CMake (3.16.3 is used here; any later stable release also works):
wget https://github.com/Kitware/CMake/releases/download/v3.16.3/cmake-3.16.3-Linux-x86_64.sh
chmod +x cmake-3.16.3-Linux-x86_64.sh
sudo ./cmake-3.16.3-Linux-x86_64.sh --prefix=/usr/local --exclude-subdir

# Make the new CMake take effect:
source /etc/profile

# Verify the installation; the reported version should be 3.8.0 or higher:
cmake --version

# On CentOS the default GCC is too old for dlib; install a newer toolchain from Software Collections:
sudo yum install centos-release-scl
sudo yum install devtoolset-9-gcc*

2. Make sure dlib uses the correct libraries

dlib normally auto-detects the JPEG, BLAS, and LAPACK libraries installed on the system. If they are present, dlib should find and use them without any extra configuration.

3. Rebuild the dlib library

Rebuild dlib with position-independent code (PIC) enabled, which is required because the library will be linked into a shared JNI object:

git clone https://github.com/davisking/dlib.git
cd dlib
mkdir build
cd build
cmake .. -DDLIB_USE_CUDA=OFF -DUSE_AVX_INSTRUCTIONS=ON -DCMAKE_POSITION_INDEPENDENT_CODE=ON
cmake --build .
sudo make install
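After installation you may also need to run sudo ldconfig so that the dynamic linker picks up the newly installed dlib library.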
 

FaceRecognition.java

public class FaceRecognition {
    static {
        System.loadLibrary("dlib_face_recognition");
    }

    public native String extractFeatures(String imagePath);

    public static void main(String[] args) {
        if (args.length != 1) {
            System.out.println("Usage: java FaceRecognition <image_path>");
            return;
        }
        String imagePath = args[0];
        FaceRecognition fr = new FaceRecognition();
        String features = fr.extractFeatures(imagePath);
        System.out.println("Extracted features: \n" + features);
    }
}
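In practice the descriptor string is usually persisted so it can be compared against later. A minimal sketch of saving and reloading it (the FeatureStore class and its file handling are illustrative additions, not part of the original code; Files.writeString/readString require Java 11+):

import java.nio.file.Files;
import java.nio.file.Path;

public class FeatureStore {
    // Saves the whitespace-separated descriptor string returned by
    // extractFeatures() so it can be reloaded for comparison later.
    public static void save(String features, Path file) throws Exception {
        Files.writeString(file, features);
    }

    public static String load(Path file) throws Exception {
        return Files.readString(file);
    }
}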

dlib_face_recognition.cpp

#include <jni.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <dlib/dnn.h>
#include <sstream>
#include <string>
#include <vector>

// Definition of the deep neural network used for face recognition
// (the standard ResNet from the dlib face recognition example)
template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual = dlib::add_prev1<block<N, BN, 1, dlib::tag1<SUBNET>>>;

template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual_down = dlib::add_prev2<dlib::avg_pool<2, 2, 2, 2, dlib::skip1<dlib::tag2<block<N, BN, 2, dlib::tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET>
using block = BN<dlib::con<N, 3, 3, 1, 1, dlib::relu<dlib::affine<dlib::con<N, 3, 3, stride, stride, SUBNET>>>>>;

template <int N, typename SUBNET> using res  = dlib::relu<residual<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares = dlib::relu<residual<block, N, dlib::affine, SUBNET>>;
template <int N, typename SUBNET> using res_down  = dlib::relu<residual_down<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares_down = dlib::relu<residual_down<block, N, dlib::affine, SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256, SUBNET>;
template <typename SUBNET> using alevel1 = ares<256, ares<256, ares_down<256, SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128, ares<128, ares_down<128, SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64, ares<64, ares<64, ares_down<64, SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32, ares<32, ares<32, SUBNET>>>;

using anet_type = dlib::loss_metric<dlib::fc_no_bias<128, dlib::avg_pool_everything<
    alevel0<alevel1<alevel2<alevel3<alevel4<
    dlib::max_pool<3, 3, 2, 2, dlib::relu<dlib::affine<dlib::con<32, 7, 7, 2, 2,
    dlib::input_rgb_image_sized<150>>>>>>>>>>>>>;

extern "C" JNIEXPORT jstring JNICALL Java_FaceRecognition_extractFeatures(JNIEnv *env, jobject obj, jstring imagePath) {
    const char *path = env->GetStringUTFChars(imagePath, 0);

    // Face detector, landmark predictor, and recognition network
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor sp;
    dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
    anet_type net;
    dlib::deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;

    dlib::matrix<dlib::rgb_pixel> img;
    dlib::load_image(img, path);

    // Extract an aligned 150x150 chip for each detected face
    std::vector<dlib::matrix<dlib::rgb_pixel>> faces;
    for (auto face : detector(img)) {
        auto shape = sp(img, face);
        dlib::matrix<dlib::rgb_pixel> face_chip;
        dlib::extract_image_chip(img, dlib::get_face_chip_details(shape, 150, 0.25), face_chip);
        faces.push_back(std::move(face_chip));
    }

    // One 128-dimensional descriptor per face, one face per output line
    std::vector<dlib::matrix<float, 0, 1>> face_descriptors = net(faces);
    std::ostringstream oss;
    for (auto &descriptor : face_descriptors) {
        for (long i = 0; i < descriptor.size(); ++i) {
            oss << descriptor(i) << " ";
        }
        oss << "\n";
    }

    env->ReleaseStringUTFChars(imagePath, path);
    return env->NewStringUTF(oss.str().c_str());
}
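Note that both model files, shape_predictor_68_face_landmarks.dat and dlib_face_recognition_resnet_model_v1.dat, must be present in the working directory at run time. They can be downloaded from http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 and http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2 and decompressed with bunzip2.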

4. Compile the C++ code

g++ -I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux -shared -o libdlib_face_recognition.so -fPIC dlib_face_recognition.cpp -ldlib -lpthread -lblas -llapack -ljpeg

5. Compile the Java code and generate the header

Be sure to specify UTF-8 encoding when compiling the Java source:

javac -encoding UTF-8 -h . FaceRecognition.java

6. Run the Java program

java -Djava.library.path=. FaceRecognition 1.jpg
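extractFeatures returns one whitespace-separated 128-dimensional descriptor per line, one line per detected face. A minimal sketch of parsing that output back into numeric vectors on the Java side (the DescriptorParser helper is an illustrative addition, not part of the original code):

import java.util.ArrayList;
import java.util.List;

public class DescriptorParser {
    // Parses the output of extractFeatures(): one face per line,
    // 128 space-separated float values per face.
    public static List<double[]> parseDescriptors(String features) {
        List<double[]> result = new ArrayList<>();
        for (String line : features.split("\n")) {
            line = line.trim();
            if (line.isEmpty()) continue;
            String[] parts = line.split("\\s+");
            double[] vec = new double[parts.length];
            for (int i = 0; i < parts.length; i++) {
                vec[i] = Double.parseDouble(parts[i]);
            }
            result.add(vec);
        }
        return result;
    }
}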

Face Comparison

#include <jni.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <dlib/dnn.h>
#include <cmath>
#include <sstream>
#include <string>
#include <vector>

// Definition of the deep neural network used for face recognition
// (identical to the one in dlib_face_recognition.cpp above)
template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual = dlib::add_prev1<block<N, BN, 1, dlib::tag1<SUBNET>>>;

template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual_down = dlib::add_prev2<dlib::avg_pool<2, 2, 2, 2, dlib::skip1<dlib::tag2<block<N, BN, 2, dlib::tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET>
using block = BN<dlib::con<N, 3, 3, 1, 1, dlib::relu<dlib::affine<dlib::con<N, 3, 3, stride, stride, SUBNET>>>>>;

template <int N, typename SUBNET> using res  = dlib::relu<residual<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares = dlib::relu<residual<block, N, dlib::affine, SUBNET>>;
template <int N, typename SUBNET> using res_down  = dlib::relu<residual_down<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares_down = dlib::relu<residual_down<block, N, dlib::affine, SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256, SUBNET>;
template <typename SUBNET> using alevel1 = ares<256, ares<256, ares_down<256, SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128, ares<128, ares_down<128, SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64, ares<64, ares<64, ares_down<64, SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32, ares<32, ares<32, SUBNET>>>;

using anet_type = dlib::loss_metric<dlib::fc_no_bias<128, dlib::avg_pool_everything<
    alevel0<alevel1<alevel2<alevel3<alevel4<
    dlib::max_pool<3, 3, 2, 2, dlib::relu<dlib::affine<dlib::con<32, 7, 7, 2, 2,
    dlib::input_rgb_image_sized<150>>>>>>>>>>>>>;
extern "C" JNIEXPORT jint JNICALL Java_FaceRecognition_detectFaces(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::rectangle> faces = detector(img);env->ReleaseStringUTFChars(imagePath, path);return faces.size();
}//人臉關鍵點提取
extern "C" JNIEXPORT jstring JNICALL Java_FaceRecognition_getFaceLandmarks(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();dlib::shape_predictor sp;dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::rectangle> faces = detector(img);std::ostringstream oss;for (auto face : faces) {auto shape = sp(img, face);for (int i = 0; i < shape.num_parts(); ++i) {oss << shape.part(i).x() << "," << shape.part(i).y() << " ";}oss << "\n";}env->ReleaseStringUTFChars(imagePath, path);return env->NewStringUTF(oss.str().c_str());
}//人臉特征提取
extern "C" JNIEXPORT jstring JNICALL Java_getFaceFeatures(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();dlib::shape_predictor sp;dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;anet_type net;dlib::deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::matrix<dlib::rgb_pixel>> faces;for (auto face : detector(img)) {auto shape = sp(img, face);dlib::matrix<dlib::rgb_pixel> face_chip;dlib::extract_image_chip(img, dlib::get_face_chip_details(shape,150,0.25), face_chip);faces.push_back(std::move(face_chip));}std::vector<dlib::matrix<float,0,1>> face_descriptors = net(faces);std::ostringstream oss;for (auto& descriptor : face_descriptors) {for (int i = 0; i < descriptor.size(); ++i) {oss << descriptor(i) << " ";}oss << "\n";}env->ReleaseStringUTFChars(imagePath, path);return env->NewStringUTF(oss.str().c_str());
}extern "C" JNIEXPORT jdouble JNICALL Java_FaceRecognition_compareFaceFeatures(JNIEnv *env, jobject obj, jstring imagePath, jstring featureVectorStr) {// 從 Java 獲取圖像路徑和特征向量字符串const char *path = env->GetStringUTFChars(imagePath, 0);const char *featureVectorC = env->GetStringUTFChars(featureVectorStr, 0);// 初始化 dlib 的人臉檢測器、形狀預測器和神經網絡模型dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();dlib::shape_predictor sp;dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;// 確保 anet_type 已經定義且正確anet_type net;dlib::deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;// 加載圖像dlib::matrix<dlib::rgb_pixel> img;load_image(img, path);// 檢測圖像中的人臉std::vector<dlib::rectangle> dets = detector(img);// 如果圖像中沒有人臉獲取人臉大于1if (dets.empty() || dets.size() > 1) {env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);throw  std::invalid_argument("no face or faces greater than 1");}std::vector<dlib::matrix<dlib::rgb_pixel>> faces;for (auto face : dets) {auto shape = sp(img, face);dlib::matrix<dlib::rgb_pixel> face_chip;dlib::extract_image_chip(img, dlib::get_face_chip_details(shape,150,0.25), face_chip);faces.push_back(std::move(face_chip));}std::vector<dlib::matrix<float,0,1>> imageFeatures = net(faces);// 將傳入的特征字符串轉換為 dlib 矩陣std::istringstream featureStream(featureVectorC);std::vector<float> featureVector;float value;while (featureStream >> value) {featureVector.push_back(value);}// 確保特征向量大小與模型輸出大小一致if (featureVector.size() != imageFeatures[0].size()) {     // 釋放 Java 字符串env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);throw std::invalid_argument("Feature vector size does not match model output size.");}// 計算特征向量之間的歐氏距離double distance = 0;// 假定第一個人臉特征 imageFeatures[0] 是我們要比較的特征向量for (size_t i = 0; i < imageFeatures[0].size(); ++i) {distance += (imageFeatures[0](i) - featureVector[i]) * (imageFeatures[0](i) - featureVector[i]);}distance = std::sqrt(distance);// 釋放 Java 字符串env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);return distance;
}
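If both descriptors are already stored on the Java side, the same Euclidean distance can be computed without calling into native code. A minimal sketch (the DescriptorMath helper is an illustrative addition, not part of the original code):

public class DescriptorMath {
    // Euclidean distance between two descriptors of equal length.
    public static double distance(double[] a, double[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("descriptor lengths differ");
        }
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }
}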

Avoiding Repeated Model Loading

In the code above, every call deserializes the two model files from disk, which is slow. The version below loads each model exactly once and shares it across all calls.

#include <jni.h>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/image_io.h>
#include <dlib/dnn.h>
#include <cmath>
#include <sstream>
#include <string>
#include <vector>
#include <mutex>

// Definition of the deep neural network used for face recognition
// (identical to the one in dlib_face_recognition.cpp above)
template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual = dlib::add_prev1<block<N, BN, 1, dlib::tag1<SUBNET>>>;

template <template <int, template <typename> class, int, typename> class block, int N, template <typename> class BN, typename SUBNET>
using residual_down = dlib::add_prev2<dlib::avg_pool<2, 2, 2, 2, dlib::skip1<dlib::tag2<block<N, BN, 2, dlib::tag1<SUBNET>>>>>>;

template <int N, template <typename> class BN, int stride, typename SUBNET>
using block = BN<dlib::con<N, 3, 3, 1, 1, dlib::relu<dlib::affine<dlib::con<N, 3, 3, stride, stride, SUBNET>>>>>;

template <int N, typename SUBNET> using res  = dlib::relu<residual<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares = dlib::relu<residual<block, N, dlib::affine, SUBNET>>;
template <int N, typename SUBNET> using res_down  = dlib::relu<residual_down<block, N, dlib::bn_con, SUBNET>>;
template <int N, typename SUBNET> using ares_down = dlib::relu<residual_down<block, N, dlib::affine, SUBNET>>;

template <typename SUBNET> using alevel0 = ares_down<256, SUBNET>;
template <typename SUBNET> using alevel1 = ares<256, ares<256, ares_down<256, SUBNET>>>;
template <typename SUBNET> using alevel2 = ares<128, ares<128, ares_down<128, SUBNET>>>;
template <typename SUBNET> using alevel3 = ares<64, ares<64, ares<64, ares_down<64, SUBNET>>>>;
template <typename SUBNET> using alevel4 = ares<32, ares<32, ares<32, SUBNET>>>;

using anet_type = dlib::loss_metric<dlib::fc_no_bias<128, dlib::avg_pool_everything<
    alevel0<alevel1<alevel2<alevel3<alevel4<
    dlib::max_pool<3, 3, 2, 2, dlib::relu<dlib::affine<dlib::con<32, 7, 7, 2, 2,
    dlib::input_rgb_image_sized<150>>>>>>>>>>>>>;
std::mutex& get_global_mutex();
dlib::frontal_face_detector& get_global_face_detector();
dlib::shape_predictor& get_global_shape_predictor();
anet_type& get_global_anet_type();// 全局靜態變量定義在.cpp文件中
std::mutex global_mutex;
dlib::frontal_face_detector global_face_detector;
dlib::shape_predictor global_shape_predictor;
anet_type global_anet_type;// 實現線程安全的單例模式
std::mutex& get_global_mutex() {return global_mutex;
}dlib::frontal_face_detector& get_global_face_detector() {static dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();return detector;
}dlib::shape_predictor& get_global_shape_predictor() {static dlib::shape_predictor sp;static std::once_flag flag;std::call_once(flag, []() {dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;});return sp;
}anet_type& get_global_anet_type() {static anet_type net;static std::once_flag flag;std::call_once(flag, []() {dlib::deserialize("dlib_face_recognition_resnet_model_v1.dat") >> net;});return net;
}//檢測人臉
extern "C" JNIEXPORT jint JNICALL Java_FaceRecognition_detectFaces(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);dlib::frontal_face_detector& detector = get_global_face_detector();dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::rectangle> faces = detector(img);env->ReleaseStringUTFChars(imagePath, path);return faces.size();
}//人臉關鍵點提取
extern "C" JNIEXPORT jstring JNICALL Java_FaceRecognition_getFaceLandmarks(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);// 使用 get_global_face_detector() 獲取全局人臉檢測器dlib::frontal_face_detector& detector = get_global_face_detector();// 使用 get_global_shape_predictor() 獲取全局形狀預測器dlib::shape_predictor& sp = get_global_shape_predictor();dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::rectangle> faces = detector(img);std::ostringstream oss;for (auto face : faces) {auto shape = sp(img, face);for (int i = 0; i < shape.num_parts(); ++i) {oss << shape.part(i).x() << "," << shape.part(i).y() << " ";}oss << "\n";}env->ReleaseStringUTFChars(imagePath, path);return env->NewStringUTF(oss.str().c_str());
}//人臉特征提取
extern "C" JNIEXPORT jstring JNICALL Java_getFaceFeatures(JNIEnv *env, jobject obj, jstring imagePath) {const char *path = env->GetStringUTFChars(imagePath, 0);// 使用 get_global_face_detector() 獲取全局人臉檢測器dlib::frontal_face_detector& detector = get_global_face_detector();// 使用 get_global_shape_predictor() 獲取全局形狀預測器dlib::shape_predictor& sp = get_global_shape_predictor();anet_type& net = get_global_anet_type();dlib::matrix<dlib::rgb_pixel> img;dlib::load_image(img, path);std::vector<dlib::matrix<dlib::rgb_pixel>> faces;for (auto face : detector(img)) {auto shape = sp(img, face);dlib::matrix<dlib::rgb_pixel> face_chip;dlib::extract_image_chip(img, dlib::get_face_chip_details(shape,150,0.25), face_chip);faces.push_back(std::move(face_chip));}std::vector<dlib::matrix<float,0,1>> face_descriptors = net(faces);std::ostringstream oss;for (auto& descriptor : face_descriptors) {for (int i = 0; i < descriptor.size(); ++i) {oss << descriptor(i) << " ";}oss << "\n";}env->ReleaseStringUTFChars(imagePath, path);return env->NewStringUTF(oss.str().c_str());
}extern "C" JNIEXPORT jdouble JNICALL Java_FaceRecognition_compareFaceFeatures(JNIEnv *env, jobject obj, jstring imagePath, jstring featureVectorStr) {// 從 Java 獲取圖像路徑和特征向量字符串const char *path = env->GetStringUTFChars(imagePath, 0);const char *featureVectorC = env->GetStringUTFChars(featureVectorStr, 0);// 使用 get_global_face_detector() 獲取全局人臉檢測器dlib::frontal_face_detector& detector = get_global_face_detector();// 使用 get_global_shape_predictor() 獲取全局形狀預測器dlib::shape_predictor& sp = get_global_shape_predictor();anet_type& net = get_global_anet_type();// 加載圖像dlib::matrix<dlib::rgb_pixel> img;load_image(img, path);// 檢測圖像中的人臉std::vector<dlib::rectangle> dets = detector(img);// 如果圖像中沒有人臉獲取人臉大于1if (dets.empty() || dets.size() > 1) {env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);throw  std::invalid_argument("no face or faces greater than 1");}std::vector<dlib::matrix<dlib::rgb_pixel>> faces;for (auto face : dets) {auto shape = sp(img, face);dlib::matrix<dlib::rgb_pixel> face_chip;dlib::extract_image_chip(img, dlib::get_face_chip_details(shape,150,0.25), face_chip);faces.push_back(std::move(face_chip));}std::vector<dlib::matrix<float,0,1>> imageFeatures = net(faces);// 將傳入的特征字符串轉換為 dlib 矩陣std::istringstream featureStream(featureVectorC);std::vector<float> featureVector;float value;while (featureStream >> value) {featureVector.push_back(value);}// 確保特征向量大小與模型輸出大小一致if (featureVector.size() != imageFeatures[0].size()) {     // 釋放 Java 字符串env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);throw std::invalid_argument("Feature vector size does not match model output size.");}// 計算特征向量之間的歐氏距離double distance = 0;// 假定第一個人臉特征 imageFeatures[0] 是我們要比較的特征向量for (size_t i = 0; i < imageFeatures[0].size(); ++i) {distance += (imageFeatures[0](i) - featureVector[i]) * (imageFeatures[0](i) - featureVector[i]);}distance = std::sqrt(distance);// 釋放 Java 字符串env->ReleaseStringUTFChars(imagePath, path);env->ReleaseStringUTFChars(featureVectorStr, featureVectorC);return distance;
}
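On the Java side the same idea applies: create one FaceRecognition instance and reuse it rather than constructing one per request. A minimal sketch using the initialization-on-demand holder idiom (the FaceRecognitionHolder class is an illustrative addition, not part of the original code):

public class FaceRecognitionHolder {
    private FaceRecognitionHolder() {}

    // The JVM initializes the holder class lazily and exactly once,
    // so this is thread-safe without explicit locking.
    private static class Holder {
        static final FaceRecognition INSTANCE = new FaceRecognition();
    }

    public static FaceRecognition instance() {
        return Holder.INSTANCE;
    }
}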

FaceRecognition.java

public class FaceRecognition {
    static {
        System.loadLibrary("dlib_face_recognition");
    }

    // Native methods for face detection, feature extraction,
    // feature comparison, and landmark extraction
    public native int detectFaces(String imagePath);
    public native String getFaceFeatures(String imagePath);
    public native double compareFaceFeatures(String imagePath, String featureVector);
    public native String getFaceLandmarks(String imagePath);

    public static void main(String[] args) {
        if (args.length != 2) {
            System.out.println("Usage: java FaceRecognition <image_path> <feature_vector>");
            return;
        }
        String imagePath = args[0];
        String featureVector = args[1]; // 128-dimensional descriptor as a space-separated string
        FaceRecognition fr = new FaceRecognition();
        double distance = fr.compareFaceFeatures(imagePath, featureVector);
        System.out.println("The distance between the image and the feature vector is: " + distance);
    }
}
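A typical end-to-end flow enrolls one image and verifies another against it. A minimal sketch (the file names and the VerifyDemo class are illustrative; it assumes exactly one face per image, so getFaceFeatures returns a single descriptor line):

public class VerifyDemo {
    public static void main(String[] args) {
        FaceRecognition fr = new FaceRecognition();

        // Enroll: extract the descriptor of a reference image
        String enrolled = fr.getFaceFeatures("enroll.jpg").trim();

        // Verify: compare a probe image against the stored descriptor
        double distance = fr.compareFaceFeatures("probe.jpg", enrolled);

        // dlib's reference model is tuned so that a distance below 0.6
        // generally indicates the same person
        System.out.println(distance < 0.6 ? "same person" : "different person");
    }
}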
