Practical Guide | Quickly Build an AI Front-End Demo App with Streamlit

----------------------- 🎈 Related API quick links 🎈--------------------------

🚀 Gradio: Practical Guide | Everything You Want to Know About Quickly Building AI Model Interfaces with Gradio - CSDN Blog

🚀 Streamlit: Practical Guide | Quickly Build an AI Front-End Demo App with Streamlit - CSDN Blog

🚀 Flask: Practical Guide | Learn to Write Flask APIs for AI in One Article (Templates Included) - CSDN Blog

Streamlit is a Python framework for machine learning and data visualization that can turn a few lines of code into a polished online app. Compared with Gradio, it can present a wider range of functionality.

Contents

1. Installing Streamlit

2. Streamlit Syntax

2.1. Basic Syntax

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

2.2.2. Progress and Status

2.3. Advanced Syntax

2.3.1. @st.cache_data

2.3.2. st.cache_resource

3. Building a Simple App

Reading data and plotting in real time

4. Streamlit Examples for AI Deep Learning Projects

4.1. Example 1: Text Generation

4.1.1. Chatting with ChatGLM

4.1.2. Chatting with OpenAI

4.2. Image Tasks

4.2.1. Image Classification

4.2.2. Image Generation

4.3. Speech Tasks

4.3.1. Speech Synthesis

4.3.2. Speech-to-Text


Official documentation: Get started - Streamlit Docs

1. Installing Streamlit

# Install
pip install streamlit
pip install streamlit-chat

# Test the installation
streamlit hello

This opens a gallery of built-in demo apps in the browser.

2. Streamlit Syntax

2.1. Basic Syntax

import streamlit as st

The most commonly used elements are listed below; a minimal sketch combining them follows the list.

  • Title st.title(): st.title("Title")
  • Write st.write(): st.write("Hello world")
  • Text st.text(): single-line text
  • Multi-line text area st.text_area(): st.text_area("Text box", value='', key=None)
  • Slider st.slider(): st.slider("Slider")
  • Button st.button(): st.button("Button")
  • Text input st.text_input(): st.text_input("Ask the user for input")
  • Radio buttons st.radio()
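A minimal sketch that wires these widgets together (the labels and layout are illustrative, not from the original post); save it as demo.py and launch it with streamlit run demo.py:

import streamlit as st

st.title("Widget demo")
st.write("Hello world")

# Collect a few inputs with the basic widgets
name = st.text_input("What is your name?")
bio = st.text_area("Tell us about yourself", value="", key="bio")
age = st.slider("Age", 0, 100, 25)
color = st.radio("Favorite color", ["red", "green", "blue"])

# The script reruns on every interaction; the button returns True when clicked
if st.button("Submit"):
    st.write(f"{name}, age {age}, likes {color}")
    st.text(bio)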

2.2. Intermediate Syntax

2.2.1. Images, Audio, and Video

All three accept array values, raw bytes, opened files, or file paths; a short sketch follows the list.

  • st.image()
  • st.audio()
  • st.video()
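A short sketch of the different input types (the file names are placeholders, assuming such files sit next to the script):

import numpy as np
import streamlit as st

# From a file path (placeholder file names)
st.image("cat.png", caption="From a path")
st.audio("speech.wav", format="audio/wav")
st.video("clip.mp4")

# From raw bytes
with open("cat.png", "rb") as f:
    st.image(f.read())

# From a numpy array (a random RGB image)
st.image(np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8))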

2.2.2.進程提示

  • st.progress() 顯示進度
  • st.spinner()顯示執行狀態
  • st.error()顯示錯誤信息
  • st.warning - 顯示警告信息
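A minimal sketch combining the four status elements (the loop and messages are illustrative):

import time

import streamlit as st

progress = st.progress(0)  # progress bar, values 0-100
with st.spinner("Working..."):  # spinner shown while the block runs
    for pct in range(100):
        time.sleep(0.01)
        progress.progress(pct + 1)  # update the bar in place

st.warning("This is a warning message")
st.error("This is an error message")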

2.3. Advanced Syntax

2.3.1. @st.cache_data

When you mark a function with Streamlit's caching decorator, Streamlit checks two things each time the function is called:

  • the input parameters of the call
  • the code inside the function

If both are unchanged since a previous call, Streamlit skips re-running the function and returns the cached result, as sketched below.
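A minimal sketch, assuming a local CSV file named data.csv (the file name is a placeholder):

import pandas as pd
import streamlit as st

@st.cache_data
def load_data(path: str, nrows: int) -> pd.DataFrame:
    # Re-runs only when (path, nrows) or the function body changes;
    # otherwise the cached DataFrame is returned.
    return pd.read_csv(path, nrows=nrows)

df = load_data("data.csv", 1000)   # first call: reads the file
df2 = load_data("data.csv", 1000)  # same arguments: served from cache
st.write(df.head())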

2.3.2. st.cache_resource

A decorator for caching functions that return global resources (for example, database connections or ML models).

Cached objects are shared across all users, sessions, and reruns. They must be thread-safe, because they can be accessed from multiple threads at the same time. If thread safety is a concern, consider using st.session_state to store per-session resources instead.

By default, all parameters of a cache_resource function must be hashable. Any parameter whose name starts with _ is not hashed. A sketch follows.
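A minimal sketch that caches a Hugging Face pipeline as a global resource (the model name and the _device parameter are illustrative assumptions, not from the original post):

import streamlit as st
from transformers import pipeline

@st.cache_resource
def get_pipeline(model_name: str, _device: int = -1):
    # Loaded once and shared across all users, sessions, and reruns.
    # `_device` starts with "_", so it is excluded from hashing.
    return pipeline("sentiment-analysis", model=model_name, device=_device)

clf = get_pipeline("distilbert-base-uncased-finetuned-sst-2-english")
text = st.text_input("Text to classify")
if text:
    st.write(clf(text))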

3. Building a Simple App

Reading data and plotting in real time

import streamlit as st
import pandas as pd
import numpy as np

st.title('Uber pickups in NYC')

DATE_COLUMN = 'date/time'
DATA_URL = ('https://s3-us-west-2.amazonaws.com/'
            'streamlit-demo-data/uber-raw-data-sep14.csv.gz')

# Cache the download
@st.cache_data
def load_data(nrows):
    # Read the CSV file
    data = pd.read_csv(DATA_URL, nrows=nrows)
    # Convert column names to lowercase
    lowercase = lambda x: str(x).lower()
    data.rename(lowercase, axis='columns', inplace=True)
    # Parse the date column into pandas datetimes
    data[DATE_COLUMN] = pd.to_datetime(data[DATE_COLUMN])
    return data

# Show a plain-text loading message
data_load_state = st.text('Loading data...')
# Load 10,000 rows of data
data = load_data(10000)
# Update the message when done
data_load_state.text("Done! (using st.cache_data)")

# Inspect the raw data
if st.checkbox('Show raw data'):
    st.subheader('Raw data')
    st.write(data)

# Draw a histogram
# Add a subheader
st.subheader('Number of pickups by hour')
# Use numpy to bin the pickups by hour
hist_values = np.histogram(data[DATE_COLUMN].dt.hour, bins=24, range=(0, 24))[0]
# Draw the histogram with st.bar_chart()
st.bar_chart(hist_values)

# Filter results with a slider
hour_to_filter = st.slider('hour', 0, 23, 17)
# Updates in real time as the slider moves
filtered_data = data[data[DATE_COLUMN].dt.hour == hour_to_filter]
# Subheader for the map
st.subheader('Map of all pickups at %s:00' % hour_to_filter)
# Plot the filtered data with st.map()
st.map(filtered_data)

Run it:

streamlit run demo.py

4. Streamlit Examples for AI Deep Learning Projects

4.1. Example 1: Text Generation

4.1.1. Chatting with ChatGLM

from transformers import AutoModel, AutoTokenizer
import streamlit as st
from streamlit_chat import message

st.set_page_config(
    page_title="ChatGLM-6b Demo",
    page_icon=":robot:"
)

@st.cache_resource
def get_model():
    tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
    model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
    model = model.eval()
    return tokenizer, model

MAX_TURNS = 20
MAX_BOXES = MAX_TURNS * 2

def predict(input, max_length, top_p, temperature, history=None):
    tokenizer, model = get_model()
    if history is None:
        history = []

    with container:
        if len(history) > 0:
            if len(history) > MAX_BOXES:
                history = history[-MAX_TURNS:]
            for i, (query, response) in enumerate(history):
                message(query, avatar_style="big-smile", key=str(i) + "_user")
                message(response, avatar_style="bottts", key=str(i))

        message(input, avatar_style="big-smile", key=str(len(history)) + "_user")
        st.write("AI is responding:")
        with st.empty():
            for response, history in model.stream_chat(tokenizer, input, history,
                                                       max_length=max_length, top_p=top_p,
                                                       temperature=temperature):
                query, response = history[-1]
                st.write(response)

    return history

container = st.container()

# create a prompt text for the text generation
prompt_text = st.text_area(label="User input",
                           height=100,
                           placeholder="Type your prompt here")

max_length = st.sidebar.slider('max_length', 0, 4096, 2048, step=1)
top_p = st.sidebar.slider('top_p', 0.0, 1.0, 0.6, step=0.01)
temperature = st.sidebar.slider('temperature', 0.0, 1.0, 0.95, step=0.01)

if 'state' not in st.session_state:
    st.session_state['state'] = []

if st.button("Send", key="predict"):
    with st.spinner("The AI is thinking, please wait..."):
        # text generation
        st.session_state["state"] = predict(prompt_text, max_length, top_p, temperature, st.session_state["state"])

4.1.2. Chatting with OpenAI

from openai import OpenAI
import streamlit as st

with st.sidebar:
    openai_api_key = st.text_input("OpenAI API Key", key="chatbot_api_key", type="password")
    "[Get an OpenAI API key](https://platform.openai.com/account/api-keys)"
    "[View the source code](https://github.com/streamlit/llm-examples/blob/main/Chatbot.py)"
    "[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/streamlit/llm-examples?quickstart=1)"

st.title("💬 Chatbot")
st.caption("🚀 A streamlit chatbot powered by OpenAI LLM")
if "messages" not in st.session_state:
    st.session_state["messages"] = [{"role": "assistant", "content": "How can I help you?"}]

for msg in st.session_state.messages:
    st.chat_message(msg["role"]).write(msg["content"])

if prompt := st.chat_input():
    if not openai_api_key:
        st.info("Please add your OpenAI API key to continue.")
        st.stop()

    client = OpenAI(api_key=openai_api_key)
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user").write(prompt)
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=st.session_state.messages)
    msg = response.choices[0].message.content
    st.session_state.messages.append({"role": "assistant", "content": msg})
    st.chat_message("assistant").write(msg)

4.2. Image Tasks

4.2.1. Image Classification

# Imports added for completeness; the original snippet omitted them.
# `model` (a user-defined VGG19 builder) and `classes` (the label list)
# are assumed to be defined elsewhere in the source project.
import base64

import cv2
import numpy as np
import streamlit as st
from PIL import Image
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.applications.vgg19 import preprocess_input

st.markdown('<h1 style="color:black;">Vgg 19 Image classification model</h1>', unsafe_allow_html=True)
st.markdown('<h2 style="color:gray;">The image classification model classifies images into the following categories:</h2>', unsafe_allow_html=True)
st.markdown('<h3 style="color:gray;"> street, buildings, forest, sea, mountain, glacier</h3>', unsafe_allow_html=True)

# Background image for the Streamlit page
# (st.cache is deprecated in newer Streamlit; st.cache_data is the modern equivalent)
@st.cache(allow_output_mutation=True)
# Read a binary file and return it base64-encoded
def get_base64_of_bin_file(bin_file):
    with open(bin_file, 'rb') as f:
        data = f.read()
    return base64.b64encode(data).decode()

# Set the background image, colors, etc.
def set_png_as_page_bg(png_file):
    bin_str = get_base64_of_bin_file(png_file)
    page_bg_img = '''
    <style>
    .stApp {
        background-image: url("data:image/png;base64,%s");
        background-size: cover;
        background-repeat: no-repeat;
        background-attachment: scroll; /* doesn't work */
    }
    </style>
    ''' % bin_str
    st.markdown(page_bg_img, unsafe_allow_html=True)
    return

set_png_as_page_bg('/content/background.webp')

# Upload a png/jpg image
upload = st.file_uploader('Insert image for classification', type=['png', 'jpg'])
c1, c2 = st.columns(2)
if upload is not None:
    im = Image.open(upload)
    img = np.asarray(im)
    image = cv2.resize(img, (224, 224))
    img = preprocess_input(image)
    img = np.expand_dims(img, 0)
    c1.header('Input Image')
    c1.image(im)
    c1.write(img.shape)

    # Load the pre-trained model
    # input size
    input_shape = (224, 224, 3)
    # optimizer
    optim_1 = Adam(learning_rate=0.0001)
    # number of classes
    n_classes = 6
    # build the model (`model` is a builder function from the source project)
    vgg_model = model(input_shape, n_classes, optim_1, fine_tune=2)
    # load the weights
    vgg_model.load_weights('/content/drive/MyDrive/vgg/tune_model19.weights.best.hdf5')
    # predict
    vgg_preds = vgg_model.predict(img)
    vgg_pred_classes = np.argmax(vgg_preds, axis=1)
    c2.header('Output')
    c2.subheader('Predicted class :')
    c2.write(classes[vgg_pred_classes[0]])

4.2.2. Image Generation

import os

import openai
import streamlit as st
import torch
from diffusers import StableDiffusionPipeline
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.getenv("OPENAI_API_KEY")

# Function to generate AI-based images using OpenAI DALL-E
# (this uses the pre-1.0 `openai` package API; openai>=1.0 replaced openai.Image.create)
def generate_images_using_openai(text):
    response = openai.Image.create(prompt=text, n=1, size="512x512")
    image_url = response['data'][0]['url']
    return image_url

# Function to generate AI-based images using Huggingface Diffusers
def generate_images_using_huggingface_diffusers(text):
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
    pipe = pipe.to("cuda")
    prompt = text
    image = pipe(prompt).images[0]
    return image

# Streamlit code
choice = st.sidebar.selectbox("Select your choice", ["Home", "DALL-E", "Huggingface Diffusers"])

if choice == "Home":
    st.title("AI Image Generation App")
    with st.expander("About the App"):
        st.write("This is a simple image generation app that uses AI to generate images from a text prompt.")
elif choice == "DALL-E":
    st.subheader("Image generation using Open AI's DALL-E")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt is not None:
        if st.button("Generate Image"):
            image_url = generate_images_using_openai(input_prompt)
            st.image(image_url, caption="Generated by DALL-E")
elif choice == "Huggingface Diffusers":
    st.subheader("Image generation using Huggingface Diffusers")
    input_prompt = st.text_input("Enter your text prompt")
    if input_prompt is not None:
        if st.button("Generate Image"):
            image_output = generate_images_using_huggingface_diffusers(input_prompt)
            st.info("Generating image.....")
            st.success("Image Generated Successfully")
            st.image(image_output, caption="Generated by Huggingface Diffusers")

4.3. Speech Tasks

4.3.1. Speech Synthesis

import os

import streamlit as st
import torch
# Using coqui-tts: pip install tts
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
# Model selection
model_name = 'tts_models/en/jenny/jenny'
tts = TTS(model_name).to(device)

st.title('Coqui TTS')

# Input text
text_to_speak = st.text_area('Entire article text here:', '')

# Listen button
if st.button('Listen'):
    if text_to_speak:
        # temp path needed for audio to listen to
        temp_audio_path = './temp_audio.wav'
        # Synthesize speech to a file with tts_to_file
        tts.tts_to_file(text=text_to_speak, file_path=temp_audio_path)
        # Play the audio
        st.audio(temp_audio_path, format='audio/wav')
        os.unlink(temp_audio_path)


4.3.2. Speech-to-Text

import logging
import logging.handlers
import queue
import threading
import time
import urllib.request
import os
from collections import deque
from pathlib import Path
from typing import List

import av
import numpy as np
import pydub
import streamlit as st
from twilio.rest import Client

from streamlit_webrtc import WebRtcMode, webrtc_streamer

HERE = Path(__file__).parent

logger = logging.getLogger(__name__)


# This code is based on https://github.com/streamlit/demo-self-driving/blob/230245391f2dda0cb464008195a470751c01770b/streamlit_app.py#L48  # noqa: E501
def download_file(url, download_to: Path, expected_size=None):
    # Don't download the file twice.
    # (If possible, verify the download using the file length.)
    if download_to.exists():
        if expected_size:
            if download_to.stat().st_size == expected_size:
                return
        else:
            st.info(f"{url} is already downloaded.")
            if not st.button("Download again?"):
                return

    download_to.parent.mkdir(parents=True, exist_ok=True)

    # These are handles to two visual elements to animate.
    weights_warning, progress_bar = None, None
    try:
        weights_warning = st.warning("Downloading %s..." % url)
        progress_bar = st.progress(0)
        with open(download_to, "wb") as output_file:
            with urllib.request.urlopen(url) as response:
                length = int(response.info()["Content-Length"])
                counter = 0.0
                MEGABYTES = 2.0 ** 20.0
                while True:
                    data = response.read(8192)
                    if not data:
                        break
                    counter += len(data)
                    output_file.write(data)

                    # We perform animation by overwriting the elements.
                    weights_warning.warning(
                        "Downloading %s... (%6.2f/%6.2f MB)"
                        % (url, counter / MEGABYTES, length / MEGABYTES)
                    )
                    progress_bar.progress(min(counter / length, 1.0))
    # Finally, we remove these visual elements by calling .empty().
    finally:
        if weights_warning is not None:
            weights_warning.empty()
        if progress_bar is not None:
            progress_bar.empty()


# This code is based on https://github.com/whitphx/streamlit-webrtc/blob/c1fe3c783c9e8042ce0c95d789e833233fd82e74/sample_utils/turn.py
@st.cache_data  # type: ignore
def get_ice_servers():
    """Use Twilio's TURN server because Streamlit Community Cloud has changed
    its infrastructure and WebRTC connection cannot be established without TURN server now.  # noqa: E501

    We considered Open Relay Project (https://www.metered.ca/tools/openrelay/) too,
    but it is not stable and hardly works as some people reported like
    https://github.com/aiortc/aiortc/issues/832#issuecomment-1482420656  # noqa: E501

    See https://github.com/whitphx/streamlit-webrtc/issues/1213
    """
    # Ref: https://www.twilio.com/docs/stun-turn/api
    try:
        account_sid = os.environ["TWILIO_ACCOUNT_SID"]
        auth_token = os.environ["TWILIO_AUTH_TOKEN"]
    except KeyError:
        logger.warning(
            "Twilio credentials are not set. Fallback to a free STUN server from Google."  # noqa: E501
        )
        return [{"urls": ["stun:stun.l.google.com:19302"]}]

    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    return token.ice_servers


def main():
    st.header("Real Time Speech-to-Text")
    st.markdown(
        """
This demo app is using [DeepSpeech](https://github.com/mozilla/DeepSpeech),
an open speech-to-text engine.

A pre-trained model released with
[v0.9.3](https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3),
trained on American English is being served.
"""
    )

    # https://github.com/mozilla/DeepSpeech/releases/tag/v0.9.3
    MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.pbmm"  # noqa
    LANG_MODEL_URL = "https://github.com/mozilla/DeepSpeech/releases/download/v0.9.3/deepspeech-0.9.3-models.scorer"  # noqa
    MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.pbmm"
    LANG_MODEL_LOCAL_PATH = HERE / "models/deepspeech-0.9.3-models.scorer"

    download_file(MODEL_URL, MODEL_LOCAL_PATH, expected_size=188915987)
    download_file(LANG_MODEL_URL, LANG_MODEL_LOCAL_PATH, expected_size=953363776)

    lm_alpha = 0.931289039105002
    lm_beta = 1.1834137581510284
    beam = 100

    sound_only_page = "Sound only (sendonly)"
    with_video_page = "With video (sendrecv)"
    app_mode = st.selectbox("Choose the app mode", [sound_only_page, with_video_page])

    if app_mode == sound_only_page:
        app_sst(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )
    elif app_mode == with_video_page:
        app_sst_with_video(
            str(MODEL_LOCAL_PATH), str(LANG_MODEL_LOCAL_PATH), lm_alpha, lm_beta, beam
        )


def app_sst(model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int):
    webrtc_ctx = webrtc_streamer(
        key="speech-to-text",
        mode=WebRtcMode.SENDONLY,
        audio_receiver_size=1024,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": False, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.audio_receiver:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()
            try:
                audio_frames = webrtc_ctx.audio_receiver.get_frames(timeout=1)
            except queue.Empty:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("AudioReceiver is not set. Abort.")
            break


def app_sst_with_video(
    model_path: str, lm_path: str, lm_alpha: float, lm_beta: float, beam: int
):
    frames_deque_lock = threading.Lock()
    frames_deque: deque = deque([])

    async def queued_audio_frames_callback(
        frames: List[av.AudioFrame],
    ) -> av.AudioFrame:
        with frames_deque_lock:
            frames_deque.extend(frames)

        # Return empty frames to be silent.
        new_frames = []
        for frame in frames:
            input_array = frame.to_ndarray()
            new_frame = av.AudioFrame.from_ndarray(
                np.zeros(input_array.shape, dtype=input_array.dtype),
                layout=frame.layout.name,
            )
            new_frame.sample_rate = frame.sample_rate
            new_frames.append(new_frame)

        return new_frames

    webrtc_ctx = webrtc_streamer(
        key="speech-to-text-w-video",
        mode=WebRtcMode.SENDRECV,
        queued_audio_frames_callback=queued_audio_frames_callback,
        rtc_configuration={"iceServers": get_ice_servers()},
        media_stream_constraints={"video": True, "audio": True},
    )

    status_indicator = st.empty()

    if not webrtc_ctx.state.playing:
        return

    status_indicator.write("Loading...")
    text_output = st.empty()
    stream = None

    while True:
        if webrtc_ctx.state.playing:
            if stream is None:
                from deepspeech import Model

                model = Model(model_path)
                model.enableExternalScorer(lm_path)
                model.setScorerAlphaBeta(lm_alpha, lm_beta)
                model.setBeamWidth(beam)

                stream = model.createStream()

                status_indicator.write("Model loaded.")

            sound_chunk = pydub.AudioSegment.empty()

            audio_frames = []
            with frames_deque_lock:
                while len(frames_deque) > 0:
                    frame = frames_deque.popleft()
                    audio_frames.append(frame)

            if len(audio_frames) == 0:
                time.sleep(0.1)
                status_indicator.write("No frame arrived.")
                continue

            status_indicator.write("Running. Say something!")

            for audio_frame in audio_frames:
                sound = pydub.AudioSegment(
                    data=audio_frame.to_ndarray().tobytes(),
                    sample_width=audio_frame.format.bytes,
                    frame_rate=audio_frame.sample_rate,
                    channels=len(audio_frame.layout.channels),
                )
                sound_chunk += sound

            if len(sound_chunk) > 0:
                sound_chunk = sound_chunk.set_channels(1).set_frame_rate(
                    model.sampleRate()
                )
                buffer = np.array(sound_chunk.get_array_of_samples())
                stream.feedAudioContent(buffer)
                text = stream.intermediateDecode()
                text_output.markdown(f"**Text:** {text}")
        else:
            status_indicator.write("Stopped.")
            break


if __name__ == "__main__":
    import os

    DEBUG = os.environ.get("DEBUG", "false").lower() not in ["false", "no", "0"]

    logging.basicConfig(
        format="[%(asctime)s] %(levelname)7s from %(name)s in %(pathname)s:%(lineno)d: "
        "%(message)s",
        force=True,
    )

    logger.setLevel(level=logging.DEBUG if DEBUG else logging.INFO)

    st_webrtc_logger = logging.getLogger("streamlit_webrtc")
    st_webrtc_logger.setLevel(logging.DEBUG)

    fsevents_logger = logging.getLogger("fsevents")
    fsevents_logger.setLevel(logging.WARNING)

    main()

