Non-security surveys related to large language models
LLM evolution and taxonomy
- “A survey on evaluation of large language models,” arXiv preprint arXiv:2307.03109, 2023.
- “A survey of large language models,” arXiv preprint arXiv:2303.18223, 2023.
- “A survey on llm-generated text detection: Necessity, methods, and future directions,” arXiv preprint arXiv:2310.14724, 2023.
- “A survey on large language models: Applications, challenges, limitations, and practical usage,” TechRxiv, 2023.
- “Unveiling security, privacy, and ethical concerns of chatgpt,” 2023.
- “Eight things to know about large language models,” arXiv preprint arXiv:2304.00612, 2023.
LLMs in software engineering
- “Large language models for software engineering: Survey and open problems,” 2023.
- “Large language models for software engineering: A systematic literature review,” arXiv preprint arXiv:2308.10620, 2023.
Medicine
- “Large language models in medicine,” Nature medicine, vol. 29, no. 8, pp. 1930–1940, 2023.
- “The future landscape of large language models in medicine,” Communications Medicine, vol. 3, no. 1, p. 141, 2023.
Security domain
LLMs in cybersecurity
- “A more insecure ecosystem? chatgpt’s influence on cybersecurity,” ChatGPT’s Influence on Cybersecurity (April 30, 2023), 2023.
- “Chatgpt for cybersecurity: practical applications, challenges, and future directions,” Cluster Computing, vol. 26, no. 6, pp. 3421–3436, 2023.
- “What effects do large language models have on cybersecurity,” 2023.
- “Synergizing generative ai and cybersecurity: Roles of generative ai entities, companies, agencies, and government in enhancing cybersecurity,” 2023. LLMs help security analysts develop security solutions against cyber threats.
Highlighting threats and attacks against LLMs
The main focus here is the security-application domain, with in-depth study of how LLMs can be leveraged to launch cyberattacks.
- “From chatgpt to threatgpt: Impact of generative ai in cybersecurity and privacy,” IEEE Access, 2023.
- “A security risk taxonomy for large language models,” arXiv preprint arXiv:2311.11415, 2023.
- “Survey of vulnerabilities in large language models revealed by adversarial attacks,” 2023.
- “Are chatgpt and deepfake algorithms endangering the cybersecurity industry? a review,” International Journal of Engineering and Applied Sciences, vol. 10, no. 1, 2023.
- “Beyond the safeguards: Exploring the security risks of chatgpt,” 2023.
- “From ChatGPT to HackGPT: Meeting the cybersecurity threat of generative AI,” MIT Sloan Management Review, 2023.
- “Adversarial attacks and defenses in large language models: Old and new threats,” 2023.
- “Do chatgpt and other ai chatbots pose a cybersecurity risk?: An exploratory study,” International Journal of Security and Privacy in Pervasive Computing (IJSPPC), vol. 15, no. 1, pp. 1–11, 2023.
- “Unveiling the dark side of chatgpt: Exploring cyberattacks and enhancing user awareness,” 2023.
Vulnerabilities exploited by cybercriminals, with a focus on LLM-related risks
- “Chatbots to chatgpt in a cybersecurity space: Evolution, vulnerabilities, attacks, challenges, and future recommendations,” 2023.
- “Use of llms for illicit purposes: Threats, prevention measures, and vulnerabilities,” 2023.
LLM privacy issues
- “Privacy-preserving prompt tuning for large language model services,” arXiv preprint arXiv:2305.06212, 2023. Analyzes LLM privacy issues, categorizes them by adversary capability, and discusses defense strategies.
- “Privacy and data protection in chatgpt and other ai chatbots: Strategies for securing user information,” Available at SSRN 4454761, 2023. Discusses applying established privacy-enhancing technologies to protect LLM privacy.
- “Identifying and mitigating privacy risks stemming from language models: A survey,” 2023. Discusses the privacy risks of LLMs.
- “A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly.” Covers both privacy and security issues.