諸神緘默不語 - Personal Technical Blog & Video Index
Table of Contents
- I. Preface
- II. Installation
- III. Basic usage
- 1. Making a GET request
- 2. Making a POST request
- IV. Common parameters for requests calls
- 1. URL
- 2. data
- 3. headers
- 4. params
- 5. timeout
- 6. File upload: files (plain-text file streams)
- 7. json
- V. Response attributes and methods
- 1. Attributes: headers, cookies, encoding
- 2. Error handling: raise_for_status()
- VI. Session objects (keeping login state)
- VII. Advanced usage
- 1. Uploading compressed files
- 2. Concurrency
- VIII. Common exceptions
- 1. requests.exceptions.JSONDecodeError
- 2. requests.exceptions.Timeout
- 3. requests.exceptions.ProxyError: HTTPSConnectionPool
- IX. Worked example: scraping Douban Movie Top 250
- Other references consulted for this post
I. Preface
When doing network programming or web scraping, we often need to send HTTP requests to a page or server to fetch data. For this, the requests package is one of the most popular and easiest-to-use Python libraries.
Compared with the built-in urllib module, requests offers a far friendlier API, is much easier to pick up, and has effectively become the de facto standard for HTTP requests in Python.
This post covers the basics of requests, some advanced usage, and common pitfalls, with runnable code samples throughout, so you can get up to speed quickly.
https://httpbin.org/ is a simple site for simulating all kinds of HTTP requests; many of the code samples below use its endpoints.
Because the site is hosted overseas, access can be unreliable from some networks. You can work around this by running httpbin locally, following the official instructions or this post: 五、接口測試 — Httpbin介紹(請求調試工具) - 知乎
II. Installation
pip install requests
III. Basic usage
For the difference between GET and POST requests, see my other post: Web應用中的GET與POST請求詳解
1. Making a GET request
import requests

response = requests.get('https://httpbin.org/get')
print(response.status_code)  # status code
print(response.text)         # response body as a string
print(response.json())       # if the body is JSON, parse it into a dict
2. Making a POST request
payload = {'username': 'test', 'password': '123456'}
response = requests.post('https://httpbin.org/post', data=payload)
print(response.json())
IV. Common parameters for requests calls
1. URL
The first positional argument: the address to request.
2. data
The payload carried by the request.
If the value is a string or a byte stream, requests does not set a Content-Type header by default.
If the value is a dict, a list of tuples, or a list, Content-Type defaults to application/x-www-form-urlencoded, i.e. HTML-form-style key-value pairs. (Content-Type is covered in detail in the headers section below.)
import requests
import json

payload = {"key1": "value1", "key2": "value2"}

# String payload (no Content-Type is set)
r = requests.post("https://httpbin.org/post", data="a random sentence")
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))

# JSON-formatted string payload (still no Content-Type)
r = requests.post("https://httpbin.org/post", data=json.dumps(payload))
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))

# JSON-formatted string payload with an explicit JSON Content-Type
r = requests.post(
    "https://httpbin.org/post",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))

# Dictionary payload (form-encoded)
r = requests.post("https://httpbin.org/post", data=payload)
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))

# List-of-tuples payload (form-encoded)
payload_tuples = [("key1", "value1"), ("key2", "value2")]
r = requests.post("https://httpbin.org/post", data=payload_tuples)
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))

# Bytes payload (no Content-Type is set)
payload_bytes = "key1=value1&key2=value2".encode("utf-8")
r = requests.post("https://httpbin.org/post", data=payload_bytes)
print(r.json())
print(r.json()["headers"].get("Content-Type", "None"))
3. headers
Request headers typically carry the Content-Type, client information (device, encoding, User-Agent, etc.), authentication credentials, timestamps, and so on.
headers = {'User-Agent': 'MyUserAgent/1.0'}
response = requests.get('https://httpbin.org/headers', headers=headers)
print(response.json())
Common Content-Type values (the original figure, from reference 1, is summarized as a list):
- text/plain, text/html: plain text and HTML
- application/json: JSON data
- application/x-www-form-urlencoded: HTML-form key-value pairs
- multipart/form-data: form data that can include file uploads
- application/octet-stream: arbitrary binary data
4. params
For GET requests, this has the same effect as appending ?k=v directly to the URL.
params = {'q': 'python'}
response = requests.get('https://httpbin.org/get', params=params)
print(response.url)  # the full URL actually requested
Output: https://httpbin.org/get?q=python
5. timeout
response = requests.get('https://httpbin.org/delay/3', timeout=2)
If no response arrives within 2 seconds, a requests.exceptions.Timeout exception is raised.
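To keep a timeout from crashing your program, wrap the call in try/except. A minimal sketch (the wrapper function and its fallback behavior are my own, for illustration):

```python
import requests

def fetch_with_timeout(url, timeout=2.0):
    """Return the response body, or None if the server does not reply in time."""
    try:
        return requests.get(url, timeout=timeout).text
    except requests.exceptions.Timeout:
        return None
```

Note that timeout also accepts a (connect, read) tuple, e.g. timeout=(3.05, 27), to set the connection and read timeouts separately.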
6. File upload: files (plain-text file streams)
with open('test.txt', 'rb') as f:
    files = {'file': f}
    response = requests.post('https://httpbin.org/post', files=files)
print(response.text)
↑ Note that although the files parameter can indeed take a raw file stream like this, I rarely see it done this way in practice.
Plain text usually isn't sent through files at all; it just goes straight into data.
For non-text file streams (binary bytes), the pattern I see most often is to base64-encode the bytes and put the resulting string into data. For base64 code, see my other post: 深入理解 Python 的 base64 模塊
(Strictly speaking, the files parameter sends the raw bytes as multipart/form-data rather than base64, but the base64-in-data pattern stays popular because the encoded string travels safely inside JSON and form bodies.)
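The base64-in-data pattern described above can be sketched like this (the payload field names here are made up for illustration; the receiving side must agree on them):

```python
import base64
import json

def encode_file_payload(raw_bytes, filename):
    """Wrap binary content in a JSON-safe dict by base64-encoding it."""
    return {
        "filename": filename,
        "content_b64": base64.b64encode(raw_bytes).decode("ascii"),
    }

def decode_file_payload(payload):
    """Recover the original bytes on the receiving side."""
    return base64.b64decode(payload["content_b64"])

payload = encode_file_payload(b"\x89PNG\r\n", "image.png")
body = json.dumps(payload)  # this string is safe to send via the data parameter
assert decode_file_payload(json.loads(body)) == b"\x89PNG\r\n"
```

The round trip works because base64 output is plain ASCII, so arbitrary bytes survive any text-based transport.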
7. json
Passing a JSON-serializable object (a dict in Python 3) via the json parameter is equivalent to serializing it yourself, passing the string via data, and explicitly setting Content-Type to application/json.
payload = {'id': 1, 'name': 'chatgpt'}
response = requests.post('https://httpbin.org/post', json=payload)
print(response.json())
The request above is equivalent to this one:
import json

response = requests.post(
    "https://httpbin.org/post",
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
print(response.json())
For comparison, here are the other two parameter styles (note that in the first, the echoed data and json fields at least still match, whereas in the second the object ends up under form, because it is parsed as form data):
response = requests.post("https://httpbin.org/post", data=json.dumps(payload))
print(response.json())

response = requests.post("https://httpbin.org/post", data=payload)
print(response.json())
V. Response attributes and methods
1. Attributes: headers, cookies, encoding
r = requests.get('https://httpbin.org/get')
print(r.headers)
print(r.cookies)
print(r.encoding)
2. Error handling: raise_for_status()
Raises requests.exceptions.HTTPError if the status code indicates a client or server error (4xx or 5xx).
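You can see the behavior without touching the network by building a bare Response object by hand (a contrived sketch; in real code you would simply call response.raise_for_status() on the result of requests.get):

```python
import requests

def status_check(status_code):
    """Report whether raise_for_status() would raise for a given status code."""
    r = requests.models.Response()
    r.status_code = status_code
    try:
        r.raise_for_status()
        return "ok"
    except requests.exceptions.HTTPError:
        return "http error"

print(status_check(200))  # ok
print(status_check(404))  # http error
print(status_check(503))  # http error
```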
VI. Session objects (keeping login state)
requests.Session() keeps state (most importantly cookies) across requests, which makes it suitable for sites that require login.
s = requests.Session()
s.get('https://httpbin.org/cookies/set', params={'cookie': 'value'})
response = s.get('https://httpbin.org/cookies')
print(response.text)
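Beyond cookies, a Session also carries defaults merged into every request it makes, and reuses the underlying TCP connection pool. A small sketch (the User-Agent string and query parameter are arbitrary examples):

```python
import requests

s = requests.Session()
# Defaults applied to every request made through this session
s.headers.update({"User-Agent": "MyUserAgent/1.0"})
s.params = {"lang": "en"}

# Every s.get()/s.post() now carries these defaults, plus any cookies
# set by earlier responses in the same session.
print(s.headers["User-Agent"])  # MyUserAgent/1.0
```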
VII. Advanced usage
1. Uploading compressed files
- gzip version
import gzip
import json

import requests

data = json.dumps({'key': 'value'}).encode('utf-8')
compressed_data = gzip.compress(data)

headers = {'Content-Encoding': 'gzip'}
response = requests.post('https://httpbin.dev/api', data=compressed_data, headers=headers)
response.raise_for_status()
print("Gzip Compressed Request Status:", response.status_code)
- brotli version
import json

import brotli
import requests

data = json.dumps({'key': 'value'}).encode('utf-8')
compressed_data = brotli.compress(data)

headers = {'Content-Encoding': 'br'}
response = requests.post('https://httpbin.dev/api', data=compressed_data, headers=headers)
response.raise_for_status()
print("Brotli Compressed Request Status:", response.status_code)
2. Concurrency
- httpx version (from Concurrency vs Parallelism)
import asyncio
import time

import httpx

# Asynchronous function to fetch the content of a URL
async def fetch(url):
    async with httpx.AsyncClient(timeout=10.0) as client:
        response = await client.get(url)
        return response.text

# Concurrently fetch multiple URLs using asyncio.gather
async def concurrent_fetch(urls):
    tasks = [fetch(url) for url in urls]
    return await asyncio.gather(*tasks)

# Synchronous version to demonstrate the performance difference
def sync_fetch(urls):
    results = []
    for url in urls:
        response = httpx.get(url)
        results.append(response.text)
    return results

def run_concurrent():
    urls = ["http://httpbin.org/delay/2"] * 100  # use the same delay for simplicity
    start_time = time.time()
    # Run the fetch requests concurrently
    asyncio.run(concurrent_fetch(urls))
    duration = time.time() - start_time
    print(f"Concurrent fetch completed in {duration:.2f} seconds")

def run_sync():
    urls = ["http://httpbin.org/delay/2"] * 100  # use the same delay for simplicity
    start_time = time.time()
    # Run the fetch requests synchronously
    sync_fetch(urls)
    duration = time.time() - start_time
    print(f"Synchronous fetch completed in {duration:.2f} seconds")

if __name__ == "__main__":
    print("Running concurrent version:")
    run_concurrent()  # Concurrent fetch completed in 2.05 seconds
    print("Running synchronous version:")
    run_sync()        # Synchronous fetch completed in 200.15 seconds
- threading version
import threading

import requests

def post_data(data):
    requests.post('https://httpbin.dev/api', json=data)

# Sample data list
data_list = [{'name': 'User1'}, {'name': 'User2'}]

threads = []
for data in data_list:
    thread = threading.Thread(target=post_data, args=(data,))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
For more on concurrency, see my other post: Python中的并發與并行
VIII. Common exceptions
1. requests.exceptions.JSONDecodeError
If the response body is not JSON but you still call response.json(), a requests.exceptions.JSONDecodeError is raised. The full error output looks something like this:
Traceback (most recent call last):
  File "myenv_path\Lib\site-packages\requests\models.py", line 974, in json
    return complexjson.loads(self.text, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tryrequests1.py", line 6, in <module>
    print(response.json())  # if the body is JSON, parse it into a dict
          ^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\requests\models.py", line 978, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
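A defensive wrapper avoids the crash: try response.json() and fall back when the body is not JSON. A sketch assuming requests ≥ 2.27 (where JSONDecodeError lives in requests.exceptions); the hand-built Response objects below are only for an offline demo:

```python
import requests

def safe_json(response):
    """Return the parsed JSON body, or None if the body is not valid JSON."""
    try:
        return response.json()
    except requests.exceptions.JSONDecodeError:
        return None

# Offline demo with synthetic Response objects:
ok = requests.models.Response()
ok._content = b'{"ok": true}'
print(safe_json(ok))   # {'ok': True}

bad = requests.models.Response()
bad._content = b'<html>not json</html>'
print(safe_json(bad))  # None
```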
2. requests.exceptions.Timeout
Raised when waiting for the response takes longer than the duration set by the timeout parameter.
3. requests.exceptions.ProxyError: HTTPSConnectionPool
Raised when the URL could not be reached at all.
Network instability is often transient, so simply retrying a few times may be enough. For retry strategies, see my other post: Python3:在訪問不可靠服務時的重試策略(持續更新ing…)
A typical full error output caused by a transient network failure:
Traceback (most recent call last):
  File "myenv_path\Lib\site-packages\urllib3\connectionpool.py", line 789, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\urllib3\connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
               ^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\urllib3\connection.py", line 507, in getresponse
    httplib_response = super().getresponse()
                       ^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\http\client.py", line 1374, in getresponse
    response.begin()
  File "myenv_path\Lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
                              ^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\http\client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response

The above exception was the direct cause of the following exception:

urllib3.exceptions.ProxyError: ('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "myenv_path\Lib\site-packages\requests\adapters.py", line 667, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\urllib3\connectionpool.py", line 843, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\urllib3\util\retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /cookies (Caused by ProxyError('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "tryrequests1.py", line 5, in <module>
    response = s.get('https://httpbin.org/cookies')
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\requests\sessions.py", line 602, in get
    return self.request("GET", url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "myenv_path\Lib\site-packages\requests\adapters.py", line 694, in send
    raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='httpbin.org', port=443): Max retries exceeded with url: /cookies (Caused by ProxyError('Unable to connect to proxy', RemoteDisconnected('Remote end closed connection without response')))
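Besides manual retry loops, a common approach is to mount urllib3's Retry policy on a Session via HTTPAdapter, so transient failures are retried transparently. A sketch assuming urllib3 ≥ 1.26 (earlier versions spell allowed_methods as method_whitelist); the retry counts and status list are illustrative:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_retrying_session(retries=3, backoff=0.5):
    """Build a Session that transparently retries transient failures."""
    retry = Retry(
        total=retries,
        backoff_factor=backoff,            # exponential backoff between attempts
        status_forcelist=[502, 503, 504],  # also retry these HTTP statuses
        allowed_methods=["GET", "POST"],
    )
    adapter = HTTPAdapter(max_retries=retry)
    s = requests.Session()
    s.mount("https://", adapter)
    s.mount("http://", adapter)
    return s

s = make_retrying_session()
# s.get(...) / s.post(...) now retry automatically on connection resets.
```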
IX. Worked example: scraping Douban Movie Top 250
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0'}

for start in range(0, 250, 25):
    url = f'https://movie.douban.com/top250?start={start}'
    r = requests.get(url, headers=headers)
    soup = BeautifulSoup(r.text, 'html.parser')
    titles = soup.find_all('span', class_='title')
    for title in titles:
        print(title.text)
Other references consulted for this post
- What is the difference between the ‘json’ and ‘data’ parameters in Requests? | WebScraping.AI
- python requests.post() 請求中 json 和 data 的區別 - 小嘉欣 - 博客園
- Python requests.post()方法中data和json參數的使用_requests.post中data和json是否可以同時設置-CSDN博客