While studying convolutional neural networks for deep learning, I needed a training dataset of cat and dog images, so I decided to crawl cat and dog pictures from Baidu Images.
While crawling dog pictures, I captured the traffic and found url1: "https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=&st=-1&fm=index&fr=&hs=0&xthttps=111110&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&word=", and extracted the dog image URLs from the page it returns. However, this page only yields 30 image links, which is nowhere near enough training data.
I kept analyzing the captured traffic and found that the subsequent images are loaded through url2: "https://image.baidu.com/search/acjson?tn=resultjson_com&word=%E7%8B%97%E7%8B%97&ie=utf-8&fp=result&fr=&ala=0&applid=7765865225436197871&pn=30&rn=30&nojc=0&gsm=1e&newReq=1". When pn and rn are both 30, the request corresponds to the second page of results; when they are both 60, it is the third page. Once this pattern is clear, a for loop can collect the URLs of many dog images (a small sketch of the rule follows).
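A quick illustration of that paging rule (a minimal sketch only; word, applid and the other query parameters from the captured URL are omitted here for brevity):

# Per the capture: pn and rn move in steps of 30, so pn=rn=30 is page 2,
# pn=rn=60 is page 3, and so on.
base = ("https://image.baidu.com/search/acjson?tn=resultjson_com"
        "&word=%E7%8B%97%E7%8B%97&ie=utf-8&fp=result&pn={pn}&rn={rn}")
for page in range(2, 5):
    pn = (page - 1) * 30
    print(base.format(pn=pn, rn=pn))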
So I switched the request from url1 to url2, but the request came back with b'{"antiFlag":1,"message":"Forbid spider access"}': my crawler had been blocked by Baidu's anti-scraping check. Yet when I ran the same url2 in Apifox, it returned the expected response.
Since the API returned the correct data through Apifox, it should be reachable from code as well. Searching for the "Forbid spider access" message online, the common suggestion is to send a richer set of headers. After copying the relevant header fields into my requests call, the "Forbid spider access" error went away, but the response body was a block of unreadable characters.
I guessed it was an encoding issue. The captured response had content-encoding: br, so I tried commenting out 'Accept-Encoding': 'gzip, deflate, br, zstd'. As expected, with that header removed the response came back readable.
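The likely cause: requests (via urllib3) always decompresses gzip and deflate, but it can only decode a Brotli (br) response when a Brotli library is installed; otherwise response.content stays compressed and looks like gibberish. So there are two ways out, sketched below (the URL and headers here are trimmed for illustration only; in practice the full header set from the code further down is still needed to avoid the anti-spider block):

import requests

url = "https://image.baidu.com/search/acjson?tn=resultjson_com&word=%E7%8C%AB&pn=30&rn=30"  # trimmed example URL
headers = {
    "User-Agent": "Mozilla/5.0",
    # Option 1 (the fix used above): only advertise encodings that requests
    # can always decode itself, so the server never answers with Brotli.
    "Accept-Encoding": "gzip, deflate",
}
response = requests.get(url, headers=headers)
print(response.headers.get("Content-Encoding"), len(response.content))

# Option 2 (alternative): keep "br" in Accept-Encoding but run `pip install brotli`
# first; with a Brotli library present, requests decodes br responses transparently.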
The code that collects the image URLs is as follows:
import re
import time

import requests

urls_img = []
for n in range(100):
    pn = n * 30
    url = "https://image.baidu.com/search/acjson?tn=resultjson_com&word=%E7%8C%AB&ie=utf-8&fp=result&fr=&ala=0&applid=10467951401242802557&pn=" + str(pn) + "&rn=" + str(pn) + "&nojc=0&gsm=5e&newReq=1"
    print(url)
    time.sleep(20)  # throttle requests so the crawler is less likely to be blocked again
    payload = {}
    headers = {
        'Cookie': 'cookie=BDIMGISLOGIN=0; winWH=%5E6_1560x882; BIDUPSID=31E30236016B14E87E80A761DA8D007D; PSTM=1746601325; BAIDUID=31E30236016B14E870600C64626E7373:FG=1; MAWEBCUID=web_zLZtQkKKSPdTpACZxFACKprGPULtIeLcIQMzqvrDsrtFgKqqSu; H_WISE_SIDS_BFESS=62327_62833_63143_63241_63326_63352_63380_63382_63394_63390_63403_63441_63458_63472_63497_63543_63533_63548; BDSFRCVID=laPOJeC62xv16McsstZOeePUug5K4enTH6bHG1IqkxAuf9BSprw9EG0PZM8g0KuhkXxkogKKKgOTHICF_2uxOjjg8UtVJeC6EG0Ptf8g0x5; H_BDCLCKID_SF=JRKqoD-afI83fP36q4bHK-t052T22jnQKGR9aJ5nJDoWfCDCXtb5Kn0lXUo-QpQt5bTi_n58QpP-HlnjDfraMnkF5fD83qJj-jk8Kl0MLUcYbb0xynosMpkbMUnMBMni52OnapTn3fAKftnOM46JehL3346-35543bRTLnLy5KJWMDcnK4-Xj5bWjG5P; delPer=0; PSINO=5; BDSFRCVID_BFESS=laPOJeC62xv16McsstZOeePUug5K4enTH6bHG1IqkxAuf9BSprw9EG0PZM8g0KuhkXxkogKKKgOTHICF_2uxOjjg8UtVJeC6EG0Ptf8g0x5; H_BDCLCKID_SF_BFESS=JRKqoD-afI83fP36q4bHK-t052T22jnQKGR9aJ5nJDoWfCDCXtb5Kn0lXUo-QpQt5bTi_n58QpP-HlnjDfraMnkF5fD83qJj-jk8Kl0MLUcYbb0xynosMpkbMUnMBMni52OnapTn3fAKftnOM46JehL3346-35543bRTLnLy5KJWMDcnK4-Xj5bWjG5P; BA_HECTOR=2k8g8k2k2l0k840g24a10k0kag04061k4khge25; BAIDUID_BFESS=31E30236016B14E870600C64626E7373:FG=1; ZFY=gTCA97ON7I:BQC2pFSM9Q0QHQvSKXixg:BldTCH3HmRJc:C; H_PS_PSSID=62327_62833_63143_63241_63326_63352_63403_63441_63458_63497_63543_63533_63548_63568_63564_63582_63576; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; H_WISE_SIDS=62327_62833_63241_63352_63441_63458_63564_63582_63576; arialoadData=false; ab_sr=1.0.1_MmM5MzUxNDBhN2I5NGE5MWRjN2JmOTc5ZjU3ODA1NmUwOWQ0Zjg0YmVkODNhYmNhNTk0MjI4MDYxYmIyNGNhYWYzYjY0MDg2NmM0YjBjNzUwNGNjMWI0NGNlYTA5MGYyNWY5MzcwZWM0ZGM1YTg2YmM4YzE5N2ZmODUyMjg5ODU4MTk4YzU3YzgxMmVhNTYwMGEwYTMyNzVmYjIwMmY0MA==',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36',
        'content-type': 'application/json',
        'Accept': 'application/json, text/plain, */*',
        'Host': 'image.baidu.com',
        'Connection': 'keep-alive',
        # 'Accept-Encoding': 'gzip, deflate, br, zstd',  # commented out: see the Brotli note above
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'sec-ch-ua': '"Google Chrome";v="137", "Chromium";v="137", "Not/A)Brand";v="24"',
        'referer': 'https://image.baidu.com/search/index?tn=baiduimage&ipn=r&ct=201326592&cl=2&lm=&st=-1&fm=index&fr=&hs=0&xthttps=111110&sf=1&fmq=&pv=&ic=0&nc=1&z=&se=&showtab=0&fb=0&width=&height=&face=0&istype=2&ie=utf-8&word=%E7%8B%97%E7%8B%97',
        'sec-ch-ua-platform': 'macOS'
    }
    response = requests.request("GET", url, headers=headers, data=payload)
    time.sleep(10)
    print(str(response.content))
    string_context = str(response.content)
    format = r'https://img2.baidu.com(.*?)(?=\")'
    url_second = re.findall(pattern=format, string=string_context)
    urls_img.append(url_second)
print(urls_img)
How do we pull the image URLs out of the response returned by the request? The response looks like the capture shown below:
One way is to use a regular expression and re.findall() to pull out every matching URL.
import re

string_context = str(response.content)
# This pattern captures everything between https://img2.baidu.com and the next
# double quote, i.e. the path part of each image URL.
format = r'https://img2.baidu.com(.*?)(?=\")'
url_second = re.findall(pattern=format, string=string_context)
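For clarity, here is what re.findall() gives back on a tiny made-up sample (the sample string and its field names are purely illustrative, not a real Baidu response): it returns only the captured group, i.e. the path after the host, which is why the host has to be glued back on before downloading.

import re

sample = '"thumbURL":"https://img2.baidu.com/it/u=123,456?w=500","objURL":"https://img2.baidu.com/it/u=789"'
pattern = r'https://img2.baidu.com(.*?)(?=\")'
print(re.findall(pattern, sample))
# ['/it/u=123,456?w=500', '/it/u=789']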
Store the extracted URLs in a JSON file, then read them back:

import json

# Save the URL lists to a JSON file
with open("/Users/zc/PyCharmMiscProject/dataset/cat_urls.json", "w") as f:
    json.dump(urls_img, f)

# Read the JSON file back
with open("/Users/zc/PyCharmMiscProject/dataset/cat_urls.json", "r") as f:
    urls_load = json.load(f)
print(len(urls_load))

The extracted links look like /it/u=2826177801,1382156594\\\\u0026fm=253\\\\u0026app=138\\\\u0026f=JPEG?w=500\\\\u0026h=667. Comparing them with a real image link, every \\\\u0026 has to be replaced with &, so each link still needs a little post-processing. I used the string replace() method; another option is to split on the escaped sequence with str.split("\\\\u0026") and then rejoin the resulting list1 with "&".join(list1), as sketched after the code below.

# Convert the raw URL fragments into usable ones
url_format = []
for i in range(len(urls_load)):
    for j in range(len(urls_load[i])):
        url_new = urls_load[i][j].replace("\\\\u0026", "&")
        url_format.append(url_new)
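The split()/join() variant mentioned above would look like this (a small sketch using the same escaped sample link from the text):

# Split on the escaped \\\\u0026 sequence, then rejoin the pieces with '&'.
url_raw = "/it/u=2826177801,1382156594\\\\u0026fm=253\\\\u0026app=138\\\\u0026f=JPEG?w=500\\\\u0026h=667"
parts = url_raw.split("\\\\u0026")
url_new = "&".join(parts)
print(url_new)  # /it/u=2826177801,1382156594&fm=253&app=138&f=JPEG?w=500&h=667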
Now combine the host with each path to form the full image link, then download the pictures and number them. Note that 'content-type' in headers_img is set to 'image/webp'.
path = '/Users/zc/PyCharmMiscProject/dataset/cat/'
host_1 = "https://img2.baidu.com"
headers_img = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36',
    'content-type': 'image/webp'
}
for i in range(len(url_format)):
    url_img = host_1 + url_format[i]
    img_res = requests.request(url=url_img, method='GET', headers=headers_img)
    if img_res.status_code == 200:
        img_name = str(i) + ".jpg"
        with open(path + img_name, 'wb') as file:
            file.write(img_res.content)
    else:
        print("it fails to download jpg")
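To make the download loop a bit more robust, one option (my own addition, not part of the original code) is to add a timeout and only save responses whose Content-Type really is an image, so anti-crawler error pages do not end up on disk as .jpg files:

import requests

def download_image(url_img, save_path, headers):
    # Hypothetical helper: fetch one image with a timeout and save it only
    # if the server actually returned image data.
    img_res = requests.get(url_img, headers=headers, timeout=30)
    if img_res.status_code == 200 and img_res.headers.get("Content-Type", "").startswith("image"):
        with open(save_path, "wb") as f:
            f.write(img_res.content)
        return True
    return False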