Python Crawler: Pipeline-Based Persistent Storage in Scrapy
This article builds on "Python Crawler: Terminal-Command-Based Persistent Storage" and "Python Crawler: Data Parsing Operations".
Scrapy persistent storage
Pipeline-based approach:
Workflow (a minimal end-to-end sketch follows this list):
1. Parse the data.
2. Define the corresponding fields in the item class.
3. Package the parsed data into an item-type object.
4. In the pipeline class's process_item method, persist the data carried by each received item object.
5. Enable the pipeline in the settings file.
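Before walking through the real project, here is a minimal sketch of how the five steps connect. All names here (DemoItem, DemoSpider, DemoPipeline, the example URL) are hypothetical placeholders, not part of the project below:

import scrapy

class DemoItem(scrapy.Item):                  # step 2: declare the fields
    text = scrapy.Field()

class DemoSpider(scrapy.Spider):
    name = "demo"
    start_urls = ["https://example.com/"]     # hypothetical target

    def parse(self, response):                # step 1: parse the response
        item = DemoItem()
        item['text'] = response.xpath('//title/text()').get()  # step 3: package into the item
        yield item                            # hand the item off to the pipeline

class DemoPipeline:
    def process_item(self, item, spider):     # step 4: persist (here, just print)
        print(item['text'])
        return item

# step 5: register the pipeline in settings.py:
# ITEM_PIPELINES = {"demo.pipelines.DemoPipeline": 300}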
Hands-on walkthrough:
1. Define the item class in items.py
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class QiushiproItem(scrapy.Item):
    # define the fields for your item here, like:
    title = scrapy.Field()
    content = scrapy.Field()
    # name = scrapy.Field()
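Item objects behave like dicts, which is worth a quick sanity check before wiring up the spider. A small illustration you could run in a Python shell from the project root (assuming the qiushiPro package is importable):

from qiushiPro.items import QiushiproItem

item = QiushiproItem()
item['title'] = 'a title'          # fields are written with dict-style access
item['content'] = 'some content'
print(item['title'])               # and read back the same way
print(dict(item))                  # {'title': 'a title', 'content': 'some content'}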
2. Package the parsed data into the item class in qiushi.py
import scrapy
from qiushiPro.items import QiushiproItem


class QiushiSpider(scrapy.Spider):
    name = "qiushi"
    # allowed_domains = ["www.xxx.com"]
    start_urls = ["https://www.qiushile.com/duanzi/"]

    def parse(self, response):
        # Parse each joke's title and content.
        # Select the <li> nodes themselves so the loop visits every joke.
        li_list = response.xpath('//*[@id="ct"]/div[1]/div[2]/ul/li')
        for li in li_list:
            # xpath returns a list whose elements are always Selector objects;
            # extract() pulls out the string stored in a Selector's data attribute
            # title = li.xpath('./div[2]/div[1]/a/text()')[0].extract()
            title = li.xpath('./div[2]/div[1]/a/text()').extract_first()
            # calling extract() on the list extracts the data string from every Selector in it
            content = li.xpath('./div[2]/div[2]//text()')[0].extract()

            item = QiushiproItem()
            item['title'] = title
            item['content'] = content
            yield item  # submit the item to the pipeline
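A note on the extraction calls above: extract_first() returns the first match as a string, or None when nothing matches, whereas indexing with [0] before extract() raises IndexError on an empty result. In current Scrapy, get() and getall() are the preferred spellings of the same operations. A small illustration (run inside parse; the selector here is only an example):

links = response.xpath('//a/text()')   # a SelectorList
first = links.extract_first()          # first match as str, or None if nothing matched
every = links.extract()                # all matches, as a list of str
# equivalent modern spellings:
# first = links.get()
# every = links.getall()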
3. Persist the data in the pipeline class's process_item method in pipelines.py
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class QiushiproPipeline:
    fp = None

    # Lifecycle hook: called exactly once, when the spider starts
    def open_spider(self, spider):
        print('Spider started ...')
        self.fp = open('./qiushi.txt', 'w', encoding='utf-8')

    # Handles item-type objects:
    # receives every item object the spider file submits,
    # and is called once per received item
    def process_item(self, item, spider):
        title = item['title']
        content = item['content']
        self.fp.write(title + ':' + content + '\n')
        return item

    # Lifecycle hook: called exactly once, when the spider finishes
    def close_spider(self, spider):
        print('Spider finished!')
        self.fp.close()
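The generated file imports ItemAdapter, but the pipeline above never uses it. It becomes handy when a single pipeline must read Item objects, plain dicts, dataclasses, etc. through one uniform interface. A minimal sketch with a hypothetical pipeline name:

from itemadapter import ItemAdapter

class GenericLoggingPipeline:  # hypothetical
    def process_item(self, item, spider):
        adapter = ItemAdapter(item)   # uniform, dict-like view of any supported item type
        spider.logger.info('%s:%s', adapter.get('title', ''), adapter.get('content', ''))
        return item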
4. In settings.py, uncomment the ITEM_PIPELINES setting to enable the pipeline
ITEM_PIPELINES = {
    "qiushiPro.pipelines.QiushiproPipeline": 300,
    # 300 is the priority; the smaller the number, the higher the priority
}
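If you later register several pipelines, the number decides their order: values range from 0 to 1000, lower values run first, and each process_item passes the item on to the next pipeline via its return value. A hypothetical two-pipeline setup:

ITEM_PIPELINES = {
    "qiushiPro.pipelines.QiushiproPipeline": 300,  # runs first
    "qiushiPro.pipelines.MysqlPipeline": 400,      # hypothetical second pipeline, runs after
}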
Run: enter scrapy crawl qiushi in the terminal.
You should then see the qiushi.txt file being generated.
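One small tip: scrapy crawl qiushi --nolog suppresses Scrapy's log output, so only the pipeline's print statements appear in the terminal. Note that it also hides error tracebacks, so turn logging back on when debugging.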