1. Scrapy Framework Overview
Scrapy is an application framework written for crawling websites and extracting structured data. It offers powerful data extraction, a flexible extension mechanism, and efficient asynchronous processing. Its core architecture consists of:
- Engine: controls the data flow between all components and triggers events when certain actions occur
- Scheduler: receives requests from the Engine, enqueues them, and returns them to the Engine on request
- Downloader: downloads web page content and passes the results back to the Spider
- Spider: user-written classes that parse responses, extracting items and follow-up URLs
- Item Pipeline: processes the items extracted by the Spider, handling data cleaning, validation, and storage
2. Setting Up the Project Environment
First, we need to install Scrapy and its related dependencies:
For the distributed crawler, we also need to install and configure a Redis server to act as the scheduling queue.
3. Creating the Scrapy Project
Create the project with the Scrapy command-line tool:
scrapy startproject kuaikan_crawler
cd kuaikan_crawler
scrapy genspider kuaikan www.kuaikanmanhua.com
4. Defining the Data Model
Define the data structure we want to scrape in items.py:
import scrapy

class ComicItem(scrapy.Item):
    title = scrapy.Field()        # comic title
    author = scrapy.Field()       # author
    description = scrapy.Field()  # description
    cover_url = scrapy.Field()    # cover image URL
    tags = scrapy.Field()         # tags
    likes = scrapy.Field()        # like count
    comments = scrapy.Field()     # comment count
    chapters = scrapy.Field()     # chapter list
    source_url = scrapy.Field()   # source URL
    crawl_time = scrapy.Field()   # crawl timestamp
5. Writing the Core Spider Logic
Write the spider's main logic in spiders/kuaikan.py:
import scrapy
from datetime import datetime
from urllib.parse import urljoin

from kuaikan_crawler.items import ComicItem

class KuaikanSpider(scrapy.Spider):
    name = 'kuaikan'
    allowed_domains = ['www.kuaikanmanhua.com']
    start_urls = ['https://www.kuaikanmanhua.com/web/topic/all/']

    def parse(self, response):
        # Parse the comic list page
        comics = response.css('.TopicList .topic-item')
        for comic in comics:
            detail_url = comic.css('a::attr(href)').get()
            if detail_url:
                yield scrapy.Request(
                    url=urljoin(response.url, detail_url),
                    callback=self.parse_comic_detail)
        # Handle pagination
        next_page = response.css('.next-page::attr(href)').get()
        if next_page:
            yield scrapy.Request(
                url=urljoin(response.url, next_page),
                callback=self.parse)

    def parse_comic_detail(self, response):
        # Parse the comic detail page
        item = ComicItem()
        # Extract the basic information
        item['title'] = response.css('.comic-title::text').get()
        item['author'] = response.css('.author-name::text').get()
        item['description'] = response.css('.comic-description::text').get()
        item['cover_url'] = response.css('.cover img::attr(src)').get()
        item['tags'] = response.css('.tags .tag::text').getall()
        item['likes'] = response.css('.like-count::text').get()
        item['comments'] = response.css('.comment-count::text').get()
        item['source_url'] = response.url
        item['crawl_time'] = datetime.now().isoformat()
        # Extract the chapter information
        chapters = []
        for chapter in response.css('.chapter-list li'):
            chapter_info = {
                'title': chapter.css('.chapter-title::text').get(),
                'url': urljoin(response.url, chapter.css('a::attr(href)').get()),
                'update_time': chapter.css('.update-time::text').get(),
            }
            chapters.append(chapter_info)
        item['chapters'] = chapters
        yield item
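The spider above relies on urljoin to turn relative hrefs into absolute URLs before re-queuing them. This can be sketched outside of Scrapy with the standard library alone (the example paths are hypothetical, not taken from the site):

```python
from urllib.parse import urljoin

base = 'https://www.kuaikanmanhua.com/web/topic/all/'

# A root-relative href replaces the whole path component
print(urljoin(base, '/web/topic/544'))
# -> https://www.kuaikanmanhua.com/web/topic/544

# A relative href resolves against the current directory
print(urljoin(base, 'page/2'))
# -> https://www.kuaikanmanhua.com/web/topic/all/page/2
```

This is why the spider can pass whatever form of href the page uses without checking whether it is already absolute.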
6. Making the Crawler Distributed
To convert the spider to a distributed mode, we use the scrapy-redis component:
- Modify the settings.py configuration file:
# Enable the scrapy-redis scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Enable the scrapy-redis duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Redis connection
REDIS_URL = 'redis://localhost:6379/0'
# Keep the Redis queues after the crawl, allowing pause/resume
SCHEDULER_PERSIST = True
# Item pipelines (lower numbers run earlier)
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
    'kuaikan_crawler.pipelines.MongoPipeline': 400,
}
- Modify the spider to inherit from RedisSpider:
from scrapy_redis.spiders import RedisSpider

class DistributedKuaikanSpider(RedisSpider):
    name = 'distributed_kuaikan'
    redis_key = 'kuaikan:start_urls'

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.allowed_domains = ['www.kuaikanmanhua.com']

    def parse(self, response):
        # Same parsing logic as before
        pass
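A RedisSpider has no start_urls; it idles until a URL appears under its redis_key. With the configuration above, a crawl can therefore be kicked off from any machine by pushing a seed URL into Redis, for example with redis-cli (the seed URL shown is the listing page used earlier):

```shell
# Seed the distributed spider; the key must match redis_key in the spider above
redis-cli lpush kuaikan:start_urls 'https://www.kuaikanmanhua.com/web/topic/all/'
```

Every worker running `scrapy crawl distributed_kuaikan` then pulls requests from the shared Redis queue, and the RFPDupeFilter ensures no URL is fetched twice across workers.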
7. The Data Storage Pipeline
Create a MongoDB storage pipeline in pipelines.py:
import pymongo

class MongoPipeline:
    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'scrapy'))

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Use the item class name as the collection name
        collection_name = item.__class__.__name__
        self.db[collection_name].insert_one(dict(item))
        return item
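Note that the pipeline stores whatever the spider extracted, so the likes and comments fields may still be display strings such as '1.2万'. As an illustration of the data cleaning an Item Pipeline can perform, here is a hypothetical normalizer; the assumption that the site abbreviates large counts with the unit 万 (10,000) is for illustration only:

```python
def normalize_count(text):
    """Convert a display count such as '1.2万' or '356' to an int.

    Hypothetical helper: assumes counts may be abbreviated with
    the Chinese unit 万 (10,000). Returns None for missing values.
    """
    if text is None:
        return None
    text = text.strip()
    if text.endswith('万'):
        return int(round(float(text[:-1]) * 10000))
    return int(text)

print(normalize_count('1.2万'))  # 12000
print(normalize_count('356'))   # 356
```

A helper like this would typically be called from process_item before insert_one, so the database stores comparable integers rather than formatted strings.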
Add the MongoDB configuration in settings.py:
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'kuaikan_comics'
8. Middleware and Anti-Scraping Strategies
To cope with the site's anti-scraping measures, we add some middleware:
# Add a random User-Agent middleware in middlewares.py
import random

class RandomUserAgentMiddleware:
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(user_agents=crawler.settings.get('USER_AGENTS', []))

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)

# Configure the User-Agent list in settings.py
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
    # Add more user agents...
]
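Because process_request only needs an object with a headers mapping, the rotation logic can be exercised outside of Scrapy with a small stub request. This sketch duplicates the middleware without the Scrapy-specific from_crawler hook; the StubRequest class is purely for illustration:

```python
import random

USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15',
]

class RandomUserAgentMiddleware:
    """Same logic as the middleware above, minus the Scrapy wiring."""
    def __init__(self, user_agents):
        self.user_agents = user_agents

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)

class StubRequest:
    """Minimal stand-in for scrapy.Request; illustration only."""
    def __init__(self):
        self.headers = {}

mw = RandomUserAgentMiddleware(USER_AGENTS)
req = StubRequest()
mw.process_request(req, spider=None)
print(req.headers['User-Agent'] in USER_AGENTS)  # True
```

Each outgoing request thus carries one of the configured User-Agent strings, chosen at random, which makes the traffic harder to fingerprint as a single client.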
Summary
This article has walked through building an efficient distributed comic crawler with the Scrapy framework. By combining scrapy-redis for distributed crawling, MongoDB for data storage, and several counter-measures against anti-scraping mechanisms, we can build a stable and efficient crawling system. With suitable modifications, this architecture applies not only to comic sites but to scraping tasks on many other kinds of websites.