Table of Contents
- Distributed crawling workflow:
- Code:
In essence, distributed crawling means running the spider file on multiple machines and using the scrapy_redis component to share a single scheduler and pipeline, with results written into a Redis database.
Distributed crawling workflow:
https://www.cnblogs.com/foremostxl/p/10095663.html#_label1
Install the scrapy-redis component: pip install scrapy-redis
Redis configuration file changes:
bind 127.0.0.1 allows connections from the local machine only, so comment it out
protected-mode no turns off protected mode
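After these edits, the relevant lines in redis.conf look roughly like this (the file's location varies by install):

# bind 127.0.0.1          (commented out so other machines can connect)
protected-mode no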
Open a Redis terminal
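Assuming redis-server and redis-cli are on the PATH, that amounts to something like:

redis-server ./redis.conf    # start the server with the edited config
redis-cli                    # open a client terminal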
Create a CrawlSpider-based spider file:
scrapy startproject redispro
cd redispro
scrapy genspider -t crawl qiubai www.qiushibaike.com/pic/
Import the class (from scrapy_redis.spiders import RedisCrawlSpider) and change the spider's base class: class QiubaiSpider(RedisCrawlSpider):
Comment out start_urls and replace it with redis_key: redis_key = 'qiubaispider'. The string 'qiubaispider' names the request queue in the scheduler, which RedisCrawlSpider reads its start URLs from.
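Concretely, the change amounts to two lines in the spider (the full file appears under Code below):

# start_urls = ['https://www.qiushibaike.com/pic/']
redis_key = 'qiubaispider'  # name of the request queue in the Redis scheduler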
Modify the settings configuration file (see settings.py below)
Run: change into the directory containing the spider's .py file (one level deeper than where the project is normally run) and execute scrapy runspider xxx.py. Once the spider is running,
push the start URL for redis_key from the Redis client:
lpush <scheduler queue name> "<start url>"
The name passed to lpush must match redis_key:
redis_key = 'qiubaispider'
lpush qiubaispider https://www.qiushibaike.com/pic/
Check the results:
lrange qiubai:items 0 -1
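As a quick sanity check from redis-cli (llen and lrange are standard Redis commands; the qiubai:items key comes from the RedisPipeline, which stores serialized items under <spider name>:items):

llen qiubai:items            # number of items stored so far
lrange qiubai:items 0 9      # inspect the first ten serialized items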
Code:
qiubai.py
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisCrawlSpider
from redispro.items import RedisproItem


class QiubaiSpider(RedisCrawlSpider):
    name = 'qiubai'
    # allowed_domains = ['https://www.qiushibaike.com/pic/']
    # start_urls = ['https://www.qiushibaike.com/pic/']
    redis_key = 'qiubaispider'  # name of the scheduler queue in Redis

    rules = (
        Rule(LinkExtractor(allow=r'/pic/page/\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        div_list = response.xpath('//div[@id="content-left"]/div')
        for div in div_list:
            # the XPath is relative to the current div (note the leading .//)
            img_url = "https:" + div.xpath('.//div[@class="thumb"]/a/img/@src').extract_first()
            item = RedisproItem()
            item['img_url'] = img_url
            yield item
items.py
import scrapy
class RedisproItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    img_url = scrapy.Field()
settings.py
BOT_NAME = 'redispro'
SPIDER_MODULES = ['redispro.spiders']
NEWSPIDER_MODULE = 'redispro.spiders'
# The following settings were added:
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36'
LOG_LEVEL = 'ERROR'
LOG_FILE = 'log.txt'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Use the scrapy_redis pipeline
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
# Use the scrapy_redis deduplication filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Use the scrapy_redis scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Allow pausing: if a machine fails, the crawl resumes from where it stopped
SCHEDULER_PERSIST = True

# Configure the Redis server; the spider files run on other machines.
REDIS_HOST = '<Redis server address>'  # fill in the address of the machine running Redis
REDIS_PORT = 6379
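Once items start flowing in, any machine can consume them from Redis for persistence or further processing. A minimal sketch, assuming the redis-py package is installed; it relies on scrapy_redis's RedisPipeline serializing items to JSON and pushing them onto the <spider name>:items list (the host below is a placeholder):

import json

import redis

# Connect to the shared Redis server (placeholder address).
conn = redis.Redis(host='127.0.0.1', port=6379)

# RedisPipeline stores serialized items in the '<spider name>:items' list;
# pop them off one by one for local processing.
while True:
    # blpop blocks until an item arrives or the timeout (in seconds) expires.
    popped = conn.blpop('qiubai:items', timeout=30)
    if popped is None:
        break  # no new items for 30 seconds; stop consuming
    _, raw = popped
    item = json.loads(raw)
    print(item['img_url'])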