Web scraping practice: crawling Qiushibaike hot images page by page and saving them locally

  • os module: create the output folder
  • re module: dot-matches-all mode + non-greedy matching + a capture group to extract the value: img_list = re.findall('<div class="thumb">.*?<img src="(.*?)".*?>.*?</div>', page_text, re.S)
  • requests: prepend the scheme to each image address, split the URL to get a local file name, read the content attribute, and write in wb mode
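The regex step above can be demonstrated on a small, hypothetical HTML snippet (the markup below only mimics the site's image-card structure):

```python
import re

# Hypothetical snippet shaped like a Qiushibaike image card
page_text = '''
<div class="thumb">
<a href="/article/1"><img src="//pic.qiushibaike.com/system/pictures/1/a.jpg" alt="demo"></a>
</div>
'''

# re.S makes '.' match newlines; '.*?' is non-greedy; the group captures only the src value
img_list = re.findall('<div class="thumb">.*?<img src="(.*?)".*?>.*?</div>', page_text, re.S)
print(img_list)  # ['//pic.qiushibaike.com/system/pictures/1/a.jpg']

# The src is protocol-relative, so prepend a scheme; the last path segment is the file name
img_url = 'https:' + img_list[0]
img_name = img_list[0].split('/')[-1]
print(img_name)  # a.jpg
```

Without re.S, '.' would stop at the newline after the opening div and the pattern would not match.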

The drawback is that it is somewhat slow.

```python
import requests
import re
import os

# Create the output folder if it does not exist
if not os.path.exists('./img2'):
    os.mkdir('./img2')

start_page = int(input("Enter the start page: "))
end_page = int(input("Enter the end page: "))

proxies = {
    "https": "218.60.8.99:3129"
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
}

# Page URL pattern: https://www.qiushibaike.com/imgrank/page/2/
base_url = 'https://www.qiushibaike.com/imgrank/page/'

for i in range(start_page, end_page + 1):
    print(i)
    url = base_url + str(i)
    response = requests.get(url=url, proxies=proxies, headers=headers)
    page_text = response.text
    # re.S lets '.' span newlines; the group captures the image src
    img_list = re.findall('<div class="thumb">.*?<img src="(.*?)".*?>.*?</div>', page_text, re.S)
    for img_src in img_list:
        # The src is protocol-relative, so prepend the scheme
        img_url = 'https:' + img_src
        # Persist to disk
        img_data = requests.get(url=img_url, proxies=proxies, headers=headers).content
        img_name = img_src.split('/')[-1]
        img_path = 'img2/' + img_name
        with open(img_path, 'wb') as f:
            f.write(img_data)
        print(img_name + " saved")
```
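The slowness comes from downloading images one at a time. Since each image is independent, the downloads can overlap with a thread pool. A minimal sketch, with a hypothetical `fetch(url) -> bytes` callable injected so the logic can be exercised without the network (with requests it would be `lambda u: requests.get(u, headers=headers, proxies=proxies).content`):

```python
from concurrent.futures import ThreadPoolExecutor

def download(img_src, fetch):
    # fetch(url) -> bytes is injected; img_src is a protocol-relative URL like //host/x.jpg
    img_name = img_src.split('/')[-1]
    data = fetch('https:' + img_src)
    return img_name, data

def download_all(img_list, fetch, max_workers=8):
    # Threads overlap the network waits; results come back in input order
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda src: download(src, fetch), img_list))
```

Each `(name, data)` pair can then be written to disk in `wb` mode as in the loop above; writing from the main thread avoids any file-handle contention between workers.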



Copyright notice: published on Python教程, 2022-10-25, 1,122 characters in total.