A Python Web Scraping Application


Casual personal blog post #1

This post uses a Python scraper to crawl the contents of the school intranet mailbox and turns them into a word cloud, giving a direct view of the problems students most want solved.

It covers web scraping, scraping pages that require login authentication, and word-cloud generation.

Overall approach: use a scraper carrying cookie information to crawl the school intranet's campus-affairs mailbox, segment the scraped text with the jieba library, and generate the word cloud with the wordcloud library.

The program consists of five parts:

1. The libraries the program uses:

```python
# coding: utf-8
import requests                 # HTTP requests
from bs4 import BeautifulSoup   # HTML parsing
import re                       # regular expressions
import jieba                    # Chinese word segmentation
import wordcloud                # word-cloud generation
from cv2 import imread          # image loading (for an optional mask)
```

Here requests, BeautifulSoup, and re are used for scraping, while jieba, wordcloud, and cv2's imread are used to generate the word cloud.

2. Fetching the page:

```python
def getHTML(Cookie, url):  # fetch the page HTML
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Ubuntu Chromium/73.0.3683.75 '
                      'Chrome/73.0.3683.75 Safari/537.36',
        'Cookie': Cookie  # the Cookie header logs us into the page
    }
    session = requests.Session()
    response = session.get(url, headers=headers)
    response.encoding = response.apparent_encoding  # guess the correct encoding
    return response.text
```

The key point is to use the cookie information to log in to the page before scraping it.
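As a sketch, the `Cookie` header value can be assembled from name/value pairs copied out of the browser's developer tools after logging in manually. The cookie names below are placeholders, not the school site's real cookies:

```python
# Hypothetical sketch: build a Cookie header from name/value pairs
# copied from the browser's developer tools after a manual login.
cookie_pairs = {
    "JSESSIONID": "abc123",  # placeholder session id
    "route": "node1",        # placeholder load-balancer cookie
}
cookie_header = "; ".join(f"{k}={v}" for k, v in cookie_pairs.items())
print(cookie_header)  # JSESSIONID=abc123; route=node1
```

The resulting string is what gets passed as the `Cookie` argument of `getHTML`; the session stays valid only as long as the copied cookies do.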

3. Putting the scraped text into a list and filtering out useless entries:

```python
def fill_List(List, HTML):  # collect the scraped text into a list
    soup = BeautifulSoup(HTML, "html.parser")
    for i in soup.find_all(string=re.compile("\xa0")):
        o = "".join(i.split())  # strip all whitespace, including \xa0
        # skip empty strings and the boilerplate fixed on every page
        if o != '' and o != '检索结果:2018年起信件共' and o != '共358页直接跳到第' and o != '信件内容':
            List.append(o)
```

Since the page being scraped is the school's campus-affairs mailbox, much of its content is fixed on every page, so those boilerplate strings can simply be excluded.
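To illustrate the filtering on a toy HTML snippet (the letter title below is made up, not real mailbox data):

```python
from bs4 import BeautifulSoup
import re

# Toy snippet standing in for one row of the mailbox listing.
html = '<td>\xa0宿舍热水供应</td><td>\xa0信件内容</td>'
soup = BeautifulSoup(html, "html.parser")
results = []
for text in soup.find_all(string=re.compile("\xa0")):
    cleaned = "".join(text.split())  # str.split() also splits on \xa0
    if cleaned and cleaned != '信件内容':  # drop the fixed column header
        results.append(cleaned)
print(results)  # ['宿舍热水供应']
```

`str.split()` with no arguments splits on any Unicode whitespace, which is why joining the pieces removes the `\xa0` non-breaking spaces as well.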

4. Generating the word cloud from the list:

```python
def WordCloud(list):  # segment the list into words and generate the word cloud
    # mk = imread('C:/Users/BoletusAo/Desktop/wordcloud.png')  # optional mask image
    stopwords = {"问题","的","疑问","关于","对于","故障","可以","情况","建议","投诉",
                 "反馈","真的","能","不能","能否","呢","还是","楼","你","中","与",
                 "为什么","的一些","请问","已经","回复","要","疑惑","点","了吗","人",
                 "怎么","吗","是","又","也","我们","级","无法","一直","很","是不是",
                 "等","意见","以及","处理","部分","好","多","这","为","被","未","后",
                 "就","吧","啊","里","了","时候","什么","还","一点","一个","使用","在",
                 "晚上","希望","何时","想","存在","不","和","有","让","没","及","请",
                 "到","通知","是否","有关","为何","用","对","严重","解决","不合理",
                 "没有","我","都","不了","新","正常","导致","出现","一下","开","深大",
                 "经常","差","说","作为","一些","最近","服务","稳定","人员","安排",
                 "吃","上","上课","再"}
    w = wordcloud.WordCloud(width=1920, height=1080, font_path="msyh.ttc",
                            stopwords=stopwords, background_color="white")
    str = ",".join(list)
    jieba.setLogLevel(jieba.logging.INFO)  # silence jieba's loading messages
    w.generate(" ".join(jieba.lcut(str)))  # segment, then join with spaces
    w.to_file('C:/Users/BoletusAo/Desktop/SZU2.png')
```

When generating the word cloud, the key is the stopword set: the letter titles contain many uninformative words that must be filtered out, such as 问题 ("problem") and the particle 的.

5. The main function:

```python
if __name__ == '__main__':
    List = []
    for i in range(1, 359):  # the mailbox has 358 pages in total
        if i == 1:
            url = 'url信息'
            Cookie = 'cookie信息'
        else:
            # paging works by writing the page number straight into the URL
            url = 'url信息{}'.format(i)
            Cookie = 'cookie信息'
        HTML = getHTML(Cookie, url)
        fill_List(List, HTML)
    WordCloud(List)
```

The actual URL, cookie, and paging details involve school information and are therefore not disclosed.

Final result:

[Word-cloud image]

Problems encountered while writing the program:

1. How to scrape a page that requires login authentication?

Solution: use the cookie information.

2. How to page through results while scraping?

Solution: change the relevant part of the URL to move to the next page.
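The paging idea can be sketched with a hypothetical URL template (the real school URL is not disclosed, so `example.edu` stands in for it):

```python
# Hypothetical URL template: the page index is substituted into the URL,
# mirroring the url.format(i) call in the main function above.
base = "https://example.edu/mailbox/list?page={}"
urls = [base.format(i) for i in range(1, 4)]
print(urls[-1])  # https://example.edu/mailbox/list?page=3
```

Each generated URL is then fetched with the same cookie, since the login session is independent of the page number.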



Copyright notice: published on Python教程, 2022-10-24; about 2,448 characters in total.