Crawler Data Parsing and Usage


I. XPath Parsing and Usage

1. Introduction to the XPath parsing library

# Introduction to the XPath parsing library: regular expressions can be used for data parsing, but writing a regex that matches precisely is hard, and a single mistake in the expression corrupts the matched data. A web page consists of three parts: HTML, CSS, and JavaScript. HTML tags form a hierarchy, the DOM tree, so a target tag can be located by its position in that hierarchy, and its text or attributes can then be extracted.

# How the XPath library parses data:
# 1. Locate the node tag via the page's DOM tree
# 2. Get the node tag's text content or attribute values

# XPath installation and first steps:
# 1. Install: pip install lxml
# 2. Use the requests module to scrape the hot post titles from qiushibaike:

import requests
from lxml import etree

url = 'https://www.qiushibaike.com/'
headers = {
    "User-Agent": 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
}
res = requests.get(url=url, headers=headers)
# Instantiate an etree object
tree = etree.HTML(res.text)
# Parse the data
title_lst = tree.xpath('//ul/li/div/a/text()')
for item in title_lst:
    print(item)

3. XPath usage steps:

from lxml import etree
tree = etree.HTML(res.text)                          # parse HTML fetched over the network
tree = etree.parse('res.html', etree.HTMLParser())   # parse a local HTML file (shown for reference)
tag_or_attr = tree.xpath('XPath expression')

2. XPath syntax

# XPath syntax:
# 1. Common rules:
#    1. nodename: locate by node name
#    2. //: select descendant nodes from the current node
#    3. /: select direct child nodes from the current node
#    4. nodename[@attribute="..."]: locate a tag by attribute, e.g. '//div[@class="ui-main"]'
#    5. @attributename: get an attribute value
#    6. text(): get the text content
# 2. Attribute matching, two cases: multi-attribute matching & single-attribute multi-value matching
#    2.1 Multi-attribute matching, e.g.: tree.xpath('//div[@class="item" and @name="test"]/text()')
#    2.2 Single-attribute multi-value matching, e.g.: tree.xpath('//div[contains(@class, "dc")]/text()')
# 3. Selecting by position:
#    3.1 Index-based: indexing starts at 1 (remember, remember, remember)
#    3.2 The last() function
#    3.3 The position() function, combined with < or > to select positions within a range

3. XPath code demo

from lxml import etree

# 1. Instantiate an etree object
# tree = etree.HTML('text data')                       # parse content fetched directly from the network
reel = etree.parse('./test.html', etree.HTMLParser())  # parse a local HTML file

# 2. Call XPath expressions to locate tags and get their attributes and text
# 2.1 Locate by node name

title = reel.xpath('//title/text()')  # xpath always returns a list
# print(title)

# 3. Locate the tag whose id is 007 and get its direct text
div_007 = reel.xpath('//div[@id="007"]/text()')
# print(div_007)

div_008 = reel.xpath('//div[@id="007"]//text()')  # //text() gets all descendant text of the same div
# print(div_008)

# 4. Get a node's attribute value
a_tag = reel.xpath('//a/@href')
# print(a_tag)

# 5. Multi-attribute matching and single-attribute multi-value matching
# Multi-attribute matching
div_009 = reel.xpath('//div[@class="c1" and @name="laoda"]/text()')
# print(div_009)

# Single-attribute multi-value matching
div_010 = reel.xpath('//div[contains(@class, "c3")]/text()')
# print(div_010)

# 6. Selecting by position
div_011 = reel.xpath('//div[@class="divtag"]/ul/li/text()')
# print(div_011)

div_012 = reel.xpath('//div[@class="divtag"]/ul/li[4]/text()')
# print(div_012)

# div_013 = reel.xpath('//div[@class="divtag"]/ul/li[last()-1]/text()')
# print(div_013)

div_014 = reel.xpath('//div[@class="divtag"]/ul/li[position()<4]/text()')
print(div_014)
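
The demo above parses a local ./test.html that is not included with the article. A minimal sketch of such a file, whose ids, classes, and contents are assumptions reverse-engineered from the selectors used above, could look like this:

<html>
<head><title>xpath test page</title></head>
<body>
    <div id="007">direct text <span>nested descendant text</span></div>
    <a href="https://www.example.com">a link</a>
    <div class="c1" name="laoda">multi-attribute div</div>
    <div class="c2 c3">multi-value class div</div>
    <div class="divtag">
        <ul>
            <li>li 1</li>
            <li>li 2</li>
            <li>li 3</li>
            <li>li 4</li>
            <li>li 5</li>
        </ul>
    </div>
</body>
</html>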

4. A small case: scraping Douban movies

import requests
from lxml import etree

url = 'https://movie.douban.com/chart'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
res = requests.get(url=url, headers=headers)
tree = etree.HTML(res.text)
ret = tree.xpath('//div[@class="pl2"]')
for i in ret:
    title = i.xpath('./a//text()')
    title_full = ''
    for j in title:
        c = j.replace('\n', '').replace(' ', '')
        title_full += c
    author = i.xpath('./p//text()')       # director / cast line
    pj = i.xpath('./div/span[2]/text()')  # rating
    pf = i.xpath('./div/span[3]/text()')  # number of ratings
    print(title_full)
    print(author[0])
    print(pj[0])
    print(pf[0])

II. Saving to the three common file formats (txt, json, csv)

import requests
import json, csv
from lxml import etree

for i in range(1, 10):
    if i == 1:
        url = 'http://www.lnzxzb.cn/gcjyxx/004001/subpage.html'
    else:
        # url = 'http://www.lnzxzb.cn/gcjyxx/004001/%s.html' % i
        url = 'http://www.lnzxzb.cn/gcjyxx/004001/' + str(i) + '.html'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
    }
    res = requests.get(url=url, headers=headers)
    tree = etree.HTML(res.text)

    # Save to a txt file ***********************************
    # with open('ztb.txt', 'a', encoding='utf-8') as f:
    #     for j in range(1, 16):
    #         ret = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@href')[0]
    #         ret1 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@title')[0]
    #         ret2 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/span[1]/text()')[0]
    #         # print(ret + ret1 + ret2)
    #         f.write(''.join([ret, ret1, ret2, '\n']))

    # Save to a json file **************************
    # with open('ztb.json', 'a', encoding='utf-8') as f:
    #     for j in range(1, 16):
    #         ret = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@href')[0]
    #         ret1 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@title')[0]
    #         ret2 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/span[1]/text()')[0]
    #         # print(ret + ret1 + ret2)
    #         dic = {'ret': ret, 'ret1': ret1, 'ret2': ret2}
    #         f.write(json.dumps(dic, indent=4, ensure_ascii=False) + ',')

    # Save to a CSV file -- requires import csv ***************************
    # newline='' avoids blank rows on Windows
    with open('ztb.csv', 'a', encoding='utf-8', newline='') as f:
        # delimiter must be a single character, e.g. a space or a comma
        wr = csv.writer(f, delimiter=',')
        # write the header row first, on the first page only, to define the format
        if i == 1:
            wr.writerow(['link', 'title', 'times'])
        for j in range(1, 16):
            ret = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@href')[0]
            ret1 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/p/a/@title')[0]
            ret2 = tree.xpath('//ul[@id="showList"]/li[' + str(j) + ']/span[1]/text()')[0]
            # print(ret + ret1 + ret2)
            wr.writerow([ret, ret1, ret2])
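
One caveat on the json variant above: appending json.dumps(...) + ',' per record leaves a file that is not itself valid JSON. A minimal alternative sketch, in which the helper name page_records and the field names link/title/times are assumptions, collects the rows into one list and dumps it once:

import json

def page_records(tree):
    # Collect the 15 (link, title, time) rows from one listing page
    rows = []
    for j in range(1, 16):
        base = '//ul[@id="showList"]/li[' + str(j) + ']'
        rows.append({
            'link': tree.xpath(base + '/p/a/@href')[0],
            'title': tree.xpath(base + '/p/a/@title')[0],
            'times': tree.xpath(base + '/span[1]/text()')[0],
        })
    return rows

# Accumulate rows across all pages, then dump once -- one valid JSON array
all_rows = []
# ... inside the page loop: all_rows.extend(page_records(tree))
with open('ztb.json', 'w', encoding='utf-8') as f:
    json.dump(all_rows, f, indent=4, ensure_ascii=False)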

III. Using the BeautifulSoup library

1. Introduction to the BeautifulSoup library:

# Introduction to the BeautifulSoup library: BeautifulSoup (BS) is another parsing library. BS relies on an underlying parser to parse data; the parsers it supports include html.parser, lxml, xml, html5lib, and others. Among these, lxml parses quickly and is fault-tolerant, so lxml is the parser most commonly used with BS today.
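
As a quick illustration of that fault tolerance, here is a minimal sketch (the broken_html fragment is a made-up example) showing lxml repairing unclosed tags into a usable tree:

from bs4 import BeautifulSoup

broken_html = '<ul><li>item one<li>item two'  # unclosed <ul> and <li> tags

soup = BeautifulSoup(broken_html, 'lxml')  # lxml tolerates and repairs the markup
print(soup.select('li'))                   # both <li> nodes are recovered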

2. BeautifulSoup usage steps:

# BeautifulSoup usage steps:
from bs4 import BeautifulSoup
soup = BeautifulSoup(res.text, 'lxml')        # instantiate a BeautifulSoup object
tag = soup.select("CSS selector expression")  # returns a list

3. Selector categories:

1) Node selectors
2) Method selectors
3) CSS selectors

Only CSS selectors are demonstrated in the rest of this section; a short sketch of the first two follows this list.
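
A minimal sketch of node selectors and method selectors (the html fragment is a made-up example):

from bs4 import BeautifulSoup

html = '<div id="box"><p class="intro">hello</p><p>world</p></div>'
soup = BeautifulSoup(html, 'lxml')

# 1) Node selector: access tags as attributes of the soup
print(soup.div.p)  # the first <p> inside the first <div>

# 2) Method selectors: find() / find_all()
print(soup.find('p', class_='intro').string)  # 'hello'
print(len(soup.find_all('p')))                # 2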

4. CSS selectors:

# CSS selectors:
# 1. Locate tags by node name and hierarchy: tag selectors & hierarchy selectors
soup.select('title')
soup.select('div > ul > li')  # single-level (direct child) selector
soup.select('div li')         # multi-level (descendant) selector

# 2. Locate tags by their class attribute: class selector
soup.select('.panel')

# 3. Locate tags by their id attribute: id selector
soup.select('#item')

# 4. Nested selection:
ul_list = soup.select('ul')  # still returns a list
for ul in ul_list:
    print(ul.select('li'))

# Getting a node's text or attributes:
tag_obj.string       # direct child text -- returns None if other tags sit alongside the direct text
tag_obj.get_text()   # all text of the node's descendants
tag_obj['attribute'] # get an attribute value
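
A minimal sketch of the difference (the fragment is a made-up example): .string gives None as soon as child tags are mixed in with the text, while get_text() still gathers everything:

from bs4 import BeautifulSoup

html = '<div>direct text <p>nested paragraph</p></div>'
div = BeautifulSoup(html, 'lxml').div

print(div.string)      # None -- mixed content defeats .string
print(div.get_text())  # 'direct text nested paragraph'
print(div.p.string)    # 'nested paragraph' -- a single child text node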

5. BeautifulSoup syntax practice

# Practice example:
from bs4 import BeautifulSoup

html = '''
<div class="panel">
    <div class="panel-heading">
        <h4>BeautifulSoup practice</h4>
    </div>
    <div>
        This is the div's direct child text
        <p>This is a paragraph</p>
    </div>
    <a href="https://www.baidu.com">This is a link to Baidu</a>
    <div class="panel-body">
        <ul class="list" id="list-1">
            <li class="element">first li tag</li>
            <li class="element">second li tag</li>
            <li class="element">third li tag</li>
        </ul>
        <ul class="list list-small">
            <li class="element">one</li>
            <li class="element">two</li>
        </ul>
        <li class="element">tests the multi-level selector</li>
    </div>
</div>
'''
# 1. Instantiate a BeautifulSoup object
soup = BeautifulSoup(html, 'lxml')
# 2. Call CSS selectors to locate tags and get their text or attributes

### 2.1 Locate by node name
# r1 = soup.select('h4')
# print(type(r1))
# # select returns a list, so index 0 takes the first element
# print(r1[0].string)      # get the direct child text
# print(r1[0].get_text())  # get all descendant text

### 2.2 Locate by the node's class
# k1 = soup.select('.panel-heading')
# print(k1[0])

### 2.3 Locate by id
# k2 = soup.select('#list-1')
# print(k2[0].get_text())

### 2.4 Single-level selector
# cc = soup.select('.panel-body > ul > li')
# print(cc)

### 2.5 Multi-level selector
# cc1 = soup.select('.panel-body li')
# print(cc1)

6. Example: scraping the novel Romance of the Three Kingdoms

'''
Scrape the text of Romance of the Three Kingdoms with bs4 syntax
'''
# import requests
# from bs4 import BeautifulSoup
#
# url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
# headers = {
#     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
# }
# res = requests.get(url=url, headers=headers)
# soup = BeautifulSoup(res.text, 'lxml')
# ret = soup.select('.book-mulu ul li')
# for i in ret:
#     title = i.select('a')[0].string
#     comment = 'http://www.shicimingju.com' + i.select('a')[0]['href']
#     ret1 = requests.get(url=comment, headers=headers)
#     res1 = BeautifulSoup(ret1.text, 'lxml')
#     cc = res1.select('.chapter_content')[0].get_text()
#     with open('threecountry.txt', 'a', encoding='utf-8') as f:
#         f.write(cc + '\n')

'''
Scrape the text of Romance of the Three Kingdoms with xpath syntax
'''
import requests
from lxml import etree

url = 'http://www.shicimingju.com/book/sanguoyanyi.html'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
res = requests.get(url=url, headers=headers)
# Instantiate an etree object
tree = etree.HTML(res.text)
ret = tree.xpath('//div[@class="book-mulu"]/ul/li')
for i in ret:
    rec = i.xpath('./a/@href')[0]
    name = i.xpath('./a/text()')[0]
    url = 'http://www.shicimingju.com' + rec
    res1 = requests.get(url=url, headers=headers)
    tree1 = etree.HTML(res1.text)
    # join every <p> of the chapter instead of taking only the first paragraph
    cope = '\n'.join(tree1.xpath('//div[@class="chapter_content"]/p/text()')) + '\n'
    with open(name + '.txt', 'a', encoding='utf-8') as f:
        f.write(cope)


