Web Scraping (Page Collection)


Source code:

import requests

# Step 1: specify the URL
url = 'https://www.sogou.com/web'
kw = input('input what you need')
param = {'query': kw}
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}

# Step 2: send the request
response = requests.get(url=url, params=param, headers=header)

# Step 3: get the response data; .text returns the response body as a string
page_text = response.text

# Step 4: save it to a local file
path = kw + '.html'
with open(path, 'w', encoding='utf-8') as fp:
    fp.write(page_text)
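Running the script and typing a keyword such as python at the prompt downloads the Sogou results page for that keyword and writes it to python.html in the working directory (assuming the request is not blocked).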

Step-by-step breakdown:

1. Specify the URL

url = 'https://www.sogou.com/web'

2. Send the request

response = requests.get(url=url, params=param, headers=header)
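The original code assumes the request always succeeds. As an optional addition (not part of the original tutorial), requests can raise an error for failed responses so a blocked request does not silently write an error page to disk:

# Optional robustness check (not in the original code):
# raise_for_status() raises an HTTPError for 4xx/5xx status codes.
response.raise_for_status()
print(response.status_code)  # 200 means the server accepted the request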

3. Get the response data

page_text = response.text
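response.text decodes the body with the charset requests infers from the response headers. If the saved HTML ends up garbled, one common fix (an optional addition, not in the original code) is to let requests re-guess the encoding from the body itself before reading .text:

# Optional: fix garbled output by re-detecting the charset from the body.
response.encoding = response.apparent_encoding
page_text = response.text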

4. Save the data

with open(path, 'w', encoding='utf-8') as fp:
    fp.write(page_text)

Explanation:

header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36 Edg/104.0.1293.47'}

The header is used to get around the UA (User-Agent) anti-scraping check; without it, the crawled page comes back as a "page does not exist" style response instead of the real search results.
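A small sketch of the same point (exact behaviour depends on Sogou's anti-bot rules at the time, so treat it as illustrative only): send the request with and without a browser User-Agent and compare what comes back.

import requests

url = 'https://www.sogou.com/web'
param = {'query': 'python'}
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.5112.81 Safari/537.36'}

# Without a browser UA, Sogou tends to serve a verification / "not found"
# style page; with the UA it returns the real search results.
blocked = requests.get(url, params=param)
normal = requests.get(url, params=param, headers=header)
print(len(blocked.text), len(normal.text))  # the two responses differ noticeably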

 param = {'query': kw}

param holds the search keyword as a dictionary; requests URL-encodes the key/value pairs and appends them to the URL as the query string.
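To see exactly what requests does with the param dictionary, you can build a prepared request offline (a minimal sketch, no network call needed):

from requests import Request

# prepare() builds the final URL without actually sending anything.
prepared = Request('GET', 'https://www.sogou.com/web', params={'query': '爬虫'}).prepare()
print(prepared.url)  # https://www.sogou.com/web?query=%E7%88%AC%E8%99%AB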
