Python Web Scraping with requests: Usage and Examples

1. Basic Usage

1.1 Installation

pip install requests -i https://pypi.douban.com/simple
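
The -i flag simply points pip at the Douban PyPI mirror, which is faster from some networks; a plain pip install requests against the default index works just as well.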

1.2 One Type and Six Attributes

  • One type: Response
  • Six attributes (see the sketch after this list):
    • response.encoding = 'utf-8'      set the encoding used to decode the page
    • response.text            return the page source as a string
    • response.url             get the requested URL
    • response.content         return the response body as raw bytes
    • response.status_code     return the HTTP status code of the response
    • response.headers         return the response headers
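
As a quick reference, here is a minimal sketch that exercises the type and all six attributes, assuming https://www.baidu.com is reachable from your network:

import requests

response = requests.get('https://www.baidu.com')
print(type(response))            # <class 'requests.models.Response'>
response.encoding = 'utf-8'      # set the encoding before reading .text
print(response.text[:100])       # page source as a string (first 100 characters)
print(response.url)              # the URL that was actually requested
print(response.content[:20])     # raw response body as bytes
print(response.status_code)      # e.g. 200
print(response.headers)          # response headers (a case-insensitive dict)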

2. GET Requests

Example

import requests

url = 'https://www.baidu.com/s?'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
data = {
    'wd': '北京'
}
response = requests.get(url=url, params=data, headers=headers)
content = response.text
print(content)
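
Because the keyword dictionary is passed as params, requests URL-encodes it and appends it to the query string, so the request above is equivalent to fetching https://www.baidu.com/s?wd=北京. The User-Agent header matters here: without a browser-like UA, Baidu typically returns an anti-crawler verification page instead of real search results.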

3. POST Requests

Example

import json
import requests

url = 'https://fanyi.baidu.com/sug'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
data = {
    'kw': 'eye'
}
response = requests.post(url=url, data=data, headers=headers)
content = response.text
obj = json.loads(content)
print(obj)
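
Note that for a POST request the dictionary is passed as data (a form-encoded body) rather than params. Since the sug endpoint returns JSON, obj = response.json() is an equivalent shortcut for json.loads(response.text).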

4. Proxies

Example

import requests

url = 'https://www.baidu.com/s?'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
data = {
    'wd': 'ip'
}
proxy = {
    'http': '183.246.170.14:30001'
}
response = requests.get(url=url, params=data, headers=headers, proxies=proxy)
content = response.text
with open("daili.html", "w", encoding="utf-8") as fp:
    fp.write(content)
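
The proxies dict maps URL schemes to proxy addresses, and requests picks the entry that matches the scheme of the request URL. Because the URL above is https, add an 'https' entry as well, e.g. proxy = {'http': '183.246.170.14:30001', 'https': '183.246.170.14:30001'}; otherwise the request goes out directly. The sample IP is only an illustration and is unlikely to still be live, so substitute a working proxy of your own before running this.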

5. Cookie Login to gushiwen.cn

import requests
from lxml import etree
from bs4 import BeautifulSoup

# URL of the login page
url = 'https://so.gushiwen.cn/user/login.aspx?from=http://so.gushiwen.cn/user/collect.aspx'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
}
response = requests.get(url=url, headers=headers)
content = response.text

tree = etree.HTML(content)

# (1) Extract the hidden __VIEWSTATE field
viewstate = tree.xpath('//input[@id="__VIEWSTATE"]/@value')
# (2) Extract the hidden __VIEWSTATEGENERATOR field
viewstategenerator = tree.xpath('//input[@id="__VIEWSTATEGENERATOR"]/@value')

# (3) Extract the captcha image URL
code = tree.xpath('//img[@id="imgCode"]/@src')[0]
code_url = 'https://so.gushiwen.cn' + code
# Download the captcha image locally so it can be typed in manually
session = requests.session()
# Fetch the captcha image through the session so its cookie is kept
response_code = session.get(code_url)
# Use the binary content here, because we are downloading an image
content_code = response_code.content
with open("code.jpg", "wb") as fp:
    fp.write(content_code)

code_name = input('Please enter the captcha: ')

# Submit the login form
url_post = 'https://so.gushiwen.cn/user/login.aspx?from=http%3a%2f%2fso.gushiwen.cn%2fuser%2fcollect.aspx'

data_post = {
    '__VIEWSTATE': viewstate,
    '__VIEWSTATEGENERATOR': viewstategenerator,
    'from': 'http://so.gushiwen.cn/user/collect.aspx',
    'email': '15284124517',    # replace with your own gushiwen.cn account
    'pwd': 'xy251753',         # replace with your own password
    'code': code_name,
    'denglu': '登录'           # fixed form field value expected by the server
}

response_post = session.post(url=url_post, headers=headers, data=data_post)
content_post = response_post.text

with open("gushiwen.html", "w", encoding='utf-8') as fp: fp.write(content_post)
