[点晴永久免费OA] What Is a Web Crawler?
What can a crawler do for you in everyday life?

```mermaid
graph TD
    A[Web crawler] --> B[Compare prices, save money]
    A --> C[Snatch limited-edition sneakers]
    A --> D[Follow your idol's updates]
    A --> E[Check weather and flights]
    A --> F[Find rental listings]
```
✅ Core principle: simulate human browsing behavior and scrape the target data from web pages in bulk.
```python
# A relatable, everyday example to understand crawlers
import requests

# The weather page you open in a browser every day
def get_weather():
    response = requests.get("http://tianqi.com")
    return response.text  # a crawler does exactly this, just in code!

print("The essence of a crawler: a program that fetches web data automatically")
```
1️⃣ **Install Python 3.8+**: download it from the official site (python.org)
2️⃣ Install a development tool: PyCharm Community Edition (free) is recommended
3️⃣ Install the required libraries:

```
pip install beautifulsoup4 requests lxml xlwt
```

💡 Tip: Windows users can paste the command above into cmd and run it.
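Not sure the installs worked? A quick sanity check: if this script runs without errors, your environment is ready.

```python
# Import each library installed above; any failure means a missing package
import requests
import bs4
import lxml
import xlwt

print("All libraries imported successfully, environment is ready!")
```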
Every crawler boils down to a three-step workflow:

```mermaid
graph LR
    A[Send request] --> B[Parse data]
    B --> C[Store results]
```
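Before diving into each step, here is a minimal end-to-end sketch of the three boxes above, using requests and BeautifulSoup against the placeholder URL https://example.com (a stand-in, not the tutorial's real target):

```python
import requests
from bs4 import BeautifulSoup

# Step 1: send the request
resp = requests.get("https://example.com")

# Step 2: parse the data (here, just the page title)
soup = BeautifulSoup(resp.text, "html.parser")
title = soup.find("title").text

# Step 3: store the result
with open("result.txt", "w", encoding="utf-8") as f:
    f.write(title)
print("Saved page title:", title)
```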
Step 1: send the request and fetch the HTML.

```python
import urllib.request

# The key to disguising yourself as a browser!
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}

def get_html(url):
    req = urllib.request.Request(url, headers=headers)
    response = urllib.request.urlopen(req)
    return response.read().decode("utf-8")  # decode as UTF-8 to avoid garbled Chinese text

# Test: fetch the first page
print(get_html("https://movie.douban.com/top250")[:500])
```
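Real-world fetches fail sometimes (timeouts, 403s, dead links). Here is a sketch of a more defensive variant (get_html_safe is a hypothetical name, not part of the original tutorial), using only the standard urllib.error exceptions:

```python
import urllib.request
import urllib.error

def get_html_safe(url, timeout=10):
    # Same as get_html, but returns None instead of crashing on failure
    req = urllib.request.Request(url, headers=headers)
    try:
        response = urllib.request.urlopen(req, timeout=timeout)
        return response.read().decode("utf-8")
    except urllib.error.HTTPError as e:  # catch HTTPError before its parent URLError
        print(f"HTTP error {e.code} for {url}")
    except urllib.error.URLError as e:
        print(f"Network error for {url}: {e.reason}")
    return None
```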
Step 2: parse the HTML and pull out each movie's link, title, and rating.

```python
from bs4 import BeautifulSoup

# The secret weapon: extract movie info from a single page
def parse_html(html):
    soup = BeautifulSoup(html, "html.parser")
    movie_list = []
    for item in soup.find_all('div', class_='item'):
        # dict keys kept in Chinese: 链接 = link, 标题 = title, 评分 = rating
        movie = {}
        movie['链接'] = item.find('a')['href']
        movie['标题'] = item.find('span', class_='title').text
        movie['评分'] = item.find('span', class_='rating_num').text
        movie_list.append(movie)
    return movie_list

# Test the parser
html = get_html("https://movie.douban.com/top250")
print(parse_html(html)[0])
```
✨ Sample output:

```python
{'链接': 'https://movie.douban.com/subject/1292052/',
 '标题': '肖申克的救赎',
 '评分': '9.7'}
```
Step 3: store the results in an Excel file with xlwt.

```python
import xlwt

def save_to_excel(data, filename):
    workbook = xlwt.Workbook(encoding='utf-8')
    sheet = workbook.add_sheet('豆瓣电影')
    # Write the header row (rank, title, rating, detail link)
    headers = ['排名', '标题', '评分', '详情链接']
    for col, header in enumerate(headers):
        sheet.write(0, col, header)
    # Write the data rows, starting at row 1
    for row, movie in enumerate(data, 1):
        sheet.write(row, 0, row)  # the row number doubles as the rank
        sheet.write(row, 1, movie['标题'])
        sheet.write(row, 2, movie['评分'])
        sheet.write(row, 3, movie['链接'])
    workbook.save(filename)

# Putting it all together: scrape all 10 pages and save
all_movies = []
for i in range(10):  # Top 250 = 10 pages of 25 movies each
    url = f"https://movie.douban.com/top250?start={i*25}"
    html = get_html(url)
    all_movies.extend(parse_html(html))
save_to_excel(all_movies, "豆瓣Top250.xls")
```
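If you don't need Excel specifically, Python's built-in csv module does the same job with no extra dependency. A minimal sketch reusing the same dict keys (save_to_csv is a hypothetical helper, not from the original tutorial):

```python
import csv

def save_to_csv(data, filename):
    # utf-8-sig so Excel opens the Chinese headers correctly
    with open(filename, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(['排名', '标题', '评分', '详情链接'])
        for rank, movie in enumerate(data, 1):
            writer.writerow([rank, movie['标题'], movie['评分'], movie['链接']])

save_to_csv(all_movies, "豆瓣Top250.csv")
```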
💡 A few practical tips before you scale up:

1️⃣ Don't hammer the server; add a pause between requests so you don't get blocked:

```python
import time
time.sleep(2)  # pause 2 seconds after each request
```
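For example, slotting the delay into the ten-page loop from earlier (same loop, just rate-limited):

```python
import time

all_movies = []
for i in range(10):
    url = f"https://movie.douban.com/top250?start={i*25}"
    all_movies.extend(parse_html(get_html(url)))
    time.sleep(2)  # be polite: wait before requesting the next page
```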
2️⃣ Seeing garbled characters? The page may not be UTF-8; try another encoding:

```python
response.content.decode('utf-8')  # or 'gbk' / 'GB2312' for older Chinese sites
```

(`response.content` is the requests style; with urllib, decode `response.read()` instead.)
3️⃣ Check the target site's robots.txt before scraping (e.g. https://www.douban.com/robots.txt) and respect what it disallows.
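You can even check it programmatically with the standard library's urllib.robotparser; a minimal sketch:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.douban.com/robots.txt")
rp.read()  # download and parse the robots.txt

# can_fetch(user_agent, url): is this URL allowed for that agent?
print(rp.can_fetch("*", "https://movie.douban.com/top250"))
```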
Q&A: Common questions
Q: Do crawlers have to be written in Python?
A: You can write one in Java, PHP, or C#, but Python is the most beginner-friendly.

Q: Do I need a math background?
A: Basic arithmetic is plenty; it's a zero-barrier entry!