The previous article covered the lxml module and XPath syntax. This one puts them into practice by writing a scraper for the new-home listings on a property site (Lianjia).
1. Fetching the Page
Target URL: https://cd.fang.lianjia.com/loupan/
import requests

Link = 'https://cd.fang.lianjia.com/loupan/'
Headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
}
response = requests.get(url=Link, headers=Headers)
if response.status_code == 200:
    html_source = response.text
    print(html_source)
else:
    print(f'Status code: {response.status_code}, please check the request')
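If the site is slow or temporarily unreachable, requests.get will hang or hand back an error page. Below is a minimal defensive sketch, reusing the Link and Headers defined above; the 10-second timeout is an arbitrary choice of mine, not something the original script uses:

try:
    # timeout stops the call from hanging forever; raise_for_status() turns
    # 4xx/5xx responses into an exception we can catch
    response = requests.get(url=Link, headers=Headers, timeout=10)
    response.raise_for_status()
    html_source = response.text
except requests.exceptions.RequestException as exc:
    print(f'Request failed: {exc}')
    html_source = None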
2. Scraping the New-Home Data
(1) Collecting every listing on the current page

from lxml import etree

root = etree.HTML(html_source)
# Locate the li tag of every listing and build a list of them
li_list = root.xpath('/html/body/div[3]/ul[@class="resblock-list-wrapper"]/li')
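One caveat: an absolute path that starts at /html/body breaks the moment the site adds or removes a wrapper div. A hedged alternative, assuming the ul keeps its resblock-list-wrapper class, is to match the list relatively:

# Relative XPath: find the ul by its class anywhere in the tree instead of
# counting div positions down from <body>
li_list = root.xpath('//ul[@class="resblock-list-wrapper"]/li')
print(f'Found {len(li_list)} listings on this page')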
(2) Extracting selected fields from each listing
for li in li_list:
    # The listing name is easy to locate; the unit price is split across two
    # spans, so the XPath union operator (|) is needed to pick up both parts
    house_name = li.xpath('./div/div[1]/h2/a/text()')  # listing name
    house_unit_price = li.xpath('./div/div[6]/div[1]/span[1]/text()|./div/div[6]/div[1]/span[2]/text()')  # unit price (figure + unit)
    print(house_name[0], ''.join(house_unit_price))
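Note that xpath() always returns a list, so house_name[0] raises IndexError whenever a listing is missing that node. A small sketch with a hypothetical helper (first_or_empty is my own name, not part of the original code) makes the loop tolerant of such gaps:

def first_or_empty(nodes):
    # xpath() returns a list of text nodes; fall back to '' when it is empty
    return nodes[0].strip() if nodes else ''

for li in li_list:
    house_name = first_or_empty(li.xpath('./div/div[1]/h2/a/text()'))
    house_unit_price = ''.join(li.xpath('./div/div[6]/div[1]/span[1]/text()|./div/div[6]/div[1]/span[2]/text()'))
    print(house_name, house_unit_price)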
3. Complete Code
Writing a scraper takes solid fundamentals, but it also takes careful analysis of the page structure.

import requests
from lxml import etree

Link = 'https://cd.fang.lianjia.com/loupan/'
Headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36'
}
response = requests.get(url=Link, headers=Headers)
if response.status_code == 200:
    html_source = response.text
    root = etree.HTML(html_source)
    # Locate the li tag of every listing and build a list of them
    li_list = root.xpath('/html/body/div[3]/ul[@class="resblock-list-wrapper"]/li')
    for li in li_list:
        # The listing name is easy to locate; the other fields are spread across
        # several nodes, so the XPath union operator (|) collects the pieces
        house_name = li.xpath('./div/div[1]/h2/a/text()')  # listing name
        house_unit_price = li.xpath('./div/div[6]/div[1]/span[1]/text()|./div/div[6]/div[1]/span[2]/text()')  # unit price (figure + unit)
        house_price = li.xpath('./div/div[6]/div[@class="second"]/text()')  # total price range
        house_address = li.xpath('./div/div[2]/span[1]/text()|./div/div[2]/span[2]/text()|./div/div[2]/a/text()')  # location
        print(house_name[0], ''.join(house_unit_price), house_price[0], '/'.join(house_address))
else:
    print(f'Status code: {response.status_code}, please check the request')
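Printing to the console is fine for a quick check, but in practice you usually want to keep the data. Here is a minimal sketch using the standard-library csv module; the filename new_houses.csv and the rows list (filled inside the scraping loop instead of printing) are assumptions of mine, not part of the original script:

import csv

# rows is assumed to hold one (name, unit_price, price_range, address)
# tuple per listing, appended inside the for-li loop above
rows = []

# utf-8-sig adds a BOM so the Chinese text opens cleanly in Excel
with open('new_houses.csv', 'w', newline='', encoding='utf-8-sig') as f:
    writer = csv.writer(f)
    writer.writerow(['name', 'unit_price', 'price_range', 'address'])
    writer.writerows(rows)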