I. Topic Background
1. Background
With the arrival of the big-data era, web crawlers play an ever more important role on the internet. The web holds a massive amount of data, and automatically and efficiently extracting the information we care about is an important problem; crawler technology was created to solve exactly this. For a student majoring in Data Science and Big Data Technology, web crawling is an essential skill. Combining the assignment with my own interests, I chose to crawl the data of all videos published (before 2023-05-20) by 老番茄, a Bilibili UP主 (uploader) I like, and to run a simple visual analysis on them.
2. Expected Goals
老番茄 has been creating videos on Bilibili since 2013 and was the platform's first individual UP主 to reach 10 million followers; he has also published close to 500 videos. By scraping and analyzing his video information, the goal is to find out when he started gaining traction, whether view counts are related to upload time, which zones (categories) his videos mainly belong to, how many videos he produces per year, and so on.
II. Scraping Plan for the Video Data of Bilibili UP主 老番茄
1. Project Title
Scraping the video data of Bilibili UP主 老番茄
2. Content to Scrape and Data Characteristics
The scraped content is organized into 16 fields: video title, video URL, uploader name, uploader UID, upload time, video duration, collaboration flag, video zone, danmaku count, view count, like count, coin count, favourite count, comment count, share count, and scrape time. The 478 scraped video records are saved to an Excel sheet.
3. Design Overview
(1) Approach:
- Open a Bilibili page, open the browser's developer tools, switch to the Network tab, and refresh.
- Send requests, read the responses to obtain JSON strings, and handle the JS "decryption" (the WBI request signature).
- Parse out the desired data and save it to a file.
(2) Technical difficulties:
- Coping with anti-crawling measures.
- The JS "decryption", i.e. computing the WBI signature of the request parameters (the sketch after this list shows what happens without it).
- Mapping each video to its zone (category).
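To make the second difficulty concrete, here is a minimal sketch (my illustration, not part of the original project) of calling the space-videos API without a WBI signature. Bilibili's WBI-protected endpoints reject unsigned requests with a non-zero error code, which is exactly what the web_rid() function in Section IV works around.

# Minimal sketch: an unsigned request to the WBI-protected space-videos API.
import requests
r = requests.get(
    'https://api.bilibili.com/x/space/wbi/arc/search',
    params={'mid': '546195', 'pn': 1, 'ps': 30},
    headers={'User-Agent': 'Mozilla/5.0'},
)
print(r.json().get('code'))  # expected: a non-zero error code (e.g. -403) instead of 0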
III. Analysis of 老番茄's Bilibili Homepage
Hover over page elements with the mouse cursor in the element inspector to pin down exactly which parts of the page hold the content to scrape, so that they can be analyzed.
IV. Crawler Program Design
1. Data Scraping and Collection
Target site: https://space.bilibili.com/546195/video
(1) Import the required libraries and build a user-agent pool to counter anti-crawling. The user-agent pool is taken from the CSDN post 【Python】【进阶篇】三、Python爬虫的构建User-Agnet代理池 (https://deepboat.blog.csdn.net/).
import requests
import random
import time
import datetime
import pandas as pd
import hashlib
from pprint import pprint
from lxml import etree
# Scrape all video information of Bilibili UP主 老番茄
up_mid = '546195'  # 老番茄's UID (mid)
max_page = 16  # maximum number of pages to crawl (16 pages x 30 videos per page covers all 478 videos)
''' User-agent pool to counter anti-crawling '''
user_agent = ["Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)","Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)","Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)","Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)","Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)","Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)","Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)","Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6","Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1","Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0","Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5","Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11","Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20","Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52"
]
def web_rid(param):
    """Compute the WBI signature (the 'JS decryption' step)."""
    # img_key + sub_key, hard-coded here for this crawl; on the live site these
    # two keys come from Bilibili's nav API and appear to rotate periodically.
    n = "9cd4224d4fe74c7e9d6963e2ef891688" + "263655ae2cad4cce95c9c401981b044a"
    # Shuffle the 64 characters with a fixed index table and keep the first 32 (the mixin key)
    c = ''.join([n[i] for i in
                 [46, 47, 18, 2, 53, 8, 23, 32, 15, 50, 10, 31, 58, 3, 45, 35, 27, 43, 5, 49,
                  33, 9, 42, 19, 29, 28, 14, 39, 12, 38, 41, 13, 37, 48, 7, 16, 24, 55, 40, 61,
                  26, 17, 0, 1, 60, 51, 30, 4, 22, 25, 54, 21, 56, 59, 6, 63, 57, 62, 11, 36,
                  20, 34, 44, 52]][:32])
    s = int(time.time())  # current timestamp, part of the signed payload
    param["wts"] = s
    # Sort the parameters by key and join them into a query string
    param = "&".join([f"{i[0]}={i[1]}" for i in sorted(param.items(), key=lambda x: x[0])])
    # MD5 of (query string + mixin key) is the w_rid signature
    return hashlib.md5((param + c).encode(encoding='utf-8')).hexdigest(), s
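One helper that the later code calls, trans_date, never appears in the post. A minimal reconstruction consistent with how it is used (the API's 'created' field is a Unix timestamp that ends up in the upload-time column) might look like this; the exact output format is my assumption:

def trans_date(v_timestamp):
    """Hypothetical reconstruction: convert a Unix timestamp to 'YYYY-MM-DD HH:MM:SS'."""
    return datetime.datetime.fromtimestamp(v_timestamp).strftime('%Y-%m-%d %H:%M:%S')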
(2) Classify the videos to scrape; the typeid values below were found in the developer-tools view.
The code:
''' Map a Bilibili typeid (tid) to its zone name '''
def get_video_type(v_num):
    if v_num == 28:
        return '音乐'  # Music
    elif v_num == 17:
        return '游戏'  # Gaming
    elif v_num == 71:
        return '娱乐'  # Entertainment
    elif v_num == 138 or v_num == 161 or v_num == 239:
        return '生活'  # Life
    elif v_num == 85:
        return '影视'  # Film & TV
    elif v_num == 218:
        return '动物圈'  # Animals
    elif v_num == 214:
        return '美食'  # Food
    else:
        return '未知分区'  # unknown zone
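The same mapping reads more compactly as a dict lookup; an equivalent sketch (the tid values are copied from the function above, the dict name is mine):

# Equivalent dict-based version of get_video_type
TID_TO_ZONE = {
    28: '音乐', 17: '游戏', 71: '娱乐',
    138: '生活', 161: '生活', 239: '生活',
    85: '影视', 218: '动物圈', 214: '美食',
}

def get_video_type_v2(v_num):
    return TID_TO_ZONE.get(v_num, '未知分区')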
(3) Get the URL list of the videos on the first few pages
The code:
''' Get the URL list of the videos on the first few pages '''
def get_url_list():
    headers = {
        'User-Agent': random.choice(user_agent),
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Connection': 'keep-alive',
    }
    url_list = []            # video URLs
    title_list = []          # video titles
    author_list = []         # uploader names
    mid_list = []            # uploader UIDs
    create_time_list = []    # upload times
    play_count_list = []     # view counts
    length_list = []         # video durations
    comment_count_list = []  # comment counts
    is_union_list = []       # collaboration flag
    type_list = []           # zone / category
    danmu_count_list = []    # danmaku counts
    for i in range(1, max_page + 1):  # first max_page pages
        print('Crawling page {}'.format(str(i)))
        url = 'https://api.bilibili.com/x/space/wbi/arc/search'
        params = {
            'mid': up_mid,
            'ps': 30,  # page size
            'tid': 0,
            'pn': i,   # page number
            'keyword': '',
            'order': 'pubdate',
            'platform': 'web',
            'web_location': '1550101',
            'order_avoided': 'true',
        }
        # add the WBI signature parameters
        ret = web_rid(params)
        w_rid = ret[0]
        wts = ret[1]
        params['w_rid'] = w_rid
        params['wts'] = wts
(4) Send the request (this code continues inside the page loop of get_url_list)
        # send the request
        r = requests.get(url, headers=headers, params=params)
        print(r.status_code)  # 200 on success
        json_data = r.json()
        video_list = json_data['data']['list']['vlist']
        for v in video_list:  # one dict per video
            bvid = v['bvid']
            url = 'https://www.bilibili.com/video/' + bvid
            url_list.append(url)
            title = v['title']
            title_list.append(title)
            author = v['author']
            author_list.append(author)
            mid = v['mid']
            mid_list.append(mid)
            create_time = v['created']
            create_time = trans_date(v_timestamp=create_time)
            create_time_list.append(create_time)
            play_count = v['play']
            play_count_list.append(play_count)
            length = v['length']
            length_list.append(length)
            comment = v['comment']
            comment_count_list.append(comment)
            is_union = '是' if v['is_union_video'] == 1 else '否'
            is_union_list.append(is_union)
            type_name = get_video_type(v_num=v['typeid'])
            type_list.append(type_name)
            danmu_count = v['video_review']
            danmu_count_list.append(danmu_count)
    return url_list, title_list, author_list, mid_list, create_time_list, play_count_list, length_list, comment_count_list, is_union_list, type_list, danmu_count_list
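The anti-crawling difficulty listed in Section II also suggests pacing: time is imported, yet the requests above fire back to back. A short randomized pause inside the page loop (my addition, not in the original code) is a common mitigation:

        # Hypothetical addition: sleep 1-3 seconds between pages to stay under rate limits
        time.sleep(random.uniform(1, 3))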
(5) Scrape each video's detailed data and save everything to an Excel sheet
''' Scrape the detailed data of each video '''
def get_video_info(v_url):
    headers = {
        'User-Agent': random.choice(user_agent),
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
        'Connection': 'keep-alive',
    }
    r = requests.get(v_url, headers=headers)
    print('Current url:', v_url)
    # parse the page with XPath
    html = etree.HTML(r.content)
    try:  # like count
        like_count = html.xpath('//*[@id="arc_toolbar_report"]/div[1]/div[1]/div/span/text()')[0]
        if like_count.endswith('万'):
            like_count = int(float(like_count.replace('万', '')) * 10000)  # '万' means 10,000
    except:
        like_count = ''
    try:  # coin count
        coin_count = html.xpath('//*[@id="arc_toolbar_report"]/div[1]/div[2]/div/span/text()')[0]
        if coin_count.endswith('万'):
            coin_count = int(float(coin_count.replace('万', '')) * 10000)
    except:
        coin_count = ''
    try:  # favourite count
        fav_count = html.xpath('//*[@id="arc_toolbar_report"]/div[1]/div[3]/div/span/text()')[0]
        if fav_count.endswith('万'):
            fav_count = int(float(fav_count.replace('万', '')) * 10000)
    except:
        fav_count = ''
    try:  # share count
        share_count = html.xpath('//*[@class="video-share-info video-toolbar-item-text"]/text()')[0]
        if share_count.endswith('万'):
            share_count = int(float(share_count.replace('万', '')) * 10000)
    except Exception as e:
        print('share_count except', str(e))
        share_count = ''
    members = None  # initialise so the return below never hits an unbound name
    try:  # collaboration members, if any
        union_team = html.xpath('//*[@id="member-container"]')
        for node in union_team:
            url_tail = node.xpath('./div/div/a/@href')
            print(url_tail)
            members = [m.replace('//space.bilibili.com/', '') for m in url_tail]
            print('members is: {}'.format(members))
    except:
        members = None
    return like_count, coin_count, fav_count, share_count, members

if __name__ == '__main__':
    url_list, title_list, author_list, mid_list, create_time_list, play_count_list, length_list, comment_count_list, is_union_list, type_list, danmu_count_list = get_url_list()
    pprint(title_list)
    pprint(is_union_list)
    pprint(type_list)
    like_count_list = []   # like counts
    coin_count_list = []   # coin counts
    fav_count_list = []    # favourite counts
    share_count_list = []  # share counts
    now_list = []          # scrape timestamps
    for url in url_list:
        now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f')  # scrape time
        like_count, coin_count, fav_count, share_count, members = get_video_info(v_url=url)
        like_count_list.append(like_count)
        coin_count_list.append(coin_count)
        fav_count_list.append(fav_count)
        share_count_list.append(share_count)
        now_list.append(now)
    df = pd.DataFrame(data={
        '视频标题': title_list,
        '视频地址': url_list,
        'UP主昵称': author_list,
        'UP主UID': mid_list,
        '视频上传时间': create_time_list,
        '视频时长': length_list,
        '是否合作视频': is_union_list,
        '视频分区': type_list,
        '弹幕数': danmu_count_list,
        '播放量': play_count_list,
        '点赞数': like_count_list,
        '投币量': coin_count_list,
        '收藏量': fav_count_list,
        '评论数': comment_count_list,
        '转发量': share_count_list,
        '实时爬取时间': now_list,
    })
    df.to_excel('laofanqie_B站视频数据.xlsx', index=None)
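A note for anyone rerunning the script: besides the standard library it needs requests, pandas, and lxml, plus openpyxl, which pandas uses to write .xlsx files (pip install requests pandas lxml openpyxl).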
[Figure: partial screenshot of the output Excel file, 438 rows in total]