Batch scraping and saving to a database
A small episode
- Slightly embarrassing: at first I thought I had failed. When I swapped in a different URL, the output was an empty list, so I assumed that every change of URL required a new cookie, and if that were true I would have had to give up.
- Then I grabbed a fresh cookie and noticed that only the timestamp had changed; after successfully scraping several different URLs, I could infer that a cookie stays valid for a certain period of time.

- You can see from the cookie itself that it contains both a point in time and a validity duration.
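That observation can be checked directly in Python: a `Set-Cookie` style string carries an absolute `expires` timestamp and a `Max-Age` lifetime in seconds, and the stdlib `http.cookies` module parses both. A small sketch (the session value here is shortened for illustration):

```python
from http.cookies import SimpleCookie

# Parse a Set-Cookie style string; the session value is shortened for illustration.
raw = ("jianyue_session=abc123; expires=Wed, 11-Dec-2019 04:10:16 GMT; "
       "Max-Age=7200; path=/; httponly")
cookie = SimpleCookie()
cookie.load(raw)
morsel = cookie["jianyue_session"]
print(morsel["max-age"])   # 7200 -- lifetime in seconds
print(morsel["expires"])   # Wed, 11-Dec-2019 04:10:16 GMT -- absolute expiry
```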
Code
```python
import requests
from lxml import etree
import pymysql


def my_request(url):
    """Fetch one detail page and return the scraped fields as a dict."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0',
        'Cookie': 'jianyue_session=eyJpdiI6ImZoZVlcL1RcLzRuN0U2cmlCR2ZJcTFhdz09IiwidmFsdWUiOiJrbzZkWE9NREkxSkMwaHhEZXc0THB1NFhTXC85aDlmSjNqU2o2ZUtcL2NrdGNPeWg4cWd5XC9JQ3NENVlpcnhqcTk4eWt2NTRYNjBoMFZYUExMdGJKWUJKZz09IiwibWFjIjoiNDU5MmEzZmE1N2RhMzQ0MTFlM2M2NDMyZjg3MTQ0NTE2ZGFmM2I1ZTY1MzA5MWY1MDFlNmRiMjliMmI5ZTUxYyJ9; expires=Wed, 11-Dec-2019 04:10:16 GMT; Max-Age=7200; path=/; httponly',
    }
    params = {
        "_token": "P1o8Fz9ZOAuBojBsNGNfPa9vivr5PqRBUFwstL8I",
        "mobile": "15263819410",
        "password": "15263819410",
        "remember": "1",
    }
    post_url = 'http://jianyue.pro/login'
    image_url = "http://jianyue.pro"
    # Log in first so the session is authenticated, then fetch the page.
    session = requests.Session()
    session.post(post_url, data=params, headers=headers)
    response = session.get(url, headers=headers)
    response.encoding = 'utf-8'
    if response.status_code == 200:
        html = etree.HTML(response.text)
        result_title = ''.join(html.xpath('/html/head/title/text()'))
        result_image = image_url + ''.join(html.xpath('/html/body/div[4]/div/div/img/@src'))
        result_content = ''.join(html.xpath('/html/body/div[5]/p/text()'))
        result_writer = ''.join(html.xpath('/html/body/div[4]/div/dl/dd[2]/span/text()'))
        result_sorce = ''.join(html.xpath('/html/body/div[4]/div/dl/dd[1]/span[2]/text()'))
        return {"title": result_title, "writer": result_writer,
                "content": result_content, "sorce": result_sorce,
                "image": result_image}
    else:
        return "page not found"


if __name__ == '__main__':
    for num_1 in range(7836, 9200):
        id = num_1
        url = "http://jianyue.pro/app/ebook/details/" + str(num_1)
        result = my_request(url)
        if result == "page not found":
            continue
        db = pymysql.connect(host="localhost", user="root", password="password",
                             database="book_xianyushaishu", charset="utf8")
        cursor = db.cursor()
        sql = ("INSERT INTO JYUE (TITLE, ID, WRITER, CONTENT, SORCE, IMAGE) "
               "VALUES ('%s', '%d', '%s', '%s', '%s', '%s')"
               % (result['title'], id, result['writer'],
                  result['content'], result['sorce'], result['image']))
        try:
            cursor.execute(sql)
            db.commit()
            cursor.close()
            db.close()
            print("record %d scraped successfully" % num_1)
        except Exception:
            print("record %d failed ---------------" % num_1)
    print("done")
```
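One caveat about the string-formatted INSERT above: it breaks as soon as a scraped title or content contains a quote character. The safer pattern is the DB-API parameterized form. Here is a minimal sketch using the stdlib `sqlite3` module so it runs without a MySQL server; pymysql works the same way, except that its placeholder is `%s` rather than `?`. The table mirrors the JYUE columns above, and the row values are made up for illustration.

```python
import sqlite3

# Parameterized insert: the driver escapes the values, so quotes in scraped
# text cannot break the statement. sqlite3 uses ? placeholders; pymysql
# uses %s, but the cursor.execute(sql, values) call shape is identical.
db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE JYUE (TITLE TEXT, ID INTEGER, WRITER TEXT, "
            "CONTENT TEXT, SORCE TEXT, IMAGE TEXT)")
row = ("It's a title with a quote", 7836, "someone",
       "some content", "5.0", "http://jianyue.pro/img.jpg")
cur.execute("INSERT INTO JYUE (TITLE, ID, WRITER, CONTENT, SORCE, IMAGE) "
            "VALUES (?, ?, ?, ?, ?, ?)", row)
db.commit()
print(cur.execute("SELECT TITLE FROM JYUE").fetchone()[0])
```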
- The code is lightly wrapped: the my_request function takes a URL, scrapes the relevant fields, and returns them as a dictionary.
- The data is then written to the local database; I won't explain that part much, since I adapted it from a template myself.
- Also, if the code in the try: block raises an error, execution jumps straight to the except block, so the program can keep running.
- for num_1 in range(7836,9200):
The arguments to range are the starting and ending page IDs.
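The try/except behavior described above can be boiled down to a few lines: an error raised inside one iteration is caught, and the loop simply moves on to the next item.

```python
# The pattern from the main loop in miniature: an error in one iteration is
# caught by except, and the loop continues with the next item.
results = []
for num in range(3):
    try:
        if num == 1:
            raise ValueError("simulated failed scrape")  # stand-in for a real error
        results.append(num)
        print("record %d succeeded" % num)
    except Exception:
        print("record %d failed ---------------" % num)
print(results)  # [0, 2]
```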

- Note: the cookie is valid for 7200 s, so remember to swap in a fresh one in time. A multithreaded version would be faster, but I won't write one here (truthfully, I don't know how yet; I've only read about it and will learn it properly when I actually need it).
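For reference, here is a rough sketch of what a threaded version could look like with the stdlib concurrent.futures module. The fetch_page function is a hypothetical stand-in for my_request; a real version would also perform the HTTP request, parse the page, and write to the database. Threads help here because the work is I/O-bound (mostly waiting on the network).

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for my_request: returns the page ID and URL instead
# of actually fetching and parsing the page.
def fetch_page(num):
    url = "http://jianyue.pro/app/ebook/details/" + str(num)
    return num, url  # placeholder for the scraped dict

with ThreadPoolExecutor(max_workers=8) as pool:
    # map preserves input order, so results line up with the page IDs
    results = list(pool.map(fetch_page, range(7836, 7846)))
print(len(results))  # 10
```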
- Looking back, I didn't explain much as I wrote; if anything is unclear, leave a comment and we can discuss. If you don't know how to install MySQL or crack Navicat, comment below as well. Installing MySQL took me quite a while back then: I couldn't understand why it was just a folder rather than a client program, and configuring the environment variables was confusing too.
- With the data collected, I'm going to use Django to build an e-book resource site. That's it for this installment.