Crawling Sina Blog with Python
Have you ever read a single post by some Sina Blog author, found it exactly to your taste, and felt the urge to read everything that blogger has ever written?
Take Han Han, for example. Let's crawl Han Han's blog!
Start with the URL of Han Han's article index: http://blog.sina.com.cn/s/articlelist_1191258123_0_1.html
A quick look reveals the pattern behind Sina Blog index URLs: http://blog.sina.com.cn/s/articlelist_{ID}_0_{Page}.html. Each blogger has a unique ID, and the index pages are numbered from 1 upward, so dividing the blogger's total post count by the number of posts shown per page gives the number of index pages. From that we can build every index URL and walk through them with a simple loop.
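For example, with 50 posts per index page (the figure the script below uses), a blogger with 2255 posts needs ceil(2255/50) = 46 index pages. A minimal sketch of the URL construction, where build_index_urls is just an illustrative helper name and 1191258123 is Han Han's ID from the index URL above:

import math

def build_index_urls(blogger_id, post_count, posts_per_page=50):
    # ceil(post_count / posts_per_page) index pages, numbered from 1
    page_count = int(math.ceil(post_count / float(posts_per_page)))
    return ["http://blog.sina.com.cn/s/articlelist_{}_0_{}.html".format(blogger_id, page)
            for page in range(1, page_count + 1)]

print(build_index_urls(1191258123, 2255)[:3])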
I split the work into two steps:
1. Build the URL of every index page, parse each one, and extract the link to every individual post; the post date can be parsed from the same page.
2. With each post's URL in hand, fetch the post body with Requests.
Along the way I inspected the HTML by hand to spot its structure, then used BeautifulSoup to pull out exactly the fields we need.
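As a minimal sketch of step 2 (the selectors match the ones used in the full script below; fetch_post is only an illustrative helper name):

import requests
from bs4 import BeautifulSoup

def fetch_post(url):
    # Fetch one post page and pull out its title and body text
    response = requests.get(url)
    response.encoding = "utf-8"  # Sina pages are UTF-8; set it explicitly to avoid mojibake
    soup = BeautifulSoup(response.text, "lxml")
    title = soup.title.get_text()
    # From hand inspection, the post body sits in the div with id "sina_keyword_ad_area2"
    body = soup.find("div", attrs={"id": "sina_keyword_ad_area2"}).get_text()
    return title, body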
Now for the full script:
# -*- coding: utf-8 -*-
# @FileName: xinLangBlog
# @Author  : cqpaul
# @Date    : 2017-03-15 11:01:23

import math
import sys
import time

import requests
from bs4 import BeautifulSoup
import MySQLdb

# Python 2 workaround so implicit str/unicode conversions use UTF-8
reload(sys)
sys.setdefaultencoding("utf-8")

headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, sdch, br",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36",
}

host = "127.0.0.1"
user = "paul"
password = "123456"
database = "aml"


# Compute the number of index pages from the total post count (50 posts per page)
# and return the URL of every index page, e.g.
# http://blog.sina.com.cn/s/articlelist_1374505811_0_4.html
def buildPageUrlList(articalListId, num):
    pageNum = int(math.ceil(num / 50.0))
    urlList = []
    for i in range(1, pageNum + 1):
        url = "http://blog.sina.com.cn/s/articlelist_{}_0_{}.html".format(articalListId, i)
        urlList.append(url)
    print urlList
    return urlList


# Parse one index page: collect the URL, date, title and body of every post listed on it
def getPageListBlogInfo(url):
    time.sleep(1)  # be polite: at most one index page per second
    blogItemList = []
    try:
        response = requests.get(url, headers=headers)
        response.encoding = "utf-8"  # avoid mojibake when requests guesses the wrong charset
        pageSoup = BeautifulSoup(response.text, "lxml")
        divList = pageSoup.find_all("div", attrs={"class": "articleCell SG_j_linedot1"})
        for div in divList:
            aItem = div.find("a")
            blogUrl = aItem["href"]
            blogTime = div.find("span", attrs={"class": "atc_tm SG_txtc"}).get_text()
            blog = getBlogPageInfo(blogUrl)
            blogInfo = {}
            blogInfo["blogurl"] = blogUrl
            blogInfo["blogtime"] = blogTime
            blogInfo["blogtitle"] = blog["title"]
            blogInfo["blogbody"] = blog["body"]
            blogInfo["blogType"] = "Sina"
            blogItemList.append(blogInfo)
    except Exception as e:
        print "Failed: " + url
        print e
    return blogItemList


# Fetch the detail page of a single post and extract its title and body text
def getBlogPageInfo(url):
    blog = {}
    try:
        response = requests.get(url, headers=headers)
        response.encoding = "utf-8"
        soupPage = BeautifulSoup(response.text, "lxml")
        blog["title"] = soupPage.title.get_text()
        # The post body lives in the div with id "sina_keyword_ad_area2"
        divBody = soupPage.find("div", attrs={"id": "sina_keyword_ad_area2"})
        blog["body"] = divBody.get_text().replace("\t", "").replace("\n", "").replace(" ", "")
    except Exception as e:
        print "Failed: " + url
        print e
    return blog


if __name__ == "__main__":
    # charset="utf8" keeps Chinese text from being garbled in MySQL
    db = MySQLdb.connect(host, user, password, database, charset="utf8")
    cursor = db.cursor()
    blogList = []
    pageUrlList = buildPageUrlList(1374505811, 2255)
    for url in pageUrlList:
        blogItemList = getPageListBlogInfo(url)
        print "finished one page.."
        blogList.extend(blogItemList)
    # Write the collected posts to MySQL with a parameterized INSERT
    for item in blogList:
        sqlstr = ("INSERT INTO sinablog(blogUrl, blogTitle, blogType, blogDate, blogBody) "
                  "VALUES (%s, %s, %s, %s, %s)")
        cursor.execute(sqlstr, (item["blogurl"], item["blogtitle"], item["blogType"],
                                item["blogtime"], item["blogbody"]))
        db.commit()

    db.close()
Once the data is crawled, it is simply inserted into MySQL. One catch: pass charset=utf8 when opening the database connection, otherwise Chinese text comes out garbled in MySQL.
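A minimal sketch of the connection and insert, assuming the sinablog table used in the script above; the sample row is made up for illustration, and the parameterized query lets the driver handle quoting and escaping:

import MySQLdb

# charset="utf8" makes the connection speak UTF-8, so Chinese text round-trips correctly
db = MySQLdb.connect("127.0.0.1", "paul", "123456", "aml", charset="utf8")
cursor = db.cursor()

# A sample row in the shape produced by getPageListBlogInfo() above
item = {"blogurl": "http://blog.sina.com.cn/s/blog_example.html", "blogtitle": u"標題",
        "blogType": "Sina", "blogtime": "2017-03-15 11:01", "blogbody": u"正文"}

sql = ("INSERT INTO sinablog(blogUrl, blogTitle, blogType, blogDate, blogBody) "
       "VALUES (%s, %s, %s, %s, %s)")
cursor.execute(sql, (item["blogurl"], item["blogtitle"], item["blogType"],
                     item["blogtime"], item["blogbody"]))
db.commit()
db.close()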
A screenshot of the crawled data: