Building a Knowledge Graph from Scratch (1): Acquiring Semi-Structured Data

Building a film knowledge graph from scratch. The first step of the long march: acquiring semi-structured data. The target sites are Baidu Baike and Hudong Baike, crawled with spiders built on the Scrapy framework.

Introduction

This article deals with the acquisition of semi-structured data and introduces the Baidu Baike and Hudong Baike crawlers built on Scrapy. For practice I also wrote, following tutorials, a Baidu Baike crawler based on BeautifulSoup and urllib2, a WeChat official-account crawler, and a Huxiu crawler.

So far the Baidu Baike crawler has collected movie data covering 22,219 films and 13,967 actors; the Hudong Baike crawler has collected 13,866 films and 5,931 actors.

The code is available at github.com/Pelhans/Z_knowledge_graph.

Creating the MySQL database

The database contains five tables: actor, movie, genre, actor->movie, and movie->genre:

  • actor: ID, bio, Chinese name, foreign name, nationality, zodiac sign, birthplace, date of birth, representative works, major achievements, agency;
  • movie: ID, bio, Chinese name, foreign name, production time, production company, director, screenwriter, genre, starring, runtime, release date, dialogue language, major achievements;
  • genre: romance, comedy, action, drama, science fiction, horror, animation, thriller, crime, adventure, other;
  • actor->movie: actor ID, movie ID;
  • movie->genre: movie ID, genre ID;

The corresponding CREATE TABLE statements and requirements can be found in craw_without_spider/mysql/creat_sql.txt. After changing the target database name, the database can be created with the command mysql -uroot -pnlp < creat_sql.txt.
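As a reference point, here is a minimal sketch of how the actor table could be created with pymysql. The column names follow items.py; the column types and the database name kg_movie are assumptions made for illustration, and the authoritative definitions for all five tables are in creat_sql.txt.

# -*- coding: utf-8 -*-
# Sketch only: create the actor table with pymysql.  Column names follow
# items.py; column types and the database name 'kg_movie' are assumptions,
# and the database is assumed to exist already.
import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', passwd='nlp',
                       db='kg_movie', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS actor (
        actor_id            INT PRIMARY KEY,
        actor_bio           TEXT,
        actor_chName        VARCHAR(100),
        actor_foreName      VARCHAR(100),
        actor_nationality   VARCHAR(100),
        actor_constellation VARCHAR(100),
        actor_birthPlace    VARCHAR(100),
        actor_birthDay      VARCHAR(100),
        actor_repWorks      VARCHAR(200),
        actor_achiem        VARCHAR(200),
        actor_brokerage     VARCHAR(100)
    ) DEFAULT CHARSET = utf8mb4;
""")
conn.commit()
conn.close()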

Baidu Baike crawler

This crawler corresponds to the baidu_baike folder under crawl. It is built on the Scrapy framework and collects movie data: 22,219 films, 13,967 actors, 1,942 actor-film links and 23,238 film-genre links, 10 of the films falling into the genre 'other'. The corresponding dataset can be downloaded from Jianguoyun.

Modifying items.py

After installing Scrapy, the project skeleton can be initialized with scrapy startproject baidu_baike. Its directory structure is:

.:
baidu_baike  scrapy.cfg

./baidu_baike:
__init__.py   items.py   middlewares.py   pipelines.py   settings.py   spiders
__init__.pyc  items.pyc  middlewares.pyc  pipelines.pyc  settings.pyc

./baidu_baike/spiders:
baidu_baike.py  baidu_baike.pyc  __init__.py  __init__.pyc

The files under the baidu_baike/ directory are the ones we need to edit by hand. Among them, items.py manages the fields to be crawled, so that the scraped content can be handed over to the pipelines for further processing. We now modify baidu_baike/baidu_baike/items.py and add the fields we want to crawl.

import scrapy

class BaiduBaikeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # actor-related fields
    actor_id = scrapy.Field()
    actor_bio = scrapy.Field()
    actor_chName = scrapy.Field()
    actor_foreName = scrapy.Field()
    actor_nationality = scrapy.Field()
    actor_constellation = scrapy.Field()
    actor_birthPlace = scrapy.Field()
    actor_birthDay = scrapy.Field()
    actor_repWorks = scrapy.Field()
    actor_achiem = scrapy.Field()
    actor_brokerage = scrapy.Field()

    # movie-related fields
    movie_id = scrapy.Field()
    movie_bio = scrapy.Field()
    movie_chName = scrapy.Field()
    movie_foreName = scrapy.Field()
    movie_prodTime = scrapy.Field()
    movie_prodCompany = scrapy.Field()
    movie_director = scrapy.Field()
    movie_screenwriter = scrapy.Field()
    movie_genre = scrapy.Field()
    movie_star = scrapy.Field()
    movie_length = scrapy.Field()
    movie_rekeaseTime = scrapy.Field()
    movie_language = scrapy.Field()
    movie_achiem = scrapy.Field()

During crawling we mainly collect the two entity types, movies and actors, together with their attributes. The movie->genre and actor->movie tables are built after the data has been crawled; a rough sketch of that step is shown below.
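A minimal sketch (not the project's exact script) of how the actor->movie relation could be filled, assuming an actor is linked to a movie whenever the actor's Chinese name appears in that movie's movie_star field; the table name actor_to_movie and the database name kg_movie are placeholders.

# -*- coding: utf-8 -*-
# Sketch: build the actor->movie relation after crawling by matching each
# actor's Chinese name against every movie's star list.
import pymysql

conn = pymysql.connect(host='127.0.0.1', user='root', passwd='nlp',
                       db='kg_movie', charset='utf8mb4')  # placeholder db name
cursor = conn.cursor()

cursor.execute("SELECT actor_id, actor_chName FROM actor")
actors = cursor.fetchall()
cursor.execute("SELECT movie_id, movie_star FROM movie")
movies = cursor.fetchall()

for actor_id, actor_name in actors:
    for movie_id, movie_star in movies:
        if actor_name and movie_star and actor_name in movie_star:
            # actor_to_movie is a placeholder table name
            cursor.execute(
                "INSERT INTO actor_to_movie(actor_id, movie_id) VALUES (%s, %s)",
                (actor_id, movie_id))
conn.commit()
conn.close()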

Modifying pipelines.py

pipelines.py stores the crawled content in the MySQL database. The class has three methods: __init__() for initialization, process_item() for processing the crawled content and saving it, and close_spider() for closing the database connection.

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import sys
reload(sys)
sys.setdefaultencoding('utf-8')

import pymysql
from pymysql import connections
from baidu_baike import settings

class BaiduBaikePipeline(object):
    def __init__(self):
        # initialize and connect to the MySQL database
        self.conn = pymysql.connect(
            host=settings.HOST_IP,
            # port=settings.PORT,
            user=settings.USER,
            passwd=settings.PASSWD,
            db=settings.DB_NAME,
            charset='utf8mb4',
            use_unicode=True
        )
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # process info for actor
        actor_chName = str(item['actor_chName']).decode('utf-8')
        actor_foreName = str(item['actor_foreName']).decode('utf-8')
        movie_chName = str(item['movie_chName']).decode('utf-8')
        movie_foreName = str(item['movie_foreName']).decode('utf-8')

        if (item['actor_chName'] != None or item['actor_foreName'] != None) and item['movie_chName'] == None:
            actor_bio = str(item['actor_bio']).decode('utf-8')
            actor_nationality = str(item['actor_nationality']).decode('utf-8')
            actor_constellation = str(item['actor_constellation']).decode('utf-8')
            actor_birthPlace = str(item['actor_birthPlace']).decode('utf-8')
            actor_birthDay = str(item['actor_birthDay']).decode('utf-8')
            actor_repWorks = str(item['actor_repWorks']).decode('utf-8')
            actor_achiem = str(item['actor_achiem']).decode('utf-8')
            actor_brokerage = str(item['actor_brokerage']).decode('utf-8')

            self.cursor.execute("SELECT actor_chName FROM actor;")
            actorList = self.cursor.fetchall()
            if (actor_chName,) not in actorList:
                # get the nums of actor_id in table actor
                self.cursor.execute("SELECT MAX(actor_id) FROM actor")
                result = self.cursor.fetchall()[0]
                if None in result:
                    actor_id = 1
                else:
                    actor_id = result[0] + 1
                sql = """
                INSERT INTO actor(actor_id, actor_bio, actor_chName, actor_foreName, actor_nationality, actor_constellation, actor_birthPlace, actor_birthDay, actor_repWorks, actor_achiem, actor_brokerage)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
                """
                self.cursor.execute(sql, (actor_id, actor_bio, actor_chName, actor_foreName, actor_nationality, actor_constellation, actor_birthPlace, actor_birthDay, actor_repWorks, actor_achiem, actor_brokerage))
                self.conn.commit()
            else:
                print("#" * 20, "Got a duplict actor!!", actor_chName)
        elif (item['movie_chName'] != None or item['movie_foreName'] != None) and item['actor_chName'] == None:
            movie_bio = str(item['movie_bio']).decode('utf-8')
            movie_prodTime = str(item['movie_prodTime']).decode('utf-8')
            movie_prodCompany = str(item['movie_prodCompany']).decode('utf-8')
            movie_director = str(item['movie_director']).decode('utf-8')
            movie_screenwriter = str(item['movie_screenwriter']).decode('utf-8')
            movie_genre = str(item['movie_genre']).decode('utf-8')
            movie_star = str(item['movie_star']).decode('utf-8')
            movie_length = str(item['movie_length']).decode('utf-8')
            movie_rekeaseTime = str(item['movie_rekeaseTime']).decode('utf-8')
            movie_language = str(item['movie_language']).decode('utf-8')
            movie_achiem = str(item['movie_achiem']).decode('utf-8')

            self.cursor.execute("SELECT movie_chName FROM movie;")
            movieList = self.cursor.fetchall()
            if (movie_chName,) not in movieList:
                self.cursor.execute("SELECT MAX(movie_id) FROM movie")
                result = self.cursor.fetchall()[0]
                if None in result:
                    movie_id = 1
                else:
                    movie_id = result[0] + 1
                sql = """
                INSERT INTO movie(movie_id, movie_bio, movie_chName, movie_foreName, movie_prodTime, movie_prodCompany, movie_director, movie_screenwriter, movie_genre, movie_star, movie_length, movie_rekeaseTime, movie_language, movie_achiem)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
                """
                self.cursor.execute(sql, (movie_id, movie_bio, movie_chName, movie_foreName, movie_prodTime, movie_prodCompany, movie_director, movie_screenwriter, movie_genre, movie_star, movie_length, movie_rekeaseTime, movie_language, movie_achiem))
                self.conn.commit()
            else:
                print("Got a duplict movie!!", movie_chName)
        else:
            print("Skip this page because wrong category!!")
        return item

    def close_spider(self, spider):
        self.conn.close()

Modifying middlewares.py

middlewares.py contains some User-Agents and proxies to keep the crawler from being blocked. You can collect your own and replace the bundled ones, or simply use the ones in my project. A minimal sketch of what such a middleware looks like is given below.
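For illustration, a sketch of a downloader middleware that picks a random User-Agent for each request; the class name and the User-Agent strings are placeholders rather than the exact middleware shipped with the project, and the proxy middleware works analogously.

import random

class RandomUserAgentMiddleware(object):
    # placeholder list; replace with your own collection of User-Agents
    user_agents = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
        'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36',
    ]

    def process_request(self, request, spider):
        # called by Scrapy for every outgoing request
        request.headers['User-Agent'] = random.choice(self.user_agents)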

Modifying settings.py

settings.py holds the crawler's configuration; what usually needs changing is the registration of our custom pipelines and middlewares, the random download delay, and similar options. The one thing worth stressing is to set some delay when crawling, especially when the target site is small. A possible configuration is sketched below.
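A sketch of the relevant part of settings.py, assuming the middleware class from the sketch above; the priority numbers and the database name are examples, while HOST_IP, USER, PASSWD and DB_NAME are the names read by the pipeline.

BOT_NAME = 'baidu_baike'
SPIDER_MODULES = ['baidu_baike.spiders']
NEWSPIDER_MODULE = 'baidu_baike.spiders'

# register the custom pipeline and downloader middleware
ITEM_PIPELINES = {
    'baidu_baike.pipelines.BaiduBaikePipeline': 300,
}
DOWNLOADER_MIDDLEWARES = {
    'baidu_baike.middlewares.RandomUserAgentMiddleware': 543,
}

# be polite: add a (randomized) delay between requests
DOWNLOAD_DELAY = 1
RANDOMIZE_DOWNLOAD_DELAY = True

# database connection info read by pipelines.py
HOST_IP = '127.0.0.1'
USER = 'root'
PASSWD = 'nlp'
DB_NAME = 'kg_movie'  # placeholder database name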

Writing baidu_baike.py

# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from baidu_baike.items import BaiduBaikeItem
import scrapy
from scrapy.http import Request
from bs4 import BeautifulSoup
import re
import urlparse

class BaiduBaikeSpider(scrapy.Spider, object):
    # spider name
    name = 'baidu'
    # allowed domains; links outside them are not crawled
    allowed_domains = ["baike.baidu.com"]
    # the URL the spider starts from
    start_urls = ['https://baike.baidu.com/item/%E5%91%A8%E6%98%9F%E9%A9%B0/169917?fr=aladdin']
    # start_urls = ['https://baike.baidu.com/item/%E4%B8%83%E5%B0%8F%E7%A6%8F']

    def _get_from_findall(self, tag_list):
        # extract the text from a list of tags returned by find_all
        result = []
        for slist in tag_list:
            tmp = slist.get_text()
            result.append(tmp)
        return result

    def parse(self, response):
        # The core of the spider: extract the information we need from the page
        # and follow all links on it for further crawling.
        # response is the reply for the start URL.
        # Read the tag list at the bottom of the page; the page is crawled only
        # if it is tagged as an actor or a movie, otherwise it is skipped.
        page_category = response.xpath("//dd[@id='open-tag-item']/span[@class='taglist']/text()").extract()
        page_category = [l.strip() for l in page_category]
        item = BaiduBaikeItem()

        # tooooo ugly,,,, but can not use defaultdict
        for sub_item in ['actor_bio', 'actor_chName', 'actor_foreName', 'actor_nationality',
                         'actor_constellation', 'actor_birthPlace', 'actor_birthDay',
                         'actor_repWorks', 'actor_achiem', 'actor_brokerage',
                         'movie_bio', 'movie_chName', 'movie_foreName', 'movie_prodTime',
                         'movie_prodCompany', 'movie_director', 'movie_screenwriter',
                         'movie_genre', 'movie_star', 'movie_length', 'movie_rekeaseTime',
                         'movie_language', 'movie_achiem']:
            item[sub_item] = None

        # pages tagged 演員 are treated as actor pages
        if u'演員' in page_category:
            print("Get a actor page")
            soup = BeautifulSoup(response.text, 'lxml')
            summary_node = soup.find("div", class_="lemma-summary")
            item['actor_bio'] = summary_node.get_text().replace("\n", " ")

            # use bs4 to extract the info box and save it into the corresponding item fields
            all_basicInfo_Item = soup.find_all("dt", class_="basicInfo-item name")
            basic_item = self._get_from_findall(all_basicInfo_Item)
            basic_item = [s.strip() for s in basic_item]
            all_basicInfo_value = soup.find_all("dd", class_="basicInfo-item value")
            basic_value = self._get_from_findall(all_basicInfo_value)
            basic_value = [s.strip() for s in basic_value]
            for i, info in enumerate(basic_item):
                info = info.replace(u"\xa0", "")
                if info == u'中文名':
                    item['actor_chName'] = basic_value[i]
                elif info == u'外文名':
                    item['actor_foreName'] = basic_value[i]
                elif info == u'國籍':
                    item['actor_nationality'] = basic_value[i]
                elif info == u'星座':
                    item['actor_constellation'] = basic_value[i]
                elif info == u'出生地':
                    item['actor_birthPlace'] = basic_value[i]
                elif info == u'出生日期':
                    item['actor_birthDay'] = basic_value[i]
                elif info == u'代表作品':
                    item['actor_repWorks'] = basic_value[i]
                elif info == u'主要成就':
                    item['actor_achiem'] = basic_value[i]
                elif info == u'經紀公司':
                    item['actor_brokerage'] = basic_value[i]
            yield item

        elif u'電影' in page_category:
            print("Get a movie page!!")
            soup = BeautifulSoup(response.text, 'lxml')
            summary_node = soup.find("div", class_="lemma-summary")
            item['movie_bio'] = summary_node.get_text().replace("\n", " ")
            all_basicInfo_Item = soup.find_all("dt", class_="basicInfo-item name")
            basic_item = self._get_from_findall(all_basicInfo_Item)
            basic_item = [s.strip() for s in basic_item]
            all_basicInfo_value = soup.find_all("dd", class_="basicInfo-item value")
            basic_value = self._get_from_findall(all_basicInfo_value)
            basic_value = [s.strip() for s in basic_value]
            for i, info in enumerate(basic_item):
                info = info.replace(u"\xa0", "")
                if info == u'中文名':
                    item['movie_chName'] = basic_value[i]
                elif info == u'外文名':
                    item['movie_foreName'] = basic_value[i]
                elif info == u'出品時間':
                    item['movie_prodTime'] = basic_value[i]
                elif info == u'出品公司':
                    item['movie_prodCompany'] = basic_value[i]
                elif info == u'導演':
                    item['movie_director'] = basic_value[i]
                elif info == u'編劇':
                    item['movie_screenwriter'] = basic_value[i]
                elif info == u'類型':
                    item['movie_genre'] = basic_value[i]
                elif info == u'主演':
                    item['movie_star'] = basic_value[i]
                elif info == u'片長':
                    item['movie_length'] = basic_value[i]
                elif info == u'上映時間':
                    item['movie_rekeaseTime'] = basic_value[i]
                elif info == u'對白語言':
                    item['movie_language'] = basic_value[i]
                elif info == u'主要成就':
                    item['movie_achiem'] = basic_value[i]
            yield item

        # use bs4 to extract the links on the page, then crawl them in turn
        soup = BeautifulSoup(response.text, 'lxml')
        links = soup.find_all('a', href=re.compile(r"/item/"))
        for link in links:
            new_url = link["href"]
            new_full_url = urlparse.urljoin('https://baike.baidu.com/', new_url)
            yield scrapy.Request(new_full_url, callback=self.parse)

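With items, pipeline, middleware and settings in place, the crawler can be started from the project root with scrapy crawl baidu (the name defined in the spider); the pipeline then writes the crawled actors and movies into MySQL as they arrive.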
Hudong Baike crawler

This crawler corresponds to the hudong_baike folder under crawl. It is likewise built on the Scrapy framework and collects movie data: 13,866 films, 5,931 actors, 800 actor-film links and 14,558 film-genre links, with 0 films falling into the genre 'other'. The corresponding dataset can be downloaded from Jianguoyun.

The structure of the Hudong Baike crawler is the same as that of the Baidu Baike one. The main difference is that the two sites format their info boxes differently, so a different extraction method is used. I won't go into the details here, but the sketch below shows roughly where the change lives.
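Purely for illustration, the fragment of parse() that would change; the selectors below are hypothetical placeholders, not the real Hudong markup, which lives in the hudong_baike spider. Everything else (tag check, item fields, pipeline) stays the same as in the Baidu version.

# Hypothetical infobox extraction for Hudong Baike; the class names
# "module-info", "label" and "value" are placeholders, NOT the real markup.
soup = BeautifulSoup(response.text, 'lxml')
info_box = soup.find("div", class_="module-info")                                 # placeholder
keys = self._get_from_findall(info_box.find_all("td", class_="label"))            # placeholder
values = self._get_from_findall(info_box.find_all("td", class_="value"))          # placeholder
for key, value in zip(keys, values):
    key = key.strip().replace(u"\xa0", "")
    # ... map the key/value pairs onto the same item fields as in the Baidu spider ...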

Summary

This article crawled the semi-structured data, namely Baidu Baike and Hudong Baike, and saved it into the database, which effectively gives us a structured dataset. The next article will use direct mapping and D2RQ to turn it into triples.
