Chapter 12 Scrapy: A First Test Run on Python 3

1. Introduction

The article "A First Look at Scrapy's Architecture" explained Scrapy's architecture; this article actually installs Scrapy and runs a crawler. It uses the official tutorial as the example, and the complete code can be downloaded from GitHub.

2. Setting Up the Runtime Environment

  • Test environment: Windows 10, Python 3.4.3 32-bit
  • Install Scrapy: $ pip install Scrapy  # during installation, unstable server conditions caused the download to abort midway several times

3. Writing and Running the First Scrapy Crawler

3.1. Generate a new project: tutorial

$ scrapy startproject tutorial

The project directory structure is as follows:
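For reference, `scrapy startproject tutorial` generates the standard Scrapy layout sketched below (minor details vary by Scrapy version):

```
tutorial/
    scrapy.cfg            # deploy/configuration file
    tutorial/             # the project's Python module
        __init__.py
        items.py          # item definitions (edited in 3.2)
        pipelines.py
        settings.py
        spiders/          # spider code goes here (edited in 3.3)
            __init__.py
```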

3.2. Define the item to scrape

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
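To see what `scrapy.Item` and `scrapy.Field` buy you, here is an illustrative pure-Python sketch of their observable behaviour (the real classes use a metaclass; this stand-in only mimics the dict-like storage restricted to declared fields):

```python
# Illustrative stand-ins only; the real classes live in the scrapy package.
class Field(dict):
    """Marker for a declared field, like scrapy.Field."""

class Item(dict):
    """Dict-like container that rejects keys not declared as Field()."""
    def __setitem__(self, key, value):
        if not isinstance(getattr(type(self), key, None), Field):
            raise KeyError("%s does not support field: %s"
                           % (type(self).__name__, key))
        super().__setitem__(key, value)

class DmozItem(Item):
    title = Field()
    link = Field()
    desc = Field()

item = DmozItem()
item["title"] = ["About"]   # allowed: declared field
print(item["title"])        # → ['About']
```

Assigning to an undeclared key (e.g. `item["foo"] = 1`) raises `KeyError`, which is how Scrapy catches typos in field names early.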

3.3. Define the Spider

import scrapy
from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
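Scrapy is not needed to see what `parse()` does: the same XPath-style extraction can be sketched with the standard library's `xml.etree.ElementTree` on a tiny fragment shaped like the dmoz listing page (the HTML below is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny well-formed fragment shaped like the dmoz listing page (invented).
html = """
<ul>
  <li><a href="/docs/en/about.html"> About </a> intro text </li>
  <li><a href="/docs/en/add.html"> Suggest a Site </a> more text </li>
</ul>
"""

root = ET.fromstring(html)
items = []
for li in root.findall("li"):      # like response.xpath('//ul/li')
    a = li.find("a")
    items.append({
        "title": a.text,           # like sel.xpath('a/text()')
        "link": a.get("href"),     # like sel.xpath('a/@href')
        "desc": a.tail,            # roughly sel.xpath('text()')
    })

print(items[0]["link"])  # → /docs/en/about.html
```

Scrapy's selectors do the same traversal over the real (often non-well-formed) page, which is why each extracted value comes back as a list of matches rather than a single string.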

3.4. Run it

$ scrapy crawl dmoz -o items.json

1) The run failed with errors:

A) ImportError: cannot import name _win32stdio

B) ImportError: No module named win32api

2) Troubleshooting: checking the official FAQ and information on Stack Overflow revealed that Scrapy had not yet been fully tested on Python 3 and still had minor issues.

3) Fix:

A) Manually download _win32stdio and _pollingfile from twisted/internet and place them under Lib\site-packages\twisted\internet in the Python directory

B) Download and install pywin32

Running again succeeded! Scrapy's log output appears on the console; once the run finishes, open the result file items.json in the project directory to see the scraped results stored in JSON format.

[
{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]},
{"title": [" Become an Editor "], "desc": [" ", " "], "link": ["/docs/en/help/become.html"]},
{"title": [" Suggest a Site "], "desc": [" ", " "], "link": ["/docs/en/add.html"]},
{"title": [" Help "], "desc": [" ", " "], "link": ["/docs/en/help/helpmain.html"]},
{"title": [" Login "], "desc": [" ", " "], "link": ["/editors/"]},
{"title": [], "desc": [" ", " Share via Facebook "], "link": []},
{"title": [], "desc": [" ", " Share via Twitter "], "link": []},
{"title": [], "desc": [" ", " Share via LinkedIn "], "link": []},
{"title": [], "desc": [" ", " Share via e-Mail "], "link": []},
{"title": [], "desc": [" ", " "], "link": []},
{"title": [], "desc": [" ", " "], "link": []}
]
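Since the exported file is plain JSON, it can be post-processed with Python's standard `json` module. A minimal sketch, using one inlined entry in the same shape as the output above so it runs without the crawl:

```python
import json

# One entry shaped like the crawl output (inlined for illustration; in
# practice you would read the exported file from disk instead).
sample = '[{"title": [" About "], "desc": [" ", " "], "link": ["/docs/en/about.html"]}]'
items = json.loads(sample)

# With the real file, assuming it exists in the project directory:
#   with open("items.json", encoding="utf-8") as f:
#       items = json.load(f)

for it in items:
    print(it["link"][0], "-", it["title"][0].strip())
```

Note that every field is a list (XPath `extract()` returns all matches), so entries with empty `title`/`link` lists, like the "Share via …" rows above, are easy to filter out with `if it["title"]`.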

The first test run of Scrapy succeeded.

4. Next Steps

Next, I will use the GooSeeker API to implement web crawlers, eliminating the manual work of writing and testing an XPath for every item. There are currently two plans:

  1. Wrap a method in gsExtractor that automatically extracts each item's XPath from the XSLT content
  2. Automatically extract each item's result from gsExtractor's extraction output

Which plan to choose will be decided in the upcoming experiments and released in a new version of gsExtractor.

5. Document Revision History

2016-06-11: V1.0, first release

Previous chapter: A Review of the Scrapy Starter Program | Next chapter: Converting the Scraped Results from XML to JSON

