Writing My Own Version: Single-Page Scraping
Target site: the titles, links, and summaries listed in the left-hand column of the page.

1. items.py: define the fields to scrape

import scrapy

class LianhezaobaoItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
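A scrapy.Item behaves like a dictionary, but only declared fields may be assigned; setting an undeclared key raises a KeyError. A minimal sketch of that behaviour (the sample values are placeholders, not real data):

import scrapy

class LianhezaobaoItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

item = LianhezaobaoItem(title=[u'placeholder title'])
item['link'] = [u'http://example.com/article']   # placeholder URL, for illustration only
print(dict(item))
# item['author'] = u'x'   # would raise KeyError: 'author' is not a declared field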
2. Writing the spider file
# -*- coding: utf-8 -*-
import scrapy
from LianHeZaoBao.items import LianhezaobaoItem

# Python 2-only workaround so Chinese text can be handled without UnicodeEncodeError
reload(__import__('sys')).setdefaultencoding('utf-8')


class MaimaiSpider(scrapy.Spider):
    name = "lianhe"
    # allowed_domains takes bare domain names, not full URLs
    allowed_domains = ["www.zaobao.com"]
    start_urls = (
        'http://www.zaobao.com/news/china//',
    )

    def parse(self, response):
        # each <li> under the left-hand title list is one article entry
        for li in response.xpath('//*[@id="l_title"]/ul/li'):
            item = LianhezaobaoItem()
            item['title'] = li.xpath('a[1]/p/text()').extract()
            item['link'] = li.xpath('a[1]/@href').extract()
            item['desc'] = li.xpath('a[2]/p/text()').extract()
            yield item
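Before launching the full crawl, the XPath expressions can be sanity-checked interactively with scrapy shell; the selectors below simply mirror the ones used in parse(), and the output depends on the page structure at crawl time:

scrapy shell 'http://www.zaobao.com/news/china//'

# inside the shell:
>>> lis = response.xpath('//*[@id="l_title"]/ul/li')
>>> len(lis)                                   # should be non-zero if the layout still matches
>>> lis[0].xpath('a[1]/p/text()').extract()    # first headline
>>> lis[0].xpath('a[1]/@href').extract()       # first link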
3. Save the results: from the project root, run the command scrapy crawl lianhe -o lianhe.csv
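If the Chinese text in lianhe.csv appears garbled when opened in Excel, one option (assuming Scrapy 1.2 or newer, where this setting exists) is to set the feed export encoding in settings.py; other formats can be produced simply by changing the output file extension:

# settings.py  (assumption: Scrapy >= 1.2)
FEED_EXPORT_ENCODING = 'utf-8-sig'   # the BOM lets Excel detect UTF-8 correctly

# alternative exports, format inferred from the extension:
#   scrapy crawl lianhe -o lianhe.json
#   scrapy crawl lianhe -o lianhe.xml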