Saving Scrapy data with sqlite3

Using a crawl of Ganji (赶集网) rental listings as the example: http://bj.ganji.com/fang1/chaoyang/

The title and price of each listing are extracted with XPath.

The code for the spider, items, and pipelines is listed below, in that order.

# -*- coding: utf-8 -*-
import scrapy
from ..items import RenthouseItem


class GanjiSpider(scrapy.Spider):
    name = 'ganji'
    # allowed_domains = ['bj.ganji.com']
    start_urls = ['http://bj.ganji.com/fang1/chaoyang/']

    def parse(self, response):
        # Each listing row keeps its title in dd[1] and its price in dd[5].
        title_list = response.xpath('//*[@class="f-list-item ershoufang-list"]/dl/dd[1]/a/text()').extract()
        price_list = response.xpath('//*[@class="f-list-item ershoufang-list"]/dl/dd[5]/div[1]/span[1]/text()').extract()
        for title, price in zip(title_list, price_list):
            # Create a fresh item per listing so every yield carries its own data.
            rh = RenthouseItem()
            rh['title'] = title
            rh['price'] = price
            yield rh
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class RenthouseItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    price = scrapy.Field()
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import sqlite3


class RenthousePipeline(object):
    def open_spider(self, spider):
        # Runs once when the spider opens: connect and get a cursor.
        self.con = sqlite3.connect('renthouse.sqlite')
        self.cu = self.con.cursor()
        # Create the table on first run (schema inferred from the insert below).
        self.cu.execute('create table if not exists renthouse (title text, price text)')

    def process_item(self, item, spider):
        # A parameterized query avoids breaking on quotes in the title.
        self.cu.execute('insert into renthouse (title, price) values (?, ?)',
                        (item['title'], item['price']))
        # Modifications are only persisted after commit().
        self.con.commit()
        return item

    def close_spider(self, spider):
        # Runs once when the spider closes: release the connection.
        self.con.close()
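
For the pipeline to actually run it has to be registered in ITEM_PIPELINES, as the template comment above notes. A minimal sketch of settings.py, assuming the project module is named renthouse (adjust the path to your own project):

# settings.py
ITEM_PIPELINES = {
    'renthouse.pipelines.RenthousePipeline': 300,
}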

The spider creates an item instance with rh = RenthouseItem(); that rh is what gets passed on to the pipelines for processing.

So on every yield, rh hands the pipeline a dict-like item (title and price), and the pipeline inserts it into sqlite3 with an SQL statement.
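
As a quick illustration of that dict-like behaviour, a scrapy.Item supports the same key access on both sides (a standalone sketch; the import path assumes the project is named renthouse):

from renthouse.items import RenthouseItem  # hypothetical module path

rh = RenthouseItem()
rh['title'] = '朝阳 两居室'
rh['price'] = '4500'
print(rh['title'], rh['price'])  # read fields back just like a dict
print(dict(rh))                  # {'title': '朝阳 两居室', 'price': '4500'}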

open_spider runs when the spider is opened, so that is where we connect to the database. I found this article's explanation of cursors and sqlite usage very clear: https://www.cnblogs.com/qq78292959/archive/2013/04/01/2993327.html
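
The connect/cursor pattern used in open_spider is plain sqlite3 usage; here is a minimal standalone sketch of it (file and table names are just the ones from this project):

import sqlite3

con = sqlite3.connect('renthouse.sqlite')  # opens (or creates) the database file
cu = con.cursor()                          # the cursor runs SQL and fetches results
cu.execute('create table if not exists renthouse (title text, price text)')
cu.execute('select count(*) from renthouse')
print(cu.fetchone()[0])                    # number of rows currently stored
con.close()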

Note that for statements that modify data, such as INSERT, you must commit() after execute()!
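
A small sketch of what goes wrong without it: closing the connection before commit() throws the pending insert away (illustrative values; assumes the renthouse table already exists):

import sqlite3

con = sqlite3.connect('renthouse.sqlite')
con.execute('insert into renthouse (title, price) values (?, ?)', ('测试', '1000'))
con.close()  # closed without commit(): the insert is rolled back

con = sqlite3.connect('renthouse.sqlite')
count = con.execute('select count(*) from renthouse where title = ?', ('测试',)).fetchone()[0]
print(count)  # 0, assuming no such row existed before
con.close()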

close_spider runs when the spider is closed, so that is where we close the database connection.

 

Insertion successful.
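
To double-check, the database file can be queried directly after the crawl finishes (a quick sketch):

import sqlite3

con = sqlite3.connect('renthouse.sqlite')
for title, price in con.execute('select title, price from renthouse limit 5'):
    print(title, price)
con.close()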

