DDR爱好者之家 Design By 杰米
This article walks through a Python implementation for crawling tens of millions of Taobao product listings. It is shared here for reference; the implementation is as follows:
```python
import time
import itertools
import json
import leveldb
import requests
from urllib.parse import quote_plus
from queue import Queue
from threading import Thread

# Mobile search API: {} placeholders are the query string and the page number.
URL_BASE = 'http://s.m.taobao.com/search?q={}&n=200&m=api4h5&style=list&page={}'

def url_get(url):
    """Fetch a URL with browser-like headers and return the response body."""
    header = dict()
    header['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
    header['Accept-Encoding'] = 'gzip,deflate,sdch'
    header['Accept-Language'] = 'en-US,en;q=0.8'
    header['Connection'] = 'keep-alive'
    header['DNT'] = '1'
    header['User-Agent'] = 'Mozilla/12.0 (compatible; MSIE 8.0; Windows NT)'
    return requests.get(url, timeout=5, headers=header).text

def item_thread(cate_queue, db_cate, db_item):
    """Worker: take a category off the queue, crawl all its pages, store items."""
    while True:
        try:
            cate = cate_queue.get()
            # Skip categories that have already been fully crawled (state b'OK').
            post_exist = True
            try:
                state = db_cate.Get(cate.encode('utf-8'))
                if state != b'OK':
                    post_exist = False
            except KeyError:
                post_exist = False
            if post_exist:
                print('cate {} already exists ... Ignore'.format(cate))
                continue
            db_cate.Put(cate.encode('utf-8'), b'crawling')
            for item_page in itertools.count(1):
                url = URL_BASE.format(quote_plus(cate), item_page)
                # Retry each page up to 5 times before giving up.
                for tr in range(5):
                    try:
                        items_obj = json.loads(url_get(url))
                        break
                    except KeyboardInterrupt:
                        quit()
                    except Exception as e:
                        if tr == 4:
                            raise e
                if len(items_obj['listItem']) == 0:
                    break  # an empty page means we have reached the end
                for item in items_obj['listItem']:
                    item_obj = dict(
                        _id=int(item['itemNumId']),
                        name=item['name'],
                        price=float(item['price']),
                        query=cate,
                        category=int(item['category']) if item['category'] != '' else 0,
                        nick=item['nick'],
                        area=item['area'])
                    db_item.Put(str(item_obj['_id']).encode('utf-8'),
                                json.dumps(item_obj, ensure_ascii=False).encode('utf-8'))
                print('Get {} items from {}: {}'.format(
                    len(items_obj['listItem']), cate, item_page))
                # Register any related categories the response suggests.
                if 'nav' in items_obj:
                    for na in items_obj['nav']['navCatList']:
                        try:
                            db_cate.Get(na['name'].encode('utf-8'))
                        except KeyError:
                            db_cate.Put(na['name'].encode('utf-8'), b'waiting')
            db_cate.Put(cate.encode('utf-8'), b'OK')
            print(cate, 'OK')
        except KeyboardInterrupt:
            break
        except Exception as e:
            print('An {} exception occurred'.format(e))

def cate_thread(cate_queue, db_cate):
    """Producer: periodically scan the category DB and enqueue unfinished ones."""
    while True:
        try:
            for key, value in db_cate.RangeIter():
                if value != b'OK':
                    print('CateThread: put {} into queue'.format(key.decode('utf-8')))
                    cate_queue.put(key.decode('utf-8'))
            time.sleep(10)
        except KeyboardInterrupt:
            break
        except Exception as e:
            print('CateThread: {}'.format(e))

if __name__ == '__main__':
    db_cate = leveldb.LevelDB('./taobao-cate')
    db_item = leveldb.LevelDB('./taobao-item')
    # Seed the crawl with a single category; related ones are discovered as we go.
    orig_cate = '正装'
    try:
        db_cate.Get(orig_cate.encode('utf-8'))
    except KeyError:
        db_cate.Put(orig_cate.encode('utf-8'), b'waiting')
    cate_queue = Queue(maxsize=1000)
    cate_th = Thread(target=cate_thread, args=(cate_queue, db_cate))
    cate_th.start()
    item_th = [Thread(target=item_thread, args=(cate_queue, db_cate, db_item))
               for _ in range(5)]
    for item_t in item_th:
        item_t.start()
    cate_th.join()
```
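The producer and worker threads coordinate entirely through per-category state values stored in LevelDB (`b'waiting'` → `b'crawling'` → `b'OK'`), and build each request URL by percent-encoding the category name into `URL_BASE`. The sketch below illustrates just that bookkeeping and URL construction, using a plain in-memory dict as a stand-in for the LevelDB store (no network or LevelDB required); the helper names `mark_waiting` and `pending_cates` are illustrative, not part of the original script.

```python
from urllib.parse import quote_plus

URL_BASE = 'http://s.m.taobao.com/search?q={}&n=200&m=api4h5&style=list&page={}'

# Plain dict standing in for db_cate; keys and values are bytes,
# mirroring the Put/Get calls in the crawler above.
db_cate = {}

def mark_waiting(db, cate):
    """Register a category for crawling unless it is already tracked."""
    key = cate.encode('utf-8')
    if key not in db:
        db[key] = b'waiting'

def pending_cates(db):
    """Categories cate_thread would enqueue: everything not yet b'OK'."""
    return [k.decode('utf-8') for k, v in db.items() if v != b'OK']

# Seed one category, as the main block does.
mark_waiting(db_cate, '正装')
assert pending_cates(db_cate) == ['正装']

# A worker claims it, finishes, and marks it done; it is no longer enqueued.
db_cate['正装'.encode('utf-8')] = b'crawling'
db_cate['正装'.encode('utf-8')] = b'OK'
assert pending_cates(db_cate) == []

# Request URL for page 1 of the seed category, query percent-encoded:
url = URL_BASE.format(quote_plus('正装'), 1)
print(url)
```

Because every state transition is persisted, the real crawler can be killed and restarted without losing progress: on the next scan, `cate_thread` simply re-enqueues anything not marked `b'OK'`.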
Hopefully this article is of some help to readers working on Python programming.