Last week I spent a week learning Python and Scrapy, and built a complete web crawler from zero to one. The research was painful at times, but also a lot of fun; that's just part of working in tech.
First, install Python. There are plenty of pitfalls here, and I worked through them one by one. I'm on Windows (no budget for a Mac), so I hit all kinds of problems during installation, mostly around missing dependencies.
I won't repeat the installation tutorial here. One tip: if you get an error during installation saying a Windows C/C++ compiler is required, it usually means the Windows build environment is missing. Most tutorials online tell you to install Visual Studio for this, which is overkill; in fact, installing the Windows SDK is enough.
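Once the build environment is in place, Scrapy and the helper libraries used by the pipeline below can be installed with pip (a minimal sketch; exact versions are whatever pip resolves at install time):

pip install scrapy requests requests-toolbelt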
Below is my crawler code.

The spider (main program):
# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from zjf.FsmzItems import FsmzItem
from scrapy.selector import Selector

# Target column: 圈圈 (emotional life)
class MySpider(scrapy.Spider):
    # Spider name
    name = "MySpider"
    # Allowed domain
    allowed_domains = ["nvsheng.com"]
    # Start URLs (filled in dynamically, see __init__)
    start_urls = []
    # Flag: pagination links are only collected from the first page
    x = 0

    # Parse callback
    def parse(self, response):
        item = FsmzItem()
        sel = Selector(response)
        item['title'] = sel.xpath('//h1/text()').extract()
        item['text'] = sel.xpath('//*[@class="content"]/p/text()').extract()
        item['imags'] = sel.xpath('//div[@id="content"]/p/a/img/@src|//div[@id="content"]/p/img/@src').extract()
        # On the first page only, follow the pagination links
        if MySpider.x == 0:
            page_list = self.getUrl(response)
            for page_single in page_list:
                yield Request(page_single)
        MySpider.x += 1
        yield item

    # __init__: take the start URL as a dynamic argument
    # Command-line usage: scrapy crawl MySpider -a start_url="http://some_url"
    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.start_urls = [kwargs.get('start_url')]

    # Collect the pagination links on the current page (excluding the "next" button)
    def getUrl(self, response):
        url_list = []
        select = Selector(response)
        page_list_tmp = select.xpath('//div[@class="viewnewpages"]/a[not(@class="next")]/@href').extract()
        for page_tmp in page_list_tmp:
            if page_tmp not in url_list:
                url_list.append("http://www.nvsheng.com/emotion/px/" + page_tmp)
        return url_list
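If you target a different site, the XPath expressions above will need adjusting. Scrapy's interactive shell is handy for testing selectors before editing the spider; a quick sketch (the article URL here is a placeholder):

scrapy shell "http://www.nvsheng.com/emotion/px/some_article.html"
>>> response.xpath('//h1/text()').extract()
>>> response.xpath('//div[@class="viewnewpages"]/a[not(@class="next")]/@href').extract()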
The pipeline class:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
from zjf import settings
import json, os, re, random
import urllib.request
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

class MyPipeline(object):
    flag = 1
    post_title = ''
    post_text = []
    post_text_imageUrl_list = []
    cs = []
    user_id = ''

    def __init__(self):
        # Pick a random user_id to simulate posting as different users
        MyPipeline.user_id = MyPipeline.getRandomUser('37619,18441390,18441391')

    # Process the scraped data
    def process_item(self, item, spider):
        user_id = MyPipeline.user_id
        # Concatenate the body paragraphs into one string
        text_str_tmp = ""
        for s in item['text']:
            text_str_tmp = text_str_tmp + s
        # Take the title from the first page only
        if MyPipeline.flag == 1:
            title = item['title']
            MyPipeline.post_title = MyPipeline.post_title + title[0]
        # Save the images locally, then upload them
        text_insert_pic = ''
        text_insert_pic_w = ''
        text_insert_pic_h = ''
        for imag_url in item['imags']:
            img_name = imag_url.replace('/', '').replace('.', '').replace('|', '').replace(':', '')
            pic_dir = settings.IMAGES_STORE + '%s.jpg' % (img_name)
            urllib.request.urlretrieve(imag_url, pic_dir)
            # Upload the image; the API returns JSON
            upload_img_result = MyPipeline.uploadImage(pic_dir, 'image/jpeg')
            # Pull the stored image path and dimensions out of the JSON
            text_insert_pic = upload_img_result['result']['image_url']
            text_insert_pic_w = upload_img_result['result']['w']
            text_insert_pic_h = upload_img_result['result']['h']
        # Assemble the JSON fragment for this page
        if MyPipeline.flag == 1:
            cs_json = {"c": text_str_tmp, "i": "", "w": text_insert_pic_w, "h": text_insert_pic_h}
        else:
            cs_json = {"c": text_str_tmp, "i": text_insert_pic, "w": text_insert_pic_w, "h": text_insert_pic_h}
        MyPipeline.cs.append(cs_json)
        MyPipeline.flag += 1
        return item

    # Called when the spider is opened
    def open_spider(self, spider):
        pass

    # Called when the spider is closed: submit the collected post
    def close_spider(self, spider):
        strcs = json.dumps(MyPipeline.cs)
        jsonData = {"apisign": "99ea3eda4b45549162c4a741d58baa60",
                    "user_id": MyPipeline.user_id,
                    "gid": 30,
                    "t": MyPipeline.post_title,
                    "cs": strcs}
        MyPipeline.uploadPost(jsonData)

    # Upload one image; returns the server's JSON response
    @staticmethod
    def uploadImage(img_path, content_type):
        # UPLOAD_IMG_URL = "http://api.qa.douguo.net/robot/uploadpostimage"
        UPLOAD_IMG_URL = "http://api.douguo.net/robot/uploadpostimage"
        m = MultipartEncoder(
            fields={'user_id': MyPipeline.user_id,
                    'apisign': '99ea3eda4b45549162c4a741d58baa60',
                    'image': ('filename', open(img_path, 'rb'), 'image/jpeg')}
        )
        r = requests.post(UPLOAD_IMG_URL, data=m, headers={'Content-Type': m.content_type})
        return r.json()

    # Create the post from the collected title, text and images
    @staticmethod
    def uploadPost(jsonData):
        CREATE_POST_URL = "http://api.douguo.net/robot/uploadimagespost"
        reqPost = requests.post(CREATE_POST_URL, data=jsonData)

    # Pick a random user_id from a comma-separated string
    @staticmethod
    def getRandomUser(userStr):
        user_list = str(userStr).split(',')
        userId_idx = random.randint(1, len(user_list))
        return user_list[userId_idx - 1]
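As the header comment reminds you, the pipeline only runs if it is registered in settings.py. A minimal sketch, assuming the pipeline lives in the project's default pipelines module (adjust the dotted path to match your actual file); IMAGES_STORE is read directly by the pipeline when it saves images:

# settings.py (the dotted path and the directory are assumptions)
ITEM_PIPELINES = {
    'zjf.pipelines.MyPipeline': 300,
}
IMAGES_STORE = 'D:/pics/'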
The Items class that defines the scraped fields:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html
import scrapy

class FsmzItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    # tutor = scrapy.Field()
    # strongText = scrapy.Field()
    text = scrapy.Field()
    imags = scrapy.Field()
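Since scrapy.Item behaves like a dict, the fields can be sanity-checked without running the spider (a minimal sketch; the values are made up for illustration):

from zjf.FsmzItems import FsmzItem

item = FsmzItem()
item['title'] = ['an example title']
item['text'] = ['first paragraph', 'second paragraph']
item['imags'] = ['http://img.example.com/1.jpg']  # hypothetical image URL
print(dict(item))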
Then launch the spider from the command line:
scrapy crawl MySpider -a start_url="http://www.aaa.com"
and the spider will crawl the content under www.aaa.com. Note that the URL must include the http:// scheme, since it is passed straight into start_urls.
That is all of this example of crawling web content with Scrapy under Python. I hope it can serve as a useful reference, and thank you for your support.