How to Change the Docker Registry Mirror on Windows

Author: 鲨宇 | Published 2019-01-06 19:20

Docker Toolbox

1. In the Windows command line, run docker-machine ssh [machine-name] to open a bash shell inside the VM.

2. Edit the boot2docker profile: sudo vi /var/lib/boot2docker/profile

3. On the line after --label provider=virtualbox, add --registry-mirror https://xxxxxxxx.mirror.aliyuncs.com

(The Alibaba Cloud Container Hub console at https://cr.console.aliyun.com/ gives you a personal mirror accelerator address of the form https://1234abcd.mirror.aliyuncs.com.)

4. Restart the Docker service with sudo /etc/init.d/docker restart, or restart the whole VM: type exit to leave the VM bash shell, then run docker-machine restart in the Windows command line.
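After step 3, the relevant part of /var/lib/boot2docker/profile might look like the sketch below. This assumes the default Docker Toolbox profile layout, where daemon flags live in the EXTRA_ARGS variable; the mirror URL is a placeholder, so substitute your own accelerator address from the console:

```sh
# /var/lib/boot2docker/profile (excerpt) -- mirror URL is a placeholder
EXTRA_ARGS='
--label provider=virtualbox
--registry-mirror https://xxxxxxxx.mirror.aliyuncs.com
'
```

The daemon only re-reads this file on startup, which is why step 4 (restarting the Docker service or the VM) is required.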

---------------------

Author: 王旦东

Source: CSDN

Original article: https://blog.csdn.net/wangdandong/article/details/68958210

Copyright notice: this is an original article by the blogger; please include a link to the original when reposting.

"""

This modules implements the CrawlSpider which is the recommended spider to use

for scraping typical web sites that requires crawling pages.

See documentation in docs/topics/spiders.rst

"""

import copy

import six

from scrapy.http import Request, HtmlResponse
from scrapy.utils.spider import iterate_spider_output
from scrapy.spiders import Spider


def identity(x):
    return x


class Rule(object):

    def __init__(self, link_extractor, callback=None, cb_kwargs=None,
                 follow=None, process_links=None, process_request=identity):
        self.link_extractor = link_extractor
        self.callback = callback
        self.cb_kwargs = cb_kwargs or {}
        self.process_links = process_links
        self.process_request = process_request
        if follow is None:
            self.follow = False if callback else True
        else:
            self.follow = follow


class CrawlSpider(Spider):

    rules = ()

    def __init__(self, *a, **kw):
        super(CrawlSpider, self).__init__(*a, **kw)
        self._compile_rules()

    def parse(self, response):
        return self._parse_response(response, self.parse_start_url,
                                    cb_kwargs={}, follow=True)

    def parse_start_url(self, response):
        return []

    def process_results(self, response, results):
        return results

    def _build_request(self, rule, link):
        r = Request(url=link.url, callback=self._response_downloaded)
        r.meta.update(rule=rule, link_text=link.text)
        return r

    def _requests_to_follow(self, response):
        if not isinstance(response, HtmlResponse):
            return
        seen = set()
        for n, rule in enumerate(self._rules):
            links = [lnk for lnk in rule.link_extractor.extract_links(response)
                     if lnk not in seen]
            if links and rule.process_links:
                links = rule.process_links(links)
            for link in links:
                seen.add(link)
                r = self._build_request(n, link)
                yield rule.process_request(r)

    def _response_downloaded(self, response):
        rule = self._rules[response.meta['rule']]
        return self._parse_response(response, rule.callback,
                                    rule.cb_kwargs, rule.follow)

    def _parse_response(self, response, callback, cb_kwargs, follow=True):
        if callback:
            cb_res = callback(response, **cb_kwargs) or ()
            cb_res = self.process_results(response, cb_res)
            for requests_or_item in iterate_spider_output(cb_res):
                yield requests_or_item
        if follow and self._follow_links:
            for request_or_item in self._requests_to_follow(response):
                yield request_or_item

    def _compile_rules(self):
        def get_method(method):
            if callable(method):
                return method
            elif isinstance(method, six.string_types):
                return getattr(self, method, None)

        self._rules = [copy.copy(r) for r in self.rules]
        for rule in self._rules:
            rule.callback = get_method(rule.callback)
            rule.process_links = get_method(rule.process_links)
            rule.process_request = get_method(rule.process_request)

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(CrawlSpider, cls).from_crawler(crawler, *args, **kwargs)
        spider._follow_links = crawler.settings.getbool(
            'CRAWLSPIDER_FOLLOW_LINKS', True)
        return spider

    def set_crawler(self, crawler):
        super(CrawlSpider, self).set_crawler(crawler)
        self._follow_links = crawler.settings.getbool(
            'CRAWLSPIDER_FOLLOW_LINKS', True)
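The string-to-method resolution that _compile_rules performs can be illustrated standalone, without Scrapy installed. In this minimal sketch, Demo and parse_item are hypothetical stand-ins for a spider and its rule callback; the logic mirrors the nested get_method above, with Python 3's built-in str replacing six.string_types:

```python
class Demo:
    """Stand-in for a spider; resolves a rule callback like _compile_rules does."""

    def parse_item(self, response):
        # Hypothetical callback: simply echoes its input.
        return response

    def get_method(self, method):
        # Same branching as get_method in _compile_rules:
        # already-callable objects pass through; string names are
        # looked up as attributes on the instance.
        if callable(method):
            return method
        elif isinstance(method, str):
            return getattr(self, method, None)


resolved = Demo().get_method('parse_item')
print(resolved('page'))  # → page
```

Because the lookup uses getattr with a default of None, a misspelled callback name silently resolves to no callback rather than raising, which is worth remembering when a rule appears to do nothing.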


Page URL: https://www.haomeiwen.com/subject/ygxerqtx.html