0. Why write an analysis of the Docker source code
Everyone understands the idea of docker, but only knowing how to use it does not get you very far. Container technology is hyped everywhere, yet almost everything you find explains how to use docker, and very little describes its internals. The most complete reference is the book «Docker源码分析» (Docker Source Code Analysis), but it is dated, and with docker's rapid iteration many parts no longer match the current code.
I happen to be preparing for the autumn recruiting season, so while I have both the motivation and the pressure of job hunting I am working through the source once. My notes may be neither clear nor detailed, but at least I can face myself afterwards; if I must go down, I will go down standing.
Initialization of the docker daemon
Docker first initializes the daemon. The entry point is main() in cmd/dockerd/docker.go, which mainly sets up the standard output/error streams, then creates a new command via newDaemonCommand() and executes it.
At the very beginning of the function there is a reexec.Init() check. According to a question on Stack Overflow, this initialization only takes effect for the daemon, and with it in place there is no longer any need for the old docker -d.
func main() {
if reexec.Init() {
return
}
// Set terminal emulation based on platform as required.
_, stdout, stderr := term.StdStreams()
// @jhowardmsft - maybe there is a historic reason why on non-Windows, stderr is used
// here. However, on Windows it makes no sense and there is no need.
if runtime.GOOS == "windows" {
logrus.SetOutput(stdout)
} else {
logrus.SetOutput(stderr)
}
cmd := newDaemonCommand()
cmd.SetOutput(stdout)
if err := cmd.Execute(); err != nil {
fmt.Fprintf(stderr, "%s\n", err)
os.Exit(1)
}
}
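For context, here is a minimal sketch of how the reexec package is typically used; it is not docker code, and the registered name "daemon-reexec-demo" is made up for illustration. A process registers initializer functions by name, and reexec.Init() short-circuits main() only when the binary was re-executed under one of those names, which is why a normal dockerd invocation falls straight through.
package main

import (
    "fmt"
    "os"

    "github.com/docker/docker/pkg/reexec"
)

func init() {
    // Register an initializer under a name. When this same binary is re-executed
    // with os.Args[0] set to that name, the initializer runs instead of main().
    reexec.Register("daemon-reexec-demo", func() {
        fmt.Println("running as a re-exec'ed initializer")
        os.Exit(0)
    })
}

func main() {
    // In a normal invocation os.Args[0] matches no registered name,
    // so Init() returns false and startup continues below.
    if reexec.Init() {
        return
    }
    fmt.Println("normal startup path")
}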
Looking into newDaemonCommand(), we can see that docker uses cobra, a command-line framework, and defines runDaemon as the function that actually runs the docker daemon. So where does the RunE defined here get executed? After cmd is returned, Execute() is called on it, and that in turn runs a series of hooks such as PreRunE, RunE and PostRunE, all of which were registered through cobra's setup.
func newDaemonCommand() *cobra.Command {
opts := newDaemonOptions(config.New())
cmd := &cobra.Command{
Use: "dockerd [OPTIONS]",
Short: "A self-sufficient runtime for containers.",
SilenceUsage: true,
SilenceErrors: true,
Args: cli.NoArgs,
RunE: func(cmd *cobra.Command, args []string) error {
opts.flags = cmd.Flags()
return runDaemon(opts)
},
DisableFlagsInUseLine: true,
Version: fmt.Sprintf("%s, build %s", dockerversion.Version, dockerversion.GitCommit),
}
...
return cmd
}
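To make the hook ordering concrete, here is a minimal cobra sketch (not docker code) that simply prints the order in which Execute() fires the hooks:
package main

import (
    "fmt"

    "github.com/spf13/cobra"
)

// Execute() invokes the hooks in this order:
// PersistentPreRunE -> PreRunE -> RunE -> PostRunE -> PersistentPostRunE.
func main() {
    cmd := &cobra.Command{
        Use:  "demo",
        Args: cobra.NoArgs,
        PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("PersistentPreRunE")
            return nil
        },
        PreRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("PreRunE")
            return nil
        },
        RunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("RunE")
            return nil
        },
        PostRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("PostRunE")
            return nil
        },
    }
    if err := cmd.Execute(); err != nil {
        fmt.Println(err)
    }
}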
runDaemon itself is very simple: on UNIX systems it only needs to create a DaemonCli and start it (the Windows path is more involved and is not covered here). NewDaemonCli() returns a DaemonCli struct that holds the configuration, the flags, the API server, the daemon itself and some authorization-related state. The real work of bringing up the daemon is concentrated in start(), which initializes the daemon's components such as the API server, routers, registry service and pluginStore, and creates the daemon itself via NewDaemon(); at that point the DaemonCli is fully initialized.
func runDaemon(opts *daemonOptions) error {
daemonCli := NewDaemonCli()
return daemonCli.start(opts)
}
// DaemonCli represents the daemon CLI.
type DaemonCli struct {
*config.Config
configFile *string
flags *pflag.FlagSet
api *apiserver.Server
d *daemon.Daemon
authzMiddleware *authorization.Middleware // authzMiddleware enables to dynamically reload the authorization plugins
}
func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
    // Only the key calls are shown here; error handling and arguments are abridged.
    opts.SetDefaultOptions(opts.flags)
    loadDaemonCliConfig(opts)                      // load the daemon configuration
    setDefaultUmask()
    // Create the daemon root before we create ANY other files (PID, or migrate keys)
    daemon.CreateDaemonRoot(cli.Config)
    pidfile.New(cli.Pidfile)                       // write the process ID file
    newAPIServerConfig(cli)                        // build the API server config
    registry.NewService(cli.Config.ServiceOptions) // create the registry service
    libcontainerd.New                              // create the libcontainerd remote
    // Notify that the API is active, but before daemon is set up.
    preNotifySystem()
    plugin.NewStore()                              // plugin store
    // The key step: the daemon itself is created from config, registry, containerd and pluginStore.
    daemon.NewDaemon(cli.Config, registryService, containerdRemote, pluginStore)
    validateAuthzPlugins()                         // authorization plugins
    startMetricsServer()
    createAndStartCluster()
    RestartSwarmContainers()
    newRouterOptions(cli.Config, d)                // configure the routers
    go cli.api.Wait(serveAPIWait)                  // the API server starts listening and serving
    // after the daemon is done setting up we can notify systemd api
    notifySystem()
}
NewDaemon() deserves a closer look, since this is what actually creates the daemon. The function is very long, but it is well commented, so it is fairly easy to see roughly what it is doing. The overall flow:
- Set the default MTU
- Adjust the root key limit (related to how many containers can be launched)
- Verify the daemon configuration
- Check the network environment (linux bridge)
- Verify that the platform is supported as a daemon (darwin, linux, windows, ...)
- Validate the platform-specific system requirements
- setupRemappedRoot (user-namespace isolation: map users inside a container to unprivileged users on the host)
- Set the path-related environment variables (the canonical temp directory)
- Set up error handling (the deferred shutdown on a failed initialization)
- Some low-level setup for dumping goroutine stack traces
- Set up the Seccomp profile (a security mechanism)
- Ensure the default AppArmor profile (another security mechanism; so docker really does just pile all the security mechanisms on together?)
- Initialize the storage directories and stores (the container repository defaults to /var/lib/docker/containers)
- Set the graphDriver (see the documentation)
- Set up the registry service, layer store, image store and volume service, i.e. images and volumes
- Set up the default logging configuration
- Call NewClient() to create a libcontainerd client through which the daemon talks to containerd
NewClient() itself is defined in the libcontainerd package. From the code we can see that the communication goes over gRPC, and once the client is created it starts processing the event stream concurrently.
func NewDaemon(config *config.Config, registryService registry.Service, containerdRemote libcontainerd.Remote, pluginStore *plugin.Store) (daemon *Daemon, err error) {
setDefaultMtu(config)
// Ensure that we have a correct root key limit for launching containers.
if err := ModifyRootKeyLimit(); err != nil {
logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
}
// Ensure we have compatible and valid configuration options
if err := verifyDaemonSettings(config); err != nil {
return nil, err
}
// Do we have a disabled network?
config.DisableBridge = isBridgeNetworkDisabled(config)
// Verify the platform is supported as a daemon
if !platformSupported {
return nil, errSystemNotSupported
}
// Validate platform-specific requirements
if err := checkSystem(); err != nil {
return nil, err
}
idMappings, err := setupRemappedRoot(config)
if err != nil {
return nil, err
}
rootIDs := idMappings.RootPair()
if err := setupDaemonProcess(config); err != nil {
return nil, err
}
// set up the tmpDir to use a canonical path
tmp, err := prepareTempDir(config.Root, rootIDs)
if err != nil {
return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
}
realTmp, err := getRealPath(tmp)
if err != nil {
return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
}
if runtime.GOOS == "windows" {
if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) {
if err := system.MkdirAll(realTmp, 0700, ""); err != nil {
return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
}
}
os.Setenv("TEMP", realTmp)
os.Setenv("TMP", realTmp)
} else {
os.Setenv("TMPDIR", realTmp)
}
d := &Daemon{
configStore: config,
PluginStore: pluginStore,
startupDone: make(chan struct{}),
}
// Ensure the daemon is properly shutdown if there is a failure during
// initialization
defer func() {
if err != nil {
if err := d.Shutdown(); err != nil {
logrus.Error(err)
}
}
}()
if err := d.setGenericResources(config); err != nil {
return nil, err
}
// set up SIGUSR1 handler on Unix-like systems, or a Win32 global event
// on Windows to dump Go routine stacks
stackDumpDir := config.Root
if execRoot := config.GetExecRoot(); execRoot != "" {
stackDumpDir = execRoot
}
d.setupDumpStackTrap(stackDumpDir)
if err := d.setupSeccompProfile(); err != nil {
return nil, err
}
// Set the default isolation mode (only applicable on Windows)
if err := d.setDefaultIsolation(); err != nil {
return nil, fmt.Errorf("error setting default isolation mode: %v", err)
}
if err := configureMaxThreads(config); err != nil {
logrus.Warnf("Failed to configure golang's threads limit: %v", err)
}
if err := ensureDefaultAppArmorProfile(); err != nil {
logrus.Errorf(err.Error())
}
daemonRepo := filepath.Join(config.Root, "containers")
if err := idtools.MkdirAllAndChown(daemonRepo, 0700, rootIDs); err != nil {
return nil, err
}
// Create the directory where we'll store the runtime scripts (i.e. in
// order to support runtimeArgs)
daemonRuntimes := filepath.Join(config.Root, "runtimes")
if err := system.MkdirAll(daemonRuntimes, 0700, ""); err != nil {
return nil, err
}
if err := d.loadRuntimes(); err != nil {
return nil, err
}
if runtime.GOOS == "windows" {
if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0, ""); err != nil {
return nil, err
}
}
// On Windows we don't support the environment variable, or a user supplied graphdriver
// as Windows has no choice in terms of which graphdrivers to use. It's a case of
// running Windows containers on Windows - windowsfilter, running Linux containers on Windows,
// lcow. Unix platforms however run a single graphdriver for all containers, and it can
// be set through an environment variable, a daemon start parameter, or chosen through
// initialization of the layerstore through driver priority order for example.
d.graphDrivers = make(map[string]string)
layerStores := make(map[string]layer.Store)
if runtime.GOOS == "windows" {
d.graphDrivers[runtime.GOOS] = "windowsfilter"
if system.LCOWSupported() {
d.graphDrivers["linux"] = "lcow"
}
} else {
driverName := os.Getenv("DOCKER_DRIVER")
if driverName == "" {
driverName = config.GraphDriver
} else {
logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
}
d.graphDrivers[runtime.GOOS] = driverName // May still be empty. Layerstore init determines instead.
}
d.RegistryService = registryService
logger.RegisterPluginGetter(d.PluginStore)
metricsSockPath, err := d.listenMetricsSock()
if err != nil {
return nil, err
}
registerMetricsPluginCallback(d.PluginStore, metricsSockPath)
createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
return pluginexec.New(getPluginExecRoot(config.Root), containerdRemote, m)
}
// Plugin system initialization should happen before restore. Do not change order.
d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
Root: filepath.Join(config.Root, "plugins"),
ExecRoot: getPluginExecRoot(config.Root),
Store: d.PluginStore,
CreateExecutor: createPluginExec,
RegistryService: registryService,
LiveRestoreEnabled: config.LiveRestoreEnabled,
LogPluginEvent: d.LogPluginEvent, // todo: make private
AuthzMiddleware: config.AuthzMiddleware,
})
if err != nil {
return nil, errors.Wrap(err, "couldn't create plugin manager")
}
if err := d.setupDefaultLogConfig(); err != nil {
return nil, err
}
for operatingSystem, gd := range d.graphDrivers {
layerStores[operatingSystem], err = layer.NewStoreFromOptions(layer.StoreOptions{
Root: config.Root,
MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"),
GraphDriver: gd,
GraphDriverOptions: config.GraphOptions,
IDMappings: idMappings,
PluginGetter: d.PluginStore,
ExperimentalEnabled: config.Experimental,
OS: operatingSystem,
})
if err != nil {
return nil, err
}
}
// As layerstore initialization may set the driver
for os := range d.graphDrivers {
d.graphDrivers[os] = layerStores[os].DriverName()
}
// Configure and validate the kernels security support. Note this is a Linux/FreeBSD
// operation only, so it is safe to pass *just* the runtime OS graphdriver.
if err := configureKernelSecuritySupport(config, d.graphDrivers[runtime.GOOS]); err != nil {
return nil, err
}
imageRoot := filepath.Join(config.Root, "image", d.graphDrivers[runtime.GOOS])
ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
if err != nil {
return nil, err
}
lgrMap := make(map[string]image.LayerGetReleaser)
for os, ls := range layerStores {
lgrMap[os] = ls
}
imageStore, err := image.NewImageStore(ifs, lgrMap)
if err != nil {
return nil, err
}
d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d)
if err != nil {
return nil, err
}
trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath)
if err != nil {
return nil, err
}
trustDir := filepath.Join(config.Root, "trust")
if err := system.MkdirAll(trustDir, 0700, ""); err != nil {
return nil, err
}
// We have a single tag/reference store for the daemon globally. However, it's
// stored under the graphdriver. On host platforms which only support a single
// container OS, but multiple selectable graphdrivers, this means depending on which
// graphdriver is chosen, the global reference store is under there. For
// platforms which support multiple container operating systems, this is slightly
// more problematic as where does the global ref store get located? Fortunately,
// for Windows, which is currently the only daemon supporting multiple container
// operating systems, the list of graphdrivers available isn't user configurable.
// For backwards compatibility, we just put it under the windowsfilter
// directory regardless.
refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
rs, err := refstore.NewReferenceStore(refStoreLocation)
if err != nil {
return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
}
distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
if err != nil {
return nil, err
}
// No content-addressability migration on Windows as it never supported pre-CA
if runtime.GOOS != "windows" {
migrationStart := time.Now()
if err := v1.Migrate(config.Root, d.graphDrivers[runtime.GOOS], layerStores[runtime.GOOS], imageStore, rs, distributionMetadataStore); err != nil {
logrus.Errorf("Graph migration failed: %q. Your old graph data was found to be too inconsistent for upgrading to content-addressable storage. Some of the old data was probably not upgraded. We recommend starting over with a clean storage directory if possible.", err)
}
logrus.Infof("Graph migration to content-addressability took %.2f seconds", time.Since(migrationStart).Seconds())
}
// Discovery is only enabled when the daemon is launched with an address to advertise. When
// initialized, the daemon is registered and we can store the discovery backend as it's read-only
if err := d.initDiscovery(config); err != nil {
return nil, err
}
sysInfo := sysinfo.New(false)
// Check if Devices cgroup is mounted, it is hard requirement for container security,
// on Linux.
if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled {
return nil, errors.New("Devices cgroup isn't mounted")
}
d.ID = trustKey.PublicKey().KeyID()
d.repository = daemonRepo
d.containers = container.NewMemoryStore()
if d.containersReplica, err = container.NewViewDB(); err != nil {
return nil, err
}
d.execCommands = exec.NewStore()
d.idIndex = truncindex.NewTruncIndex([]string{})
d.statsCollector = d.newStatsCollector(1 * time.Second)
d.EventsService = events.New()
d.root = config.Root
d.idMappings = idMappings
d.seccompEnabled = sysInfo.Seccomp
d.apparmorEnabled = sysInfo.AppArmor
d.linkIndex = newLinkIndex()
// TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
// used above to run migration. They could be initialized in ImageService
// if migration is called from daemon/images. layerStore might move as well.
d.imageService = images.NewImageService(images.ImageServiceConfig{
ContainerStore: d.containers,
DistributionMetadataStore: distributionMetadataStore,
EventsService: d.EventsService,
ImageStore: imageStore,
LayerStores: layerStores,
MaxConcurrentDownloads: *config.MaxConcurrentDownloads,
MaxConcurrentUploads: *config.MaxConcurrentUploads,
ReferenceStore: rs,
RegistryService: registryService,
TrustKey: trustKey,
})
go d.execCommandGC()
d.containerd, err = containerdRemote.NewClient(ContainersNamespace, d)
if err != nil {
return nil, err
}
if err := d.restore(); err != nil {
return nil, err
}
close(d.startupDone)
// FIXME: this method never returns an error
info, _ := d.SystemInfo()
engineInfo.WithValues(
dockerversion.Version,
dockerversion.GitCommit,
info.Architecture,
info.Driver,
info.KernelVersion,
info.OperatingSystem,
info.OSType,
info.ID,
).Set(1)
engineCpus.Set(float64(info.NCPU))
engineMemory.Set(float64(info.MemTotal))
gd := ""
for os, driver := range d.graphDrivers {
if len(gd) > 0 {
gd += ", "
}
gd += driver
if len(d.graphDrivers) > 1 {
gd = fmt.Sprintf("%s (%s)", gd, os)
}
}
logrus.WithFields(logrus.Fields{
"version": dockerversion.Version,
"commit": dockerversion.GitCommit,
"graphdriver(s)": gd,
}).Info("Docker daemon")
return d, nil
}
func (r *remote) NewClient(ns string, b Backend) (Client, error) {
c := &client{
stateDir: r.stateDir,
logger: r.logger.WithField("namespace", ns),
namespace: ns,
backend: b,
containers: make(map[string]*container),
}
rclient, err := containerd.New(r.GRPC.Address, containerd.WithDefaultNamespace(ns))
if err != nil {
return nil, err
}
c.remote = rclient
go c.processEventStream(r.shutdownContext)
r.Lock()
r.clients = append(r.clients, c)
r.Unlock()
return c, nil
}
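As a point of reference, the connection the remote wraps can be reproduced directly with the containerd Go client; the sketch below is not docker code, and the socket path and namespace are assumptions. It simply dials the gRPC endpoint and asks containerd for its version, which is essentially what the code above does before layering its event-stream handling on top.
package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
)

func main() {
    // Dial containerd over its local socket and bind all requests to one namespace.
    client, err := containerd.New("/run/containerd/containerd.sock",
        containerd.WithDefaultNamespace("moby"))
    if err != nil {
        fmt.Println(err)
        return
    }
    defer client.Close()

    // A trivial round trip over the gRPC connection.
    version, err := client.Version(context.Background())
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("containerd version:", version.Version)
}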
At this point, the initialization of the docker daemon is complete.
Initialization of the docker client
One gotcha I ran into: the code in moby is not complete; under cmd/ there is only a dockerd directory and no docker directory. Comparing with docker/docker-ce, the latter has two parts under its components directory, cli and engine: engine corresponds to the moby project, and cli is the client code used by docker-ce and docker-ee. Docker has split moby out of the Docker product as a whole, docker-ce now ships as a standalone product, and moby does not even contain the docker client code.
Client initialization is comparatively simple and lives in cli/cmd/docker/docker.go. Many versions ago the daemon and the client shared a single executable; they are separate now, but the basic flow is still quite similar. As mentioned above, reexec only runs during daemon initialization. During client initialization the standard input/output streams are configured first, then a client is created via NewDockerCli, and finally the command is parsed and Execute()d, completing the life cycle of a request.
func main() {
// Set terminal emulation based on platform as required.
stdin, stdout, stderr := term.StdStreams()
logrus.SetOutput(stderr)
dockerCli := command.NewDockerCli(stdin, stdout, stderr, contentTrustEnabled())
cmd := newDockerCommand(dockerCli)
if err := cmd.Execute(); err != nil {
if sterr, ok := err.(cli.StatusError); ok {
if sterr.Status != "" {
fmt.Fprintln(stderr, sterr.Status)
}
// StatusError should only be used for errors, and all errors should
// have a non-zero exit status, so never exit with 0
if sterr.StatusCode == 0 {
os.Exit(1)
}
os.Exit(sterr.StatusCode)
}
fmt.Fprintln(stderr, err)
os.Exit(1)
}
}
NewDockerCli returns a DockerCli instance and sets up its I/O streams:
// NewDockerCli returns a DockerCli instance with IO output and error streams set by in, out and err.
A DockerCli instance does not hold much: besides the config file there are the input/output streams and standard error, the server and client info, and a client instance. The config file lives at ~/.docker/config.json. APIClient is split into a common part and an experimental part, which matches how we usually think about docker's feature set.
type APIClient interface {
CommonAPIClient
apiClientExperimental
}
type DockerCli struct {
configFile *configfile.ConfigFile
in *InStream
out *OutStream
err io.Writer
client client.APIClient
serverInfo ServerInfo
clientInfo ClientInfo
contentTrust bool
}
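As a side note, here is a small sketch of where the configFile field's data comes from, using the docker/cli config package that also appears later as cliconfig; this is illustrative, not the DockerCli code itself. Dir() resolves ~/.docker (or the DOCKER_CONFIG override) and Load() parses config.json in that directory.
package main

import (
    "fmt"

    cliconfig "github.com/docker/cli/cli/config"
)

func main() {
    // Load ~/.docker/config.json (or whatever DOCKER_CONFIG points at).
    cfg, err := cliconfig.Load(cliconfig.Dir())
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("config dir:", cliconfig.Dir())
    fmt.Println("registries with stored credentials:", len(cfg.AuthConfigs))
}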
newDockerCommand in turn builds the docker command that will be executed. As in the daemon analysis, the PersistentPreRunE defined here is the handler invoked later during execution; it fills in the options from the flags, and dockerPreRun is also just an option-setting step. The client is really initialized in Initialize, which creates the actual client through NewClientWithOpts inside NewAPIClientFromFlags, decides whether experimental features should be enabled, and finally calls initializeFromClient() to ping the daemon and verify that the connection is up. The value returned afterwards is a bool indicating whether the command is supported by the client. In short, this function does the preparation and checking before a docker command is sent off to the server.
The cmd itself is assembled afterwards: it is configured from the flags, its output is bound to dockerCli's output, and so on, and finally cmd itself is returned.
func newDockerCommand(dockerCli *command.DockerCli) *cobra.Command {
opts := cliflags.NewClientOptions()
var flags *pflag.FlagSet
cmd := &cobra.Command{
Use: "docker [OPTIONS] COMMAND [ARG...]",
Short: "A self-sufficient runtime for containers",
SilenceUsage: true,
SilenceErrors: true,
TraverseChildren: true,
Args: noArgs,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
// flags must be the top-level command flags, not cmd.Flags()
opts.Common.SetDefaultOptions(flags)
dockerPreRun(opts)
if err := dockerCli.Initialize(opts); err != nil {
return err
}
return isSupported(cmd, dockerCli)
},
Version: fmt.Sprintf("%s, build %s", cli.Version, cli.GitCommit),
DisableFlagsInUseLine: true,
}
cli.SetupRootCommand(cmd)
flags = cmd.Flags()
flags.BoolP("version", "v", false, "Print version information and quit")
flags.StringVar(&opts.ConfigDir, "config", cliconfig.Dir(), "Location of client config files")
opts.Common.InstallFlags(flags)
setFlagErrorFunc(dockerCli, cmd, flags, opts)
setHelpFunc(dockerCli, cmd, flags, opts)
cmd.SetOutput(dockerCli.Out())
commands.AddCommands(cmd, dockerCli)
disableFlagsInUseLine(cmd)
setValidateArgs(dockerCli, cmd, flags, opts)
return cmd
}
One more thing worth noting: how does docker know which operation a user's command corresponds to? The wiring happens when the root command is built: SetupRootCommand() configures the root cmd (and commands.AddCommands() attaches the subcommands to it), while the operations themselves are declared in the client's interface.go, so the client knows which handler belongs to which cmd. The logic that finally sends a command to the server is also wrapped inside these interface functions; for example ContainerCreate contains the line serverResp, err := cli.post(ctx, "/containers/create", query, body, nil). In short, during initialization the docker client registers a handler for each command, builds the right cmd from the user's flags, looks up which handler should run, and that handler sends the request to the server, where the daemon processes it (to be analyzed in a later post).
func SetupRootCommand(rootCmd *cobra.Command) {
cobra.AddTemplateFunc("hasSubCommands", hasSubCommands)
cobra.AddTemplateFunc("hasManagementSubCommands", hasManagementSubCommands)
cobra.AddTemplateFunc("operationSubCommands", operationSubCommands)
cobra.AddTemplateFunc("managementSubCommands", managementSubCommands)
cobra.AddTemplateFunc("wrappedFlagUsages", wrappedFlagUsages)
rootCmd.SetUsageTemplate(usageTemplate)
rootCmd.SetHelpTemplate(helpTemplate)
rootCmd.SetFlagErrorFunc(FlagErrorFunc)
rootCmd.SetHelpCommand(helpCommand)
rootCmd.SetVersionTemplate("Docker version {{.Version}}\n")
rootCmd.PersistentFlags().BoolP("help", "h", false, "Print usage")
rootCmd.PersistentFlags().MarkShorthandDeprecated("help", "please use --help")
rootCmd.PersistentFlags().Lookup("help").Hidden = true
}
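To see what a handler such as ContainerCreate ultimately does with that cli.post call, here is a hedged sketch that sends the equivalent request by hand over the local unix socket; the socket path and the request body (a throwaway busybox container) are assumptions for illustration, and the real client adds versioning, headers and response decoding on top.
package main

import (
    "bytes"
    "context"
    "fmt"
    "net"
    "net/http"
)

func main() {
    // Route HTTP requests over the engine's unix socket instead of TCP.
    httpc := http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", "/var/run/docker.sock")
            },
        },
    }
    // The same endpoint ContainerCreate posts to.
    body := bytes.NewBufferString(`{"Image": "busybox", "Cmd": ["true"]}`)
    resp, err := httpc.Post("http://localhost/containers/create", "application/json", body)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}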
The client directory in moby defines all of the docker client's API operations; the concrete implementation of every commonly used docker command can be found there. As the code's own documentation puts it:
You use the library by creating a client object and calling methods on it. The
client can be created either from environment variables with NewEnvClient, or
configured manually with NewClient.
So docker provides two ways to initialize a client: from environment variables, or by configuring it manually. First, let's look at the definition of the client struct (an abridged sketch follows the member list). The client contains the following members:
- scheme: HTTP or HTTPS
- host: the server address
- proto: the protocol between client and server, e.g. a unix socket
- addr: the address the client actually connects to
- basePath: the path prefix prepended to requests
- client: the HTTP client that actually sends and receives the messages
- version: the API version used when talking to the server (for compatibility?)
- customHTTPHeaders: HTTP headers configured by the user
- manualOverride: set to true when the user sets the version themselves
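Put together, the struct looks roughly like this; this is an abridged sketch of client/client.go around this version, so the exact comments and field order may differ:
package client // abridged sketch, not the verbatim moby source

import "net/http"

type Client struct {
    scheme            string            // "http" or "https"
    host              string            // server address as given, e.g. the value of DOCKER_HOST
    proto             string            // transport protocol between client and server, e.g. "unix" or "tcp"
    addr              string            // address actually dialed, e.g. /var/run/docker.sock
    basePath          string            // path prefix prepended to every request
    client            *http.Client      // the HTTP client that actually sends and receives requests
    version           string            // API version used when talking to the server
    customHTTPHeaders map[string]string // HTTP headers configured by the user
    manualOverride    bool              // true when the user pinned the version explicitly
}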
There are two APIs for creating a client, NewEnvClient() and NewClient(), but as the code shows the first is already deprecated. Both of these high-level APIs call down into NewClientWithOpts(), which unifies client construction. Concretely, FromEnv configures the client from environment variables. Four environment variables are supported:
Supported environment variables:
DOCKER_HOST to set the url to the docker server.
DOCKER_API_VERSION to set the version of the API to reach, leave empty for latest.
DOCKER_CERT_PATH to load the TLS certificates from.
DOCKER_TLS_VERIFY to enable or disable TLS verification, off by default.
Inside NewClientWithOpts, defaultHTTPClient is called to build the initial client, and then the variadic option functions are applied on top to override the defaults.
// Deprecated: use NewClientWithOpts(FromEnv)
func NewEnvClient() (*Client, error) {
return NewClientWithOpts(FromEnv)
}
func NewClient(host string, version string, client *http.Client, httpHeaders map[string]string) (*Client, error) {
return NewClientWithOpts(WithHost(host), WithVersion(version), WithHTTPClient(client), WithHTTPHeaders(httpHeaders))
}
func NewClientWithOpts(ops ...func(*Client) error) (*Client, error) {
client, err := defaultHTTPClient(DefaultDockerHost)
if err != nil {
return nil, err
}
c := &Client{
host: DefaultDockerHost,
version: api.DefaultVersion,
scheme: "http",
client: client,
proto: defaultProto,
addr: defaultAddr,
}
for _, op := range ops {
if err := op(c); err != nil {
return nil, err
}
}
if _, ok := c.client.Transport.(http.RoundTripper); !ok {
return nil, fmt.Errorf("unable to verify TLS configuration, invalid transport %v", c.client.Transport)
}
tlsConfig := resolveTLSConfig(c.client.Transport)
if tlsConfig != nil {
// TODO(stevvooe): This isn't really the right way to write clients in Go.
// `NewClient` should probably only take an `*http.Client` and work from there.
// Unfortunately, the model of having a host-ish/url-thingy as the connection
// string has us confusing protocol and transport layers. We continue doing
// this to avoid breaking existing clients but this should be addressed.
c.scheme = "https"
}
return c, nil
}
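To close the loop, here is a minimal usage sketch of the unified constructor, assuming a daemon is reachable with the environment-based defaults: build a client with FromEnv, ping the daemon, and list the running containers.
package main

import (
    "context"
    "fmt"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func main() {
    // Configure the client from DOCKER_HOST, DOCKER_API_VERSION, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY.
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        panic(err)
    }
    defer cli.Close()

    ctx := context.Background()
    // The same kind of ping the CLI uses to verify the connection to the daemon.
    if _, err := cli.Ping(ctx); err != nil {
        fmt.Println("daemon not reachable:", err)
        return
    }
    containers, err := cli.ContainerList(ctx, types.ContainerListOptions{})
    if err != nil {
        panic(err)
    }
    for _, c := range containers {
        fmt.Println(c.ID[:12], c.Image)
    }
}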