Commonly Used kubectl Commands


Author: 乙腾 | Published 2020-10-15 20:22

1.kubectl create

root@kubernets-master:/usr/local/kubernetes/volumes# kubectl create -h
Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
  # Create a pod using the data in pod.json.
  kubectl create -f ./pod.json

  # Create a pod based on the JSON passed into stdin.
  cat pod.json | kubectl create -f -

  # Edit the data in docker-registry.yaml in JSON then create the resource using the edited data.
  kubectl create -f docker-registry.yaml --edit -o json

Available Commands:
  clusterrole         Create a ClusterRole.
  clusterrolebinding  Create a ClusterRoleBinding for a particular ClusterRole
  configmap           Create a configmap from a local file, directory or literal value
  cronjob             Create a cronjob with the specified name.
  deployment          Create a deployment with the specified name.
  job                 Create a job with the specified name.
  namespace           Create a namespace with the specified name
  poddisruptionbudget Create a pod disruption budget with the specified name.
  priorityclass       Create a priorityclass with the specified name.
  quota               Create a quota with the specified name.
  role                Create a role with single rule.
  rolebinding         Create a RoleBinding for a particular Role or ClusterRole
  secret              Create a secret using specified subcommand
  service             Create a service using specified subcommand.
  serviceaccount      Create a service account with the specified name

Options:
      --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or
map key is missing in the template. Only applies to golang and jsonpath output formats.
      --dry-run=false: If true, only print the object that would be sent, without sending it.
      --edit=false: Edit the API resource before creating
  -f, --filename=[]: Filename, directory, or URL to files to use to create the resource
  (i.e. create the resource by specifying a manifest file)
  -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f
or -R.
  -o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
      --raw='': Raw URI to POST to the server.  Uses the transport specified by the kubeconfig file.
      --record=false: Record current kubectl command in the resource annotation. If set to false, do
not record the command. If set to true, record the command. If not set, default to updating the
existing annotation value only if one already exists.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you
want to manage related manifests organized within the same directory.
      --save-config=false: If true, the configuration of current object will be saved in its
annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to
perform kubectl apply on this object in the future.
  -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l
key1=value1,key2=value2)
      --template='': Template string or path to template file to use when -o=go-template,
-o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
      --validate=true: If true, use a schema to validate the input before sending it
      --windows-line-endings=false: Only relevant if --edit=true. Defaults to the line ending native
to your platform.

Usage:
  kubectl create -f FILENAME [options]
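As a small sketch of how the pieces above combine (the deployment name my-nginx and the file my-nginx.yaml are placeholders, not from the original article): the create subcommands can generate a manifest with --dry-run and -o yaml, which can then be fed back into kubectl create -f.

```
# Generate a Deployment manifest without actually creating it (dry run)
kubectl create deployment my-nginx --image=nginx --dry-run=true -o yaml > my-nginx.yaml

# Create the resource from the generated file
kubectl create -f ./my-nginx.yaml
```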

2.kubectl delete

root@kubernets-master:/usr/local/kubernetes/volumes# kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
nfs-pv-mysql   3Gi        RWX            Recycle          Bound    default/nfs-pvc-mysql-myshop                           95s
root@kubernets-master:/usr/local/kubernetes/volumes# kubectl delete -h
Delete resources by filenames, stdin, resources and names, or by resources and label selector.

 JSON and YAML formats are accepted. Only one type of the arguments may be specified: filenames, resources and names, or
resources and label selector.

 Some resources, such as pods, support graceful deletion. These resources define a default period before they are
forcibly terminated (the grace period) but you may override that value with the --grace-period flag, or pass --now to
set a grace-period of 1. Because these resources often represent entities in the cluster, deletion may not be
acknowledged immediately. If the node hosting a pod is down or cannot reach the API server, termination may take
significantly longer than the grace period. To force delete a resource, you must pass a grace period of 0 and specify
the --force flag. Note: only a subset of resources support graceful deletion. In absence of the support, --grace-period
is ignored.

 IMPORTANT: Force deleting pods does not wait for confirmation that the pod's processes have been terminated, which can
leave those processes running until the node detects the deletion and completes graceful deletion. If your processes use
shared storage or talk to a remote API and depend on the name of the pod to identify themselves, force deleting those
pods may result in multiple processes running on different machines using the same identification which may lead to data
corruption or inconsistency. Only force delete pods when you are sure the pod is terminated, or if your application can
tolerate multiple copies of the same pod running at once. Also, if you force delete pods the scheduler may place new
pods on those nodes before the node has released those resources and causing those pods to be evicted immediately.

 Note that the delete command does NOT do resource version checks, so if someone submits an update to a resource right
when you submit a delete, their update will be lost along with the rest of the resource.

Examples:
  # Delete a pod using the type and name specified in pod.json.
  kubectl delete -f ./pod.json

  # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml.
  kubectl delete -k dir

  # Delete a pod based on the type and name in the JSON passed into stdin.
  cat pod.json | kubectl delete -f -

  # Delete pods and services with same names "baz" and "foo"
  kubectl delete pod,service baz foo

  # Delete pods and services with label name=myLabel.
  kubectl delete pods,services -l name=myLabel

  # Delete a pod with minimal delay
  kubectl delete pod foo --now

  # Force delete a pod on a dead node
  kubectl delete pod foo --grace-period=0 --force

  # Delete all pods
  kubectl delete pods --all

Options:
      --all=false: Delete all resources, including uninitialized ones, in the namespace of the specified resource types.
  -A, --all-namespaces=false: If present, list the requested object(s) across all namespaces. Namespace in current
context is ignored even if specified with --namespace.
      --cascade=true: If true, cascade the deletion of the resources managed by this resource (e.g. Pods created by a
ReplicationController).  Default true.
      --field-selector='': Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector
key1=value1,key2=value2). The server only supports a limited number of field queries per type.
  -f, --filename=[]: Filename, directory, or URL to files containing the resource to delete.  # i.e. delete by manifest file
      --force=false: Only used when grace-period=0. If true, immediately remove resources from API and bypass graceful
deletion. Note that immediate deletion of some resources may result in inconsistency or data loss and requires
confirmation.
      --grace-period=-1: Period of time in seconds given to the resource to terminate gracefully. Ignored if negative.
Set to 1 for immediate shutdown. Can only be set to 0 when --force is true (force deletion).
      --ignore-not-found=false: Treat "resource not found" as a successful delete. Defaults to "true" when --all is
specified.
  -k, --kustomize='': Process a kustomization directory. This flag can't be used together with -f or -R.
      --now=false: If true, resources are signaled for immediate shutdown (same as --grace-period=1).
  -o, --output='': Output mode. Use "-o name" for shorter output (resource/name).
      --raw='': Raw URI to DELETE to the server.  Uses the transport specified by the kubeconfig file.
  -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage
related manifests organized within the same directory.
  -l, --selector='': Selector (label query) to filter on, not including uninitialized ones.
      --timeout=0s: The length of time to wait before giving up on a delete, zero means determine a timeout from the
size of the object
      --wait=true: If true, wait for resources to be gone before returning. This waits for finalizers.

Usage:
  kubectl delete ([-f FILENAME] | [-k DIRECTORY] | TYPE [(NAME | -l label | --all)]) [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).




 kubectl delete pods tidb-cluster-tikv-3  -n tidb
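A few more delete patterns that follow directly from the options above; the label app=my-app and the file name are hypothetical:

```
# Delete whatever a manifest file describes
kubectl delete -f ./mysql-myshop.yaml

# Delete every pod with a given label in the tidb namespace
kubectl delete pods -l app=my-app -n tidb

# Force-delete a stuck pod (grace period 0; note the data-consistency warning above)
kubectl delete pod tidb-cluster-tikv-3 -n tidb --grace-period=0 --force
```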

3.kubectl describe cm mysql-myshop-config

View the description of a named resource of a given kind.
When troubleshooting, the Events section at the bottom is usually the most useful part. For example:
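A quick sketch of both forms, using names that appear elsewhere in this article (the configmap from the heading and a pod from the tidb-admin namespace):

```
# Describe a configmap in the default namespace
kubectl describe cm mysql-myshop-config

# Describe a single pod in a specific namespace
kubectl describe po tidb-scheduler-5f5958d476-tmdjw -n tidb-admin
```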

root@kubernets-master:~# kubectl get po -n tidb-admin
NAME                                       READY   STATUS    RESTARTS   AGE
tidb-controller-manager-7dd5c59f4f-whtf4   1/1     Running   6          28h
tidb-scheduler-5f5958d476-tmdjw            2/2     Running   6          28h
root@kubernets-master:~# kubectl describe po -n tidb-admin
Name:         tidb-controller-manager-7dd5c59f4f-whtf4
Namespace:    tidb-admin
Priority:     0
Node:         kubernets-node2/192.168.16.136
Start Time:   Mon, 09 Mar 2020 19:52:08 +0800
Labels:       app.kubernetes.io/component=controller-manager
              app.kubernetes.io/instance=tidb-operator
              app.kubernetes.io/name=tidb-operator
              pod-template-hash=7dd5c59f4f
Annotations:  cni.projectcalico.org/podIP: 10.244.102.120/32
Status:       Running
IP:           10.244.102.120
IPs:
  IP:           10.244.102.120
Controlled By:  ReplicaSet/tidb-controller-manager-7dd5c59f4f
Containers:
  tidb-operator:
    Container ID:  docker://c5eb677aa30e4919755ca13924269b5d82d242729a3b13c2314a12cd24274651
    Image:         pingcap/tidb-operator:v1.1.0-beta.2
    Image ID:      docker-pullable://pingcap/tidb-operator@sha256:6ae9c87b80e442f13a03d493807db61b0ed753b9d313b91f629fdca4be8efaeb
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/tidb-controller-manager
      -tidb-backup-manager-image=pingcap/tidb-backup-manager:v1.1.0-beta.2
      -tidb-discovery-image=pingcap/tidb-operator:v1.1.0-beta.2
      -cluster-scoped=true
      -auto-failover=true
      -pd-failover-period=5m
      -tikv-failover-period=5m
      -tidb-failover-period=5m
      -v=2
    State:          Running
      Started:      Tue, 10 Mar 2020 23:27:17 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Tue, 10 Mar 2020 23:11:06 +0800
      Finished:     Tue, 10 Mar 2020 23:27:14 +0800
    Ready:          True
    Restart Count:  6
    Limits:
      cpu:     250m
      memory:  150Mi
    Requests:
      cpu:     80m
      memory:  50Mi
    Environment:
      NAMESPACE:  tidb-admin (v1:metadata.namespace)
      TZ:         UTC
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tidb-controller-manager-token-sb6px (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  tidb-controller-manager-token-sb6px:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tidb-controller-manager-token-sb6px
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>


Name:         tidb-scheduler-5f5958d476-tmdjw
Namespace:    tidb-admin
Priority:     0
Node:         kubernets-node1/192.168.16.137
Start Time:   Mon, 09 Mar 2020 19:52:08 +0800
Labels:       app.kubernetes.io/component=scheduler
              app.kubernetes.io/instance=tidb-operator
              app.kubernetes.io/name=tidb-operator
              pod-template-hash=5f5958d476
Annotations:  cni.projectcalico.org/podIP: 10.244.122.212/32
Status:       Running
IP:           10.244.122.212
IPs:
  IP:           10.244.122.212
Controlled By:  ReplicaSet/tidb-scheduler-5f5958d476
Containers:
  tidb-scheduler:
    Container ID:  docker://afb8730e08d95508fcae4e6bc625b732adadb9bb6190d2605e4e5eb3136e0796
    Image:         pingcap/tidb-operator:v1.1.0-beta.2
    Image ID:      docker-pullable://pingcap/tidb-operator@sha256:6ae9c87b80e442f13a03d493807db61b0ed753b9d313b91f629fdca4be8efaeb
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/tidb-scheduler
      -v=2
      -port=10262
    State:          Running
      Started:      Tue, 10 Mar 2020 22:58:34 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 09 Mar 2020 20:59:19 +0800
      Finished:     Tue, 10 Mar 2020 22:57:48 +0800
    Ready:          True
    Restart Count:  2
    Limits:
      cpu:     250m
      memory:  150Mi
    Requests:
      cpu:        80m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tidb-scheduler-token-kkfb6 (ro)
  kube-scheduler:
    Container ID:  docker://8f3e2fcf82770871660529fbf4927db7726621fb260228b8ce1c4c0ff6c8d93b
    Image:         registry.aliyuncs.com/google_containers/kube-scheduler:v1.17.0
    Image ID:      docker-pullable://registry.aliyuncs.com/google_containers/kube-scheduler@sha256:e35a9ec92da008d88fbcf97b5f0945ff52a912ba5c11e7ad641edb8d4668fc1a
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --port=10261
      --leader-elect=true
      --lock-object-name=tidb-scheduler
      --lock-object-namespace=tidb-admin
      --scheduler-name=tidb-scheduler
      --v=2
      --policy-configmap=tidb-scheduler-policy
      --policy-configmap-namespace=tidb-admin
    State:          Running
      Started:      Tue, 10 Mar 2020 23:11:08 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Tue, 10 Mar 2020 23:03:47 +0800
      Finished:     Tue, 10 Mar 2020 23:10:41 +0800
    Ready:          True
    Restart Count:  4
    Limits:
      cpu:     250m
      memory:  150Mi
    Requests:
      cpu:        80m
      memory:     50Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tidb-scheduler-token-kkfb6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  tidb-scheduler-token-kkfb6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tidb-scheduler-token-kkfb6
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
 FirstSeen LastSeen Count From SubObjectPath Type Reason Message
 --------- -------- ----- ---- ------------- -------- ------ -------
 51m 51m 1 kubelet, ubuntu-k8s-3 Normal NodeNotSchedulable Node ubuntu-k8s-3 status is now: NodeNotSchedulable
 9d 51m 49428 kubelet, ubuntu-k8s-3 Warning EvictionThresholdMet Attempting to reclaim nodefs
 5m 5m 1 kubelet, ubuntu-k8s-3 Normal Starting Starting kubelet.
 5m 5m 2 kubelet, ubuntu-k8s-3 Normal NodeHasSufficientDisk Node ubuntu-k8s-3 status is now: NodeHasSufficientDisk
 5m 5m 2 kubelet, ubuntu-k8s-3 Normal NodeHasSufficientMemory Node ubuntu-k8s-3 status is now: NodeHasSufficientMemory
 5m 5m 2 kubelet, ubuntu-k8s-3 Normal NodeHasNoDiskPressure Node ubuntu-k8s-3 status is now: NodeHasNoDiskPressure
 5m 5m 1 kubelet, ubuntu-k8s-3 Normal NodeAllocatableEnforced Updated Node Allocatable limit across pods
 5m 5m 1 kubelet, ubuntu-k8s-3 Normal NodeHasDiskPressure Node ubuntu-k8s-3 status is now: NodeHasDiskPressure
 5m 14s 23 kubelet, ubuntu-k8s-3 Warning EvictionThresholdMet Attempting to reclaim nodefs
 ####### The earlier Events lines looked similar to the above; the full output was not saved at the time, but the following lines were kept
  Warning  Failed   99s (x270 over 65m)   kubelet, kubernets-node2  kube-scheduler ImagePullBackOff  # not recorded verbatim; roughly, it said the tidb-scheduler image could not be found
  Warning  Failed   99s (x270 over 65m)   kubelet, kubernets-node2  Error: ImagePullBackOff

4.kubectl apply -f xxx.yaml

Deploy according to a resource manifest file.

root@kubernets-master:/usr/local/kubernetes/dashboard# kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created  # role-based access control (RBAC)
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
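kubectl apply is declarative and safe to re-run: applying the same file again reports each resource as unchanged (or configured if the file was edited) instead of failing with "already exists". A minimal sketch with a hypothetical local manifest:

```
# First apply creates the objects; later applies update them in place
kubectl apply -f ./my-nginx.yaml

# Optionally preview what would change before re-applying (available in recent kubectl versions)
kubectl diff -f ./my-nginx.yaml

# Re-apply after editing the file
kubectl apply -f ./my-nginx.yaml
```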

5.kubectl get pods -n namespace

List the pods in the specified namespace; without -n, the default namespace is used.

root@kubernets-master:/usr/local/kubernetes/dashboard# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b8b58dc8b-mkj4b   1/1     Running   0          3m47s
kubernetes-dashboard-866f987876-s9dfx        1/1     Running   0          3m47s

Here pods can be abbreviated as po or pod:
kubectl get po -n kubernetes-dashboard
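Short names are not limited to pods; kubectl api-resources lists the short name of every resource type. A couple of related get variations, using the dashboard pod shown above:

```
# Show all resource types and their short names (SHORTNAMES column)
kubectl api-resources

# List pods in every namespace at once
kubectl get pods --all-namespaces

# Dump the full definition of a single pod as YAML
kubectl get po kubernetes-dashboard-866f987876-s9dfx -n kubernetes-dashboard -o yaml
```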

6.kubectl get deployment -n namespace

List the Deployments in the specified namespace.

root@kubernets-master:/usr/local/kubernetes/dashboard# kubectl get deployment -n kubernetes-dashboard
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
dashboard-metrics-scraper   1/1     1            1           8m20s
kubernetes-dashboard        1/1     1            1           8m20s
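Deployments can also be inspected and scaled in place. A short sketch against the dashboard deployment above (scaling the dashboard here is only an illustration):

```
# Wait for / report the rollout status of the deployment
kubectl rollout status deployment/kubernetes-dashboard -n kubernetes-dashboard

# Scale to 2 replicas, then back to 1
kubectl scale deployment kubernetes-dashboard --replicas=2 -n kubernetes-dashboard
kubectl scale deployment kubernetes-dashboard --replicas=1 -n kubernetes-dashboard
```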

7.kubectl get pods -n namespace -o wide

Show which node each application pod has been scheduled onto.

root@kubernets-master:/usr/local/kubernetes/dashboard# kubectl get pods -n kubernetes-dashboard  -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7b8b58dc8b-mkj4b   1/1     Running   0          9m26s   10.244.102.82   kubernets-node2   <none>           <none>
kubernetes-dashboard-866f987876-s9dfx        1/1     Running   0          9m26s   10.244.102.81   kubernets-node2   <none>           <none>
    
To watch this in real time:
watch kubectl get pods -n kubernetes-dashboard  -o wide
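kubectl also has a built-in watch flag that streams changes instead of re-running the whole command:

```
# Print a new line whenever a pod's state changes
kubectl get pods -n kubernetes-dashboard -o wide -w
```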
    

8.kubectl get service --all-namespaces

List services (here across all namespaces).

root@kubernets-master:/usr/local/kubernetes/dashboard# kubectl get service --all-namespaces
NAMESPACE              NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP      10.96.0.1       <none>        443/TCP                  41d
default                mysql-myshop                LoadBalancer   10.96.123.106   <pending>     3306:30108/TCP           2d21h
default                tomcat-http                 ClusterIP      10.96.230.10    <none>        8080/TCP                 5d8h
kube-system            kube-dns                    ClusterIP      10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   41d
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP      10.96.119.48    <none>        8000/TCP                 5m27s
kubernetes-dashboard   kubernetes-dashboard        NodePort       10.96.220.240   <none>        443:30001/TCP            5m27s

If a service is exposed externally, you can see both the service's own port and the port reachable from outside over HTTP(S).
For example, here the dashboard service port is 443 and the externally exposed NodePort is 30001 (note: this is not the pod-facing port, targetPort).

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard   # note: this is the namespace the Service lives in
spec:
  type: NodePort
  ports:
    - port: 443   # the Service's own port
      targetPort: 8443  # the port exposed on the pods (the container port)
      nodePort: 30001  # the port exposed externally on every node
  selector:
    k8s-app: kubernetes-dashboard
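Because the type is NodePort, the service is reachable on port 30001 of any node's IP. A quick check from outside the cluster, assuming the node IP 192.168.16.136 (kubernets-node2) from the earlier output; -k is needed because the dashboard serves a self-signed certificate:

```
# The dashboard speaks HTTPS on the NodePort
curl -k https://192.168.16.136:30001/
```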

9.kubectl logs local-volume-provisioner-7nmrx -n kube-system

View a pod's logs (add -n for pods outside the default namespace).

root@kubernets-master:~# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS      RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
calico-kube-controllers-5c45f5bd9f-k56ml   1/1     Running     2          12h     10.244.221.209   volumes            <none>           <none>
calico-node-ch4z7                          1/1     Running     2          12h     192.168.16.140   volumes            <none>           <none>
calico-node-m6whh                          1/1     Running     1          12h     192.168.16.136   kubernets-node2    <none>           <none>
calico-node-nbw9n                          1/1     Running     1          12h     192.168.16.138   kubernets-master   <none>           <none>
calico-node-tmzkz                          1/1     Running     1          12h     192.168.16.137   kubernets-node1    <none>           <none>
coredns-9d85f5447-glq9v                    1/1     Running     16         46d     10.244.122.205   kubernets-node1    <none>           <none>
coredns-9d85f5447-xhz8d                    1/1     Running     18         46d     10.244.122.204   kubernets-node1    <none>           <none>
etcd-kubernets-master                      1/1     Running     15         46d     192.168.16.138   kubernets-master   <none>           <none>
kube-apiserver-kubernets-master            1/1     Running     17         46d     192.168.16.138   kubernets-master   <none>           <none>
kube-controller-manager-kubernets-master   1/1     Running     16         46d     192.168.16.138   kubernets-master   <none>           <none>
kube-proxy-brvf6                           1/1     Running     6          3d10h   192.168.16.140   volumes            <none>           <none>
kube-proxy-f6tt7                           1/1     Running     15         46d     192.168.16.138   kubernets-master   <none>           <none>
kube-proxy-h8vxc                           1/1     Running     15         15d     192.168.16.136   kubernets-node2    <none>           <none>
kube-proxy-jpjqd                           1/1     Running     16         15d     192.168.16.137   kubernets-node1    <none>           <none>
kube-scheduler-kubernets-master            1/1     Running     17         46d     192.168.16.138   kubernets-master   <none>           <none>
kuboard-756d46c4d4-8rtbp                   0/1     Completed   0          5d10h   <none>           kubernets-node2    <none>           <none>
local-volume-provisioner-7nmrx             1/1     Running     5          3d13h   10.244.122.212   kubernets-node1    <none>           <none>
local-volume-provisioner-mtlfz             1/1     Running     4          3d13h   10.244.102.79    kubernets-node2    <none>           <none>
local-volume-provisioner-xmtdc             1/1     Running     11         3d10h   10.244.221.206   volumes            <none>           <none>
tiller-deploy-7c7b67c9fd-fnwqx             1/1     Running     7          4d10h   10.244.122.206   kubernets-node1    <none>           <none>


root@kubernets-master:~# kubectl logs  local-volume-provisioner-7nmrx -n kube-system
I0312 23:58:16.288388       1 common.go:334] StorageClass "local-storage" configured with MountDir "/mnt/disks", HostDir "/mnt/disks", VolumeMode "Filesystem", FsType "", BlockCleanerCommand ["/scripts/quick_reset.sh"]
I0312 23:58:16.288736       1 main.go:63] Loaded configuration: {StorageClassConfig:map[local-storage:{HostDir:/mnt/disks MountDir:/mnt/disks BlockCleanerCommand:[/scripts/quick_reset.sh] VolumeMode:Filesystem FsType:}] NodeLabelsForPV:[kubernetes.io/hostname] UseAlphaAPI:false UseJobForCleaning:false MinResyncPeriod:{Duration:5m0s} UseNodeNameOnly:false LabelsForPV:map[] SetPVOwnerRef:false}
I0312 23:58:16.289597       1 main.go:64] Ready to run...
I0312 23:58:16.291198       1 common.go:396] Creating client using in-cluster config
I0312 23:58:16.549671       1 main.go:85] Starting controller
I0312 23:58:16.551116       1 main.go:101] Starting metrics server at :8080
I0312 23:58:16.551290       1 controller.go:47] Initializing volume cache
I0312 23:58:16.751819       1 cache.go:55] Added pv "local-pv-9c16b49c" to cache
I0312 23:58:16.752017       1 cache.go:55] Added pv "local-pv-8b593dbb" to cache
I0312 23:58:16.752406       1 cache.go:55] Added pv "local-pv-1780d5ae" to cache
I0312 23:58:16.752472       1 cache.go:55] Added pv "local-pv-33de4a22" to cache
I0312 23:58:16.752497       1 cache.go:55] Added pv "local-pv-4701521d" to cache
I0312 23:58:16.752518       1 cache.go:55] Added pv "local-pv-ffeb7e65" to cache
I0312 23:58:16.752538       1 cache.go:55] Added pv "local-pv-c6414297" to cache
I0312 23:58:16.752560       1 cache.go:55] Added pv "local-pv-46a238d0" to cache
I0312 23:58:16.752581       1 cache.go:55] Added pv "local-pv-7a25135f" to cache
I0312 23:58:16.752602       1 cache.go:55] Added pv "local-pv-a40c5b51" to cache
I0312 23:58:16.847715       1 controller.go:111] Controller started
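A few commonly combined logs options (all standard kubectl flags), sketched against the same pods:

```
# Stream new log lines as they are written
kubectl logs -f local-volume-provisioner-7nmrx -n kube-system

# Only show the last 20 lines
kubectl logs local-volume-provisioner-7nmrx -n kube-system --tail=20

# Logs of the previous container instance (useful after a restart/crash)
kubectl logs local-volume-provisioner-7nmrx -n kube-system --previous

# For multi-container pods such as tidb-scheduler, pick the container with -c
kubectl logs tidb-scheduler-5f5958d476-tmdjw -n tidb-admin -c tidb-scheduler
```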

10.kubectl port-forward <resource-type>/<resource-name> <local-port>:<resource-port> &

```
kubectl port-forward svc/tidb-cluster-grafana 3000:3000 --namespace=tidb --address 0.0.0.0 &
```
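port-forward also works against a single pod or a deployment, and the trailing & only puts it in the background; it keeps running until the process is killed. A sketch (the pod name tidb-cluster-grafana-0 is hypothetical):

```
# Forward a local port to one pod instead of a service
kubectl port-forward pod/tidb-cluster-grafana-0 3000:3000 -n tidb

# List background jobs in this shell and stop the forward started with &
jobs
kill %1
```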
