Reference:
Example:
# kubectl rollout status deploy/sise-deploy
deployment "sise-deploy" successfully rolled out
# kubectl rollout history deploy/sise-deploy
deployments "sise-deploy"
REVISION CHANGE-CAUSE
3 kubectl apply --filename=https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/deployments/d09.yaml --record=true
4 kubectl apply --filename=https://raw.githubusercontent.com/mhausenblas/kbe/master/specs/deployments/d10.yaml --record=true
# kubectl rollout undo deploy/sise-deploy --to-revision=3
deployment "sise-deploy"
# curl http://127.0.0.1:8080
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/apps/v1beta1",
"/apis/apps/v1beta2",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/ping",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger-ui/",
"/swagger.json",
"/swaggerapi",
"/ui",
"/ui/",
"/version"
]
}
Print the supported API versions on the server, in the form of "group/version"
# kubectl api-versions
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
certificates.k8s.io/v1beta1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
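The "group/version" list that kubectl prints can be derived from the discovery paths served at the API server root (shown in the curl output above). A minimal sketch in Python (the function name is illustrative; kubectl itself queries the discovery endpoints directly):

```python
import re

def api_versions(paths):
    """Derive "group/version" strings, as printed by `kubectl api-versions`,
    from the API server's root discovery paths.

    The core group appears as "/api/v1" and is reported as just "v1";
    named groups appear as "/apis/<group>/<version>".
    """
    versions = set()
    for p in paths:
        if re.fullmatch(r"/api/v\d+", p):
            versions.add(p.rsplit("/", 1)[1])           # core group: "v1"
        m = re.fullmatch(r"/apis/([^/]+)/([^/]+)", p)
        if m:
            versions.add(f"{m.group(1)}/{m.group(2)}")  # "<group>/<version>"
    return sorted(versions)

print(api_versions(["/api", "/api/v1", "/apis/apps", "/apis/apps/v1",
                    "/apis/batch/v1beta1"]))
# ['apps/v1', 'batch/v1beta1', 'v1']
```

Note that paths with only a group segment (e.g. "/apis/apps") name the group itself, not a servable version, so they are skipped.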
The only difference is this:
ReplicaSets support set-based label selectors, while Replication Controllers only support equality-based label selectors.
kubectl scale deployment frontend --replicas=5
kubectl create secret generic mysql-pass --from-file=password.txt
echo -n MTIzNDU2Cg== | base64 --decode
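Secret values are stored base64-encoded, which is an encoding, not encryption. The same round trip in Python (the trailing newline in the decoded value comes from creating password.txt with `echo` rather than `echo -n`):

```python
import base64

# Decode the value shown in the shell example above.
decoded = base64.b64decode("MTIzNDU2Cg==")
print(decoded)                                  # b'123456\n'

# Encoding it again recovers the stored form.
print(base64.b64encode(b"123456\n").decode())   # MTIzNDU2Cg==
```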
$ kubectl explain po.spec.containers.imagePullPolicy
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
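The defaulting rule described by `kubectl explain` above can be sketched as a small function (simplified: it ignores registry hosts with ports and image digests, and treats an untagged image as :latest, which is how Kubernetes interprets it):

```python
def default_image_pull_policy(image):
    """Default imagePullPolicy per the rule above: Always if the image
    uses the :latest tag (or no tag, which implies latest),
    IfNotPresent otherwise."""
    tag = image.rsplit(":", 1)[1] if ":" in image else "latest"
    return "Always" if tag == "latest" else "IfNotPresent"

print(default_image_pull_policy("nginx"))         # Always
print(default_image_pull_policy("nginx:latest"))  # Always
print(default_image_pull_policy("nginx:1.13"))    # IfNotPresent
```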
$ kubectl port-forward nginx22-677b7cc6fd-lv22z :80
Forwarding from 127.0.0.1:43067 -> 80
Handling connection for 43067
$ kubectl get nodes --selector hello=world
or
$ kubectl get nodes -l hello=world
NAME STATUS ROLES AGE VERSION
node1 Ready master 18m v1.8.4
$ kubectl label nodes node1 hello=world
node "node1" labeled
$ kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node1 Ready master 12m v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1,node-role.kubernetes.io/master=
node2 Ready <none> 10m v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2
node3 Ready <none> 10m v1.8.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node3
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready master 2h v1.8.4 <none> CentOS Linux 7 (Core) 4.4.0-101-generic docker://Unknown
node2 Ready <none> 2h v1.8.4 <none> CentOS Linux 7 (Core) 4.4.0-101-generic docker://Unknown
node3 Ready <none> 2h v1.8.4 <none> CentOS Linux 7 (Core) 4.4.0-101-generic docker://Unknown
$ kubectl describe node node1
Name: node1
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=node1
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: node-role.kubernetes.io/master:NoSchedule
CreationTimestamp: Wed, 07 Feb 2018 05:13:38 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Wed, 07 Feb 2018 05:20:59 +0000 Wed, 07 Feb 2018 05:13:34 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 07 Feb 2018 05:20:59 +0000 Wed, 07 Feb 2018 05:13:34 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 07 Feb 2018 05:20:59 +0000 Wed, 07 Feb 2018 05:13:34 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Wed, 07 Feb 2018 05:20:59 +0000 Wed, 07 Feb 2018 05:14:49 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.0.13
Hostname: node1
Capacity:
cpu: 8
memory: 32929700Ki
pods: 110
Allocatable:
cpu: 8
memory: 32827300Ki
pods: 110
System Info:
Machine ID: d52681308b2b4c01bb4fd2e886d33f66
System UUID: B7181B96-02B6-1C4F-AE59-FE7788B468C1
Boot ID: ceddd7f5-8286-47f5-9c82-7873b3d15f7b
Kernel Version: 4.4.0-101-generic
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.8.4
Kube-Proxy Version: v1.8.4
ExternalID: node1
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-node1 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-node1 250m (3%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-node1 200m (2%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-545bc4bfd4-7ww2g 260m (3%) 0 (0%) 110Mi (0%) 170Mi (0%)
kube-system kube-proxy-7pr7s 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-node1 100m (1%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-ztmpq 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
830m (10%) 0 (0%) 110Mi (0%) 170Mi (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientDisk 7m (x8 over 7m) kubelet, node1 Node node1 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 7m (x8 over 7m) kubelet, node1 Node node1 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m (x7 over 7m) kubelet, node1 Node node1 status is now: NodeHasNoDiskPressure
Normal Starting 6m kube-proxy, node1 Starting kube-proxy.