Deploying Kubernetes 1.14.2 with kubeadm

Author: Dmego | Published 2020-07-14 20:59

    I had already deployed Kubernetes once while experimenting with Spring Cloud Data Flow, but that cluster was built with minikube. Having recently been working through the Geek Time column 《深入剖析 Kubernetes》, I followed the column's steps and a few online tutorials to build a 1.14.2 cluster.

    Environment Planning

    Since I did not have enough cloud hosts, I built the cluster locally with virtual machines. The configuration of each node is as follows:

    IP Address    OS Version   Memory   Role
    10.211.55.8   CentOS 7.6   4GB      Master (control-plane node)
    10.211.55.9   CentOS 7.6   4GB      Worker node

    The software and versions to install on every node are as follows:

    Software   Version    Purpose
    Docker     18.09.7    Creates and pulls containers
    kubeadm    1.14.2-0   Bootstraps the Kubernetes cluster
    kubelet    1.14.2-0   Runs on every node; starts containers and Pods
    kubectl    1.14.2-0   Kubernetes command-line tool for interacting with the cluster, e.g. deploying applications

    Environment Preparation

    I will not cover installing the two CentOS 7.6 machines with VMware or another hypervisor here. The steps below are the prerequisites for bootstrapping Kubernetes with kubeadm.

    System Configuration

    To quickly tell which host plays which role later on, give each host a hostname that reflects its node role:

    # Set the hostname on the 10.211.55.8 master node
    hostnamectl --static set-hostname master
    # Set the hostname on the 10.211.55.9 worker node
    hostnamectl --static set-hostname worker
    

    Also add name resolution entries to /etc/hosts on every host:

    vim /etc/hosts
    # Add entries like the following, adjusting the IPs to your environment
    
    10.211.55.8 master
    10.211.55.9 worker
    

    If the firewall is enabled on any host, the ports required by the Kubernetes components must be opened. Since this is a test environment, simply disabling the firewall on every node is enough (a sketch of opening only the required ports instead follows the commands below):

    # Disable the firewall service at boot and stop it now
    systemctl disable firewalld.service
    systemctl stop firewalld.service
    
    # Flush any remaining iptables rules immediately
    iptables -F
    
    # Afterwards, check the firewall status with
    systemctl status firewalld
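
    If you would rather keep firewalld running, an alternative is to open only the ports Kubernetes needs. A minimal sketch using the standard control-plane and worker ports (adjust to your setup):

    # Master node: API server, etcd, kubelet, scheduler, controller-manager
    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=2379-2380/tcp
    firewall-cmd --permanent --add-port=10250-10252/tcp
    # Worker nodes: kubelet and the NodePort service range
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=30000-32767/tcp
    firewall-cmd --reload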
    

    Disable SELinux (a Linux kernel module that provides a security subsystem):

    # Put SELinux into permissive mode (no reboot required)
    setenforce 0
    
    # Edit the config file (takes effect after a reboot)
    vim /etc/selinux/config
    SELINUX=disabled
    

    Disable the swap partition

    swapoff -a
    

    Edit /etc/fstab and comment out the swap auto-mount entry, then use free -m to confirm swap is off.
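
    For example, the swap entry can be commented out in one step with sed (a sketch; it comments out every line in /etc/fstab that mentions swap):

    sed -ri 's/.*swap.*/#&/' /etc/fstab
    free -m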

    Create the file /etc/sysctl.d/k8s.conf with the following content:

    # Let bridged traffic pass through iptables and enable IP forwarding
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    # Set swappiness to 0
    vm.swappiness=0
    

    Run the following commands to apply the changes:

    modprobe br_netfilter
    sysctl -p /etc/sysctl.d/k8s.conf
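
    Note that modprobe only loads br_netfilter for the current boot. To have the module loaded automatically after a reboot, one option (a sketch) is a modules-load.d entry:

    echo "br_netfilter" > /etc/modules-load.d/k8s.conf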
    

    Software Installation

    Install Docker. The steps below follow the official Docker documentation; refer to it for more details.

    • If Docker was installed before, remove the old versions

      sudo yum remove docker \
                        docker-client \
                        docker-client-latest \
                        docker-common \
                        docker-latest \
                        docker-latest-logrotate \
                        docker-logrotate \
                        docker-engine
      
    • Install the required packages

      sudo yum install -y yum-utils \
        device-mapper-persistent-data \
        lvm2
      
    • Set up the stable repository

      sudo yum-config-manager \
          --add-repo \
          https://download.docker.com/linux/centos/docker-ce.repo
      
    • The highest Docker version supported by Kubernetes 1.14 is 18.09, so we install Docker 18.09.7

      sudo yum install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
      
    • Start Docker and enable it at boot

      systemctl start docker
      systemctl enable docker
      
    • Change the Docker cgroup driver to systemd

      vim /etc/docker/daemon.json
      # Add the following content
      {
        "exec-opts": ["native.cgroupdriver=systemd"]
      }
      
    • Restart Docker (a quick check of the cgroup driver follows)

      systemctl restart docker
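
      After the restart, a quick check confirms that Docker picked up the systemd cgroup driver:

      # Should print "Cgroup Driver: systemd"
      docker info | grep -i cgroup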
      

    Install the kubeadm, kubelet, and kubectl components

    • Before installing, add the yum repository:

      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      
    • Then install version 1.14.2 of each component (a note on enabling kubelet follows the command)

      yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
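
      kubeadm starts kubelet itself during init/join, but it is worth enabling the service so it comes back after a reboot:

      systemctl enable kubelet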
      

    Pulling the Images

    Many of the Kubernetes images cannot be downloaded directly, so we need to fetch everything the cluster needs ahead of time. Fortunately, Alibaba Cloud mirrors these images; just run the commands below (a scripted version of the pull/tag/remove steps is sketched after the image listing).

    • Pull the required images

      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
      docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
      
    • Re-tag the images with their expected names

      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
      docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
      
    • Remove the originally pulled images, keeping only the re-tagged ones

      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2           
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2  
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2          
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2               
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1                        
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10                      
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
      docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
      docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
      

      After the steps above, the docker images command shows all of the images:

      [root@Node1 ~]# docker images
      REPOSITORY                              TAG                 IMAGE ID            CREATED             SIZE
      k8s.gcr.io/kube-proxy                   v1.14.2             5c24210246bb        10 months ago       82.1MB
      k8s.gcr.io/kube-apiserver               v1.14.2             5eeff402b659        10 months ago       210MB
      k8s.gcr.io/kube-controller-manager      v1.14.2             8be94bdae139        10 months ago       158MB
      k8s.gcr.io/kube-scheduler               v1.14.2             ee18f350636d        10 months ago       81.6MB
      k8s.gcr.io/coredns                      1.3.1               eb516548c180        14 months ago       40.3MB
      k8s.gcr.io/kubernetes-dashboard-amd64   v1.10.1             f9aed6605b81        15 months ago       122MB
      k8s.gcr.io/etcd                         3.3.10              2c4adeb21b4f        16 months ago       258MB
      quay.io/coreos/flannel                  v0.10.0-amd64       f0fad859c909        2 years ago         44.6MB
      k8s.gcr.io/pause     
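
    The pull, tag, and remove steps above can also be scripted. A minimal sketch for the images that end up under k8s.gcr.io (flannel, which is tagged into quay.io, would still be handled separately):

    #!/bin/bash
    # Pull each image from the Alibaba Cloud mirror, re-tag it as k8s.gcr.io, then drop the mirror tag
    MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
    images=(
      kube-apiserver:v1.14.2
      kube-controller-manager:v1.14.2
      kube-scheduler:v1.14.2
      kube-proxy:v1.14.2
      pause:3.1
      etcd:3.3.10
      coredns:1.3.1
      kubernetes-dashboard-amd64:v1.10.1
    )
    for img in "${images[@]}"; do
      docker pull "$MIRROR/$img"
      docker tag "$MIRROR/$img" "k8s.gcr.io/$img"
      docker rmi "$MIRROR/$img"
    done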
      

    Installing and Configuring the Kubernetes Cluster

    Initializing the Master Node

    Initialize Kubernetes with the following command:

    # --kubernetes-version=v1.14.2 specifies the Kubernetes version to install
    # --apiserver-advertise-address sets the advertise address (the IP) of the Master node
    # --pod-network-cidr sets the Pod network CIDR; here we use the flannel scheme (https://github.com/coreos/flannel/blob/master/Documentation/kubernetes.md)
    kubeadm init --kubernetes-version=v1.14.2 --apiserver-advertise-address 10.211.55.8 --pod-network-cidr=10.244.0.0/16
    

    The command produces output like the following:

    [init] Using Kubernetes version: v1.14.2
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.211.55.8 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.211.55.8 127.0.0.1 ::1]
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.8]
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 16.501690 seconds
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --experimental-upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: y6awgp.6bvxt8l3rie2du5s
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.211.55.8:6443 --token br6a7j.79aimgudzio73jy5 \
        --discovery-token-ca-cert-hash sha256:0623e22780c5a25138208fc417f874a0c70ca28543acf52be52ee445ec0c1dd3 
    

    Configure kubectl with the following commands:

    # As root, export the kubeconfig environment variable
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    # Restart kubelet
    systemctl restart kubelet
    

    Adding the kubeconfig for connecting to the cluster

    As printed at the end of kubeadm init, run the following commands (a quick connectivity check follows):

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
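
    A quick check that kubectl can now reach the cluster:

    kubectl cluster-info
    kubectl get componentstatuses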
    

    Installing the flannel network plugin

    Install it with the commands below. If wget fails to download kube-flannel.yml, download the file in a browser first and then apply it.

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    kubectl apply -f kube-flannel.yml
    

    After installation, kubectl get pods -o wide --all-namespaces shows all Pods in the cluster, and kubectl get node shows the nodes, for example:

    [root@Node1 ~]# kubectl get node
    NAME    STATUS   ROLES    AGE    VERSION
    master   Ready    master   7d1h   v1.14.2
    

    Joining Worker Nodes to the Cluster

    Note: the Worker node must first complete all of the preparation steps above (system configuration, software installation, and pulling the images).

    The join command for other nodes is printed at the end of cluster initialization. If it has been lost, run the following on the Master node to generate a new one:

    kubeadm token create --print-join-command
    

    It prints a join command similar to the following:

    kubeadm join 10.211.55.8:6443 --token e6gc7z.t52g39w7mxww18gn \
        --discovery-token-ca-cert-hash sha256:603fbd109caf000c0cffe286c7b2eeebaf88d0540e1ea226d7f1b239d0695f1e
    

    We can then run that command on the Worker node to join it to the cluster:

    # General form: kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
    
    kubeadm join 10.211.55.8:6443 --token e6gc7z.t52g39w7mxww18gn \
        --discovery-token-ca-cert-hash sha256:603fbd109caf000c0cffe286c7b2eeebaf88d0540e1ea226d7f1b239d0695f1e
    

    Joining produces output like the following:

    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    After joining, use kubectl get node -o wide to view the status of the cluster nodes:

    [root@Node1 ~]# kubectl get node -o wide
    NAME    STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
    node1   Ready    master   7d1h   v1.14.2   10.211.55.8   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://18.9.7
    node2   Ready    <none>   7d1h   v1.14.2   10.211.55.9   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://18.9.7
    

    At this point, a two-node Kubernetes cluster is up and running.

    Resetting or Deleting the Cluster

    To reset or tear down the cluster, the following commands can be combined (a fuller cleanup sketch follows the list):

    • Remove a worker node

      # List all nodes in the cluster
      kubectl get nodes
      
      # Drain and delete a worker node; <node name> is the node's name
      kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
      kubectl delete node <node name>
      
    • Reset a node

      # Both the Master and worker nodes can be reset with this command
      kubeadm reset
      
      # To also remove the local kubeconfig directory:
      rm -rf $HOME/.kube/
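
    kubeadm reset warns that it does not clean up iptables rules, and depending on the version it may also leave the CNI configuration behind. For a more thorough cleanup, something along these lines can be run as root (a sketch):

    # Remove the flannel/CNI configuration and flush leftover iptables rules
    rm -rf /etc/cni/net.d
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X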
      

    Installing and Configuring the Dashboard

    Installing the Dashboard

    Kubernetes Dashboard is a web UI for managing the cluster. Its image was already pulled in the image-pulling step; now install the Dashboard into the cluster.

    First download the YAML manifest:

    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    

    To reach the Dashboard from outside the cluster, edit the downloaded file (vim kubernetes-dashboard.yaml) and change the Dashboard Service to type NodePort, as shown below:

    # ------------------- Dashboard Service ------------------- #
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 31234
      selector:
        k8s-app: kubernetes-dashboard
      type: NodePort
    

    After saving and exiting, create the kubernetes-dashboard with:

    kubectl apply -f kubernetes-dashboard.yaml
    

    Once created, check the Dashboard's status and the Service port:

    [root@Node1 ~]# kubectl get service -n kube-system
    NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
    kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7d2h
    kubernetes-dashboard   NodePort    10.102.72.239   <none>        443:31234/TCP            7d
    [root@Node1 ~]# kubectl get pods -n kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-fb8b8dccf-pdq9j                 1/1     Running   1          7d2h
    coredns-fb8b8dccf-pj6lk                 1/1     Running   1          7d2h
    etcd-node1                              1/1     Running   1          7d2h
    kube-apiserver-node1                    1/1     Running   1          7d2h
    kube-controller-manager-node1           1/1     Running   2          7d2h
    kube-flannel-ds-amd64-jtwgg             1/1     Running   1          7d1h
    kube-flannel-ds-amd64-vl5hn             1/1     Running   1          7d2h
    kube-proxy-22746                        1/1     Running   1          7d2h
    kube-proxy-4tx62                        1/1     Running   1          7d1h
    kube-scheduler-node1                    1/1     Running   3          7d2h
    kubernetes-dashboard-5f7b999d65-59k42   1/1     Running   1          7d
    

    Configuring the Dashboard

    To access the Dashboard in a browser, an HTTPS certificate is required. First generate a private key and a certificate signing request (a non-interactive variant is sketched after these commands):

    openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
    openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
    rm dashboard.pass.key
    # For the next command, press Enter through all of the prompts
    openssl req -new -key dashboard.key -out dashboard.csr
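
    If you prefer to skip the interactive prompts, the CSR can also be generated non-interactively by passing a subject on the command line (a sketch; the CN value is only an illustration):

    openssl req -new -key dashboard.key -out dashboard.csr -subj "/CN=kubernetes-dashboard"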
    

    Then generate a self-signed SSL certificate:

    openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
    

    On every node, create a directory to hold the certificates:

    mkdir -p /var/share/certs
    

    Then, back on the Master node, copy the generated dashboard.key and dashboard.crt into /var/share/certs on every node.
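
    For example, copying the files locally and to the Worker node might look like this (a sketch assuming root SSH access to the host named worker):

    cp dashboard.key dashboard.crt /var/share/certs/
    scp dashboard.key dashboard.crt root@worker:/var/share/certs/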

    Create a file named admin-token.yaml with the following content and save it:

    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: adm
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: admin
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin
      namespace: kube-system
      labels:
        kubernetes.io/cluster-service: "true"
        addonmanager.kubernetes.io/mode: Reconcile
    

    After running kubectl create -f admin-token.yaml, the token for logging in to the Dashboard can be retrieved with the following command (an alternative using kubectl describe is sketched afterwards):

    kubectl get secret $(kubectl get secret -n kube-system|grep admin-token|awk '{print $1}') -n kube-system -o jsonpath={.data.token}|base64 -d |xargs echo
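
    An alternative that prints the token together with the rest of the secret is kubectl describe:

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}')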
    

    Open https://IP:31234 in a browser (where IP is any node's address), choose token-based login, and paste the token printed by the command above.

