libvirt and ceph

Author: Arteezy_Xie | Published 2017-05-11 11:48

    libvirt offers three management interfaces:

    • Command line: virsh
    • Graphical: virt-manager
    • Web: webvirtmgr

    Command-line tool: virsh

    1. Verify that the host CPU supports KVM virtualization:

    egrep '(vmx|svm)' --color /proc/cpuinfo
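
    As a further sanity check, confirm the KVM kernel modules are loaded (kvm-ok from the cpu-checker package offers a similar check, if installed):

    lsmod | grep kvm # expect kvm plus kvm_intel (vmx) or kvm_amd (svm)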

    2. Install the libvirt-related packages:

    apt-get install -y qemu-kvm libvirt-bin virtinst

    3. Configure a bridged network interface:

    apt-get install bridge-utils
    vim /etc/network/interfaces:
    
    #allow-hotplug eth0
    #auto eth0
    #iface eth0 inet dhcp
    
    auto br0
    iface br0 inet static
    address 172.16.17.195
    netmask 255.255.254.0
    gateway 172.16.16.1
    dns-nameservers 172.16.0.9
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    
    systemctl disable NetworkManager
    systemctl stop NetworkManager
    /etc/init.d/networking restart
    

    (You may need to reboot the machine.)
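
    Afterwards, verify the bridge came up correctly (brctl is provided by the bridge-utils package installed above):

    brctl show # br0 should list eth0 under interfaces
    ip addr show br0 # br0 should carry the static address 172.16.17.195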

    4. Create a qcow2 disk image and a virtual machine:

    qemu-img create -f qcow2 /home/vhost/test01.img 10G # KVM disk images default to raw format; -f qcow2 overrides that
    virt-install --name=guest01 --ram 512 --vcpus=1 --disk path=/home/vhost/test01.img,size=10,bus=virtio --accelerate --cdrom /root/debian.iso --vnc --vncport=5920 --vnclisten=0.0.0.0 --network bridge=br0,model=virtio --noautoconsole
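
    While the installer runs, you can check the domain state and connect to the console (the VNC port matches the --vncport value above; HOST-IP stands for your host's address):

    virsh list --all # guest01 should be listed as running
    # point a VNC client at HOST-IP:5920 to complete the OS installation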
    

    5. Managing guests with virsh:

    The following are common uses of the virsh command.

    To create a new guest domain and start a VM:
    $ virsh create alice.xml

    To stop a VM and destroy a guest domain:
    $ virsh destroy alice

    To shut down a VM (without destroying the domain):
    $ virsh shutdown alice

    To suspend a VM:
    $ virsh suspend alice

    To resume a suspended VM:
    $ virsh resume alice

    To access login console of a running VM:
    $ virsh console alice

    To autostart a VM upon host booting:
    $ virsh autostart alice

    To get domain information of a VM:
    $ virsh dominfo alice

    To edit domain XML of a VM:
    $ virsh edit alice
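
    To list all guest domains, running and shut off:
    $ virsh list --all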

    Reference: http://xmodulo.com/use-kvm-command-line-debian-ubuntu.html

    Graphical management tool: virt-manager

    apt-get install virt-manager

    (Details omitted.)

    Web management tool: webvirtmgr

    1. Installation:

    apt-get install git python-pip python-libvirt python-libxml2 novnc supervisor nginx
    

    2. Clone the code and set up the Django environment (remember to use the Douban PyPI mirror if you are in China):

    git clone git://github.com/retspen/webvirtmgr.git
    cd webvirtmgr
    pip install -r requirements.txt -i http://pypi.douban.com/simple/
    ./manage.py syncdb # prompts you to create an admin user; save the credentials
    ./manage.py collectstatic
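
    Before putting nginx in front, you can smoke-test the app with Django's built-in development server (for a quick check only, not for production):

    ./manage.py runserver 0.0.0.0:8000 # then browse to http://HOST-IP:8000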
    

    3. Configure the nginx reverse proxy:

    cd ..
    mv webvirtmgr /var/www/
    

    Add the file webvirtmgr.conf under /etc/nginx/conf.d:

    server {
        listen 80 default_server;
        server_name $hostname;
        #access_log /var/log/nginx/webvirtmgr_access_log; 
    
        location /static/ {
            root /var/www/webvirtmgr/webvirtmgr; # or /srv instead of /var
            expires max;
        }
    
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-for $proxy_add_x_forwarded_for;
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_connect_timeout 600;
            proxy_read_timeout 600;
            proxy_send_timeout 600;
            client_max_body_size 1024M; # Set higher depending on your needs 
        }
    }
    
    cd /etc/nginx/sites-available
    mv default default.bak
    chown -R www-data:www-data /var/www/webvirtmgr
    /etc/init.d/nginx restart
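
    nginx's built-in self-check is useful before (re)starting it:

    nginx -t # should report that the configuration test is successful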
    

    4. Configure supervisor:

    Create /etc/insserv/overrides/novnc so the distro's novnc init script no longer starts at boot (supervisor will manage the console proxy instead):

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          nova-novncproxy
    # Required-Start:    $network $local_fs $remote_fs $syslog
    # Required-Stop:     $remote_fs
    # Default-Start:     
    # Default-Stop:      
    # Short-Description: Nova NoVNC proxy
    # Description:       Nova NoVNC proxy
    ### END INIT INFO
    

    Add the file webvirtmgr.conf under /etc/supervisor/conf.d:

    [program:webvirtmgr]
    command=/usr/bin/python /var/www/webvirtmgr/manage.py run_gunicorn -c /var/www/webvirtmgr/conf/gunicorn.conf.py
    directory=/var/www/webvirtmgr
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/supervisor/webvirtmgr.log
    redirect_stderr=true
    user=www-data
    
    [program:webvirtmgr-console]
    command=/usr/bin/python /var/www/webvirtmgr/console/webvirtmgr-console
    directory=/var/www/webvirtmgr
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/supervisor/webvirtmgr-console.log
    redirect_stderr=true
    user=www-data
    
    /etc/init.d/supervisor restart
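
    After the restart, confirm that both programs came up (supervisorctl ships with supervisor):

    supervisorctl status # webvirtmgr and webvirtmgr-console should be RUNNING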
    

    5. Updating:

    cd /var/www/webvirtmgr
    git pull
    ./manage.py collectstatic
    /etc/init.d/supervisor restart    
    

    6. Set up SSH authentication so nginx's www-data user can SSH to the libvirt server as user webvirtmgr without a password:

    • Switch to nginx's user www-data (on the system where WebVirtMgr is installed):

    su - www-data -s /bin/bash

    • Create the .ssh configuration for www-data:
    sudo mkdir /var/www/.ssh
    sudo chmod 700 /var/www/.ssh
    sudo vim /var/www/.ssh/config
    StrictHostKeyChecking=no
    UserKnownHostsFile=/dev/null 
    
    • Create SSH public key:
    sudo ssh-keygen
    Enter file in which to save the key (/root/.ssh/id_rsa): /var/www/.ssh/id_rsa
    
    • Change owner and permissions for the folder /var/www/.ssh:
    sudo chmod -R 0600 /var/www/.ssh/config
    sudo chown -R www-data:www-data /var/www/.ssh
    
    • Add a webvirtmgr user (on the qemu-kvm/libvirt host server) and add it to the proper group:
    useradd webvirtmgr
    passwd webvirtmgr
    usermod -G libvirt-qemu -a webvirtmgr
    
    • Create the .ssh directory for user webvirtmgr (www-data's public key will be copied into it):
    mkdir /home/webvirtmgr/.ssh
    chmod 700 /home/webvirtmgr/.ssh
    
    • Back on the webvirtmgr host, copy the public key to the qemu-kvm/libvirt host server:
    su - www-data -s /bin/bash
    ssh-copy-id webvirtmgr@qemu-kvm-libvirt-host
    
    • On qemu-kvm-libvirt-host:
    chmod 0600 /home/webvirtmgr/.ssh/authorized_keys
    chown -R webvirtmgr:webvirtmgr /home/webvirtmgr/.ssh
    
    • You should connect without entering a password:

    ssh webvirtmgr@qemu-kvm-libvirt-host
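
    You can also test the full libvirt-over-SSH path that webvirtmgr will use (qemu-kvm-libvirt-host is the placeholder hostname from above; this only succeeds once the polkit rule from the next step is in place):

    virsh -c qemu+ssh://webvirtmgr@qemu-kvm-libvirt-host/system list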

    • Set up permissions to manage libvirt:

      Create file /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla (permissions for user webvirtmgr):

    [Remote libvirt SSH access]
    Identity=unix-user:webvirtmgr
    Action=org.libvirt.unix.manage
    ResultAny=yes
    ResultInactive=yes
    ResultActive=yes
    

    /etc/init.d/libvirtd restart

    • When connecting to a VM console over SSH, the web page disconnects automatically after about 20 seconds; a temporary workaround found on GitHub:
    vim /usr/lib/python2.7/dist-packages/websockify/websocket.py
    # comment out the following block:
    if not multiprocessing:
        # os.fork() (python 2.4) child reaper
        signal.signal(signal.SIGCHLD, self.fallback_SIGCHLD)
    else:
        # make sure that _cleanup is called when children die
        # by calling active_children on SIGCHLD
        signal.signal(signal.SIGCHLD, self.multiprocessing_SIGCHLD)
    

    7. Web management interface login configuration:

    (Screenshots: SSH connection to the guest; VM instance.)

    GitHub reference: https://github.com/retspen/webvirtmgr/wiki/Install-WebVirtMgr

    Quick Ceph block device deployment:

    1. Set up passwordless root login from the admin node to the other nodes:

    • Generate a key pair with ssh-keygen (the public key ends up in ~/.ssh/id_rsa.pub) and copy id_rsa.pub into /root/.ssh/authorized_keys on every node (a one-line sketch follows this list).
    • Configure /etc/hosts on all nodes so they can resolve each other:
    172.16.1.10 node1
    172.16.1.20 node2
    172.16.1.30 node3
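
    A minimal sketch of distributing the key (assuming the /etc/hosts entries above resolve and root SSH login is allowed on the nodes):

    ssh-keygen -t rsa
    for node in node1 node2 node3; do ssh-copy-id root@$node; done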
    

    2. Switch to a local (in-country) mirror and synchronize the system clocks:

    3. Add the Ceph repository and install ceph-deploy:

    • Add the release key:

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

    • Add the official Ceph repository:

    echo deb http://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    (for example: deb https://download.ceph.com/debian-jewel/ jessie main)

    • Update the package lists and install:

    apt-get update && apt-get install ceph-deploy

    4. Create a directory to hold all of Ceph's configuration files:

    mkdir /cluster
    cd /cluster
    

    5. Create the cluster:

    ceph-deploy new node1

    node1 is the monitor node; running this command generates the Ceph configuration file, the monitor keyring, and a log file.

    6. Adjust the default replication parameter:

    echo "osd pool default size = 2" >> /cluster/ceph.conf

    We currently have only two OSD nodes, while the default number of replicas is 3, so we set it to 2. If you have more than two OSD nodes, skip this step.

    7. Configure the NICs and network:

    If you have multiple NICs, you can add a public network setting under the [global] section of the Ceph configuration file, for example:
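
    A sketch of the setting in /cluster/ceph.conf (substitute your own subnet):

    [global]
    public network = 172.16.1.0/24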

    8. Install Ceph:

    ceph-deploy install node1 node2 node3

    9. Deploy the initial monitor and gather all the keys:

    ceph-deploy mon create-initial

    10. Configure the OSD nodes:

    • Zap (format) the OSD node disks:
    ceph-deploy disk zap node2:vdb
    ceph-deploy disk zap node3:vdb
    
    • The step above erases all data on the disks. Next, create the OSDs. Since this is only a test we do not use a separate disk for the journal; in production you should put the journal on an SSD partition to maximize I/O throughput.
    ceph-deploy osd create node2:vdb
    ceph-deploy osd create node3:vdb
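
    Once created, check that the OSDs registered (run this on a node that has the admin keyring, see step 11):

    ceph osd tree # both OSDs should appear and eventually report up/in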
    

    11. Configure the admin node:

    ceph-deploy admin node1 node2 node3
    chmod +r /etc/ceph/ceph.client.admin.keyring # make sure the keyring is readable

    12. Check the health of the cluster:

    ceph health

    Official documentation: http://docs.ceph.org.cn/start/quick-rbd/

    RBD quick start (using an NFS share as the example):

    1. On the admin node, use ceph-deploy to install Ceph on the ceph-client node:

    ceph-deploy install ceph-client

    2. On the admin node, use ceph-deploy to copy the Ceph configuration file and ceph.client.admin.keyring to ceph-client:

    ceph-deploy admin ceph-client

    The ceph-deploy tool copies the keyring into the client's /etc/ceph directory; make sure the keyring file is readable (e.g. chmod +r /etc/ceph/ceph.client.admin.keyring).

    3. Create an RBD block device on ceph-client:

    ceph osd pool create nfs-pool 128 128
    rbd create nfs-pool/share1 --size 2048
    rbd map nfs-pool/share1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
    rbd showmapped
    mkfs.ext4 -m0 /dev/rbd/nfs-pool/share1
    mkdir /mnt/nfs-share
    mount -t ext4 /dev/rbd/nfs-pool/share1 /mnt/nfs-share/
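
    Note that the mapping above does not survive a reboot. One way to persist it is the rbdmap service shipped with the Ceph packages (a sketch; adjust the keyring path as needed):

    echo "nfs-pool/share1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
    # then mount from /etc/fstab with the _netdev option:
    # /dev/rbd/nfs-pool/share1 /mnt/nfs-share ext4 defaults,noatime,_netdev 0 0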
    

    4. NFS server configuration:

    apt-get install -y nfs-kernel-server
    vim /etc/exports 
    
    /mnt/nfs-share 172.16.*.*(rw,no_root_squash,no_all_squash,sync)
    
    /etc/init.d/nfs-kernel-server restart
    /etc/init.d/nfs-common restart
    /etc/init.d/rpcbind restart
    showmount -e localhost
    

    5. Mount from the NFS client:

    mkdir /nfs-test
    showmount -e NFS-SERVER-IP
    mount -t nfs NFS-SERVER-IP:/mnt/nfs-share /nfs-test/
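
    To make the client mount persistent, an /etc/fstab entry along these lines works (NFS-SERVER-IP is the placeholder from above):

    NFS-SERVER-IP:/mnt/nfs-share /nfs-test nfs defaults,_netdev 0 0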
    

    Reference: https://ztjlovejava.github.io/2015/04/02/rbd-nfs/

    Using Ceph RBD through libvirt

    1. Create the storage pool libvirt-pool with 128 placement groups.

    ceph osd pool create libvirt-pool 128 128
    ceph osd lspools
    

    2. Create the Ceph user client.libvirt, with permissions restricted to libvirt-pool.

    ceph auth get-or-create client.libvirt mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool'
    ceph auth list
    

    3. Use QEMU to create the image image01 in the RBD pool libvirt-pool.

    qemu-img create -f rbd rbd:libvirt-pool/image01 10G
    

    Or create the image with the rbd tool:

    rbd create libvirt-pool/image02 --size 10240 [--object-size 8M]
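
    Either way, you can inspect the result with the standard rbd subcommands:

    rbd ls libvirt-pool
    rbd info libvirt-pool/image01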
    

    4. Configure the VM.

    virsh edit guest01
    

    Under <devices> there should be a <disk> entry like this:

    <disk type='file' device='disk'>
        <driver name='qemu' type='qcow2'/>
        <source file='/home/vhost/test.img'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    

    Replace it with a <disk> entry that points at the RBD image you created:

        <disk type='network' device='disk'>
            <driver name='qemu' type='raw'/>
            <source protocol='rbd' name='libvirt-pool/image01'>
                <host name='mon1' port='6789'/>
            </source>
            <target dev='vda' bus='virtio'/>
        </disk>
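
    After saving the XML, you can confirm the attachment (if cephx authentication is enabled, the guest will only start once the secret from step 5 below is configured):

    virsh domblklist guest01 # vda should point at libvirt-pool/image01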
    

    5. If your Ceph storage cluster has Ceph authentication enabled (it is enabled by default), you must generate a secret and add it to the configuration file.

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
            <usage type='ceph'>
                    <name>client.libvirt secret</name>
            </usage>
    </secret>
    EOF
    
    virsh secret-define --file secret.xml
    <uuid of secret is output here>
    
    ceph auth get-key client.libvirt | sudo tee client.libvirt.key
    virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
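
    To confirm that libvirt stored the secret:

    virsh secret-list
    virsh secret-get-value {uuid of secret}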
    

    Add it to the domain configuration file:

    ...
    </source>
    <auth username='libvirt'>
            <secret type='ceph' uuid='9ec59067-fdbc-a6c0-03ff-df165c0587b8'/>
    </auth>
    <target ...
    

    6. Attach the Ceph block device in webvirtmgr.

    (Screenshots: creating the Ceph storage pool; the Ceph storage pool; attaching the VM to Ceph.)

    Official Chinese reference: http://docs.ceph.org.cn/rbd/libvirt/


    That's all.
