Using the Ceph vstart Virtual Environment

Author: IvanGuan | Published 2019-01-18 20:27

    Introduction

     Although the Ceph project provides official deployment documentation, setting up a cluster still takes a fair amount of time, and for small experiments such as feature verification or bug-fix validation there is no real need to deploy a brand-new cluster; doing so wastes both time and company resources (rich companies excepted). The Ceph community offers a very friendly, fast answer to this problem: with vstart we can quickly create a virtual environment that behaves almost like a real one. This article shows how to use it.

    Running vstart

     My previous article, Ceph 编译构建rpm包 (Building Ceph RPM Packages), already covered how to compile the Ceph source code; first build the source with cmake as described there.
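     As a quick reminder, a typical build from the source tree looks roughly like this (the parallel job count is only an example; see the previous article for the full procedure):

    [root@localhost ceph-12.2.10]# ./do_cmake.sh    # generate the build/ directory with CMake
    [root@localhost ceph-12.2.10]# cd build
    [root@localhost build]# make -j8                # adjust -j to the available CPU cores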
    Let's start by looking at the vstart usage in Luminous v12.2.10:

    usage: ../src/vstart.sh [option]...
    ex: ../src/vstart.sh -n -d --mon_num 3 --osd_num 3 --mds_num 1 --rgw_num 1
    options:
        -d, --debug
        -s, --standby_mds: Generate standby-replay MDS for each active
        -l, --localhost: use localhost instead of hostname
        -i <ip>: bind to specific ip
        -n, --new
        -N, --not-new: reuse existing cluster config (default)
        --valgrind[_{osd,mds,mon,rgw}] 'toolname args...'
        --nodaemon: use ceph-run as wrapper for mon/osd/mds
        --smallmds: limit mds cache size
        -m ip:port      specify monitor address
        -k keep old configuration files
        -x enable cephx (on by default)
        -X disable cephx
        --hitset <pool> <hit_set_type>: enable hitset tracking
        -e : create an erasure pool
        -o config        add extra config parameters to all sections
        --mon_num specify ceph monitor count
        --osd_num specify ceph osd count
        --mds_num specify ceph mds count
        --rgw_num specify ceph rgw count
        --mgr_num specify ceph mgr count
        --rgw_port specify ceph rgw http listen port
        --rgw_frontend specify the rgw frontend configuration
        --rgw_compression specify the rgw compression plugin
        -b, --bluestore use bluestore as the osd objectstore backend
        --memstore use memstore as the osd objectstore backend
        --cache <pool>: enable cache tiering on pool
        --short: short object names only; necessary for ext4 dev
        --nolockdep disable lockdep
        --multimds <count> allow multimds with maximum active count
    

    Creating a new cluster

    With the command below, vstart creates a cluster with 3 monitors, 3 MDS daemons (1 active, 2 standby), and 3 OSDs (3 replicas).

    [root@localhost build]#  sh ../src/vstart.sh -n -d
    
    ** going verbose **
    rm -f core*
    hostname localhost
    ip 192.168.12.200
    port 40385
    /var/ws/ceph-12.2.10/build/bin/ceph-authtool --create-keyring --gen-key --name=mon. /var/ws/ceph-12.2.10/build/keyring --cap mon allow *
    creating /var/ws/ceph-12.2.10/build/keyring
    /var/ws/ceph-12.2.10/build/bin/ceph-authtool --gen-key --name=client.admin --set-uid=0 --cap mon allow * --cap osd allow * --cap mds allow * --cap mgr allow * /var/ws/ceph-12.2.10/build/keyring
    /var/ws/ceph-12.2.10/build/bin/ceph-authtool --gen-key --name=client.rgw --cap mon allow rw --cap osd allow rwx --cap mgr allow rw /var/ws/ceph-12.2.10/build/keyring
    /var/ws/ceph-12.2.10/build/bin/monmaptool --create --clobber --add a 192.168.12.200:40385 --add b 192.168.12.200:40386 --add c 192.168.12.200:40387 --print /tmp/ceph_monmap.31812
    /var/ws/ceph-12.2.10/build/bin/monmaptool: monmap file /tmp/ceph_monmap.31812
    /var/ws/ceph-12.2.10/build/bin/monmaptool: generated fsid 053bf1c1-5bea-466f-bad8-18de4e7f18cf
    epoch 0
    fsid 053bf1c1-5bea-466f-bad8-18de4e7f18cf
    last_changed 2019-01-18 11:22:57.763781
    created 2019-01-18 11:22:57.763781
    0: 192.168.12.200:40385/0 mon.a
    1: 192.168.12.200:40386/0 mon.b
    2: 192.168.12.200:40387/0 mon.c
    ...
    
    /var/ws/ceph-12.2.10/build/bin/ceph-authtool --create-keyring --gen-key --name=mds.c /var/ws/ceph-12.2.10/build/dev/mds.c/keyring
    creating /var/ws/ceph-12.2.10/build/dev/mds.c/keyring
    /var/ws/ceph-12.2.10/build/bin/ceph -c /var/ws/ceph-12.2.10/build/ceph.conf -k /var/ws/ceph-12.2.10/build/keyring -i /var/ws/ceph-12.2.10/build/dev/mds.c/keyring auth add mds.c mon allow profile mds osd allow * mds allow mgr allow profile mds
    added key for mds.c
    /var/ws/ceph-12.2.10/build/bin/ceph-mds -i c -c /var/ws/ceph-12.2.10/build/ceph.conf
    starting mds.c at -
    started.  stop.sh to stop.  see out/ (e.g. 'tail -f out/????') for debug output.
    
    dashboard urls: http://192.168.12.200:41385/
      restful urls: https://192.168.12.200:42385
      w/ user/pass: admin / 84eb0e02-0034-4809-aa81-0c21a468180d
    
    export PYTHONPATH=./pybind:/var/ws/ceph-12.2.10/src/pybind:/var/ws/ceph-12.2.10/build/lib/cython_modules/lib.2:
    export LD_LIBRARY_PATH=/var/ws/ceph-12.2.10/build/lib
    CEPH_DEV=1
    
    • Check cluster status
    [root@localhost build]# ./bin/ceph -s
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:07:02.847675 7fcc629b4700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:07:02.907931 7fcc629b4700 -1 WARNING: all dangerous and experimental features are enabled.
      cluster:
        id:     32d792fd-1035-4ad5-a237-0d4851efd5cb
        health: HEALTH_WARN
                no active mgr
    
      services:
        mon: 3 daemons, quorum a,b,c
        mgr: no daemons active
        mds: cephfs_a-1/1/1 up  {0=a=up:active}, 2 up:standby
        osd: 3 osds: 3 up, 3 in
    
      data:
        pools:   2 pools, 16 pgs
        objects: 21 objects, 2.19KiB
        usage:   1.99TiB used, 363GiB / 2.34TiB avail
        pgs:     16 active+clean
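
    The HEALTH_WARN above is most likely only because the mgr daemon (mgr.x) had not finished starting when the status was taken. If the warning persists, the mgr can be started by hand from the build directory, much like vstart launches the other daemons; the commands below are a sketch, not part of the original output:

    [root@localhost build]# ./bin/ceph-mgr -i x -c ./ceph.conf
    [root@localhost build]# ./bin/ceph -s | grep mgr    # should now report an active mgr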
    
    
    • Check the MDS
    [root@localhost build]# ./bin/ceph mds dump --format=json-pretty
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:15:45.537644 7f7834519700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:15:45.594255 7f7834519700 -1 WARNING: all dangerous and experimental features are enabled.
    dumped fsmap epoch 8
    
    {
        "epoch": 7,
        "flags": 12,
        "ever_allowed_features": 0,
        "explicitly_allowed_features": 0,
        "created": "2019-01-18 11:24:14.522559",
        "modified": "2019-01-18 11:24:52.769470",
        "tableserver": 0,
        "root": 0,
        "session_timeout": 60,
        "session_autoclose": 300,
        "max_file_size": 1099511627776,
        "last_failure": 0,
        "last_failure_osd_epoch": 0,
        "compat": {
            "compat": {},
            "ro_compat": {},
            "incompat": {
                "feature_1": "base v0.20",
                "feature_2": "client writeable ranges",
                "feature_3": "default file layouts on dirs",
                "feature_4": "dir inode in separate object",
                "feature_5": "mds uses versioned encoding",
                "feature_6": "dirfrag is stored in omap",
                "feature_8": "no anchor table",
                "feature_9": "file layout v2"
            }
        },
        "max_mds": 1,
        "in": [
            0
        ],
        "up": {
            "mds_0": 4140
        },
        "failed": [],
        "damaged": [],
        "stopped": [],
        "info": {
            "gid_4140": {
                "gid": 4140,
                "name": "a",
                "rank": 0,
                "incarnation": 4,
                "state": "up:active",
                "state_seq": 7,
                "addr": "192.168.12.200:6813/3759861531",
                "standby_for_rank": -1,
                "standby_for_fscid": -1,
                "standby_for_name": "",
                "standby_replay": false,
                "export_targets": [],
                "features": 4611087853745930235
            }
        },
        "data_pools": [
            1
        ],
        "metadata_pool": 2,
        "enabled": true,
        "fs_name": "cephfs_a",
        "balancer": "",
        "standby_count_wanted": 1
    }
    
    • Check pool status
    [root@localhost build]# ./bin/ceph osd pool ls detail
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:17:28.173025 7ff840996700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:17:28.230196 7ff840996700 -1 WARNING: all dangerous and experimental features are enabled.
    pool 1 'cephfs_data_a' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 flags hashpspool stripe_width 0 application cephfs
    pool 2 'cephfs_metadata_a' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 flags hashpspool stripe_width 0 application cephfs
    
    • Cluster configuration file
      We can edit this configuration file by hand, for example to raise or lower log levels, to get the behavior we want (a short example follows the config dump below).
    [root@localhost build]# cat ./ceph.conf
    ; generated by vstart.sh on Fri Jan 18 11:22:57 UTC 2019
    [client.vstart.sh]
            num mon = 3
            num osd = 3
            num mds = 3
            num mgr = 1
            num rgw = 0
    
    [global]
            fsid = 32d792fd-1035-4ad5-a237-0d4851efd5cb
            osd pg bits = 3
            osd pgp bits = 5  ; (invalid, but ceph should cope!)
            osd pool default size = 3
            osd crush chooseleaf type = 0
            osd pool default min size = 1
            osd failsafe full ratio = .99
            mon osd nearfull ratio = .99
            mon osd backfillfull ratio = .99
            mon osd reporter subtree level = osd
            mon osd full ratio = .99
            mon data avail warn = 2
            mon data avail crit = 1
            erasure code dir = /var/ws/ceph-12.2.10/build/lib
            plugin dir = /var/ws/ceph-12.2.10/build/lib
            osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
            rgw frontends = civetweb port=8000
            ; needed for s3tests
            rgw crypt s3 kms encryption keys = testkey-1=YmluCmJvb3N0CmJvb3N0LWJ1aWxkCmNlcGguY29uZgo= testkey-2=aWIKTWFrZWZpbGUKbWFuCm91dApzcmMKVGVzdGluZwo=
            rgw crypt require ssl = false
            rgw lc debug interval = 10
            filestore fd cache size = 32
            run dir = /var/ws/ceph-12.2.10/build/out
            enable experimental unrecoverable data corrupting features = *
            lockdep = true
            auth cluster required = cephx
            auth service required = cephx
            auth client required = cephx
    [client]
            keyring = /var/ws/ceph-12.2.10/build/keyring
            log file = /var/ws/ceph-12.2.10/build/out/$name.$pid.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.$pid.asok
    
    [client.rgw]
    
    [mds]
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug ms = 1
            debug mds = 20
            debug auth = 20
            debug monc = 20
            debug mgrc = 20
            mds debug scatterstat = true
            mds verify scatter = true
            mds log max segments = 2
            mds debug frag = true
            mds debug auth pins = true
            mds debug subtrees = true
            mds data = /var/ws/ceph-12.2.10/build/dev/mds.$id
            mds root ino uid = 0
            mds root ino gid = 0
    
    [mgr]
            mgr data = /var/ws/ceph-12.2.10/build/dev/mgr.$id
            mgr module path = /var/ws/ceph-12.2.10/src/pybind/mgr
            mon reweight min pgs per osd = 4
            mon pg warn min per osd = 3
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug ms = 1
            debug monc = 20
        debug mon = 20
            debug mgr = 20
    
    [osd]
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
            osd_check_max_object_name_len_on_startup = false
            osd data = /var/ws/ceph-12.2.10/build/dev/osd$id
            osd journal = /var/ws/ceph-12.2.10/build/dev/osd$id/journal
            osd journal size = 100
            osd class tmp = out
            osd class dir = /var/ws/ceph-12.2.10/build/lib
            osd class load list = *
            osd class default list = *
            osd scrub load threshold = 2000.0
            osd debug op order = true
            osd debug misdirected ops = true
            filestore wbthrottle xfs ios start flusher = 10
            filestore wbthrottle xfs ios hard limit = 20
            filestore wbthrottle xfs inodes hard limit = 30
            filestore wbthrottle btrfs ios start flusher = 10
            filestore wbthrottle btrfs ios hard limit = 20
            filestore wbthrottle btrfs inodes hard limit = 30
            osd copyfrom max chunk = 524288
            bluestore fsck on mount = true
            bluestore block create = true
        bluestore block db path = /var/ws/ceph-12.2.10/build/dev/osd$id/block.db.file
            bluestore block db size = 67108864
            bluestore block db create = true
        bluestore block wal path = /var/ws/ceph-12.2.10/build/dev/osd$id/block.wal.file
            bluestore block wal size = 1048576000
            bluestore block wal create = true
    
            debug ms = 1
            debug osd = 25
            debug objecter = 20
            debug monc = 20
            debug mgrc = 20
            debug journal = 20
            debug filestore = 20
            debug bluestore = 30
            debug bluefs = 20
            debug rocksdb = 10
            debug bdev = 20
            debug rgw = 20
        debug reserver = 10
            debug objclass = 20
    
    
    
    [mon]
            mgr initial modules = restful status dashboard balancer
            mon pg warn min per osd = 3
            mon osd allow primary affinity = true
            mon reweight min pgs per osd = 4
            mon osd prime pg temp = true
            crushtool = /var/ws/ceph-12.2.10/build/bin/crushtool
            mon allow pool delete = true
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug mon = 20
            debug paxos = 20
            debug auth = 20
        debug mgrc = 20
            debug ms = 1
    
            mon cluster log file = /var/ws/ceph-12.2.10/build/out/cluster.mon.$id.log
    [global]
    
    [mon.a]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.a
            mon addr = 192.168.12.200:40385
    [mon.b]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.b
            mon addr = 192.168.12.200:40386
    [mon.c]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.c
            mon addr = 192.168.12.200:40387
    [mgr.x]
            host = localhost
    [osd.0]
            host = localhost
    [osd.1]
            host = localhost
    [osd.2]
            host = localhost
    [mds.a]
            host = localhost
    [mds.b]
            host = localhost
    [mds.c]
            host = localhost
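
    As noted above, the debug levels in this file can be edited and the daemons restarted, or changed at runtime through the admin interface. A minimal sketch (the subsystem and level below are only examples):

    # Option 1: edit ceph.conf (e.g. raise "debug mds = 20" to 25 in the [mds] section),
    # then restart the cluster so the change takes effect:
    [root@localhost build]# sh ../src/stop.sh
    [root@localhost build]# sh ../src/vstart.sh -d    # without -n the existing cluster config is reused

    # Option 2: inject the setting into a running daemon without a restart:
    [root@localhost build]# ./bin/ceph tell mds.a injectargs '--debug_mds 25'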
    
    • Log files
      All log files are written to build/out/ (one $name.log per daemon, as configured above); we can use them to understand what our experiments are doing internally or to troubleshoot problems.
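
    For example, to follow a single daemon's log while reproducing an issue:

    [root@localhost build]# tail -f out/osd.0.log
    [root@localhost build]# tail -f out/mds.a.log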

    Once the environment is up, you can run your own experiments. This is only the simplest possible cluster; vstart can also be used to test features such as multi-MDS, BlueStore, and directory fragmentation, as sketched below.
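    Based on the options listed in the usage output above, invocations along these lines could be used (the flag combinations are illustrative, not taken from the original article):

    # BlueStore as the OSD objectstore backend:
    [root@localhost build]# sh ../src/vstart.sh -n -d -b --osd_num 3 --mon_num 3

    # Multiple active MDS daemons (at most 2 active out of 3):
    [root@localhost build]# sh ../src/vstart.sh -n -d --mds_num 3 --multimds 2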

    Stopping the environment

    [root@localhost build]# sh ../src/stop.sh --
    usage: ../src/stop.sh [all] [mon] [mds] [osd] [rgw]
    

    Just run the script with the arguments shown in the usage above.
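    For instance (the targets below are taken directly from the usage listing):

    [root@localhost build]# sh ../src/stop.sh all    # stop all daemons
    [root@localhost build]# sh ../src/stop.sh osd    # stop only the OSDs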
