This post walks through how to configure the MDS (metadata server) in Ceph. Many people are still unfamiliar with this, so hopefully the steps below will be a useful reference.
Add an MDS section to ceph.conf:
# vi ceph.conf
[mds.a]
    host = hostname
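The [mds.a] section lives in the same ceph.conf the rest of the cluster uses. A minimal sketch, where the fsid and hostname are taken from this post's own cluster output (the `ceph -s` and startup output below) and are illustrative only; substitute your own values:

```
# ceph.conf (sketch) -- illustrative values, replace with your own
[global]
    fsid = 1c7ec934-1595-11e5-aa3f-06aed00006d5

[mds.a]
    host = DEV-L0003542
```

The `host` value must match the short hostname of the machine the MDS daemon will run on, so that `service ceph start mds.a` on that machine picks the section up.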
# mkdir -p /var/lib/ceph/mds/ceph-a
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-mds/ceph.keyring --gen-key -n client.bootstrap-mds
# ceph auth add client.bootstrap-mds mon 'allow profile bootstrap-mds' -i /var/lib/ceph/bootstrap-mds/ceph.keyring
# ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.a osd 'allow rwx' mds 'allow' mon 'allow profile mds' -o /var/lib/ceph/mds/ceph-a/keyring
# service ceph start mds.a
=== mds.a ===
Starting Ceph mds.a on DEV-L0003542...
starting mds.a at :/0
# ceph mds stat        # check MDS node status
e4: 1/1/1 up {0=a=up:active}
# ceph -s
    cluster 1c7ec934-1595-11e5-aa3f-06aed00006d5
     health HEALTH_WARN 1360 pgs degraded; 4820 pgs stuck unclean; recovery 62423/138288 objects degraded (45.140%)
     monmap e1: 1 mons at {mon1=10.20.15.156:6789/0}, election epoch 2, quorum 0 mon1
     mdsmap e4: 1/1/1 up {0=a=up:active}
     osdmap e143: 7 osds: 7 up, 7 in
      pgmap v5095: 4820 pgs, 14 pools, 9919 kB data, 46096 objects
            9251 MB used, 150 GB / 159 GB avail
            62423/138288 objects degraded (45.140%)
                 667 active
                1360 active+degraded
                2793 active+remapped
  client io 3058 B/s wr, 5 op/s
Mount the file system with ceph-fuse:
# yum install ceph-fuse -y
# mkdir ~/mycephfs
# ceph-fuse -m 10.20.15.156:6789 ~/mycephfs
# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/dev/mapper/Volgroup00-LV_root   16G  3.9G   11G  27% /
tmpfs                           1.9G     0  1.9G   0% /dev/shm
/dev/vda1                       194M   34M  150M  19% /boot
/dev/sda                        100G  1.9G   99G   2% /mnt/share
/dev/vdb                         10G  1.3G  8.8G  13% /var/lib/ceph/osd/ceph-0
/dev/vdc                         10G  1.2G  8.8G  12% /var/lib/ceph/osd/ceph-1
/dev/vdd                         10G  1.2G  8.9G  12% /var/lib/ceph/osd/ceph-5
/dev/sda                        100G  1.9G   99G   2% /var/lib/ceph/osd/ceph-6
ceph-fuse                       160G  9.1G  151G   6% /root/mycephfs
Alternatively, mount with the kernel client:
# mount -t ceph 10.20.15.156:6789:/ /mycephfs -v -o name=admin,secretfile=/etc/ceph/ceph.client.admin.keyring
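Note that the kernel client's `secretfile=` option expects a file containing only the base64 key, not a full keyring file; the bare key can be extracted with `ceph auth get-key client.admin`. To make the mount persistent across reboots, an /etc/fstab entry along these lines can be used (a sketch; the /etc/ceph/admin.secret path is an assumption, not something defined earlier in this post):

```
# /etc/fstab (sketch) -- /etc/ceph/admin.secret is a hypothetical file holding
# only the base64 secret, e.g. produced by:
#   ceph auth get-key client.admin > /etc/ceph/admin.secret
10.20.15.156:6789:/  /mycephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0 0
```

After adding the entry, `mount /mycephfs` (or a reboot) should bring the file system up without typing the full command again.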
By default, CephFS stores its metadata in the metadata pool:
# rados ls -p metadata
609.00000000
mds0_sessionmap
608.00000000
601.00000000
602.00000000
mds0_inotable
1.00000000.inode
200.00000000
604.00000000
605.00000000
mds_anchortable
mds_snaptable
600.00000000
603.00000000
100.00000000
200.00000001
606.00000000
607.00000000
100.00000000.inode
1.00000000
That covers how to configure the MDS in Ceph. Thanks for reading, and hopefully the walkthrough was helpful!