
Single Node Ceph Cluster Sandbox


I was asked about making something work with the Ceph S3 API. I was able to create a single-node version of what was supposed to be a six-node cluster.


Well, I had not heard of the Ceph project until recently. I was asked about making something work with the Ceph S3 API. After looking at a few different sites, I was able to create a single-node version of what was supposed to be six nodes. I wanted to be sure to store and share the code for this.

sudo -i
useradd -d /home/cephuser -m cephuser
passwd cephuser
echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
chmod 0440 /etc/sudoers.d/cephuser
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

yum install -y ntp ntpdate ntp-doc
ntpdate 0.us.pool.ntp.org
hwclock --systohc
systemctl enable ntpd.service
systemctl start ntpd.service
su - cephuser
ssh-keygen

#create a volume in AWS GUI - attach to instance
sudo blkid
ls -l /dev/xvd*

After the preparation steps, I moved on to the actual installation of Ceph. I found that a few packages were missing, both things that I wanted and actual requirements.

#install CEPH
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install wget mlocate 
yum-config-manager --enable rhui-REGION-rhel-server-optional

wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-itsdangerous-0.23-2.el7.noarch.rpm
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-werkzeug-0.9.1-2.el7.noarch.rpm
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/python-flask-0.10.1-4.el7.noarch.rpm
rpm -i python-itsdangerous-0.23-2.el7.noarch.rpm python-werkzeug-0.9.1-2.el7.noarch.rpm python-flask-0.10.1-4.el7.noarch.rpm

Now, let's get down to business.

sudo rpm -Uhv http://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm
sudo yum update -y && sudo yum install ceph-deploy -y
mkdir cluster
cd cluster/
ceph-deploy new hostname
ceph-deploy install ceph-admin hostname
ceph-deploy mon create-initial
# didn't see this work, but it didn't seem to matter
ceph-deploy gatherkeys hostname
# adding OSDs
ceph-deploy disk list hostname
ceph-deploy disk zap hostname:/dev/xvdf
ceph-deploy osd prepare hostname:/dev/xvdf
ceph-deploy osd activate hostname:/dev/xvdf1
ceph-deploy disk list hostname

Probably one of the most important parts was the editing of ceph.conf as described here, as well as the fact that I missed the 1 in the ceph-deploy osd activate command above. You have to look closely: you activate the partition /dev/xvdf1, not the raw disk /dev/xvdf. To use the S3 endpoint, you have to set up the Ceph Object Gateway (the RADOS Gateway).
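For reference, this is the kind of single-node override I mean. A minimal sketch of the [global] section additions (the network value is an assumption; substitute your instance's subnet):

```ini
[global]
# allow the cluster to go healthy with a single OSD (default replica count is 3)
osd pool default size = 1
# place replicas across OSDs instead of hosts (the default chooses whole hosts)
osd crush chooseleaf type = 0
# assumed subnet for this sandbox; use your own
public network = 172.31.0.0/16
```

Without the first two settings, a one-OSD cluster never reaches HEALTH_OK because Ceph cannot satisfy the default replication across hosts.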

ceph-deploy gatherkeys hostname
ceph-deploy install --rgw hostname
ceph-deploy rgw create hostname
netstat -plnt|grep 7480
http://client-node:7480
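If netstat is not handy, a quick TCP probe confirms the gateway is listening just as well. A minimal sketch (the host and port here are assumptions for this sandbox):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# check the RADOS Gateway default port on this host
print(port_open("127.0.0.1", 7480))
```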

Once I had done that, I simply needed a new user to make a connection to the S3 command line.

sudo radosgw-admin user create --uid="testuser" --display-name="First User"

{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "testuser",
            "access_key": "***",
            "secret_key": "*****"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
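The access_key and secret_key buried in that JSON are what every S3 client needs. A small helper, purely illustrative, that pulls the first key pair out of radosgw-admin user info output (the function name and the sample string are my own):

```python
import json

def extract_s3_keys(user_info_json):
    """Return (access_key, secret_key) from radosgw-admin user JSON."""
    info = json.loads(user_info_json)
    first_key = info["keys"][0]
    return first_key["access_key"], first_key["secret_key"]

# sample shaped like the radosgw-admin output above
sample = '{"keys": [{"user": "testuser", "access_key": "AK", "secret_key": "SK"}]}'
print(extract_s3_keys(sample))  # ('AK', 'SK')
```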

I then tested the Python program suggested by the docs:

sudo yum -y install python-boto

vi s3test.py

import boto.s3.connection

access_key = 'OZAOAWU1M55TGOWXF8F5'
secret_key = 'ApZR412bWipTK7eKHB2yyiJq5VmCOhfFcQ1qifjv'
conn = boto.connect_s3(
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        host='172.31.41.128', port=7480,
        is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
       )

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )
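Under the hood, boto signs each request with that secret key; for this era of the S3 API that means AWS Signature Version 2, a base64-encoded HMAC-SHA1 over a canonical string. A standalone sketch of just the signing step (the date and path in the example are made up):

```python
import base64
import hmac
from hashlib import sha1

def sign_v2(secret_key, string_to_sign):
    """AWS Signature v2: base64(HMAC-SHA1(secret, canonical string))."""
    mac = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1)
    return base64.b64encode(mac.digest()).decode()

# the canonical string is VERB\nContent-MD5\nContent-Type\nDate\n/bucket/key
sig = sign_v2("ApZR412bWipTK7eKHB2yyiJq5VmCOhfFcQ1qifjv",
              "GET\n\n\nThu, 17 Nov 2016 00:00:00 +0000\n/my-new-bucket/")
print(sig)
```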

Then S3. Of course, you need to install the AWS command line first:

aws s3 ls s3://my-new-bucket/ --endpoint-url http://XXX.XX.XX.XXX:7480

As you can see, most calls to the gateway are supported.



Published at DZone with permission of
