Thursday, January 23, 2014

How to remove GlusterFS Volumes

To remove a GlusterFS volume from the servers, follow the steps below.

These steps need to be performed on every server that is part of the Gluster cluster:


[root@ip-10-138-150-225 ~]# setfattr -x trusted.glusterfs.volume-id /data/share
[root@ip-10-138-150-225 ~]# setfattr -x trusted.gfid /data/share
[root@ip-10-138-150-225 ~]# service glusterd stop
       [  OK  ]
[root@ip-10-138-150-225 ~]# cd /data/share/
[root@ip-10-138-150-225 share]# ls -a
.  ..  a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3  .glusterfs
[root@ip-10-138-150-225 share]# rm -rf .glusterfs
[root@ip-10-138-150-225 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3
[root@ip-10-138-150-225 share]# cd /var/lib/glusterd/
[root@ip-10-138-150-225 glusterd]# ls
glusterd.info  glustershd  groups  hooks  nfs  options  peers  vols
[root@ip-10-138-150-225 glusterd]# rm -rf *
[root@ip-10-138-150-225 glusterd]# service glusterd start
Starting glusterd:                                         [  OK  ]

 

/data/share : the brick directory on which the Gluster volume was set up.
.glusterfs : the directory where Gluster stores its metadata, so it needs to be removed.
/var/lib/glusterd/ : the directory where the volume configuration is stored.

Note: do not delete all the files in /var/lib/glusterd/ if you have more than one Gluster volume configured on the server; remove only the entries belonging to the volume you are deleting.
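For example, a minimal sketch (assuming the volume being removed is the one named Test-Volume later in this post) removes only that volume's configuration directory instead of wiping everything:

# Hypothetical: remove only the configuration of the volume being deleted
rm -rf /var/lib/glusterd/vols/Test-Volume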


 

How to configure RAID 0 in AWS with GlusterFS to have high availability

Here is the procedure to configure RAID 0 on AWS EBS volumes for high performance, combined with GlusterFS for high availability.

This process can be used to provide central storage in AWS as well as on physical servers, since some applications require central storage.

We are using two Amazon instances, each with four EBS volumes attached, and configuring RAID 0 on them for good throughput.
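If the EBS volumes are not yet attached, a hypothetical AWS CLI example (with placeholder volume and instance IDs) for attaching one of them looks like this; repeat for /dev/sdc, /dev/sdd and /dev/sde:

# Placeholder IDs: replace vol-xxxxxxxx and i-xxxxxxxx with your own
aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdb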

Server 1 : ip-10-128-50-246
Server 2 : ip-10-138-150-225

Check the attached EBS volumes on each server:
 
[ec2-user@ip-10-128-50-246 ~]$ hostname
ip-10-128-50-246
[ec2-user@ip-10-128-50-246 ~]$ lsblk 
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb  202:16   0   5G  0 disk 
xvdc  202:32   0   5G  0 disk 
xvdd  202:48   0   5G  0 disk 
xvde  202:64   0   5G  0 disk 
xvda1 202:1    0   8G  0 disk /

[root@ip-10-138-150-225 ec2-user]# hostname 
ip-10-138-150-225
[root@ip-10-138-150-225 ec2-user]# lsblk 
NAME  MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb  202:16   0   5G  0 disk 
xvdc  202:32   0   5G  0 disk 
xvdd  202:48   0   5G  0 disk 
xvde  202:64   0   5G  0 disk 
xvda1 202:1    0   8G  0 disk /

Configure RAID 0 on both servers with the available disks:

[root@ip-10-128-50-246 ec2-user]# mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ip-10-128-50-246 ec2-user]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jan 23 05:34:02 2014
     Raid Level : raid0
     Array Size : 20969472 (20.00 GiB 21.47 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Jan 23 05:34:02 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : ip-10-128-50-246:0  (local to host ip-10-128-50-246)
           UUID : 58fb9324:9aba25e7:db3a049d:525c8a98
         Events : 0

    Number   Major   Minor   RaidDevice State
       0     202       16        0      active sync   /dev/sdb
       1     202       32        1      active sync   /dev/sdc
       2     202       48        2      active sync   /dev/sdd
       3     202       64        3      active sync   /dev/sde


[root@ip-10-138-150-225 ec2-user]# mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvdd /dev/xvde
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@ip-10-138-150-225 ec2-user]# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Jan 23 05:35:28 2014
     Raid Level : raid0
     Array Size : 20969472 (20.00 GiB 21.47 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Thu Jan 23 05:35:28 2014
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

     Chunk Size : 512K

           Name : ip-10-138-150-225:0  (local to host ip-10-138-150-225)
           UUID : 85c9c957:d7d808d8:3a8344a2:9e994f83
         Events : 0

    Number   Major   Minor   RaidDevice State
       0     202       16        0      active sync   /dev/sdb
       1     202       32        1      active sync   /dev/sdc
       2     202       48        2      active sync   /dev/sdd
       3     202       64        3      active sync   /dev/sde

/dev/md0 : the single device representing our RAID array, which we will format and mount.
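Optionally, the array definition can be recorded so that it reassembles as /dev/md0 after a reboot; a minimal sketch, assuming the RHEL-style config path used on these instances:

# Append the array definition to mdadm.conf (path may differ per distribution)
mdadm --detail --scan >> /etc/mdadm.conf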


Format the RAID device with the desired file system; we are formatting it as ext4:

[root@ip-10-128-50-246 ec2-user]# mkfs.ext4 /dev/md0
mke2fs 1.42.3 (14-May-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=512 blocks
1310720 inodes, 5242368 blocks
262118 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done


[root@ip-10-138-150-225 ec2-user]# mkfs.ext4 /dev/md0
mke2fs 1.42.3 (14-May-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=512 blocks
1310720 inodes, 5242368 blocks
262118 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the RAID device on the desired directory:

[root@ip-10-128-50-246 ec2-user]# mkdir /data
[root@ip-10-128-50-246 ec2-user]# mount /dev/md0 /data
[root@ip-10-128-50-246 ec2-user]# df -h /data/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               20G  172M   19G   1% /data

[root@ip-10-138-150-225 ec2-user]# mkdir /data
[root@ip-10-138-150-225 ec2-user]# mount /dev/md0 /data
[root@ip-10-138-150-225 ec2-user]# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               20G  172M   19G   1% /data
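Optionally, the mount can be made persistent across reboots; a minimal sketch, assuming the same device and mount point as above:

# Add an fstab entry so /data is mounted automatically at boot
echo '/dev/md0  /data  ext4  defaults,nofail  0  2' >> /etc/fstab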


Now configure GlusterFS for the /data directory on each of the servers.
For full details on configuring GlusterFS, please see the following post: http://simplyopensource.blogspot.com/2013/11/how-to-implement-glusterfs-in-amazon.html

[root@ip-10-128-50-246 ec2-user]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo; sed -i 's/$releasever/6/g' /etc/yum.repos.d/glusterfs-epel.repo ; yum install libibverbs-devel fuse-devel -y ; yum install -y glusterfs{-fuse,-server} ; service glusterd start ; modprobe fuse
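Optionally, glusterd can also be enabled to start automatically at boot on these SysV-init based instances:

chkconfig glusterd on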

You may see an error like the one below while creating the Gluster volume on /data, because Gluster refuses to use a RAID mount point directly as a brick; so we create a subdirectory /data/share and configure Gluster on it.


[root@ip-10-128-50-246 ec2-user]# gluster peer probe  ec2-54-255-3-126.ap-southeast-1.compute.amazonaws.com
peer probe: success
[root@ip-10-128-50-246 ec2-user]# gluster volume create Test-Volume replica 2 transport tcp ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:/data ec2-54-255-3-126.ap-southeast-1.compute.amazonaws.com:/data
volume create: Test-Volume: failed: The brick ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:/data is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use 'force' at the end of the command if you want to override this behavior.
[root@ip-10-128-50-246 ec2-user]# mkdir -p /data/share
[root@ip-10-128-50-246 ec2-user]# gluster volume create Test-Volume replica 2 transport tcp ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:/data/share ec2-54-255-3-126.ap-southeast-1.compute.amazonaws.com:/data/share
volume create: Test-Volume: success: please start the volume to access data
[root@ip-10-128-50-246 ec2-user]# gluster volume start Test-Volume
volume start: Test-Volume: success
[root@ip-10-128-50-246 ec2-user]# gluster volume info

Volume Name: Test-Volume
Type: Replicate
Volume ID: 47477eaa-423b-4200-9f55-2966ae079a79
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:/data/share
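Before mounting, the cluster state can be verified from either node (output omitted here):

# Sketch: confirm the peer is connected and the volume bricks are online
gluster peer status
gluster volume status Test-Volume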

With the /data/share subdirectory in place, GlusterFS is configured for /data/share and the volume is started.

Mount the Gluster volume on the desired directory to create a single data access point on both servers:

[root@ip-10-128-50-246 ec2-user]# mount -t glusterfs ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:Test-Volume /mnt
[root@ip-10-128-50-246 ec2-user]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:Test-Volume
                       20G  172M   19G   1% /mnt

[root@ip-10-138-150-225 ec2-user]# mount -t glusterfs ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:Test-Volume /mnt
[root@ip-10-138-150-225 ec2-user]# df -h /mnt
Filesystem            Size  Used Avail Use% Mounted on
ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:Test-Volume
                       20G  172M   19G   1% /mnt
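Optionally, the Gluster client mount can also be persisted across reboots; a minimal sketch, assuming the same server and volume name as above:

# _netdev delays the mount until the network is up
echo 'ec2-122-248-213-231.ap-southeast-1.compute.amazonaws.com:Test-Volume  /mnt  glusterfs  defaults,_netdev  0  0' >> /etc/fstab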

Now check whether Gluster is working correctly:


[root@ip-10-138-150-225 mnt]# touch a1 a2 a3
[root@ip-10-138-150-225 mnt]# ls
a1  a2  a3
[root@ip-10-128-50-246 ec2-user]# cd /mnt
[root@ip-10-128-50-246 mnt]# ls
a1  a2  a3
[root@ip-10-128-50-246 mnt]# touch b1 b2 b3
[root@ip-10-128-50-246 mnt]# ls
a1  a2  a3  b1  b2  b3

Now let's try to break our setup and see whether Gluster keeps working by unmounting the /data directory on one of the servers:

[root@ip-10-128-50-246 mnt]# umount -l /data
[root@ip-10-128-50-246 mnt]# cd /mnt
[root@ip-10-128-50-246 mnt]# ls
a1  a2  a3  b1  b2  b3
[root@ip-10-128-50-246 mnt]# touch c1 c2 c3 c4
[root@ip-10-128-50-246 mnt]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4

[root@ip-10-138-150-225 mnt]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4
[root@ip-10-138-150-225 mnt]# cd /data/share/
[root@ip-10-138-150-225 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4

Now let's mount the directory back and check that we have consistent data:

[root@ip-10-128-50-246 mnt]# mount /dev/md0 /data
[root@ip-10-128-50-246 share]# pwd
/data/share
[root@ip-10-128-50-246 share]# ls -l
total 0
-rw-r--r-- 2 root root 0 Jan 23 05:52 a1
-rw-r--r-- 2 root root 0 Jan 23 05:52 a2
-rw-r--r-- 2 root root 0 Jan 23 05:52 a3
-rw-r--r-- 2 root root 0 Jan 23 05:53 b1
-rw-r--r-- 2 root root 0 Jan 23 05:53 b2
-rw-r--r-- 2 root root 0 Jan 23 05:53 b3
[root@ip-10-128-50-246 share]# touch d1 d2 d3
[root@ip-10-128-50-246 share]# ls
a1  a2  a3  b1  b2  b3  d1  d2  d3
[root@ip-10-128-50-246 share]# cd /mnt/
[root@ip-10-128-50-246 mnt]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3
[root@ip-10-128-50-246 mnt]# cd /data/share
[root@ip-10-128-50-246 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3

[root@ip-10-138-150-225 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3
[root@ip-10-138-150-225 share]# cd /data/share
[root@ip-10-138-150-225 share]# ls
a1  a2  a3  b1  b2  b3  c1  c2  c3  c4  d1  d2  d3

The data replicated correctly across the servers, and the remounted directory picks up new content as soon as it is created on either server.
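If a brick was offline for a while, replication can also be verified (and triggered) with Gluster's self-heal commands; a short sketch, assuming the Test-Volume created above:

# Trigger a self-heal and then list any entries still pending heal
gluster volume heal Test-Volume
gluster volume heal Test-Volume info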
 
