Wednesday, November 6, 2013

How to implement GlusterFS in Amazon AMI (AWS)

The Gluster file system (GlusterFS) provides high availability through data replication across servers. Following are the steps to implement it on Amazon AMIs:

As the Amazon AMI doesn't have the GlusterFS repo enabled, we first need to enable it..


[root@ip-10-144-143-144 ec2-user]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo



[root@ip-10-144-143-144 ec2-user]# sed -i 's/$releasever/6/g' /etc/yum.repos.d/glusterfs-epel.repo

 
Install the required dependencies..
 

[root@ip-10-144-143-144 ec2-user]# yum install libibverbs-devel fuse-devel -y 

 
Install the GlusterFS server and FUSE packages on the master server..
 

[root@ip-10-144-143-144 ec2-user]# yum install -y glusterfs{-fuse,-server} 

 
Start the Gluster service on the server..
 

[root@ip-10-144-143-144 ec2-user]# service glusterd start
Starting glusterd:                                         [  OK  ]

 
Follow the same procedure on the client server: install the GlusterFS packages and start the service..
 
Load the FUSE file system module into the kernel..
 

[root@ip-10-144-143-144 ec2-user]# modprobe fuse

 
 
Add the peer to the trusted storage pool..
 
 

[root@ip-10-144-143-144 ec2-user]# gluster peer probe  ec2-54-254-58-214.ap-southeast-1.compute.amazonaws.com
peer probe: success

 
 
Check the status of the peer..


[root@ip-10-144-143-144 ec2-user]# gluster peer status
Number of Peers: 1

Hostname: ec2-54-254-58-214.ap-southeast-1.compute.amazonaws.com
Port: 24007
Uuid: 51c7c768-b046-46ac-a4ad-caa67c1b768d
State: Peer in Cluster (Connected)


 
Create and start the volume spanning both servers (these commands need to be run on only one of them)..


[root@ip-10-144-143-144 ec2-user]# gluster volume create Test-Volume replica 2 transport tcp ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:/data1 ec2-54-254-58-214.ap-southeast-1.compute.amazonaws.com:/data2
volume create: Test-Volume: success: please start the volume to access data

[root@ip-10-144-143-144 ec2-user]# gluster volume start Test-Volume
volume start: Test-Volume: success


Check Gluster volume..



[root@ip-10-144-143-144 ec2-user]# gluster volume info

Volume Name: Test-Volume
Type: Replicate
Volume ID: e5a8a24d-bf47-4770-a2e8-f13993998a51
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:/data1
Brick2: ec2-54-254-58-214.ap-southeast-1.compute.amazonaws.com:/data2

/data1 and /data2 will both hold the same data; the different names are used here just to show that the brick directories don't need to have the same name..

Create a /data directory on both the master and client boxes so they have a common mount point..



[root@ip-10-144-143-144 ec2-user]# mkdir /data
[root@ip-10-144-143-144 ec2-user]# mount -t glusterfs ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:Test-Volume /data
[root@ip-10-144-143-144 ec2-user]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.9G  983M  6.9G  13% /
tmpfs                 298M     0  298M   0% /dev/shm
ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:Test-Volume
                      7.9G  983M  6.9G  13% /data

[root@ip-10-146-2-125 ec2-user]# mkdir /data
[root@ip-10-146-2-125 ec2-user]# mount -t glusterfs ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:Test-Volume /data
[root@ip-10-146-2-125 ec2-user]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvda1            7.9G  981M  6.9G  13% /
tmpfs                 298M     0  298M   0% /dev/shm
ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:Test-Volume
                      7.9G  983M  6.9G  13% /data

 
To test whether files are replicating as required..


[root@ip-10-144-143-144 ec2-user]# cd /data
[root@ip-10-144-143-144 data]# touch a1
[root@ip-10-144-143-144 data]# ls
a1  a2
[root@ip-10-146-2-125 ec2-user]# cd /data
[root@ip-10-146-2-125 data]# ls
a1
[root@ip-10-146-2-125 data]# touch a2
[root@ip-10-146-2-125 data]# ls
a1  a2
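To make the client mount persistent across reboots, a sketch of an /etc/fstab entry (hostname and mount point from the example above; _netdev delays the mount until networking is up):

# GlusterFS mount entry for /etc/fstab (sketch)
ec2-122-248-202-153.ap-southeast-1.compute.amazonaws.com:Test-Volume  /data  glusterfs  defaults,_netdev  0 0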
 

Wednesday, October 30, 2013

Script to copy content from server or local storage to S3


The following script can be used to copy data from a Linux server to an S3 bucket:

#!/bin/bash
## Script to copy data from stage to S3 bucket

s3cmd sync /tmp/code-backup/ s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log


s3cmd : The command-line client used to talk to S3.
sync : Syncs the data from the server to the S3 bucket, transferring only new or changed files.
/var/log/daily-backup.log : Destination of the logs.
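Note that s3cmd needs your AWS credentials configured once before these scripts will work; the standard interactive setup is:

# One-time setup; stores the access keys in ~/.s3cfg
s3cmd --configure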

If you want to keep only a specific number of days of backups on the server, the following script can be used; here 90 days are taken into consideration (add the --delete-removed option to the sync line if the old copies should be dropped from S3 as well):
#!/bin/bash
## Script to copy data from stage to S3 bucket

find /tmp/code-backup/ -type f -mtime +90 -exec rm -f {} \;
s3cmd sync /tmp/code-backup/ s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log


If a daily backup needs to be taken of newly changed files, the following script can be used (note that the backtick expansion breaks on filenames containing spaces):

#!/bin/bash
## Script to copy data from stage to S3 bucket

s3cmd put `find /tmp/temp-backups/ -type f -mtime -1` s3://s3-bucket-name/backup/ >> /var/log/daily-backup.log
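To run any of these backup scripts automatically, a crontab entry can be added with crontab -e; a sketch (the script path is an example, not from the post):

## Run the S3 backup script daily at 2 AM (example path)
0 2 * * * /bin/bash /root/scripts/s3-backup.sh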



 

How to proxy pass in Nginx

The following configuration can be used to proxy pass one URL to another in Nginx:

location /abc {
    rewrite /abc(.*) /$1 break;
    proxy_pass http://redirect.com;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

/abc : Location prefix you want to proxy pass.
/abc(.*) : Will match both /abc and /abc/.
$1 : Strips the /abc prefix, so it is not appended to redirect.com; a request to /abc/page is proxied to http://redirect.com/page.
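A common alternative (a sketch, not from the original post) achieves the same prefix stripping with a URI on proxy_pass instead of the rewrite; note it only matches paths under /abc/, not bare /abc:

location /abc/ {
    # The URI part of proxy_pass replaces the matched /abc/ prefix
    proxy_pass http://redirect.com/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}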

Monday, August 5, 2013

How to install Apache, Java, Tomcat and Solr

Here is the procedure to install Apache, Tomcat, Java and Solr:

First of all, we need Apache on our box:
Download the Apache source.

tar -zxvf httpd-2.2.24.tar.gz
cd httpd-2.2.24
./configure --enable-so --enable-expires --enable-file-cache --enable-cache --enable-disk-cache --enable-mem-cache --enable-headers --enable-ssl --enable-http --disable-userdir --enable-rewrite --enable-deflate --enable-proxy --enable-proxy-connect --enable-proxy-ftp --enable-proxy-http --enable-proxy-ajp --enable-proxy-balancer --enable-cgi --disable-dbd --enable-modules=most --with-mpm=worker --prefix=/usr/local/apache2
make; make install
/usr/local/apache2/bin/apachectl -t

./configure : Specify the modules you want to build into Apache ( running ./configure without any module flags builds the default module set ).
--prefix= :  Used to specify the path where you want to install Apache.
apachectl -t : Checks the syntax of the Apache configuration.

Download Java from the Oracle site, as Tomcat and Solr will need Java to work; after that follow these steps:

chmod 777 jdk-6u17-linux-i586.bin
./jdk-6u17-linux-i586.bin
mkdir -p /usr/java
cp -r  jdk1.6.0_17 /usr/java/
ln -s /usr/java/jdk1.6.0_17/bin/java /usr/bin/java
java -version

Change the .bin permission to executable with +x (755 or 777 also work).
ln -s : Creates a soft link so java can be executed from anywhere; you can also set environment variables for Java according to your need.
java -version will display the Java version installed.

Now once Java is installed correctly and the environment is set, download Tomcat:

tar -zxvf apache-tomcat-7.0.39.tar.gz
mkdir  -p  /usr/tomcat
cp -rf apache-tomcat-7.0.39/* /usr/tomcat/
ls /usr/tomcat/
/usr/tomcat/bin/version.sh

Now we need the mod_jk connector to link Apache and Tomcat with each other, so download the tomcat-connectors source:

tar -zxvf tomcat-connectors-1.2.37-src.tar.gz
cd tomcat-connectors-1.2.37-src/native/
./configure --with-apxs=/usr/local/apache2/bin/apxs  --with-java-home=/usr/java/jdk1.6.0_17 --prefix=/usr
make
make install

Now add the configuration you want for your Tomcat server in httpd.conf and worker.properties:

cd /usr/local/apache2/conf/
cp httpd.conf httpd.conf.original
vi worker.properties
vi httpd.conf
../bin/apachectl stop
../bin/apachectl start

Here is a demo configuration for httpd.conf and worker.properties:

configure httpd.conf. 

JkWorkersFile /usr/local/apache2/conf/worker.properties
JkLogFile logs/jk-log
# Set the jk log level debug/error/info
JkLogLevel info
# Select the log format
JkLogStampFormat "%a %b %d %H:%M:%S %Y "
# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"
# Mount your applications
JkMount /rest/* worker1
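The demo above assumes mod_jk (built from the tomcat-connectors source earlier) is already loaded into Apache; if your httpd.conf doesn't load it yet, add the standard directive before the Jk* lines:

LoadModule jk_module modules/mod_jk.so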

configure worker.properties.

# Define 1 real worker using ajp13
worker.list=worker1
# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009
# modifications after using mpm worker
worker.worker1.connection_pool_size=128
worker.worker1.connection_pool_timeout=600
worker.worker1.socket_keepalive=1  

Now download the Solr source:

tar -zxvf solr-4.2.1.tgz
mkdir -p /usr/solr/
cp -rf solr-4.2.1/* /usr/solr/
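The post stops after copying the Solr source; a minimal sketch of the remaining deploy step, assuming the stock Solr 4.2.1 layout (dist/solr-4.2.1.war and the bundled example config; all paths here are assumptions):

## Deploy the Solr war into Tomcat (sketch; paths assumed)
cp /usr/solr/dist/solr-4.2.1.war /usr/tomcat/webapps/solr.war
## Point Tomcat at a Solr home, here the bundled example config
echo 'JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/usr/solr/example/solr"' > /usr/tomcat/bin/setenv.sh
/usr/tomcat/bin/startup.sh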


How to delete multiple users in Linux

If you want to delete multiple system users in Linux, the following command can be used..
for user in `cat del.user`; do userdel $user; done

user : The loop variable that takes each name from del.user.
del.user : The file containing the names of the users you want to delete.
userdel : Command used to delete a user.
userdel -r : Use this if you want to delete the user's home directory as well.

File having user names which need to be removed:
# cat del.user
ravi
roma
ben
honey
chin
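A slightly safer variant of the same loop (a sketch; reads the same del.user file) that handles unusual names and removes home directories too:

## Read usernames line by line and delete each user along with its home directory
while read -r user; do
  userdel -r "$user"
done < del.user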


How to check logs hour wise and know the maximum requests in respective hours..

If you want to check logs per hour from a day's log file, use the following command:

Here time.list is the file containing the hours for which the logs are needed..

for time in `cat time.list`; do grep "30/Jul/2013:$time:" access.log > logs_$time:00-$time:59.txt; done

It will create separate text files containing the log entries within each time period, e.g. "logs_12:00-12:59.txt".


If you need to know the maximum requests within specific hours, use the following command:

for time in `cat time.list`; do grep "30/Jul/2013:$time:" access.log | awk ' { print $7 } ' | sort | uniq -cd | sort -nr | head -15 > maxhit_$time:00-$time:59.txt; done

awk ' { print $7 } ' : Takes the 7th field from each log line (the requested URL in the common log format).
sort : Sorts the requests so duplicates are adjacent.
uniq -cd : Counts each unique entry and hides the ones having only a single occurrence.
sort -nr : Sorts the counts in descending order.
head -15 : Displays the top 15 results.

The command will create separate text files containing the maximum hits in each hour, e.g. "maxhit_12:00-12:59.txt".


For example, if you want logs or hits from 10:00 to 20:00, use the following time.list file.
# cat time.list
10
11
12
13
14
15
16
17
18
19
20
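As a sketch, the same list of hours can also be generated inline with seq instead of keeping a time.list file:

for time in `seq -w 10 20`; do grep "30/Jul/2013:$time:" access.log > logs_$time:00-$time:59.txt; done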


Script to backup log files..

If there are multiple log files that need to be compressed and then nulled, the following script can be used:

# vi log_compress.sh

#!/bin/bash
## Compress the given log file, then empty the original

echo "$1"_`date +%d%B%y`

gzip -c "$1" > "$1"_`date +%d%B%y`.gz
cp /dev/null "$1"

# chmod 755 log_compress.sh

For example, to compress error.log use the following command:

# sh log_compress.sh error.log

It will compress the error log in .gz format and null the original error.log.
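For recurring rotation, the same compress-then-empty behavior is what logrotate provides out of the box; a sketch config (the log path is an example):

/var/log/myapp/error.log {
    daily
    rotate 90
    compress
    copytruncate
}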



How to install or update newrelic to latest version

Use the following steps to update New Relic to the latest version:

URL to track the latest update:
http://download.newrelic.com/php_agent/release/

Steps:
  1. wget http://download.newrelic.com/php_agent/release/newrelic-php5-3.7.5.7-linux.tar.gz
  2. tar -zxvf newrelic-php5-3.7.5.7-linux.tar.gz
  3. cp -r newrelic-php5-3.7.5.7-linux /usr/local/
  4. cd /usr/local/newrelic-php5-3.7.5.7-linux/
  5. ls
agent daemon LICENSE MD5SUMS newrelic-install README scripts
  6. ./newrelic-install
  7. /usr/local/apache2/bin/apachectl stop
  8. /usr/local/apache2/bin/apachectl start
  9. php -i | grep Relic
New Relic RPM Monitoring => enabled
New Relic Version => 3.7.5.7 ("hadrosaurus")

Sunday, February 17, 2013

How to change permissions of files and directories using the find command

find is a very powerful tool in Linux that is used to search for files and can then perform specific actions on them as required.

To find files having a specific permission:

# find /tmp/ -type f -perm 666

> /tmp/ : The path of the directory in which you want to search.
> -type f : Here we are searching for files only.
> -perm 666 : Searches for files having permission 666.

To find files having a specific name and permission:

# find /tmp/ -type f -name '*.php' -perm 666

> -name '*.php' : Used to search for files whose names end with '.php'.

To exclude a folder from find and change the permission of specific files:

# find /tmp/ -name "extras" -prune -o -type f -iname '*.cgi' -print -exec chmod 774 {} \;

> -name "extras" -prune -o : It will exclude the "extras" folder from the search.
> -iname '*.cgi' : It will search for all files with the '.cgi' extension, case-insensitively.
> -print : It will print all the files found with the '.cgi' extension.
> -exec chmod 774 {} \; : It will change the permission of all the '.cgi' files found to 774.

To change the permission of all the files and directories under a folder:

# find /tmp/ -type f | xargs chmod 664

> xargs chmod 664 : Changing permission of all the files under /tmp/ to 664

# find . -type d | xargs chmod 755

> xargs chmod 755 : Changing permission of all the directories under the current directory to 755
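One caveat on the xargs pipelines above: plain xargs breaks on filenames containing spaces. Safer equivalents (a sketch) use -print0 with xargs -0, or find's own -exec ... +:

# Space-safe equivalents of the two commands above
find /tmp/ -type f -print0 | xargs -0 chmod 664
find . -type d -exec chmod 755 {} +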





Friday, February 15, 2013

How to create Amazon web services (AWS) EC2 Instances..

Amazon Web Services is the most used and best cloud computing platform in the market today. Following is the way to create an EC2 (Elastic Compute Cloud) instance..

Log in to your AWS account and go to EC2 under Compute and Networking..

In the EC2 dashboard you will find the following screen, which has info about all the currently running instances, Elastic IPs, load balancers, etc..
To create an instance, click Launch Instance..

After clicking Launch Instance you will get the following screen..
Classic Wizard: It gives you many customization options for the instance..
Quick Launch Wizard: It has predefined parameters and is used to get images up and running quickly..
AWS Marketplace: It's where you can buy images other than the defaults provided by Amazon, like CentOS, Debian, etc.

We will be creating the image via the Classic Wizard..


Here you see the images provided by AWS; a star indicates that the image is free to use without any additional charges..
My AMIs: If you have created your own image, then it will be available here to use.
Community AMIs: These are images created by other people and can be used by anybody, at a price set by the owner.

I am selecting Ubuntu 12.04 64-bit here..

Number of Instances lets you decide how many instances you want to launch of the configuration we will be setting up.
Instance Type is the type of instance you want, depending on cores and RAM..
Launch Instance is where we decide whether we want our image in EC2 (Classic) or a VPC, which we can set according to our need.

A star indicates that the instance can be used for free; the rest are priced according to their computing power. Select whichever one fulfills your needs; I am selecting the T1 Micro instance.

Availability Zones let you choose in which zone of the world you want to host your images. As I have selected the northeast zone, there is a further choice if I want to give priority to any specific zone within it as well..

Request Spot Instances: These are used when you want to bid for an instance at a low price; whenever Amazon has spare capacity it will provide you with an instance, but if demand comes in at the regular price it will terminate your spot instance and use that computing power for instances which aren't bidding at a low price..

In the following screen we select the Kernel ID and RAM Disk ID if you want to use a specific one; I am proceeding with the defaults..
You can enable detailed Amazon monitoring if you want; it costs a bit but can help you maintain your instances better.
User Data: You can fill in details about your instance here..
Shutdown Behavior: It helps in deciding what you want to happen when you shut down the instance (stop or terminate).

This is used to do disk partitioning according to your need..

Here you can name your instance and give a description for it; as I will be using my instance as a web server, I have named it accordingly...

Key pairs are very important in EC2 as they grant you access to your instances; if you already have an existing one you can use that, or you can create a new one.

Add the name you want to create the key under, then download it to your local system so you can log in to your instance in the future..

Firewall configuration is used to set rules for your instance, like which ports you want to open for services like SSH, Apache, etc...
You can use your existing security groups or create a new one..

Here we are creating a new security group which I will be using for my web server..
Group Name: Name of the group you want to create.
Group Description: Description of the group.
Inbound Rules: By default EC2 images allow no inbound and all outbound traffic, so we will be opening the ports (services) we want to give access to on our instance..

Here I have added the basic web server ports for my instance; keep in mind you need to allow SSH as well, or else you will not be able to log in to your instance..

These are the details of the instance which will be created; just go ahead and Launch..:)

Confirmation of the launching instance..

Now you will be able to see your instance running in your EC2 dashboard..

To log in to your instance you can use PuTTY on Windows, but you need to convert your .pem key to .ppk so that PuTTY can recognize it.
Download PuTTYgen from http://the.earth.li/~sgtatham/putty/0.62/x86/puttygen.exe so that you can convert the key, then load that file..

Import your key..

Saving the new key...

Add the key to your SSH login by going to SSH under Connection, then Auth, and add the key..
Add the public name of the instance to the SSH session..

For Linux users, just use the following command from the terminal:
ssh -i <path-of-key> ubuntu@<public-name-of-instance>

Use the ubuntu user to log in with the key..

Installed Apache and hosted a site as a demo..
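For reference, a minimal sketch of that demo step on Ubuntu 12.04 (standard package names; Apache serves a default page on port 80 once installed):

sudo apt-get update
sudo apt-get install -y apache2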

Run any web service as you want..;)



Setup fully configurable EFK Elasticsearch Fluentd Kibana setup in Kubernetes

In the following setup, we will be creating a fully configurable Elasticsearch, Fluentd, Kibana setup, better known as an EFK setup. There is a...