Tuesday, August 16, 2016

How to analyze AWS RDS Slow Query

Amazon RDS can log slow queries, which we can then analyze to find the queries that are slowing down our database.
There are a few prerequisites for this:
1. The slow query log should be enabled in RDS (a sketch of how to enable it with the AWS CLI follows this list).
2. The AWS RDS CLI should be present.
3. Percona Toolkit should be present.
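
For the first prerequisite, the slow query log on RDS MySQL is controlled through the DB parameter group. A minimal sketch using the standard AWS CLI (the parameter group name is only an example, and the group must be the one attached to your instance):

aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mysql-params \
  --parameters "ParameterName=slow_query_log,ParameterValue=1,ApplyMethod=immediate" \
               "ParameterName=long_query_time,ParameterValue=2,ApplyMethod=immediate" \
               "ParameterName=log_output,ParameterValue=FILE,ApplyMethod=immediate"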

Once you have all the prerequisites in place, the following script will take care of the rest.
[root@ip-10-0-1-220 ravi]# cat rds-slowlog.sh 
#!/bin/bash
#Script to Analyze AWS RDS MySQL logs via Percona Toolkit
#By Ravi Gadgil

#To get list of all slow logs available.
/opt/aws/apitools/rds/bin/rds-describe-db-log-files --db-instance-identifier teamie-production --aws-credential-file /opt/aws/apitools/rds/credential | awk '{print $2 }' | grep slow > /home/ravi/slowlog.txt

logfile=$(echo -e "slowlog-`date +%F-%H-%M`")
resultfile=$(echo -e "resultlog-`date +%F-%H-%M`")

for i in `cat /home/ravi/slowlog.txt` ; do

#To download Slow Log files and add them to single file.
/opt/aws/apitools/rds/bin/rds-download-db-logfile teamie-production --log-file-name $i --debug --connection-timeout 3600 --aws-credential-file /opt/aws/apitools/rds/credential >> /data/rds-logs/$logfile

done

#To run analysis on slowlog file.

pt-query-digest /data/rds-logs/$logfile > /data/rds-logs/$resultfile

rm -rf /data/rds-logs/$logfile

Note: I have not added the RDS commands or the credentials file to the system environment, so the full paths are used in the commands. If you have set the environment variables, the full paths are not needed.





How to install Percona MySQL tools

Percona provides lots of tools to analyze MySQL data, and very useful information can be extracted with them.

Installing via Yum:
[root@ip-10-0-1-220 ravi]# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Retrieving http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
Preparing...                ########################################### [100%]
   1:percona-release        ########################################### [100%]
[root@ip-10-0-1-220 ravi]# yum install percona-toolkit
Loaded plugins: auto-update-debuginfo, priorities, update-motd, upgrade-helper
1123 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package percona-toolkit.noarch 0:2.2.18-1 will be installed
--> Processing Dependency: perl(DBD::mysql) >= 1.0 for package: percona-toolkit-2.2.18-1.noarch
--> Processing Dependency: perl(DBI) >= 1.13 for package: percona-toolkit-2.2.18-1.noarch
--> Running transaction check
---> Package perl-DBD-MySQL55.x86_64 0:4.023-5.23.amzn1 will be installed
---> Package perl-DBI.x86_64 0:1.627-4.8.amzn1 will be installed
--> Processing Dependency: perl(RPC::PlClient) >= 0.2000 for package: perl-DBI-1.627-4.8.amzn1.x86_64
--> Processing Dependency: perl(RPC::PlServer) >= 0.2001 for package: perl-DBI-1.627-4.8.amzn1.x86_64
--> Running transaction check
---> Package perl-PlRPC.noarch 0:0.2020-14.7.amzn1 will be installed
--> Processing Dependency: perl(Net::Daemon) >= 0.13 for package: perl-PlRPC-0.2020-14.7.amzn1.noarch
--> Processing Dependency: perl(Net::Daemon::Test) for package: perl-PlRPC-0.2020-14.7.amzn1.noarch
--> Processing Dependency: perl(Net::Daemon::Log) for package: perl-PlRPC-0.2020-14.7.amzn1.noarch
--> Running transaction check
---> Package perl-Net-Daemon.noarch 0:0.48-5.5.amzn1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================
 Package               Arch        Version                  Repository                   Size
==============================================================================================
Installing:
 percona-toolkit       noarch      2.2.18-1                 percona-release-noarch      1.7 M
Installing for dependencies:
 perl-DBD-MySQL55      x86_64      4.023-5.23.amzn1         amzn-main                   149 k
 perl-DBI              x86_64      1.627-4.8.amzn1          amzn-main                   855 k
 perl-Net-Daemon       noarch      0.48-5.5.amzn1           amzn-main                    58 k
 perl-PlRPC            noarch      0.2020-14.7.amzn1        amzn-main                    39 k

Transaction Summary
==============================================================================================
Install  1 Package (+4 Dependent packages)

Total download size: 2.7 M
Installed size: 4.1 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/latest/percona-release-noarch/packages/percona-toolkit-2.2.18-1.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID cd2efd2a: NOKEY
Public key for percona-toolkit-2.2.18-1.noarch.rpm is not installed
(1/5): percona-toolkit-2.2.18-1.noarch.rpm                             | 1.7 MB     00:01     
(2/5): perl-DBD-MySQL55-4.023-5.23.amzn1.x86_64.rpm                    | 149 kB     00:00     
(3/5): perl-DBI-1.627-4.8.amzn1.x86_64.rpm                             | 855 kB     00:00     
(4/5): perl-Net-Daemon-0.48-5.5.amzn1.noarch.rpm                       |  58 kB     00:00     
(5/5): perl-PlRPC-0.2020-14.7.amzn1.noarch.rpm                         |  39 kB     00:00     
----------------------------------------------------------------------------------------------
Total                                                         1.3 MB/s | 2.7 MB  00:00:02     
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
Importing GPG key 0xCD2EFD2A:
 Userid     : "Percona MySQL Development Team <mysql-dev@percona.com>"
 Fingerprint: 430b df5c 56e7 c94e 848e e60c 1c4c bdcd cd2e fd2a
 Package    : percona-release-0.1-3.noarch (@/percona-release-0.1-3.noarch)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-Percona
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : perl-Net-Daemon-0.48-5.5.amzn1.noarch                                      1/5 
  Installing : perl-PlRPC-0.2020-14.7.amzn1.noarch                                        2/5 
  Installing : perl-DBI-1.627-4.8.amzn1.x86_64                                            3/5 
  Installing : perl-DBD-MySQL55-4.023-5.23.amzn1.x86_64                                   4/5 
  Installing : percona-toolkit-2.2.18-1.noarch                                            5/5 
  Verifying  : percona-toolkit-2.2.18-1.noarch                                            1/5 
  Verifying  : perl-DBD-MySQL55-4.023-5.23.amzn1.x86_64                                   2/5 
  Verifying  : perl-PlRPC-0.2020-14.7.amzn1.noarch                                        3/5 
  Verifying  : perl-Net-Daemon-0.48-5.5.amzn1.noarch                                      4/5 
  Verifying  : perl-DBI-1.627-4.8.amzn1.x86_64                                            5/5 

Installed:
  percona-toolkit.noarch 0:2.2.18-1                                                           

Dependency Installed:
  perl-DBD-MySQL55.x86_64 0:4.023-5.23.amzn1       perl-DBI.x86_64 0:1.627-4.8.amzn1          
  perl-Net-Daemon.noarch 0:0.48-5.5.amzn1          perl-PlRPC.noarch 0:0.2020-14.7.amzn1      

Complete!

If you want to install via RPM instead, download the latest package from the Percona downloads page and install it with rpm -ivh <package-name>.
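
A rough sketch of that flow (the URL and version below are placeholders, not real links; pick the current package from the Percona downloads page):

[root@ip-10-0-1-220 ravi]# wget <link-to-latest-percona-toolkit-rpm>
[root@ip-10-0-1-220 ravi]# rpm -ivh percona-toolkit-<version>.noarch.rpm

Note that installing with rpm directly does not pull in the Perl dependencies the way yum does, so they may need to be installed separately.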

How to setup AWS RDS CLI

AWS provides its general CLI tools, but a few RDS functions don't work through them, so the dedicated AWS RDS CLI is very helpful for those.

Download the RDS CLI:
[root@server downloads]# wget http://s3.amazonaws.com/rds-downloads/RDSCli.zip

Unzip the downloaded file and copy it to the desired location:
[root@server downloads]# unzip RDSCli.zip
[root@server downloads]# cp -r RDSCli-1.19.004 /opt/aws/apitools/rds
[root@server downloads]# cd /opt/aws/apitools/rds/bin

Check RDS CLI version:
[root@server bin]# ./rds-version 
Relational Database Service CLI version 1.19.004 (API 2014-10-31)

These commands can be added to the system PATH so you can run them from anywhere:
[root@server bin]# export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/apitools/rds/bin:/usr/local/bin

A credentials file also needs to be created so that the commands can authenticate with the right permissions; exporting its path as an environment variable means it doesn't have to be passed on every command.
[root@server  rds]# cat credential
AWSAccessKeyId=AKIXXXXXXXXXXXXXXXSYA
AWSSecretKey=FeIXXXXXXXXXXXXXXXXXXXXXXXX06BB
[root@server  rds]# export AWS_CREDENTIAL_FILE=/opt/aws/apitools/rds/credential

If you don't want to set the credentials file in an environment variable, it can be passed on each command instead with the --aws-credential-file parameter followed by the path to the credentials file.
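
To make these settings persist across shell sessions, they can be appended to the shell profile (a minimal sketch, using the same paths as above):

[root@server bin]# echo 'export PATH=$PATH:/opt/aws/apitools/rds/bin' >> ~/.bash_profile
[root@server bin]# echo 'export AWS_CREDENTIAL_FILE=/opt/aws/apitools/rds/credential' >> ~/.bash_profile
[root@server bin]# source ~/.bash_profile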

Wednesday, August 10, 2016

Script to add logs to Logentries for a shared hosting web server.

Logentries is a very good tool for storing and analyzing logs in a central location, but when we are shipping logs from an Nginx/Apache shared hosting environment it gets complex, as we need to tag each log with the host it belongs to. I am using rsyslog to forward my logs to Logentries, as it gives me more flexibility and all the usual rsyslog functionality still works.

First we need to create a template configuration file, which the script will use to generate the per-site rsyslog files. It contains the Logentries token (secret key) of the log to which entries need to be forwarded.

For access log:
[root@ip-10-0-1-220 ravi]# cat access-vanila 
$Modload imfile

$InputFileName access-log-location
$InputFileTag access-tag
$InputFileStateFile filestate-tag
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor

# Only entered once in case of following multiple files
$InputFilePollInterval 1

$template filestate-tag,"6fb8xxxxxxxxxxxxxxxxxxxxxxe8ed %HOSTNAME% %syslogtag% %msg%\n"
if $programname == 'access-tag' then @@data.logentries.com:10000;filestate-tag
& ~

For error log:
[root@ip-10-0-1-220 ravi]# cat error-vanila 
$Modload imfile

$InputFileName error-log-location
$InputFileTag error-tag
$InputFileStateFile filestate-tag
$InputFileSeverity info
$InputFileFacility local7
$InputRunFileMonitor

# Only entered once in case of following multiple files
$InputFilePollInterval 1

$template filestate-tag,"4f71xxxxxxxxxxxxxxxxxxxxxxx138 %HOSTNAME% %syslogtag% %msg%\n"
if $programname == 'error-tag' then @@data.logentries.com:10000;filestate-tag
& ~

Now use the following script to create rsyslog configurations to send logs to Logentries.
[root@ip-10-0-1-220 ravi]# cat logscript.sh 
#!/bin/bash
#Script to add hosts to Logentries
#By Ravi Gadgil

#Path where nginx hosts configurations are placed /etc/nginx/sites-enabled/sites/
for j in `ls /etc/nginx/sites-enabled/sites/`; do cat /etc/nginx/sites-enabled/sites/$j | grep -m 1 access; cat /etc/nginx/sites-enabled/sites/$j | grep -m 1 error ; done | uniq  | grep 'teamieapp\|theteamie' > /home/ravi/data.txt

#Create files listing the access and error logs; to skip any log, put its name in place of 'avoid' in the grep -v below.
cat data.txt | grep access.log | grep -v 'avoid' | awk '{ print $2 }' | cut -d';' -f1 > /home/ravi/accessfiles.txt
cat data.txt | grep error.log | grep -v 'avoid' | awk '{ print $2 }' | cut -d';' -f1 > /home/ravi/errorfiles.txt

#To create unique Filestate Tag in rsyslog.
COUNTER=50

#To create access log entries.
for i in `cat /home/ravi/accessfiles.txt`
do
logfileaccess=$( echo -e "$i" )
echo -e "Access Log file is : $logfileaccess"

accesstag=$(echo -e $logfileaccess | cut -d'/' -f5 | sed 's/.log//g' | sed "s/_/-/g" | sed "s/\./-/g")
echo -e "Access Tag is : $accesstag"

COUNTER=$[$COUNTER +1]

filestate=$(echo -e "nginx$COUNTER")
echo -e "File state is : $filestate"

cp /home/ravi/access-vanila /home/ravi/tempconf
sed -i "s#access-log-location#$logfileaccess#g" /home/ravi/tempconf
sed -i "s/access-tag/$accesstag/g" /home/ravi/tempconf
sed -i "s/filestate-tag/$filestate/g" /home/ravi/tempconf
mv /home/ravi/tempconf /home/ravi/conf/$accesstag.conf

done

#To create error log entries.
for i in `cat /home/ravi/errorfiles.txt`
do

logfileerror=$( echo -e "$i" )
echo -e "Error Log file is : $logfileerror"

errortag=$(echo -e $logfileerror | cut -d'/' -f5 | sed 's/.log//g' | sed "s/_/-/g" | sed "s/\./-/g")
echo -e "Error Tag is : $errortag"

COUNTER=$[$COUNTER +1]

filestate=$(echo -e "nginx$COUNTER")
echo -e "File state is : $filestate"

cp /home/ravi/error-vanila /home/ravi/tempconf
sed -i "s#error-log-location#$logfileerror#g" /home/ravi/tempconf
sed -i "s/error-tag/$errortag/g" /home/ravi/tempconf
sed -i "s/filestate-tag/$filestate/g" /home/ravi/tempconf
mv /home/ravi/tempconf /home/ravi/conf/$errortag.conf

done

Note:
1. Your log file names should follow the site_access.log and site_error.log pattern; if not, adjust the script accordingly.
2. The script was written to run from the /home/ravi directory, so adjust the paths to your setup.
3. Once all configurations are created, add them to your rsyslog directory and restart the service (see the sketch after these notes).
4. Rsyslog can only forward 100 files, so keep that in mind and check /var/log/messages for any errors.
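
For the last two notes (the include directory is the usual /etc/rsyslog.d/, adjust if your distribution differs), the generated configurations can be copied over and the service restarted:

[root@ip-10-0-1-220 ravi]# cp /home/ravi/conf/*.conf /etc/rsyslog.d/
[root@ip-10-0-1-220 ravi]# service rsyslog restart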


Friday, May 27, 2016

How to install python2.7 with pip2.7

Python is one of the most popular and powerful languages around, and the following steps can be used to install it or update it to version 2.7.

Installing the Python2.7.
[root@ip-10-0-1-55 ~]# yum install python27 python27-devel
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10

If it's still showing the older version, set the new version as the default.
[root@ip-10-0-1-55 ~]# alternatives --config python

There are 2 programs which provide 'python'.
 + 1           /usr/bin/python2.6
*  2           /usr/bin/python2.7

Enter to keep the current selection[+], or type selection number: 2
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10

Installing the pip2.7.
[root@ip-10-0-1-55 ~]# wget https://bootstrap.pypa.io/get-pip.py
--2016-05-26 11:10:33--  https://bootstrap.pypa.io/get-pip.py
Resolving bootstrap.pypa.io (bootstrap.pypa.io)... 103.245.222.175
Connecting to bootstrap.pypa.io (bootstrap.pypa.io)|103.245.222.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1524722 (1.5M) [text/x-python]
Saving to: ‘get-pip.py’

100%[=====================================================================>] 1,524,722   --.-K/s   in 0.09s   

2016-05-26 11:10:33 (16.2 MB/s) - ‘get-pip.py’ saved [1524722/1524722]
[root@ip-10-0-1-55 ~]# python2.7 get-pip.py
Collecting pip
  Downloading pip-8.1.2-py2.py3-none-any.whl (1.2MB)
    100% |████████████████████████████████| 1.2MB 675kB/s 
Collecting wheel
  Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
    100% |████████████████████████████████| 71kB 7.1MB/s 
Installing collected packages: pip, wheel
Successfully installed pip-8.1.2 wheel-0.29.0

[root@ip-10-0-1-55 dist-packages]# which pip2.7
/usr/bin/pip2.7

Installing dependencies or packages from pip2.7.
[root@ip-10-0-1-55 ~]# pip2.7 install asciitree
Collecting asciitree
  Downloading asciitree-0.3.1.tar.gz
Building wheels for collected packages: asciitree
  Running setup.py bdist_wheel for asciitree ... done
  Stored in directory: /root/.cache/pip/wheels/10/14/df/88531e6ba23be75397d69921e580464f2850b6f45e982ac644
Successfully built asciitree
Installing collected packages: asciitree
Successfully installed asciitree-0.3.1


How to create and remove swap partition in Linux

If our server is having memory issues and we want more memory without upgrading the physical RAM, swap is a very good option. It's slower than physical RAM, but it gets the job done, and if it's created on faster disks the results are quite good.

Create a swap file wherever you want it. I am creating it at /swap with a size of 2 GB.
[root@ip-10-0-1-38 /]# dd if=/dev/zero of=/swap bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 54.8004 s, 39.2 MB/s
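
The transcript here goes straight to swapon; on most systems the file first needs to be initialized as swap space with mkswap, so add this step before enabling it:

[root@ip-10-0-1-38 /]# mkswap /swap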

Update the permissions of /swap partition to 600.
[root@ip-10-0-1-38 /]# swapon /swap 
swapon: /swap: insecure permissions 0644, 0600 suggested.
[root@ip-10-0-1-38 /]# chmod 600 /swap 

Start the Swap on /swap partition.
[root@ip-10-0-1-38 /]# swapon /swap 

To check if swap has started or not.
[root@ip-10-0-1-38 /]# swapon -s
Filename                                Type            Size    Used    Priority
/swap                                   file    2097148 0       -1

[root@ip-10-0-1-38 /]# free -m
             total       used       free     shared    buffers     cached
Mem:          3752       2353       1399          0         39       2184
-/+ buffers/cache:        128       3624
Swap:         2047          0       2047

Add the entry in /etc/fstab so that swap can start on server boot.
/swap swap swap defaults 0 0

If you face the following error, turning swap off and on again can solve the issue.
[root@ip-10-0-1-38 /]# swapon /swap 
swapon: /swap: swapon failed: Device or resource busy

[root@ip-10-0-1-38 /]# swapoff /swap 
[root@ip-10-0-1-38 /]# swapon /swap
[root@ip-10-0-1-38 /]# free -m
             total       used       free     shared    buffers     cached
Mem:          3752       2353       1399          0         39       2184
-/+ buffers/cache:        128       3624
Swap:         2047          0       2047

To remove the swap file, the following steps can be used.
[root@ip-10-0-1-38 ec2-user]# swapoff /swap
[root@ip-10-0-1-38 ec2-user]# rm -rf /swap 


Thursday, May 26, 2016

How to install SyntaxNet in Linux

SyntaxNet is an open-source neural network framework for TensorFlow that provides a foundation for Natural Language Understanding (NLU) systems.
The following steps can be used to install it on Linux servers.

It requires Python 2.7, so install it if you don't have it already.
[root@ip-10-0-1-55 ~]# yum install python27 python27-devel
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10

If it's still showing the older version, set the new version as the default.
[root@ip-10-0-1-55 ~]# alternatives --config python

There are 2 programs which provide 'python'.
 + 1           /usr/bin/python2.6
*  2           /usr/bin/python2.7

Enter to keep the current selection[+], or type selection number: 2
[root@ip-10-0-1-55 ~]# python --version
Python 2.7.10

Install the java1.8 version.
[root@ip-10-0-1-55 ~]# yum install java-1.8.0-openjdk*

If an old Java version is still showing up, switch to the new one with the following command.
[root@ip-10-0-1-55 ~]# alternatives --config java

Make sure that JAVA_HOME points to the Java directory where javac is located, otherwise you will get a "java not found" error. In my case it was /usr/lib/jvm/java-1.8.0-openjdk.x86_64/, so I set it to that location.
[root@ip-10-0-1-55 ~]# export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk.x86_64/
[root@ip-10-0-1-55 ~]# echo $JAVA_HOME
/usr/lib/jvm/java-1.8.0-openjdk.x86_64/
[root@ip-10-0-1-55 ~]# java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

Installing Bazel. As I am installing on Linux, I am taking the Linux installer from the Bazel GitHub releases.
[root@ip-10-0-1-55 ~]# wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh
[root@ip-10-0-1-55 ~]# chmod +x bazel-0.2.2b-installer-linux-x86_64.sh
[root@ip-10-0-1-55 ~]# ./bazel-0.2.2b-installer-linux-x86_64.sh 

Installing the swig package.
[root@ip-10-0-1-55 ~]# yum install swig

If you're using Python 2.7, then you need to install the Python packages with pip2.7 as well, so first we need to add that. If you're not using Python 2.7, a normal pip install will work fine.
[root@ip-10-0-1-55 ~]# wget https://bootstrap.pypa.io/get-pip.py
--2016-05-26 11:10:33--  https://bootstrap.pypa.io/get-pip.py
Resolving bootstrap.pypa.io (bootstrap.pypa.io)... 103.245.222.175
Connecting to bootstrap.pypa.io (bootstrap.pypa.io)|103.245.222.175|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1524722 (1.5M) [text/x-python]
Saving to: ‘get-pip.py’

100%[=====================================================================>] 1,524,722   --.-K/s   in 0.09s   

2016-05-26 11:10:33 (16.2 MB/s) - ‘get-pip.py’ saved [1524722/1524722]
[root@ip-10-0-1-55 ~]# python2.7 get-pip.py
Collecting pip
  Downloading pip-8.1.2-py2.py3-none-any.whl (1.2MB)
    100% |████████████████████████████████| 1.2MB 675kB/s 
Collecting wheel
  Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
    100% |████████████████████████████████| 71kB 7.1MB/s 
Installing collected packages: pip, wheel
Successfully installed pip-8.1.2 wheel-0.29.0

[root@ip-10-0-1-55 dist-packages]# which pip2.7
/usr/bin/pip2.7

Install the required packages protobuf, asciitree, numpy.
[root@ip-10-0-1-55 ~]# pip2.7 install -U protobuf==3.0.0b2
Collecting protobuf==3.0.0b2
  Downloading protobuf-3.0.0b2-py2.py3-none-any.whl (326kB)
    100% |████████████████████████████████| 327kB 2.2MB/s 
Collecting six>=1.9 (from protobuf==3.0.0b2)
  Downloading six-1.10.0-py2.py3-none-any.whl
Collecting setuptools (from protobuf==3.0.0b2)
  Downloading setuptools-21.2.1-py2.py3-none-any.whl (509kB)
    100% |████████████████████████████████| 512kB 1.6MB/s 
Installing collected packages: six, setuptools, protobuf
  Found existing installation: six 1.8.0
    Uninstalling six-1.8.0:
      Successfully uninstalled six-1.8.0
  Found existing installation: setuptools 12.2
    Uninstalling setuptools-12.2:
      Successfully uninstalled setuptools-12.2
Successfully installed protobuf-3.0.0b2 setuptools-21.2.1 six-1.10.0
[root@ip-10-0-1-55 ~]# pip2.7 install asciitree
Collecting asciitree
  Downloading asciitree-0.3.1.tar.gz
Building wheels for collected packages: asciitree
  Running setup.py bdist_wheel for asciitree ... done
  Stored in directory: /root/.cache/pip/wheels/10/14/df/88531e6ba23be75397d69921e580464f2850b6f45e982ac644
Successfully built asciitree
Installing collected packages: asciitree
Successfully installed asciitree-0.3.1
[root@ip-10-0-1-55 ~]# pip2.7 install numpy
Collecting numpy
  Downloading numpy-1.11.0-cp27-cp27mu-manylinux1_x86_64.whl (15.3MB)
    100% |████████████████████████████████| 15.3MB 48kB/s 
Installing collected packages: numpy
Successfully installed numpy-1.11.0
[root@ip-10-0-1-55 ~]# pip freeze | grep protobuf
protobuf==3.0.0b2
[root@ip-10-0-1-55 ~]# pip freeze | grep numpy
numpy==1.11.0
[root@ip-10-0-1-55 ~]# pip freeze | grep asciitree
asciitree==0.3.1



Building the SyntaxNet from Git.
[root@ip-10-0-1-55 ~]# git clone --recursive https://github.com/tensorflow/models.git 
[root@ip-10-0-1-55 ~]# cd models/syntaxnet/tensorflow 
[root@ip-10-0-1-55 ~]# ./configure 
[root@ip-10-0-1-55 ~]# cd .. 
[root@ip-10-0-1-55 ~]# bazel test syntaxnet/... util/utf8/...

Testing the installation.
[root@ip-10-0-1-55 ~]# echo 'Ravi' | syntaxnet/demo.sh

Thursday, May 19, 2016

Take dump of all the databases from mysql server

To take a MySQL dump of all the databases, the following script can be used; it works for a normal MySQL server as well as Amazon RDS.
[root@ip-10-0-1-231 ravi]# cat dump.sh
#!/bin/bash
#Script to get dump of all the databases within the server.

USER="root"
PASSWORD="dbpassword"

databases=`mysql -h prod-XXXXXXXXXXXX.rds.amazonaws.com -u $USER -p$PASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`

for db in $databases; do
    if [[ "$db" != "information_schema" ]] && [[ "$db" != "performance_schema" ]] && [[ "$db" != "mysql" ]] && [[ "$db" != _* ]] ; then
        echo "Dumping database: $db"
        mysqldump -h prod-XXXXXXXXXXXXXX.rds.amazonaws.com -u $USER -p$PASSWORD --databases $db > `date +%Y%m%d`.$db.sql
       # gzip $OUTPUT/`date +%Y%m%d`.$db.sql
    fi
done
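
To restore one of the resulting dumps later (the host and file name are just examples following the naming used above; the dump already contains the CREATE DATABASE statement because of --databases):

[root@ip-10-0-1-231 ravi]# mysql -h prod-XXXXXXXXXXXX.rds.amazonaws.com -u root -p < 20160519.mydb.sql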

How to add numbers via bash

There are cases where we need to add up numbers or the output of our commands in bash, and in those cases the following commands are helpful.

[ec2-user@ip-10-0-1-38 ~]$ cat list
45
78
56
67
34
56

To do the sum:
[ec2-user@ip-10-0-1-38 ~]$ cat list | awk '{ SUM += $1} END { print SUM }'
336

The same approach can be used to sum any command output, for example from grep.
[ec2-user@ip-10-0-1-38 ~]$ cat s3data.txt | grep Size | grep  Mi | awk '{ print $3 }' | awk '{ SUM += $1} END { print SUM }'
3405
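
The same sum can also be done in pure bash arithmetic, without awk, by reading the list file line by line:

[ec2-user@ip-10-0-1-38 ~]$ sum=0; while read n; do sum=$((sum + n)); done < list; echo $sum
336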






Find size of S3 Buckets

To find the size of S3 buckets, we can use the following approaches:

First Method: via s3api cli
[root@ip-10-0-1-231 ravi]# aws s3api list-objects --bucket bucketname --output json --query "[sum(Contents[].Size), length(Contents[])]"
[
    30864102, 
    608
]

30864102: the total size in bytes.
608: the number of objects in the bucket.

Second Method : via s3 cli
[root@ip-10-0-1-231 ravi]# aws s3 ls s3://bucketname --recursive  | grep -v -E "(Bucket: |Prefix: |LastWriteTime|^$|--)" | awk 'BEGIN {total=0}{total+=$3}END{print total/1024/1024" MB"}'
29.4343 MB

Here bash commands are used to shape the output into the desired form.

Third Method : via s3 cli with parameters
[root@ip-10-0-1-231 ravi]# aws s3 ls s3://bucketname --recursive --human-readable --summarize
2016-05-04 11:32:00    7.6 KiB prompthooks0.py

Total Objects: 1
   Total Size: 7.6 KiB

--human-readable: this shows sizes already converted to KB, MB, GB, TB, etc.

To get the size of all the buckets in your S3 account, use the following script:
[root@ip-10-0-1-231 ravi]# cat s3size.sh 
#!/bin/bash
#Script to find the size of all S3 buckets.

aws s3 ls | awk '{ print $3 }' > /home/ec2-user/s3.list
for i in `cat /home/ec2-user/s3.list` ; do
echo -e "\nBucket Name: $i"
aws s3 ls s3://$i --recursive --human-readable --summarize | grep Total ; done

Output of the script:
[root@ip-10-0-1-231 ravi]# tail s3data.txt
Bucket Name: bucket1
Total Objects: 17676
   Total Size: 67.3 GiB

Bucket Name: bucket2
Total Objects: 1
   Total Size: 7.6 KiB

Bucket Name: bucket3
Total Objects: 5302
   Total Size: 3.5 GiB



Friday, March 11, 2016

How to run multiple commands on servers via Ansible

Ansible is a very powerful tool for managing servers centrally, and we can run commands on them from one location. There is a limitation, though: with the command module we can only run one command at a time and can't perform complex commands, so the following script is very helpful to overcome that.

We are creating a script and placing it in /usr/bin/ so that it can be run from any location.
[root@ip-10-0-1-231 ravi]# which prod-command.sh
/usr/bin/prod-command.sh

Following is the script used to run multiple commands on the servers:

[root@ip-10-0-1-231 ec2-user]# cat /usr/bin/prod-command.sh 
#!/bin/bash
#To run commands on server in group tag_Prod_VPC
#By Ravi Gadgil

echo -e "Running command on Production Server... "

for i in "$@" ; do ansible tag_Prod_VPC -u ec2-user -s -m shell -a "$i" ; done 

$@ : Expands to all arguments passed to the script, however many there are.
ansible : The Ansible command-line tool.
tag_Prod_VPC : The host group on which we want to run the commands.
-s : Run with sudo.
-m : Specifies which Ansible module to use.
shell : The module we are using, which runs the command through a shell and returns its output.
-a : The arguments (the command string) passed to the module.

Script output:
[root@ip-10-0-1-231 ec2-user]# prod-command.sh "id ; pwd ; echo hi"
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
/home/ec2-user
hi

We can run complex commands as well, like a for loop:
prod-command.sh 'mkdir /tmp/test/; cd /tmp/test/ ; for i in 1 2 3 4 5 ; do touch a$i ; done ; ls -lrt /tmp/test/'
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
total 0
-rw-r--r-- 1 root root 0 Mar 11 18:00 a1
-rw-r--r-- 1 root root 0 Mar 11 18:00 a3
-rw-r--r-- 1 root root 0 Mar 11 18:00 a2
-rw-r--r-- 1 root root 0 Mar 11 18:00 a5
-rw-r--r-- 1 root root 0 Mar 11 18:00 a4

[root@ip-10-0-1-231 ec2-user]# prod-command.sh 'ps -ef | grep nginx; echo "hello"' w 'w|grep sudo'
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
root     13627 13626  0 18:01 pts/3    00:00:00 /bin/sh -c ps -ef | grep nginx; echo "hello"
root     13629 13627  0 18:01 pts/3    00:00:00 grep nginx
root     20296     1  0 03:16 ?        00:00:00 nginx: master process /root/downloads/nginx-1.5.7/objs/nginx
apache   20298 20296  0 03:16 ?        00:01:18 nginx: worker process                 
apache   20299 20296  0 03:16 ?        00:01:13 nginx: worker process                 
apache   20300 20296  0 03:16 ?        00:01:13 nginx: worker process                 
apache   20301 20296  0 03:16 ?        00:01:16 nginx: worker process                 
apache   20305 20296  0 03:16 ?        00:00:00 nginx: cache manager process          
hello

52.X.X.X | success | rc=0 >>
 18:01:25 up 64 days,  5:38,  4 users,  load average: 8.34, 7.37, 7.73
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
nadeesha pts/0    bb121-6-206-231. 12:14    5:13m  0.08s  0.08s -bash
subu     pts/1    bb121-6-206-231. 15:24    2:36m  0.02s  0.02s -bash
subu     pts/2    bb121-6-206-231. 15:51    1:55m  0.02s  0.02s -bash
ec2-user pts/3    ec2-54-179-147-9 18:01    0.00s  0.08s  0.00s /bin/sh -c sudo

52.X.X.X | success | rc=0 >>
ec2-user pts/3    ec2-54-179-147-9 18:01    0.00s  0.06s  0.00s /bin/sh -c sudo


Note: I have one server in my host group "tag_Prod_VPC", so the commands are running on a single server. The same will work if there are multiple servers in the host group.


Thursday, March 10, 2016

How to check most memory utilization service in Linux

Linux has lots of tools such as free, top, htop, vmstat, etc. to show system utilization, but to list exactly which processes are using the most memory, in descending order, the following command is very helpful.
[root@ip-10-0-1-231 ravi]# ps aux --sort -rss
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
rundeck   1308 28.6 61.2 2092252 371628 ?      Ssl  10:33   0:56 /usr/bin/java -Djava.security.auth.login.con
jenkins   1272 13.7 23.6 1301096 143400 ?      Ssl  10:33   0:27 /etc/alternatives/java -Dcom.sun.akuma.Daemo
root      1561  0.0  0.4 181944  2492 pts/0    S    10:35   0:00 sudo su
root      1229  0.0  0.3  91012  2256 ?        Ss   10:33   0:00 sendmail: accepting connections
root      1563  0.0  0.3 115432  2068 pts/0    S    10:35   0:00 bash
root      1549  0.0  0.2  73688  1796 ?        Ss   10:35   0:00 ssh: /root/.ansible/cp/ansible-ssh-52.74.164
root      1519  0.0  0.2 113428  1772 ?        Ss   10:35   0:00 sshd: ec2-user [priv]
smmsp     1237  0.0  0.2  82468  1608 ?        Ss   10:33   0:00 sendmail: Queue runner@01:00:00 for /var/spo
root      1562  0.0  0.2 156296  1604 pts/0    S    10:35   0:00 su
ec2-user  1521  0.0  0.2 113428  1540 ?        S    10:35   0:00 sshd: ec2-user@pts/0
ec2-user  1522  0.0  0.2 115300  1264 pts/0    Ss   10:35   0:00 -bash
root      1583  0.0  0.1 117168  1172 pts/0    R+   10:36   0:00 ps aux --sort -rss
root      1197  0.0  0.1  79680   968 ?        Ss   10:33   0:00 /usr/sbin/sshd
ntp       1214  0.0  0.1  29280   968 ?        Ss   10:33   0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
root      1288  0.0  0.1 119500   948 ?        Ss   10:33   0:00 crond
root      1508  0.0  0.1  52616   844 ?        Ss   10:35   0:00 ssh-agent -s

As we can see, it shows the exact percentage of memory used by each individual process, so we can narrow down the service causing the issue and take the required action.
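
If only the top few consumers are of interest, the same command can be trimmed with head (the header line plus the ten largest processes):

[root@ip-10-0-1-231 ravi]# ps aux --sort -rss | head -n 11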

How to add multiple users in Linux with or without password

To add multiple users in Linux, with or without passwords, the following steps can be used.

Create a file containing the list of users to be added:
[root@localhost ravi]# cat add.user 
user1
user2
user3
user4
user5

Run the following command to add users:
[root@localhost ravi]# for i in `cat add.user` ; do useradd $i ; done

i : the loop variable that takes each value from add.user.
add.user : the file containing the names of the users you want to add.
useradd : the command used to add a user.

To check whether the users were created:
[root@localhost ravi]# for i in `cat add.user` ; do id $i ; done 
uid=501(user1) gid=501(user1) groups=501(user1) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=502(user2) gid=502(user2) groups=502(user2) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=503(user3) gid=503(user3) groups=503(user3) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=504(user4) gid=504(user4) groups=504(user4) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=505(user5) gid=505(user5) groups=505(user5) context=root:system_r:unconfined_t:SystemLow-SystemHigh

If you also want to assign a custom password to each user while creating them:
[root@localhost ravi]# for i in `cat add.user` ; do useradd $i ; echo Pass$i | passwd $i --stdin ; done 
Changing password for user user1.
passwd: all authentication tokens updated successfully.
Changing password for user user2.
passwd: all authentication tokens updated successfully.
Changing password for user user3.
passwd: all authentication tokens updated successfully.
Changing password for user user4.
passwd: all authentication tokens updated successfully.
Changing password for user user5.
passwd: all authentication tokens updated successfully.

The above command assigns each user a password made of "Pass" plus the username; for example, user1 will get the password "Passuser1". You can modify this to your needs or assign a single password to all users.
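
If the same list of users ever needs to be removed again, the loop works just as well with userdel (-r also deletes the home directories):

[root@localhost ravi]# for i in `cat add.user` ; do userdel -r $i ; done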



Thursday, March 3, 2016

Run mysqltuner on AWS RDS

To tune an AWS MySQL RDS instance, the following script is very helpful, as it can find quite a few flaws in the DB and provide good recommendations to make RDS perform better.

To download the script:
wget http://mysqltuner.pl/ -O mysqltuner.pl
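
Make the downloaded script executable before running it:

[root@ip-10-0-1-55 ravi]# chmod +x mysqltuner.pl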

Run the script against RDS, providing the amount of memory (in MB) allocated to the DB server:
[root@ip-10-0-1-55 ravi]# ./mysqltuner.pl --host rds-staging.DB.com --user root --password dbpassword --forcemem 75000
 >>  MySQLTuner 1.6.4 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[--] Performing tests on rds-staging.DB.com:3306
Please enter your MySQL administrative password: 
[--] Skipped version check for MySQLTuner script
[--] Assuming 75000 MB of physical memory
[!!] Assuming 0 MB of swap space (use --forceswap to specify)
[OK] Currently running supported MySQL version 5.6.19-log

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MRG_MYISAM 
[--] Data in MyISAM tables: 51M (Tables: 112)
[--] Data in InnoDB tables: 187G (Tables: 42377)
[!!] Total fragmented tables: 13360

-------- Security Recommendations  -------------------------------------------
[OK] There are no anonymous accounts for any database users
[!!] User 'haproxy_check@%' has no password set.
[!!] User 'haproxy@%' has user name as password.
[!!] User 'sup@%' has user name as password.
[!!] User 'rdsrepladmin@%' hasn't specific host restriction.
[!!] User 'readonly@%' hasn't specific host restriction.
[!!] User 'root@%' hasn't specific host restriction.
[!!] There is no basic password file list!

-------- CVE Security Recommendations  ---------------------------------------
[--] Skipped due to --cvefile option undefined

-------- Performance Metrics -------------------------------------------------
[--] Up for: 28d 2h 44m 53s (74M q [30.512 qps], 697K conn, TX: 341B, RX: 31B)
[--] Reads / Writes: 84% / 16%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Total buffers: 6.0G global + 1.5M per thread (609 max threads)
[OK] Maximum reached memory usage: 6.1G (8.33% of installed RAM)
[OK] Maximum possible memory usage: 6.9G (9.47% of installed RAM)
[!!] Slow queries: 6% (5M/74M)
[OK] Highest usage of available connections: 5% (36/609)
[OK] Aborted connections: 0.00%  (11/697772)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (3K temp sorts / 5M sorts)
[!!] Joins performed without indexes: 197309
[OK] Temporary tables created on disk: 11% (296K on disk / 2M total)
[OK] Thread cache hit rate: 99% (2K created / 697K connections)
[!!] Table cache hit rate: 0% (2K open / 4M opened)
[OK] Open file limit used: 0% (43/65K)
[OK] Table locks acquired immediately: 99% (81M immediate / 81M locks)
[OK] Binlog cache memory access: 98.07% ( 9489125 Memory / 9675420 Total)

-------- MyISAM Metrics ------------------------------------------------------
[!!] Key buffer used: 18.4% (3M used / 16M cache)
[OK] Key buffer size / total MyISAM indexes: 16.0M/23.1M
[OK] Read Key buffer hit rate: 99.6% (6M cached / 25K reads)
[OK] Write Key buffer hit rate: 98.9% (159K cached / 1K writes)

-------- InnoDB Metrics ------------------------------------------------------
[--] InnoDB is enabled.
[!!] InnoDB buffer pool / data size: 6.0G/187.2G
[!!] InnoDB buffer pool instances: 4
[OK] InnoDB Used buffer: 98.96% (389108 used/ 393216 total)
[OK] InnoDB Read buffer efficiency: 100.00% (4599951975 hits/ 4600141604 total)
[!!] InnoDB Write Log efficiency: 78.63% (36994338 hits/ 47050824 total)
[!!] InnoDB log waits: 0.00% (65 waits / 10056486 writes)

-------- ThreadPool Metrics --------------------------------------------------
[--] ThreadPool stat is disabled.

-------- AriaDB Metrics ------------------------------------------------------
[--] AriaDB is disabled.

-------- TokuDB Metrics ------------------------------------------------------
[--] TokuDB is disabled.

-------- Galera Metrics ------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -------------------------------------------------
[--] No replication slave(s) for this server.
[--] This is a standalone server..

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    Set up a Password for user with the following SQL statement ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
    Set up a Secure Password for user@host ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
    Restrict Host for user@% to user@SpecificDNSorIp
    Adjust your join queries to always utilize indexes
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
    Beware that open_files_limit (65535) variable 
    should be greater than table_open_cache ( 2000)
Variables to adjust:
    query_cache_size (>= 8M)
    join_buffer_size (> 256.0K, or always use indexes with joins)
    table_open_cache (> 2000)
    innodb_buffer_pool_size (>= 187G) if possible.
    innodb_buffer_pool_instances(=6)
    innodb_log_buffer_size (>= 8M)

How to do automated NFS failover in AWS via script

NFS is always considered a single point of failure; to overcome that we can use clusters, GlusterFS, DRBD, etc., but if you want a simple scripted failover, the following script can be very helpful.

The scenario this script is written for is:
1. An Elastic IP is attached to the NFS server.
2. lsync is used to keep the main and secondary servers in sync.
3. The main server is pinged every minute, and if 5 consecutive pings fail, the failover is performed.
4. The netfs service is restarted on all the client servers to avoid stale NFS mount errors.

 Script:
#!/bin/bash
#Script to make secondary server as primary NFS storage server.
#By Ravi Gadgil.

#To check 54.254.X.X is up or not for 5 consecutive times

count=$(ping -c 5 54.254.X.X | grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }')
  if [ $count -eq 0 ]; then
    # 100% failed 
    echo "Host : 54.254.X.X is down (ping failed) at $(date)"

#To disassociate IP from primary NFS storage server
aws ec2 disassociate-address --public-ip 54.254.X.X

#To associate IP to secondary NFS server
aws ec2 associate-address --instance-id i-1cxxxx34 --public-ip 54.254.X.X

#To give time before restarting service in auto scaling hosts
sleep 10

#To get the instance ID's of servers running in auto scaling group on which NFS clients are mounted.
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name autoscaling-group | grep i- | awk '{print $4}' > /tmp/instances.txt

#To get the Hostname of servers running in NFS client auto scaling group
for inc in `cat /tmp/instances.txt`; do aws ec2 describe-instances --instance-ids $inc | grep -ir publicdns | awk '{print $4}'; done > /tmp/runninghostname.txt

#To run command on the Hosts running in NFS client auto scaling group
for host in `cat /tmp/runninghostname.txt` ; do ssh -oStrictHostKeyChecking=no -i /home/ec2-user/key.pem -t ec2-user@$host 'sudo umount -l /nfs-data;sudo /etc/init.d/netfs restart ; sudo df -h' ; done

  else
    echo "Host : 54.254.X.X is up and running"
  fi
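
Point 3 above implies the check runs every minute; a crontab entry along these lines would do that (the script path and log file are assumptions, adjust to wherever you keep the script):

* * * * * /usr/scripts/nfs-failover.sh >> /var/log/nfs-failover.log 2>&1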


 

Thursday, February 25, 2016

Script to monitor sites via response code

To check the availability of our sites, the following script can be very helpful, as it lets us check each site's HTTP status code. I have created a file with the list of sites to monitor and then pass that file to the script.

First create a file listing the sites that need to be monitored:
 [root@ip-10-0-1-55 ravi]# cat sitelist.txt 
https://www.theteamie.com
https://nyp-trial.theteamie.com
http://samsung.theteamie.com

Then use the following script to monitor the sites and check the returned status.
Note: I am only considering status codes 200 and 301 as acceptable return codes; everything else is treated as an error.
#!/bin/bash
#By Ravi Gadgil
#Script to monitor sites using there return status

for i in `cat /home/ravi/sitelist.txt`;
do
echo -e "-----------------------------------------------------------------"
echo -e "$i is being checked"
res=`curl -I -s $i | grep HTTP/1.1 | awk {'print $2'}`
echo -e "$res"
    if [ $res -ne 200 ] && [ $res -ne 301 ];
        then
        echo "Error $res on $i"
        echo -e "----------------------------------------\nSiteName : $i \nStatus Code Returned : $res \n" >> /home/ravi/siteissue.txt

        else
        echo -e "$i is OK"
    fi
done

if [ -f /home/ravi/siteissue.txt ]
then
    echo -e "\n\n\nSites having issues:\n"
    cat /home/ravi/siteissue.txt
    rm -rf /home/ravi/siteissue.txt
fi


The output of the script will look like this:
[root@ip-10-0-1-55 ravi]# ./sitescheck.sh 
-----------------------------------------------------------------
https://www.theteamie.com is being checked
301
https://www.theteamie.com is OK
-----------------------------------------------------------------
https://nyp-trial.theteamie.com is being checked
503
Error 503 on https://nyp-trial.theteamie.com
-----------------------------------------------------------------
http://samsung.theteamie.com is being checked
504
Error 504 on http://samsung.theteamie.com



Sites having issues:

----------------------------------------
SiteName : https://nyp-trial.theteamie.com 
Status Code Returned : 503 

----------------------------------------
SiteName : http://samsung.theteamie.com 
Status Code Returned : 504 


Use siteissue.txt to mail yourself the affected sites, if there are any.
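
A minimal sketch of that mail step, which would need to go just before the rm in the script (the address is a placeholder and a working local MTA is assumed):

mail -s "Site check failures" admin@example.com < /home/ravi/siteissue.txt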


How to use crontab with examples

Crontab is one of the most useful facilities in Linux for automating tasks. It lets us run commands at specific intervals of time.

It takes the following five fields into consideration:
# Minute   Hour   Day of Month       Month          Day of Week        Command    
# (0-59)  (0-23)     (1-31)    (1-12 or Jan-Dec)  (0-6 or Sun-Sat)              

 .---------------- minute (0 - 59) 
 |  .------------- hour (0 - 23)
 |  |  .---------- day of month (1 - 31)
 |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ... 
 |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7)  OR sun,mon,tue,wed,thu,fri,sat 
 |  |  |  |  |
 *  *  *  *  *  <command to be executed>  

To edit cron for current user:
# crontab -e

To list cron for current user:
# crontab -l

To edit cron for any specific user:
# crontab -u ravi -e

To list cron for any specific user:
# crontab -u ravi -l

There are a few predefined strings that run a cron job at a specific time frame:
string         meaning
------         -------
@reboot        Run once, at startup.
@yearly        Run once a year, "0 0 1 1 *".
@annually      (same as @yearly)
@monthly       Run once a month, "0 0 1 * *".
@weekly        Run once a week, "0 0 * * 0".
@daily         Run once a day, "0 0 * * *".
@midnight      (same as @daily)
@hourly        Run once an hour, "0 * * * *".

Examples to run crontab:

To run cron at every boot of system:
@reboot   /usr/scripts/test.sh

To run cron at every minute:
* * * * * /usr/scripts/test.sh

To run cron at every 5 minutes:
 */5 * * * * /usr/scripts/test.sh

To run cron at specific minutes (it will run at minutes 2 and 5 of every hour):
2,5 * * * *  /usr/scripts/test.sh

To run cron every hour (it will run at minute 0 of every hour):
0 * * * *  /usr/scripts/test.sh

To run cron every 2 hours (at minute 0 of every second hour):
0 */2 * * * /usr/scripts/test.sh

To run cron within a specific interval of time (it will run every minute from 3 AM through 9 PM):
* 3-21 * * *  /usr/scripts/test.sh

To run cron at a specific time of year (it will run at 3 AM and 5 AM on 20th July):
0 3,5 20 7 *  /usr/scripts/test.sh

To run cron on a specific weekday (it will run at 1 AM every Sunday):
0 1 * * 0  /usr/scripts/test.sh




Wednesday, February 24, 2016

How to find daily Web server hits count

If we are running web servers in a shared or dedicated hosting setup and want to know the hits count per site sorted in descending order, the following script can be very helpful. We analyze the access logs located in the /var/log/nginx folder to get the daily hits counts.

If you want the daily hits count, the best time to run this is just before your log rotation happens, because after rotation the access logs are emptied and new entries start accumulating.

Script:
#!/bin/bash
#By Ravi Gadgil.
#Script to find daily hits count via web server access logs.

echo -e "Hits \t  Url"

for i in `find /var/log/nginx/ -name "*access.log"`;do echo $i | sed -e 's/_access.log/ /g' | sed -e 's/:/ /g'| cut -d'/' -f5 --output-delimiter=' ' | awk '{printf $0}'; grep -v 'jpg\|png\|jpeg\|gif\|js' $i | grep -ircn `date | awk ' { print $3 } '`; done | awk ' { print $2,"\011",$1 } ' | sort -nr 

Note: In my case the access logs are named domain.com_access.log, and requests for "jpg, png, jpeg, gif, js" are not counted; if you want to count them as well, remove the grep -v from the command.
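
To run it just before rotation, schedule the script a few minutes ahead of your log rotation time; for example, if rotation happens at 03:30, a crontab entry like this would work (the script path and times here are assumptions):

25 3 * * * /usr/scripts/hits-count.sh >> /var/log/daily-hits.log 2>&1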

Wednesday, February 10, 2016

Script to delete Server with attached EBS volumes.

The following script deletes a server along with all EBS volumes attached to it. In AWS it can be a tedious task to remove servers when EBS volumes are attached, as the volumes need to be removed manually, so this script can be a great help.

Note: To make it work you need to have your AWS CLI output format set to table; otherwise you need to make some changes to match your output type.

Script:
#!/bin/bash
#By Ravi Gadgil.
#Script to delete a server with its attached EBS volumes.

#Take input of server to be deleted.
echo -e "$1" > /tmp/imageid.txt

#Find EBS associated with server.
aws ec2 describe-instances --instance-ids `cat /tmp/imageid.txt` | grep vol | awk ' { print $4 }' > /tmp/vol.txt


echo -e "Following are the volume associated with it : `cat /tmp/vol.txt`:\n "
 
echo -e "Starting the termination of Server... \n"

#Terminating server 
aws ec2 terminate-instances --instance-ids `cat /tmp/imageid.txt`

echo -e "\nDeleting the associated Volumes.... \n"

#Deleting Volumes associated with Server
for i in `cat /tmp/vol.txt`;do aws ec2 delete-volume --volume-id $i ; done

Pass the server ID as a parameter to the script to see the magic.

Output:
[root@ip-10-0-1-55 ravi]# ./instanceremove.sh i-3fa5f317
Following are the volume associated with it : vol-514d0159
vol-93f2a39b
vol-81f5a489:
 
Starting the termination of Server... 

----------------------------
|    TerminateInstances    |
+--------------------------+
||  TerminatingInstances  ||
|+------------------------+|
||       InstanceId       ||
|+------------------------+|
||  i-3fa5f317            ||
|+------------------------+|
|||     CurrentState     |||
||+-------+--------------+||
||| Code  |    Name      |||
||+-------+--------------+||
|||  48   |  terminated  |||
||+-------+--------------+||
|||     PreviousState    |||
||+--------+-------------+||
|||  Code  |    Name     |||
||+--------+-------------+||
|||  80    |  stopped    |||
||+--------+-------------+||

Deleting the associated Volumes.... 


Script to delete AMI with attached snapshots.

The following script deletes an AMI along with all snapshots attached to it. In AWS it can be a tedious task to remove an AMI when snapshots are attached, as the snapshots need to be removed manually, so this script can be a great help.

Note: To make it work you need to have your AWS CLI output format set to table; otherwise you need to make some changes to match your output type.

Script :

#!/bin/bash
#By Ravi Gadgil.
#Script to delete ami with attached snapshots.

#Take input of AMI to be deleted.
echo -e "$1" > /tmp/imageid.txt

#Find snapshots associated with AMI.
aws ec2 describe-images --image-ids `cat /tmp/imageid.txt` | grep snap | awk ' { print $4 }' > /tmp/snap.txt

echo -e "Following are the snapshots associated with it : `cat /tmp/snap.txt`:\n "
 
 echo -e "Starting the Deregister of AMI... \n"

#Deregistering the AMI 
aws ec2 deregister-image --image-id `cat /tmp/imageid.txt`

echo -e "\nDeleting the associated snapshots.... \n"

#Deleting snapshots attached to AMI
for i in `cat /tmp/snap.txt`;do aws ec2 delete-snapshot --snapshot-id $i ; done

Pass the AMI ID as a parameter to the script to see the magic.

Script output:
[root@ip-10-0-1-55 ravi]# ./amiremove.sh ami-34d08d66
Following are the snapshots associated with it : snap-cba6b326
snap-c4a6b329
snap-c1a6b32c
snap-c2a6b32f:
 
Starting the Deregister of AMI... 


Deleting the associated snapshots.... 




Change user IAM password with AWS CLI.

To change the password of an IAM user in AWS, the following commands can be used.

First we need to create a JSON file containing the old and new passwords of the user.
[root@ip-10-0-1-55 ravi]# cat change.json 
{
    "OldPassword": "Ravi@123",
    "NewPassword": "Ravi@1234"
}

The following command will change the password of the user whose credentials are used to run the command.
[root@ip-10-0-1-55 ravi]# aws iam change-password --cli-input-json file://change.json

Setup fully configurable EFK Elasticsearch Fluentd Kibana setup in Kubernetes

In the following setup, we will be creating a fully configurable Elasticsearch, Flunetd, Kibana setup better known as EKF setup. There is a...