Friday, March 11, 2016

How to run multiple commands on servers via Ansible

Ansible is a very powerful tool for central management of servers: we can run commands on many servers from a single location. There is a limitation, though. The command module can only run one command at a time and cannot handle complex commands, so the following script is very helpful to overcome it.

We are creating a script and placing it in /usr/bin/ so that it can be run from any location.
[root@ip-10-0-1-231 ravi]# which

Following is the script which is being used to run multiple commands on servers:

[root@ip-10-0-1-231 ec2-user]# cat /usr/bin/ 
#To run commands on servers in group tag_Prod_VPC
#By Ravi Gadgil

echo -e "Running command on Production Server... "

for i in "$@" ; do ansible tag_Prod_VPC -u ec2-user -s -m shell -a "$i" ; done 

$@ : Expands to all the arguments passed to the script; there can be any number of them.
ansible : Runs the Ansible command line.
tag_Prod_VPC : The host group on which we want to run the commands.
-u : The remote user to connect as (ec2-user here).
-s : Run the commands with sudo.
-m : Specifies which Ansible module to use.
shell : The shell module, which runs the command and returns its output to our screen.
-a : The arguments (the command itself) passed to the module.
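Putting these pieces together, a minimal self-contained version of the wrapper might look like the sketch below. The file name run-prod and the /tmp location are assumptions for illustration only, since the original path in the post is truncated; the post itself places the script in /usr/bin/.

```shell
# Write the wrapper script to a file and make it executable.
# Name (run-prod) and location (/tmp) are placeholders for illustration.
cat > /tmp/run-prod <<'EOF'
#!/bin/bash
#To run commands on servers in group tag_Prod_VPC
if [ "$#" -eq 0 ]; then
  echo "Usage: $0 '<command>' ['<command>' ...]" >&2
  exit 1
fi
echo -e "Running command on Production Server... "
for i in "$@"; do
  ansible tag_Prod_VPC -u ec2-user -s -m shell -a "$i"
done
EOF
chmod +x /tmp/run-prod
```

The usage check is an addition over the one-liner in the post: with no arguments the loop would simply do nothing, so failing loudly is friendlier.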

Script output:
[root@ip-10-0-1-231 ec2-user]# "id ; pwd ; echo hi"
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)

We can run complex commands as well, such as a for loop: 'mkdir /tmp/test/; cd /tmp/test/ ; for i in 1 2 3 4 5 ; do touch a$i ; done ; ls -lrt /tmp/test/'
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
total 0
-rw-r--r-- 1 root root 0 Mar 11 18:00 a1
-rw-r--r-- 1 root root 0 Mar 11 18:00 a3
-rw-r--r-- 1 root root 0 Mar 11 18:00 a2
-rw-r--r-- 1 root root 0 Mar 11 18:00 a5
-rw-r--r-- 1 root root 0 Mar 11 18:00 a4

[root@ip-10-0-1-231 ec2-user]# 'ps -ef | grep nginx; echo "hello"' w 'w|grep sudo'
Running command on Production Server... 
52.X.X.X | success | rc=0 >>
root     13627 13626  0 18:01 pts/3    00:00:00 /bin/sh -c ps -ef | grep nginx; echo "hello"
root     13629 13627  0 18:01 pts/3    00:00:00 grep nginx
root     20296     1  0 03:16 ?        00:00:00 nginx: master process /root/downloads/nginx-1.5.7/objs/nginx
apache   20298 20296  0 03:16 ?        00:01:18 nginx: worker process                 
apache   20299 20296  0 03:16 ?        00:01:13 nginx: worker process                 
apache   20300 20296  0 03:16 ?        00:01:13 nginx: worker process                 
apache   20301 20296  0 03:16 ?        00:01:16 nginx: worker process                 
apache   20305 20296  0 03:16 ?        00:00:00 nginx: cache manager process          

52.X.X.X | success | rc=0 >>
 18:01:25 up 64 days,  5:38,  4 users,  load average: 8.34, 7.37, 7.73
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
nadeesha pts/0    bb121-6-206-231. 12:14    5:13m  0.08s  0.08s -bash
subu     pts/1    bb121-6-206-231. 15:24    2:36m  0.02s  0.02s -bash
subu     pts/2    bb121-6-206-231. 15:51    1:55m  0.02s  0.02s -bash
ec2-user pts/3    ec2-54-179-147-9 18:01    0.00s  0.08s  0.00s /bin/sh -c sudo

52.X.X.X | success | rc=0 >>
ec2-user pts/3    ec2-54-179-147-9 18:01    0.00s  0.06s  0.00s /bin/sh -c sudo

Note: I have only one server in my host group "tag_Prod_VPC", so it runs on a single server. The same will work if there are multiple servers in the host group.

Thursday, March 10, 2016

How to check most memory utilization service in Linux

Linux has lots of tools such as free, top, htop, vmstat, etc. to show system utilization, but to find the exact services using the most memory, sorted in descending order, the following command is very helpful.
[root@ip-10-0-1-231 ravi]# ps aux --sort -rss
rundeck   1308 28.6 61.2 2092252 371628 ?      Ssl  10:33   0:56 /usr/bin/java
jenkins   1272 13.7 23.6 1301096 143400 ?      Ssl  10:33   0:27 /etc/alternatives/java -Dcom.sun.akuma.Daemo
root      1561  0.0  0.4 181944  2492 pts/0    S    10:35   0:00 sudo su
root      1229  0.0  0.3  91012  2256 ?        Ss   10:33   0:00 sendmail: accepting connections
root      1563  0.0  0.3 115432  2068 pts/0    S    10:35   0:00 bash
root      1549  0.0  0.2  73688  1796 ?        Ss   10:35   0:00 ssh: /root/.ansible/cp/ansible-ssh-52.74.164
root      1519  0.0  0.2 113428  1772 ?        Ss   10:35   0:00 sshd: ec2-user [priv]
smmsp     1237  0.0  0.2  82468  1608 ?        Ss   10:33   0:00 sendmail: Queue runner@01:00:00 for /var/spo
root      1562  0.0  0.2 156296  1604 pts/0    S    10:35   0:00 su
ec2-user  1521  0.0  0.2 113428  1540 ?        S    10:35   0:00 sshd: ec2-user@pts/0
ec2-user  1522  0.0  0.2 115300  1264 pts/0    Ss   10:35   0:00 -bash
root      1583  0.0  0.1 117168  1172 pts/0    R+   10:36   0:00 ps aux --sort -rss
root      1197  0.0  0.1  79680   968 ?        Ss   10:33   0:00 /usr/sbin/sshd
ntp       1214  0.0  0.1  29280   968 ?        Ss   10:33   0:00 ntpd -u ntp:ntp -p /var/run/ -g
root      1288  0.0  0.1 119500   948 ?        Ss   10:33   0:00 crond
root      1508  0.0  0.1  52616   844 ?        Ss   10:35   0:00 ssh-agent -s

As we can see, it shows the exact percentage of memory used by each individual service; now we can narrow down the service causing the issue and take the needed action.
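The same idea can be taken a step further: summing RSS per command name shows which service uses the most memory overall, not just which single process is largest. A small sketch (RSS is reported in KiB):

```shell
# Top 5 commands by total resident memory.
# ps -eo rss=,comm= prints one "rss command" pair per process with no header;
# awk sums RSS (KiB) per command name, then we sort descending.
ps -eo rss=,comm= \
  | awk '{ mem[$2] += $1 } END { for (c in mem) printf "%10d KiB  %s\n", mem[c], c }' \
  | sort -rn \
  | head -n 5
```

This groups all worker processes of a service (for example several nginx workers) into one total, which `ps aux --sort -rss` does not do.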

How to add multiple users in Linux with or without password

To add multiple users in Linux, with or without passwords, the following steps can be used.

Create a file containing the list of users that need to be added:
[root@localhost ravi]# cat add.user 

Run the following command to add users:
[root@localhost ravi]# for i in `cat add.user` ; do useradd $i ; done

i : The variable that takes each value from add.user.
add.user : The file containing the names of the users you want to add.
useradd : The command used to add a user.
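A slightly more defensive form of the same loop uses while read instead of backticks, so blank lines and stray whitespace in add.user are handled cleanly. In this sketch echo stands in for useradd so it can run without root; swap it back when running for real:

```shell
# Create a sample add.user list for the sketch (names are placeholders).
printf '%s\n' user1 user2 user3 > /tmp/add.user

# Read one username per line; skip blank lines.
# Replace echo with the real useradd when running as root.
while IFS= read -r u; do
  [ -n "$u" ] || continue
  echo "would run: useradd $u"
done < /tmp/add.user
```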

To check whether the users were created or not:
[root@localhost ravi]# for i in `cat add.user` ; do id $i ; done 
uid=501(user1) gid=501(user1) groups=501(user1) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=502(user2) gid=502(user2) groups=502(user2) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=503(user3) gid=503(user3) groups=503(user3) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=504(user4) gid=504(user4) groups=504(user4) context=root:system_r:unconfined_t:SystemLow-SystemHigh
uid=505(user5) gid=505(user5) groups=505(user5) context=root:system_r:unconfined_t:SystemLow-SystemHigh

If you want to assign a custom password to each user while creating them as well:
[root@localhost ravi]# for i in `cat add.user` ; do useradd $i ; echo Pass$i | passwd $i --stdin ; done 
Changing password for user user1.
passwd: all authentication tokens updated successfully.
Changing password for user user2.
passwd: all authentication tokens updated successfully.
Changing password for user user3.
passwd: all authentication tokens updated successfully.
Changing password for user user4.
passwd: all authentication tokens updated successfully.
Changing password for user user5.
passwd: all authentication tokens updated successfully.

The above command assigns each user a password of the form Pass+username; for example, user1 gets the password "Passuser1". You can modify this to your needs or assign a single password to all users.
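If the predictable Pass+username pattern is a concern, the same loop can generate a random password per user instead. A sketch, with echo standing in for the root-only password step (the user names are placeholders):

```shell
# Generate a random 12-character alphanumeric password for each user.
# The user:password lines printed here are the format chpasswd expects;
# pipe them into chpasswd (as root) to actually set the passwords.
for u in user1 user2; do
  pw=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
  echo "$u:$pw"
done
```

Remember to record or deliver the generated passwords somewhere safe, since they are not derivable from the username any more.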

Thursday, March 3, 2016

Run mysqltuner on AWS RDS

To tune up an AWS MySQL RDS instance, the following script is very helpful, as it can find quite a few flaws in the DB and provide good recommendations to make RDS perform better.

To download the script:
wget -O

Run the script against RDS, providing the amount of memory allocated to the DB server:
[root@ip-10-0-1-55 ravi]# ./ --host --user root --password dbpassword --forcemem 75000
 >>  MySQLTuner 1.6.4 - Major Hayden <>
 >>  Bug reports, feature requests, and downloads at
 >>  Run with '--help' for additional options and output filtering
[--] Performing tests on
Please enter your MySQL administrative password: 
[--] Skipped version check for MySQLTuner script
[--] Assuming 75000 MB of physical memory
[!!] Assuming 0 MB of swap space (use --forceswap to specify)
[OK] Currently running supported MySQL version 5.6.19-log

-------- Storage Engine Statistics -------------------------------------------
[--] Data in MyISAM tables: 51M (Tables: 112)
[--] Data in InnoDB tables: 187G (Tables: 42377)
[!!] Total fragmented tables: 13360

-------- Security Recommendations  -------------------------------------------
[OK] There are no anonymous accounts for any database users
[!!] User 'haproxy_check@%' has no password set.
[!!] User 'haproxy@%' has user name as password.
[!!] User 'sup@%' has user name as password.
[!!] User 'rdsrepladmin@%' hasn't specific host restriction.
[!!] User 'readonly@%' hasn't specific host restriction.
[!!] User 'root@%' hasn't specific host restriction.
[!!] There is no basic password file list!

-------- CVE Security Recommendations  ---------------------------------------
[--] Skipped due to --cvefile option undefined

-------- Performance Metrics -------------------------------------------------
[--] Up for: 28d 2h 44m 53s (74M q [30.512 qps], 697K conn, TX: 341B, RX: 31B)
[--] Reads / Writes: 84% / 16%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Total buffers: 6.0G global + 1.5M per thread (609 max threads)
[OK] Maximum reached memory usage: 6.1G (8.33% of installed RAM)
[OK] Maximum possible memory usage: 6.9G (9.47% of installed RAM)
[!!] Slow queries: 6% (5M/74M)
[OK] Highest usage of available connections: 5% (36/609)
[OK] Aborted connections: 0.00%  (11/697772)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (3K temp sorts / 5M sorts)
[!!] Joins performed without indexes: 197309
[OK] Temporary tables created on disk: 11% (296K on disk / 2M total)
[OK] Thread cache hit rate: 99% (2K created / 697K connections)
[!!] Table cache hit rate: 0% (2K open / 4M opened)
[OK] Open file limit used: 0% (43/65K)
[OK] Table locks acquired immediately: 99% (81M immediate / 81M locks)
[OK] Binlog cache memory access: 98.07% ( 9489125 Memory / 9675420 Total)

-------- MyISAM Metrics ------------------------------------------------------
[!!] Key buffer used: 18.4% (3M used / 16M cache)
[OK] Key buffer size / total MyISAM indexes: 16.0M/23.1M
[OK] Read Key buffer hit rate: 99.6% (6M cached / 25K reads)
[OK] Write Key buffer hit rate: 98.9% (159K cached / 1K writes)

-------- InnoDB Metrics ------------------------------------------------------
[--] InnoDB is enabled.
[!!] InnoDB buffer pool / data size: 6.0G/187.2G
[!!] InnoDB buffer pool instances: 4
[OK] InnoDB Used buffer: 98.96% (389108 used/ 393216 total)
[OK] InnoDB Read buffer efficiency: 100.00% (4599951975 hits/ 4600141604 total)
[!!] InnoDB Write Log efficiency: 78.63% (36994338 hits/ 47050824 total)
[!!] InnoDB log waits: 0.00% (65 waits / 10056486 writes)

-------- ThreadPool Metrics --------------------------------------------------
[--] ThreadPool stat is disabled.

-------- AriaDB Metrics ------------------------------------------------------
[--] AriaDB is disabled.

-------- TokuDB Metrics ------------------------------------------------------
[--] TokuDB is disabled.

-------- Galera Metrics ------------------------------------------------------
[--] Galera is disabled.

-------- Replication Metrics -------------------------------------------------
[--] No replication slave(s) for this server.
[--] This is a standalone server..

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    Set up a Password for user with the following SQL statement ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
    Set up a Secure Password for user@host ( SET PASSWORD FOR 'user'@'SpecificDNSorIp' = PASSWORD('secure_password'); )
    Restrict Host for user@% to user@SpecificDNSorIp
    Adjust your join queries to always utilize indexes
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64:
    Beware that open_files_limit (65535) variable 
    should be greater than table_open_cache ( 2000)
Variables to adjust:
    query_cache_size (>= 8M)
    join_buffer_size (> 256.0K, or always use indexes with joins)
    table_open_cache (> 2000)
    innodb_buffer_pool_size (>= 187G) if possible.
    innodb_log_buffer_size (>= 8M)
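On RDS these variables cannot be edited in my.cnf; they are changed through the instance's DB parameter group. A sketch using the AWS CLI, where the group name prod-mysql-params is a placeholder:

```shell
# Apply one of the recommended settings via the RDS parameter group.
# Dynamic parameters take effect immediately; static ones need
# ApplyMethod=pending-reboot and an instance reboot.
aws rds modify-db-parameter-group \
  --db-parameter-group-name prod-mysql-params \
  --parameters "ParameterName=table_open_cache,ParameterValue=4000,ApplyMethod=immediate"
```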

How to do automated NFS failover in AWS via script

NFS is always considered to be a single point of failure. To overcome this we can use clusters, GlusterFS, DRBD, etc., but if you want a simple scripted failover instead, the following script can be very helpful.

The scenario used for this script is:
1. The elastic IP is attached to the NFS server.
2. lsync is used to keep the main and secondary servers in sync.
3. The main server is pinged every minute, and if 5 consecutive pings fail, the failover is performed.
4. The netfs service is restarted on all client servers to avoid stale NFS mount errors.

#!/bin/bash
#Script to make secondary server as primary NFS storage server.
#By Ravi Gadgil.

#To check whether 54.254.X.X is up, using 5 consecutive pings
count=$(ping -c 5 54.254.X.X | grep 'received' | awk -F',' '{ print $2 }' | awk '{ print $1 }')

if [ "$count" -eq 0 ]; then
    # 100% packet loss: the primary is down, so fail over
    echo "Host : 54.254.X.X is down (ping failed) at $(date)"

    #To disassociate the IP from the primary NFS storage server
    aws ec2 disassociate-address --public-ip 54.254.X.X

    #To associate the IP with the secondary NFS server
    aws ec2 associate-address --instance-id i-1cxxxx34 --public-ip 54.254.X.X

    #To give the association time to settle before restarting services on the auto scaling hosts
    sleep 10

    #To get the instance IDs of the servers in the auto scaling group on which the NFS clients are mounted
    aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name autoscaling-group | grep i- | awk '{print $4}' > /tmp/instances.txt

    #To get the hostnames of the servers in the NFS client auto scaling group
    for inc in `cat /tmp/instances.txt`; do aws ec2 describe-instances --instance-ids $inc | grep -i publicdns | awk '{print $4}'; done > /tmp/runninghostname.txt

    #To remount the NFS share on each client host
    for host in `cat /tmp/runninghostname.txt`; do ssh -oStrictHostKeyChecking=no -i /home/ec2-user/key.pem -t ec2-user@$host 'sudo umount -l /nfs-data; sudo /etc/init.d/netfs restart; sudo df -h'; done
else
    echo "Host : 54.254.X.X is up and running"
fi
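The script is meant to run unattended, so scheduling it from root's crontab every minute matches point 3 of the scenario above. The script path and log file below are assumptions:

```shell
# Run the failover check every minute; path and log file are placeholders.
* * * * * /usr/local/bin/nfs-failover.sh >> /var/log/nfs-failover.log 2>&1
```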

