Add Secondary IP / Alias On Network Interface in RHEL / CentOS 7

This guide shows how to add an extra IP address to an existing interface in Red Hat Enterprise Linux / CentOS 7. The methods differ from CentOS 6 in a few ways, so there may be some confusion if you're trying this on a CentOS 7 system for the first time.
First, determine whether your network interfaces are under the control of Network Manager. If they are, you'll want to keep using Network Manager to manage your interfaces and aliases. If they're not, you can happily modify your configs by hand.

View your IP Addresses

In the "old" days of Linux, ifconfig was the tool that showed you all interfaces and their IP aliases on the server. In CentOS/RHEL 7, that's no longer the case. To see all IP addresses, use the ip tool.
$ ip a | grep 'inet '
    inet 127.0.0.1/8 scope host lo
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
    inet 172.28.128.3/24 brd 172.28.128.255 scope global dynamic eth1
This syntax is more in line with most routers/switches, where you can grep for inet and inet6 to find your IPv4 and IPv6 addresses.
$ ip a | grep 'inet6 '
    inet6 ::1/128 scope host
    inet6 fe80::a00:27ff:fe19:cd16/64 scope link
    inet6 fe80::a00:27ff:fefd:6f54/64 scope link
So remember: use ip over ifconfig.

Using Network Manager

Check whether the interface you want to add an alias to is managed by Network Manager.
$ grep 'NM_CONTROLLED' /etc/sysconfig/network-scripts/ifcfg-ens160
NM_CONTROLLED="yes"
If that's a yes, you can proceed with the next configurations using the Network Manager tool.
You may be used to adding a new network-scripts file in /etc/sysconfig/network-scripts/, but you'll find that doesn't work as expected in RHEL / CentOS 7 when Network Manager is in use. Here's what such a config would look like on CentOS 6:
$ cat ifcfg-ens160:0
NAME="ens160:0"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="10.50.10.5"
NETMASK="255.255.255.0"
After a network reload, the primary IP address will be removed from the server and only the IP address from the alias interface will be present. That's not good: Network Manager misinterprets these configuration files and overwrites the values from your main interface with those from the alias.
The simplest/cleanest way to add a new IP address to an existing interface in CentOS 7 is to use the nmtui tool (Text User Interface for controlling NetworkManager).
$ nmtui
Once nmtui is open, go to the Edit a network connection and select the interface you want to add an alias on.
Click Edit and tab your way through to Add to add extra IP addresses.
Save the configs and the extra IP will be added.
If you check the text configs that were created in /etc/sysconfig/network-scripts/, you can see how nmtui added the alias.
$ cat /etc/sysconfig/network-scripts/ifcfg-ens192
...
# Alias on the interface
IPADDR1="10.50.23.11"
PREFIX1="32"
If you want, you can modify the text file, but I find using nmtui to be much easier.
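If you prefer the command line, nmcli can add the same alias; a minimal sketch, assuming the connection is named ens192 (check the actual name with nmcli con show):

$ nmcli connection modify ens192 +ipv4.addresses "10.50.23.11/32"
$ nmcli connection up ens192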

Manually Configuring An Interface Alias

Only use this method if your interface is not controlled by Network Manager.
$ grep 'NM_CONTROLLED' /etc/sysconfig/network-scripts/ifcfg-ens160
NM_CONTROLLED="no"
If Network Manager isn't used, you can use the old style aliases you're used to from CentOS 5/6.
$ cat ifcfg-ens160:0
NM_CONTROLLED="no"
DEVICE="ens160:0"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR="10.50.10.5"
NETMASK="255.255.255.0"
Bring up your alias interface and you're good to go.
$ ifup ens160:0
Don't use this if Network Manager is in control.

Adding a temporary IP address

Want to add an IP address just for a little while? You can add one with the ip command. It only lasts until you reboot the server or restart the network service; after that, the IP is gone from the interface.
$ ip a add 10.50.100.5/24 dev eth0
Perfect for temporary IPs!
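If you need to remove it again before a reboot, the same ip command deletes it:

$ ip a del 10.50.100.5/24 dev eth0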

Removing old Swap (Disk & LV) and adding a new dedicated Swap

Swap is space on disk that is used when physical RAM is full. When a Linux system runs out of RAM, inactive pages are moved from RAM to the swap space. Swap space can take the form of either a dedicated swap partition or a swap file.

Recommendation:

If RAM is 1-2 GB, it is recommended to use twice the amount of RAM as swap space.

If RAM is 2-16 GB, swap is recommended to be equal to the amount of RAM.

If RAM is above 16 GB, a swap of at least 16 GB should be fine.
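Before applying these rules, it helps to see how much RAM and swap the server currently has; free and the bare swapon command are enough for that:

[root@Server ~]# free -h
[root@Server ~]# swapon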

Steps:

[root@Server ~]# swapon
NAME      TYPE      SIZE USED PRIO
/dev/sda2 partition 3.9G   0B   -1
/dev/dm-2 partition   4G   0B   -2

[root@Server ~]# swapoff /dev/dm-2

[root@Server ~]# swapon
NAME      TYPE      SIZE USED PRIO
/dev/sda2 partition 3.9G   0B   -1
[root@Server ~]#

[root@Server ~]# lvremove /dev/appvg/swap2
Do you really want to remove active logical volume appvg/swap2? [y/n]: y
  Logical volume "swap2" successfully removed

[root@Server ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  appvg    2   9   0 wz--n- 124.99g 35.99g
  rootvg   1  11   0 wz--n-  45.59g  3.38g

[root@Server ~]# lsblk -l | grep -i sdd
sdd                    8:48   0   15G  0 disk
[root@Server ~]#

[root@Server ~]# fdisk /dev/sdd
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8c1fe0d2.

Command (m for help): p

Disk /dev/sdd: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8c1fe0d2

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-31457279, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-31457279, default 31457279):
Partition 1 of type Linux and size 15 GiB is set

Command (m for help): p

Disk /dev/sdd: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8c1fe0d2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    30722047    15360000   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): L

 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris
 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-
 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     85  Linux extended  c7  Syrinx
 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data
 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .
 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility
 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt
 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access
 a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O
 b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor
 c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi eb  BeOS fs
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         ee  GPT
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a6  OpenBSD         ef  EFI (FAT-12/16/
10  OPUS            55  EZ-Drive        a7  NeXTSTEP        f0  Linux/PA-RISC b
11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f1  SpeedStor
12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f4  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       ab  Darwin boot     f2  DOS secondary
16  Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      fb  VMware VMFS
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fc  VMware VMKCORE
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fd  Linux raid auto
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fe  LANstep
1c  Hidden W95 FAT3 75  PC/IX           be  Solaris boot    ff  BBT
1e  Hidden W95 FAT1 80  Old Minix
Hex code (type L to list all codes): 82
Changed type of partition 'Linux' to 'Linux swap / Solaris'

Command (m for help): p

Disk /dev/sdd: 16.1 GB, 16106127360 bytes, 31457280 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8c1fe0d2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    30722047    15360000   82  Linux swap / Solaris

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Server ~]# partprobe
[root@Server ~]#
[root@Server ~]# mkswap /dev/sdd1
Setting up swapspace version 1, size = 15359996 KiB
no label, UUID=caf2bb4e-23ea-4e8a-8843-0020eac8e77f
[root@Server ~]# echo "UUID=caf2bb4e-23ea-4e8a-8843-0020eac8e77f   swap     swap    defaults        0 0" >> /etc/fstab
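If you want to double-check the UUID before (or after) appending that fstab entry, blkid prints it for the new partition:

[root@Server ~]# blkid /dev/sdd1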

[root@Server ~]# swapon -a

[root@Server ~]# swapon
NAME      TYPE       SIZE USED PRIO
/dev/sda2 partition  3.9G   0B   -1
/dev/sdd1 partition 14.7G   0B   -2
[root@Server ~]# swapoff /dev/sda2
[root@Server ~]# swapon
NAME      TYPE       SIZE USED PRIO
/dev/sdd1 partition 14.7G   0B   -1

[root@Server ~]# vgextend rootvg /dev/sda2
WARNING: swap signature detected on /dev/sda2 at offset 4086. Wipe it? [y/n]: y
  Wiping swap signature on /dev/sda2.
  Physical volume "/dev/sda2" successfully created.
  Volume group "rootvg" successfully extended

[root@Server ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  appvg    2   9   0 wz--n- 124.99g 35.99g
  rootvg   2  11   0 wz--n-  49.47g  7.25g





NMON Setup on Linux Servers

nmon (shorthand for Nigel's Monitor) is a computer performance monitoring tool
for the AIX and Linux operating systems. The nmon tool has two modes: (a) it displays
the performance stats on-screen in a condensed format, or (b) it saves the same stats to a
comma-separated values (CSV) data file for later graphing and analysis, to aid the
understanding of computer resource use, tuning options and bottlenecks.
Prerequisite:
The server must have enough space to capture nmon logs generated 24x7.
Create a separate filesystem mounted at /nmon_data and ensure the logs are generated inside that filesystem.
Installation:

1) Install the nmon rpm:

The version can differ; the package below is shown just as an example.

[root@Server]# rpm -ivh nmon-16g-3.el6.x86_64.rpm
warning: nmon-16g-3.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895:
NOKEY
Preparing... ########################################### [100%]
1:nmon ########################################### [100%]

Link: https://pkgs.org/download/nmon

2) Download the nmonchart tar file and untar it:

tar -xf nmonchart31.tar

Link: http://nmon.sourceforge.net/pmwiki.php

3) Update the permissions and ownerships of the extracted files:

chmod 775 *
chown root:root *

4) To automate log collection with nmon, copy the script below and run it
through cron. It also performs the log rotation.

Crontab Entry:

0 0 * * * /nmon_data/nmon_collector.sh

Script:

# cat nmon_collector.sh
#!/bin/sh
# Kill any nmon processes still writing to /nmon_data
for i in `ps -ef | grep "nmon_data" | grep -v grep | grep -v nmon_collector | awk '{print $2}'`
do
    kill -9 $i
    sleep 30
done

# nmon command: one snapshot every 60 seconds, 1440 snapshots (24 hours)
nmon -f -s 60 -c 1440 -F /nmon_data/nmon_$(date +%Y-%m-%d_%H-%M-%S).nmon

# Gzip files older than 8 days and remove files older than 35 days
cd /nmon_data
find . -name "*.nmon" -mtime +8 -exec gzip {} \;
find . -name "*.nmon.gz" -mtime +35 -exec rm {} \;

5) To create an HTML file from an nmon file, use the "nmonchart" command:

sh nmonchart /home/xxxxxxxxx.nmon xxxxxxxx.html

Alternatively, you can use "NMON Analyser" to get a graphical view.
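To convert an entire directory of collected logs with nmonchart in one go, a small loop over the .nmon files works; a minimal sketch, assuming nmonchart sits in the current directory and the logs live in /nmon_data:

for f in /nmon_data/*.nmon
do
    sh nmonchart "$f" "${f%.nmon}.html"
done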

6) To create the nmon filesystem:

Check the free space in the VG (rootvg):
vgdisplay rootvg

Create the LV and filesystem:

lvcreate -n <lvname> -L <size> <vgname>
mkfs.xfs /dev/<vg>/<lv>       # For RHEL/OEL 7.x
mkfs.ext4 /dev/<vg>/<lv>      # For RHEL/OEL 6.x and earlier
mkdir <mountpoint>
update /etc/fstab
mount <mountpoint>

Example:

lvcreate -n nmon_fs -L +3G rootvg
mkfs.xfs /dev/rootvg/nmon_fs
mkdir /nmon_data
vi /etc/fstab
/dev/mapper/rootvg-nmon_fs /nmon_data xfs defaults 0 0
mount /nmon_data
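A quick check that the new filesystem is mounted with the expected size:

df -h /nmon_data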

Move or migrate user accounts from old Linux server to a new Linux server

Steps:

Keep the system users at the top of /etc/passwd and make sure no user IDs are overridden. The following files/directories need to be copied for traditional Linux user management:

/etc/passwd – contains various pieces of information for each user account
/etc/shadow – contains the encrypted password information for user accounts and, optionally, the password aging information
/etc/group – defines the groups to which users belong
/etc/gshadow – group shadow file (contains the encrypted passwords for groups)
/var/spool/mail – user emails are generally stored here (if required)
/home – all users' data is stored here. The home directory can be a directory other than /home, depending on the type of account (general/technical). Be sure to copy the SSH keys.

Commands to type on old Linux system

First, create a tarball of the old users on the old Linux system. Create a directory:
# mkdir /root/move/
Set up the UID filter limit:
# export UGIDLIMIT=500
Now copy the /etc/passwd accounts to /root/move/passwd.mig, using awk to filter out system accounts (i.e. only copy user accounts):
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/move/passwd.mig
Copy /etc/group file:
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > /root/move/group.mig
Copy /etc/shadow file:
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd | tee - |egrep -f - /etc/shadow > /root/move/shadow.mig
Copy /etc/gshadow (rarely used):
# cp /etc/gshadow /root/move/gshadow.mig
Make a backup of /home and /var/spool/mail dirs:
# tar -zcvpf /root/move/home.tar.gz /home
# tar -zcvpf /root/move/mail.tar.gz /var/spool/mail
Where,
Users added to a Linux system start with UID and GID values specified by the Linux distribution or set by the admin. Limits on different Linux distros:
RHEL/CentOS/Fedora Core: default is 500 and the upper limit is 65534 (/etc/libuser.conf).
Debian and Ubuntu Linux: default is 1000 and the upper limit is 29999 (/etc/adduser.conf).
Never create new user accounts on the freshly installed CentOS system before migrating, or UIDs may collide. The awk commands above filter out system UIDs according to your Linux distro.
export UGIDLIMIT=500 – sets the UID start limit for normal user accounts. Set this value as per your Linux distro.
awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/move/passwd.mig – passes the UGIDLIMIT shell variable to awk via the -v option (it assigns the value of the shell variable UGIDLIMIT to the awk variable LIMIT). Option -F: sets the field separator to :. awk then reads each line of /etc/passwd, filters out system accounts and generates the new file /root/move/passwd.mig. The same logic applies to the rest of the awk commands.
tar -zcvpf /root/move/home.tar.gz /home – makes a backup of the users' /home dirs.
tar -zcvpf /root/move/mail.tar.gz /var/spool/mail – makes a backup of the users' mail dirs.
Use scp, a USB stick or tape to copy /root/move to the new Linux system.
# scp -r /root/move/* user@new.linuxserver.com:/path/to/location

Commands to type on new Linux system

First, make a backup of current users and passwords:
# mkdir /root/newsusers.bak
# cp /etc/passwd /etc/shadow /etc/group /etc/gshadow /root/newsusers.bak
Now restore the passwd and other files in /etc/:
# cd /path/to/location
# cat passwd.mig >> /etc/passwd
# cat group.mig >> /etc/group
# cat shadow.mig >> /etc/shadow
# /bin/cp gshadow.mig /etc/gshadow
Please note that you must use >> (append) and not > (create) shell redirection.
Now copy and extract home.tar.gz into /home on the new server:
# cd /
# tar -zxvf /path/to/location/home.tar.gz
Now copy and extract mail.tar.gz (mails) into /var/spool/mail on the new server:
# cd /
# tar -zxvf /path/to/location/mail.tar.gz
Now reboot the system; when Linux comes back, your user accounts will work as they did before on the old system:
# reboot
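Once the system is back, it's worth sanity-checking the merged files and one migrated account; pwck/grpck verify passwd and group consistency in read-only mode, and id confirms a user resolves (the username below is just an example):

# pwck -r
# grpck -r
# id someuser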



Script to create an ALT DISK and set the Bootlist (AIX)



#!/bin/bash
# AIX: create an alternate rootvg disk copy (alt disk) and set the bootlist.

oldrootvg=`lspv | grep -i old_rootvg | awk '{print $3}'`

if [ -z "$oldrootvg" ]
then
    echo "No old_rootvg present"
    rootdisk=`lspv | grep -w rootvg | awk '{print $1}'`
    sizerootdisk=`bootinfo -s $rootdisk`
    # Look for a free disk (no VG) of the same size as the rootvg disk
    for i in `lspv | grep -v $rootdisk | awk '{print $1}'`
    do
        echo $i
        tempdisksize=`bootinfo -s $i`; echo $tempdisksize; echo $sizerootdisk;
        if [ "$sizerootdisk" == "$tempdisksize" ]; then
            vgname=`lspv | grep -w $i | awk '{print $3}'`; echo $vgname;
            if [ "$vgname" == "None" ]; then
                newaltdisk=$i
                echo "Creating ALT Disk"
                alt_disk_copy -d $newaltdisk
                echo "Setting Bootlist"
                bootlist -m normal $rootdisk
                # Stop after the first suitable disk has been used
                exit
            fi
        fi
    done
elif [ "$oldrootvg" == "old_rootvg" ]
then
    rootdisk=`lspv | grep -w rootvg | awk '{print $1}'`
    oldrootdisk=`lspv | grep -w old_rootvg | awk '{print $1}'`
    sizerootdisk=`bootinfo -s $rootdisk`
    sizeoldrootdisk=`bootinfo -s $oldrootdisk`
    if [ "$sizerootdisk" == "$sizeoldrootdisk" ]; then
        # Release the previous alternate disk, then re-create the alt disk on it
        exportvg $oldrootvg
        echo "Creating ALT Disk"
        alt_disk_copy -d $oldrootdisk
        echo "Setting Bootlist"
        bootlist -m normal $rootdisk
    fi
else
    echo "Need to create ALTDISK Manually"
fi
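Once the script has run, you can confirm what the firmware will boot from; on AIX, the -o flag makes bootlist display the current normal-mode boot list:

# bootlist -m normal -o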

Improving Linux System Performance with I/O Scheduler Tuning

What Is an I/O Scheduler?
The I/O Scheduler is an interesting subject; it’s something that’s rarely thought about unless you are trying to get the best performance out of your Linux systems. Before going too deep into how to change the I/O scheduler, let’s take a moment to better familiarize ourselves with what I/O schedulers provide.

Disk access has always been considered the slowest method of accessing data. Even with the growing popularity of Flash and Solid State storage, accessing data from disk is considered slower when compared to accessing data from RAM. This is especially true when you have infrastructure that is using spinning disks.

The reason is that traditional spinning disks store data at physical locations on a spinning platter. When reading data from a spinning disk, the drive must spin the platters to a specific location to read the data. This process is known as "seeking", and in terms of computing, it can take a long time.

I/O schedulers exist as a way to optimize disk access requests. They traditionally do this by merging I/O requests to similar locations on disk. By grouping requests located at similar sections of disk, the drive doesn’t need to “seek” as often, improving the overall response time for disk operations.

On modern Linux implementations, there are several I/O scheduler options available. Each of these has its own unique method of scheduling disk access requests. In the rest of this article, we will break down how each of these schedulers prioritizes disk access and measure the performance changes from scheduler to scheduler.


Changing the I/O Scheduler
For today’s article, we will be using an Ubuntu Linux server for our tests. With Ubuntu, changing the I/O Scheduler can be performed at both runtime and on bootup. The method for changing the scheduler at runtime is as simple as changing the value of a file located within /sys. Changing the value on bootup, which allows you to maintain the setting across reboots, will involve changing the Kernel parameters passed via the Grub boot loader.

Before we change the I/O scheduler however, let’s first identify our current I/O scheduler. This can be accomplished by reading the /sys/block/<disk device>/queue/scheduler file.

# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
The above shows that the I/O scheduler for disk sda is currently set to deadline.

One important item to remember is that I/O scheduling methods are defined at the Linux Kernel level, but they are applied on each disk device separately. If we were to change the value in the file above, this would mean that all filesystems on disk device sda will use the new I/O scheduler.
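To see at a glance which scheduler every disk on a system is currently using, a quick loop over /sys/block does the job:

# for f in /sys/block/sd*/queue/scheduler; do echo "$f: $(cat $f)"; done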

As with anything performance-tuning related, it is important to understand what types of workloads exist for the environment being tuned. Each I/O scheduler has a unique way to prioritize disk operations. Understanding the workload required makes it easier to select the right scheduler.

However, like any other performance-tuning change, it is always best to test multiple options and choose based on the results. This is exactly what we will be doing in this article.


Runtime modification of I/O scheduler
As mentioned earlier, there are two ways to change the I/O scheduler. You can change the scheduler at runtime, which applies immediately to a running system, or you can modify the Grub boot loader's configuration to apply the scheduler on boot.

Since we will be performing benchmark tests to evaluate which scheduler provides the best results for our PostgreSQL instance, we will start off by changing the scheduler at runtime.

To accomplish this, we simply need to overwrite the /sys/block/<disk device>/queue/scheduler file with the new I/O scheduler selection.

# echo "cfq" > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
From the above, we can see that echoing cfq to the /sys/block/sda/queue/scheduler file changed our current I/O scheduler to CFQ. This change takes effect immediately. This means we can start testing the scheduler performance without having to restart PostgreSQL or any other service.


CFQ
The Complete Fairness Queueing (CFQ) I/O scheduler works by creating a per-process I/O queue. The goal of this I/O scheduler is to provide a fair I/O priority to each process. While the CFQ algorithm is complex, the gist of this scheduler is that after ordering the queues to reduce disk seeking, it services these per-process I/O queues in a round-robin fashion.

What this means for performance is that the CFQ scheduler tries to provide each process with the same priority for disk access. However, in doing so it makes this scheduler less optimal for environments that might need to prioritize one request type (such as reads) from a single process.

With that understanding of the CFQ scheduler, let’s go ahead and establish a benchmark performance metric for our PostgreSQL database instance with pgbench.

# su - postgres
In order to run pgbench, we first need to switch to the postgres user.

$ pgbench -c 100 -j 2 -t 1000 example
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 100
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
latency average: 60.823 ms
tps = 1644.104024 (including connections establishing)
tps = 1644.228715 (excluding connections establishing)
From the above, we can see that our tps reached roughly 1,644 transactions per second. While not a bad start, this is not the fastest scheduler for this workload.

Deadline
The Deadline scheduler works by creating two queues: a read queue and a write queue. Each I/O request has a time stamp associated that is used by the kernel for an expiration time.

While this scheduler also attempts to service the queues based on the most efficient ordering possible, the timeout acts as a “deadline” for each I/O request. When an I/O request reaches its deadline, it is pushed to the highest priority.

While tunable, the default “deadline” values are 500 ms for Read operations and 5,000 ms for Write operations. Based on these values, we can see why the Deadline scheduler is considered an optimal scheduler for read-heavy workloads. With these timeout values, the Deadline scheduler may prioritize reads more than writes.
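The deadline timeouts are exposed as per-device tunables under the iosched directory in sysfs, so you can inspect or adjust them directly; values are in milliseconds, and the outputs below are simply the defaults mentioned above:

# cat /sys/block/sda/queue/iosched/read_expire
500
# cat /sys/block/sda/queue/iosched/write_expire
5000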

Now that we understand the Deadline scheduler a bit better, let’s go ahead and change to the Deadline scheduler and see how it holds up to our pgbench testing.

# echo deadline > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop [deadline] cfq
With the above, we can see that our I/O scheduler is now the Deadline scheduler. Let’s go ahead and run our pgbench test again.

# su - postgres
$ pgbench -c 100 -j 2 -t 1000 example
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 100
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
latency average: 46.700 ms
tps = 2141.318132 (including connections establishing)
tps = 2141.489076 (excluding connections establishing)
This time it seems that pgbench was able to reach 2,141 transactions per second. This is a 500 transactions-per-second increase, a pretty sizable increase.

What this tells us is that even though pgbench is creating a database workload that is both read and write heavy, the overall PostgreSQL instance benefits from a read-priority-based I/O scheduler.

Noop
The Noop scheduler is a unique scheduler. Rather than prioritizing specific I/O operations, it simply places all I/O requests into a FIFO (First in, First Out) queue. While this scheduler does try to merge similar requests, that is the extent of the complexity of this scheduler.

This scheduler is optimized for systems that essentially do not need an I/O scheduler. It can be used in numerous scenarios, such as virtual machines, where the underlying disk infrastructure already performs its own I/O scheduling.

Since a VM is running within a Host Server/OS, that host already may have an I/O scheduler in use. In this scenario, each disk operation is passing through two I/O schedulers: one for the VM and one for the VM Host.

Let’s take a look at what kind of performance Noop has in our environment.

# echo noop > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

With the above, the scheduler has been changed to the Noop scheduler. We can now run pgbench to measure the impact of this I/O scheduler.

# su - postgres
$ pgbench -c 100 -j 2 -t 1000 example
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 100
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 100000/100000
latency average: 46.364 ms
tps = 2156.838618 (including connections establishing)
tps = 2157.102989 (excluding connections establishing)

From the above, we can see that we were able to reach 2,156 transactions per second, which is only slightly better than the Deadline scheduler. One of the reasons this scheduler may perform better in our case is that the environment we are testing with is hosted within a VM.

This means that regardless of the changes being made within the VM, the I/O scheduler in use on the VM host will stay the same.

Changing the Scheduler on Boot
Since the Noop scheduler provided quite a bit of improvement over the CFQ scheduler, let’s go ahead and make that change permanent. To do this, we will need to edit the /etc/default/grub configuration file.

# vi /etc/default/grub
The /etc/default/grub configuration file is used to configure the Grub boot loader. In this case, we will be looking for an option named GRUB_CMDLINE_LINUX. This option is used to add kernel boot parameters on startup.

The parameter we need to add is the elevator parameter. This is used to specify the desired I/O scheduler. Let’s go ahead and add the parameter specifying the Noop scheduler.

GRUB_CMDLINE_LINUX="elevator=noop"
In the above, we added elevator=noop. This is used to define that the I/O scheduler on boot should be the Noop I/O scheduler. Once the changes have been made, we will need to run the update-grub2 command to apply the changed configurations.

# update-grub2
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.4.0-62-generic
Found initrd image: /boot/initrd.img-4.4.0-62-generic
Found linux image: /boot/vmlinuz-4.4.0-57-generic
Found initrd image: /boot/initrd.img-4.4.0-57-generic
done
With the grub configurations applied, we can now reboot the system and validate that the changes are still in effect.

# cat /sys/block/sda/queue/scheduler
[noop] deadline cfq
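If you prefer not to touch the kernel command line, a udev rule is another common way to pin the scheduler per device; a minimal sketch, assuming sd* device names (the rule file name is arbitrary):

# cat /etc/udev/rules.d/60-io-scheduler.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"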

VMware Basics


Please go through the VMware doc below to get to know the basics of virtualization.





Ethernet Port Details

The commands below show how to find the available Ethernet ports on a server and which one is mapped to your IP.


[root@Server network-scripts]# cat ifcfg-eth[01234567] | grep -i HWADDR
HWADDR=A0:36:9F:46:24:1C
HWADDR=A0:36:9F:46:24:1D
HWADDR=A0:36:9F:46:24:1E
HWADDR=A0:36:9F:46:24:1F
HWADDR=98:BE:94:3C:FD:AA
HWADDR=98:BE:94:3C:FD:AB
HWADDR=98:BE:94:3C:FD:AC
HWADDR=98:BE:94:3C:FD:AD
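The same MAC-to-interface mapping can also be read straight from sysfs, without relying on the ifcfg files:

[root@Server network-scripts]# for i in /sys/class/net/eth*; do echo "$i: $(cat $i/address)"; done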

[root@Server network-scripts]# lspci | grep -i ethernet
06:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
06:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
20:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
20:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
20:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
20:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

[root@Server network-scripts]# for i in {0..7}
> do
> ethtool -i eth$i | grep -i bus-info
> done
bus-info: 0000:20:00.0
bus-info: 0000:20:00.1
bus-info: 0000:20:00.2
bus-info: 0000:20:00.3
bus-info: 0000:06:00.0
bus-info: 0000:06:00.1
bus-info: 0000:06:00.2
bus-info: 0000:06:00.3

[root@Server network-scripts]# ip link show up | grep eth*
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 98:be:94:3c:fd:aa brd ff:ff:ff:ff:ff:ff
    link/ether 9a:be:94:3c:fd:a9 brd ff:ff:ff:ff:ff:ff
[root@Server network-scripts]#

DNS Server Configuration


The steps below were executed on a real server. I have mentioned only the important commands/entries required to set up a DNS server. In this scenario, only one client is configured to test the setup.


[root@server etc]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=server.example.com


[root@server etc]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.10 server.example.com server
192.168.0.12 client1.example.com client1

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


#cat named.conf
options {
        listen-on port 53 { 127.0.0.1; 192.168.0.10; };
        #listen-on-v6 port 53 { ::1; };
        allow-query     { any; };
};

include "/etc/named.rfc1912.zones";
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

#cat /etc/named.rfc1912.zones

zone "example.com" IN {
        type master;
        file "forward.zone";
        allow-update { none; };
};

zone "0.168.192.in-addr.arpa" IN {
        type master;
        file "reverse.zone";
        allow-update { none; };

};

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

#cd /var/named

[root@server named]# cat forward.zone
$TTL 1D
@       IN SOA  server.example.com. root.server.example.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN NS   server.example.com.
server  IN A    192.168.0.10
client1 IN A    192.168.0.12
[root@server named]# cat reverse.zone
$TTL 1D
@       IN SOA  server.example.com. root.server.example.com. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        IN NS   server.example.com.
10      IN PTR  server.example.com.
12      IN PTR  client1.example.com.
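Once named is up, named-checkconf and named-checkzone validate the configuration, and dig confirms forward and reverse resolution against the new server:

# named-checkconf /etc/named.conf
# named-checkzone example.com /var/named/forward.zone
# named-checkzone 0.168.192.in-addr.arpa /var/named/reverse.zone
# dig @192.168.0.10 client1.example.com
# dig @192.168.0.10 -x 192.168.0.12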

X11 Setup with SSH Linux


PROCESS OVERVIEW:

In order to obtain the ability to interact with an X11 GUI remotely, we will follow these general steps:
  1. Ensure that the foundational X11 packages are installed
  2. Ensure that OpenSSH server is configured to forward X11 connections
  3. Configure a local X11 server on our workstation (You can use MobaXterm too)
  4. Configure our ssh application to forward X11 requests (I'll use MobaXterm; however, Xming can also be installed on the local machine.)
  5. Test with a simple application
  6. Configure authentication if user changes are needed.


1. # yum install xorg-x11-xauth xterm xorg-x11-apps -y

2. # cat /etc/ssh/sshd_config | grep -i X11
X11Forwarding yes
X11DisplayOffset 10
X11UseLocalhost no

# service sshd restart
Redirecting to /bin/systemctl restart sshd.service

3. Xming or MobaXterm needs to be installed on the workstation.

4. Try to connect to the server by entering the hostname in MobaXterm. For PuTTY, you need to enable "X11 forwarding" under the SSH settings.
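From a Linux workstation, the equivalent is simply asking ssh to forward X11; -X requests forwarding, and -Y requests trusted forwarding if -X proves too restrictive:

$ ssh -X user@server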


5. # /usr/bin/xauth
Using authority file /root/.Xauthority
xauth> exit

#xclock
#xeyes

From PuTTY, we need to set the display as below:

# xauth list
server:10  MIT-MAGIC-COOKIE-1  5a5f2e0832b24f8c3391614cec512363
# export DISPLAY=server:10
# xclock
Warning: Missing charsets in String to FontSet conversion


6. Suppose we need to run X11 as a particular user – webadmin

[root@server~]# su - webadmin
Last login: Mon Sep  9 14:59:35 IST 2019 from vp-1c2oadc062.1dc.com on pts/0
[webadmin@server~]$ xclock
Error: Can't open display:
[webadmin@server~]$ xauth list
xauth:  file /home/webadmin/.Xauthority does not exist

# cp .Xauthority /home/webadmin/   (as root)
# chown webadmin:webadmin /home/webadmin/.Xauthority   (as root)
[webadmin@server~]$ xauth
Using authority file /home/webadmin/.Xauthority
xauth> exit

[webadmin@server~]$ xauth list
server:10  MIT-MAGIC-COOKIE-1  8d2414ef1216a5ef3b276866c490311c
export DISPLAY=server:10
[webadmin@lipvap1077 ~]$ xclock
Warning: Missing charsets in String to FontSet conversion
[webadmin@lipvap1077 ~]$




Rescue Mode Recovery


Series of commands executed for OS recovery in rescue mode for RHEL 7.6


Scenario: After installing VMware Tools, the kernel got corrupted and the system was not booting from the old kernel images. In rescue mode, I was unable to see the root partition mounted inside /mnt/sysimage. Later I found out all the VGs were deactivated, hence I ran vgchange in rescue mode to activate all the VGs. Please go through the commands; in case of any confusion, or if you need clarity regarding a command, please leave a comment and I'll surely get back to you.

Pre-requisites: Mount the ISO image of the required OS and boot it from the CD/DVD ROM, not from disk.

#blkid
#lvs
#pvs
#vgs
#vgchange -ay
#mount
#mount <root-partition> /mnt/sysimage
#lvs
#vgs
#lvm lvdisplay
#chroot /mnt/sysimage
#lvm lvdisplay
#lvm lvdisplay | more
#mkdir /mnt/sysimage/var
#ls /mnt/sysimage/
#mount /dev/rootvg/var /mnt/sysimage/var

Do the same for usr:
#mkdir /mnt/sysimage/usr
#mount /dev/rootvg/usr /mnt/sysimage/usr

#I also found that /mnt/sysimage/etc/fstab was blank, hence I populated the entries so that the OS comes up in a normal boot. Open the fstab file in vi and run :r !blkid in command mode; it will import all the block devices. Then compare with a relevant working server and keep the required entries.



#lvm lvdisplay | grep "LV Path"
#mount /dev/rootvg/tmp /mnt/sysimage/tmp
#mount
#cat /etc/fstab
exit
#cat /mnt/sysimage/etc/fstab
#ifconfig
#cat /var/log/messages | tail -200 | more
#df -h
#cat /etc/selinux/config
#cat /etc/fstab

