Wednesday, 7 November 2012


Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.

You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.

Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

In 2005, virtualization software was adopted faster than anyone, including the experts, had imagined. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization and server virtualization.
Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.

Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and the handling of workloads.


A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host's processor, memory, and other resources all to itself. However, the hypervisor is actually controlling the host processor and resources, allocating what is needed to each operating system in turn and making sure that the guest operating systems (called virtual machines) cannot disrupt each other. 

In computing, a hypervisor is one of many hardware virtualization techniques allowing multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory program. The hypervisor presents to the guest operating systems a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are very commonly installed on server hardware, with the function of running guest operating systems that themselves act as servers.

Hardware Virtualization:

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can then be run on that virtual machine.
In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

Different types of hardware virtualization include:

Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.
Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.
Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are programs that imitate hardware, but an emulator typically recreates hardware of a different architecture, while a hypervisor multiplexes the real hardware it runs on.

Tuesday, 6 November 2012

Voice Over IP

VoIP (voice over IP) is an IP telephony term for a set of facilities used to manage the delivery of voice information over the Internet. VoIP involves sending voice information in digital form in discrete packets rather than by using the traditional circuit-committed protocols of the public switched telephone network. A major advantage of VoIP and Internet telephony is that it avoids the tolls charged by ordinary telephone service.

Voice over IP commonly refers to the communication protocols and transmission techniques involved in the delivery of voice communications and multimedia sessions over Internet Protocol. Internet telephony refers to communication services — voice, fax, SMS and voice-messaging applications — that are transported over the Internet rather than the public switched telephone network (PSTN).

The steps involved in originating a VoIP telephone call are signaling and media channel setup, digitization of the analog voice signal, encoding, packetization, and transmission as Internet Protocol packets over a packet-switched network.

On the receiving side, similar steps reproduce the original voice stream. VoIP is one of the technologies used by IP telephony to transport phone calls.

Voice Over IP has been implemented in various ways using different protocols:
Media Gateway Control Protocol
Session Initiation Protocol
Real-time Transport Protocol
Session Description Protocol

In addition to IP, VoIP uses the Real-time Transport Protocol (RTP) to help ensure that packets get delivered in a timely way. Using public networks, it is currently difficult to guarantee Quality of Service. Better service is possible with private networks managed by an enterprise or by an Internet telephony service provider.

Using VoIP, an enterprise positions a "VoIP device" at a gateway. The gateway receives packetized voice transmissions from users within the company and then routes them to other parts of its intranet (local area or wide area network) or, using a T-carrier system or E-carrier interface, sends them over the public switched telephone network.


The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is originated and received on even port numbers and the associated RTCP communication uses the next higher odd port number.
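The even/odd port pairing described above can be sketched in shell; the RTP port number below is just an illustrative choice:

```shell
# RTP conventionally uses an even UDP port; the associated RTCP
# stream uses the next higher (odd) port.
rtp_port=5004                    # illustrative even port
rtcp_port=$((rtp_port + 1))      # the paired RTCP port
echo "RTP on $rtp_port, RTCP on $rtcp_port"
# prints: RTP on 5004, RTCP on 5005
```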
RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol which assists in setting up connections across the network.


The Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling communication sessions such as voice and video calls over Internet Protocol (IP). The protocol can be used for creating, modifying and terminating two-party (unicast) or multi-party (multicast) sessions. Sessions may consist of one or several media streams.
Other SIP applications include video conferencing, streaming multimedia distribution, instant messaging, presence information, file transfer and online games.
The SIP protocol is an Application Layer protocol designed to be independent of the underlying Transport Layer; it can run on the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), or the Stream Control Transmission Protocol (SCTP). It is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP).
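Because SIP is text-based, a SIP request looks much like an HTTP request. A minimal INVITE might look like the following; all hosts, tags and identifiers below are made up for illustration:

```text
INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK74bf9
Max-Forwards: 70
From: Alice <sip:alice@example.org>;tag=9fxced76sl
To: Bob <sip:bob@example.com>
Call-ID: 3848276298@client.example.org
CSeq: 1 INVITE
Contact: <sip:alice@client.example.org>
Content-Length: 0
```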


The Session Description Protocol (SDP) is a format for describing streaming media initialization parameters. SDP is intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. SDP does not deliver media itself but is used for negotiation between end points of media type, format, and all associated properties. The set of properties and parameters are often called a session profile. SDP is designed to be extensible to support new media types and formats.

SDP started off as a component of the Session Announcement Protocol (SAP), but found other uses in conjunction with Real-time Transport Protocol (RTP), Real-time Streaming Protocol (RTSP), Session Initiation Protocol (SIP) and even as a standalone format for describing multicast sessions.
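A minimal SDP description for a single audio stream might look like the following (the addresses and ports are illustrative); each line is a field=value pair defined by the SDP specification:

```text
v=0
o=alice 2890844526 2890844526 IN IP4 client.example.org
s=Audio call
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```

The m= line offers audio on UDP port 49170 (an even port, per the RTP convention) using payload type 0, which the a=rtpmap line maps to PCMU at 8000 Hz.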

Monday, 5 November 2012

KVM Virtual Machine


KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 (64-bit) hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

KVM is open source software.

Installation of KVM on linux:

If you want to install KVM on your machine, you need to check your architecture first using the command:

# uname -m

which will possibly display one of:

i386, i686 or x86_64

Your machine supports KVM only if the above command displays the last option, x86_64.
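Besides the architecture, you can also check /proc/cpuinfo for the vmx (Intel VT) or svm (AMD-V) CPU flags. Here is a small sketch; the helper function is just for illustration:

```shell
# Report whether a CPU flags line indicates hardware virtualization
# support: vmx = Intel VT, svm = AMD-V.
has_virt_extensions() {
    case "$1" in
        *vmx*|*svm*) echo "supported" ;;
        *)           echo "not supported" ;;
    esac
}

# On a real machine you would feed it the flags line from /proc/cpuinfo:
#   has_virt_extensions "$(grep -m1 '^flags' /proc/cpuinfo)"
has_virt_extensions "flags : fpu vme pae vmx"    # prints "supported"
```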

To install KVM, we need to install extra packages on the system. We use yum for this purpose. If you don't have yum configured, refer to my previous post on installing extra packages.

After yum is configured, we need to install the following packages using yum:

# yum install kvm qemu* virt-manager libvirt* -y

Once the packages are installed correctly, restart the system and then restart the libvirtd service:

# /etc/init.d/libvirtd restart

Now configure the network with a bridged configuration. For configuring the bridge, refer to my previous post on network configuration.

Once the bridge is configured, run the following command to install a virtual machine:

# virt-manager
virt-manager is the GUI utility that helps us install virtual machines graphically and manage them.

Running this command will open a pop-up window for installing the new VM.

Follow the steps and install the new VM as:

Give the VM a name and specify whether you want to install from a DVD or an ISO image.

You need to specify the storage for the VM. You can create a new partition on the base machine and specify that partition here.

Specify the bridge interface that you created above in respective fields.

After installation, configure the network inside the guest and proceed accordingly.

Sunday, 4 November 2012



In our previous post, we created an LVM logical volume. But what use is LVM if we can't resize the volume? One of the most useful features of LVM is that we can resize a volume without halting the running service. Resizing includes both extending and reducing the size.

The most important point while extending or reducing a logical volume is that we need to unmount the device before resizing it:

# umount /dev/vg_name/lv_name

Extending LVM:

Suppose we have a logical volume lv_name of size 9 GB, but our requirement is 10 GB. So we need to extend the logical volume by 1 GB. There are two possible circumstances:

1. Your volume group has 1 GB of free space.
2. Your volume group doesn't have enough free space.

1. If the volume group has 1 GB of free space, we can simply extend the logical volume as:

# e2fsck -f /dev/vg_name/lv_name

This command checks the filesystem for errors and repairs any inconsistencies it finds.

# lvextend -L +1G /dev/vg_name/lv_name

or

# lvextend -l +100 /dev/vg_name/lv_name

The first command will extend the volume by 1 GB; the second will extend it by 100 physical extents. For more information refer to my previous post on creating LVM.

We can't simply reformat the volume, because our data would be lost, so we run the following command to grow the filesystem into the extended space:

# resize2fs -f /dev/vg_name/lv_name 10G

The logical volume is now resized and ready to mount.
After mounting, you will be able to use the extra space you added.

2. Volume group doesn't have extra space.

If the volume group doesn't have enough free space, we go through the following procedure.
Create a new partition and change its hex code to 8e (Linux LVM), choosing the size according to your need.
Create a physical volume from that partition. This is illustrated in my post on creating LVM.
Add the newly created physical volume to the existing volume group as:

# vgextend vg_name /dev/sda9

where /dev/sda9 is your newly created physical volume.
Now go through the procedure as in step one.

Reducing LVM:

After unmounting the logical volume, we go through the following procedure.

Suppose we have a 10 GB logical volume, but 5 GB is more than sufficient for us, so we will reduce it as:

# e2fsck -f /dev/vg_name/lv_name

# resize2fs -f /dev/vg_name/lv_name 5G

# lvreduce -L 5G /dev/vg_name/lv_name

# e2fsck -f /dev/vg_name/lv_name

Now remount the logical volume and use the space. The e2fsck command checks the filesystem for errors, and resize2fs shrinks the filesystem to the given size before lvreduce shrinks the volume itself.

LVM (Logical Volume Manager)

LVM is the logical volume manager for the Linux kernel. It manages disk drives and similar mass-storage devices; the term volume refers to a disk drive or partition. LVM helps in managing large hard-disk farms by letting you add disks, replace disks, and copy and share contents from one disk to another without disrupting service (hot swapping). It also supports backups by taking snapshots of existing data, and it allows logical volumes to be resized online.

Creating a logical volume:

If you want to create a logical volume on your machine, follow the steps below, you can modify the steps according to your need.

1. First of all you will need to create a partition; you can create the LVM from a single partition or from more than one partition:

# fdisk /dev/sda

/dev/sda may vary according to hard disk type.

Press n for a new partition; suppose sda6 is the new partition created. We also need to change the partition type: press t, then type 8e (Linux LVM) and press Enter. Now save the changes using w.

To see the new partition created, use

# fdisk -l

Now reboot the system (or run partprobe) so the kernel rereads the partition table.

2. Once partition is created, we will go through three steps for creating the LVM.

Step 1: Creating the Physical Volume.

From the partition created, we will create the physical volume using the command:

# pvcreate /dev/sda6

If you are creating the physical volume from more than one partition:

# pvcreate /dev/sda6 /dev/sda7

This way we can create physical volume from multiple partitions. You can see the created physical volume using the command:

# pvdisplay


# pvscan

Step 2: Creating Volume Group

From physical volumes, we need to create the volume group:

# vgcreate vg_name /dev/sda6

# vgcreate vg_name /dev/sda6 /dev/sda7

Either command will create a volume group with the default physical extent size of 4 MB. We can change this size using the -s option:

# vgcreate -s 16M vg_name /dev/sda6

where 16M is the physical extent size. This means that the smallest allocatable block in the LVM will be 16 MB.

This must be a power of two.

To see the created volume groups, use the command:

# vgdisplay


# vgscan

These commands will display the physical extent size, among other details.

Step 3: Creating the Logical volume

Now the logical volume can be created from the volume group; again, we can do it in two ways:

# lvcreate -L 10G -n lv_name vg_name


# lvcreate -l 4000 -n lv_name vg_name

The first command creates a 10 GB logical volume from the volume group, and the second creates a volume of 4000 physical extents, which with the 16 MB extent size set above is 16 x 4000 = 64000 MB.
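The arithmetic for the extent-based command, assuming the 16 MB extent size set earlier with -s 16M, can be checked in the shell:

```shell
# Each physical extent is 16 MB, so 4000 extents give the
# total logical volume size in MB.
pe_size_mb=16
extents=4000
total_mb=$((pe_size_mb * extents))
echo "${total_mb} MB"            # prints "64000 MB"
```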

Before specifying the size of the logical volume, make sure the volume group is at least that large.

Now the logical volume is created; you need to format it and mount it on some directory:

# mkfs.ext4 /dev/vg_name/lv_name

# mount /dev/vg_name/lv_name directory_name

For mounting permanently, make entry in /etc/fstab as directed in my post creating partitions.



IPTABLES allows us to control how incoming traffic is handled. IPTABLES is an administrative tool for IPv4 packet filtering and NAT. It is used to set up, maintain and inspect the tables of IPv4 packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user-defined chains. Each chain is a list of rules which can match a set of packets. Each rule specifies what to do with a packet that matches. This is called a `target`, which may be a jump to a user-defined chain in the same table.


A firewall rule specifies criteria for a packet and a target. If the packet doesn't match, the next rule in the chain is examined; if it does match, then the next rule is specified by the value of the target, which can be the name of a user-defined chain or one of the special values ACCEPT, DROP, QUEUE or RETURN.

ACCEPT means to let the packet through. DROP means to drop the packet on the floor. QUEUE means to pass the packet to userspace. How the packet can be received by a userspace process differs by the particular queue handler. Packets with a target of QUEUE will be sent to queue number '0' in this case. RETURN means stop traversing this chain and resume at the next rule in the previous chain. If the end of the built-in chain is reached or a rule in a built-in chain with target RETURN is matched, the target specified by the chain policy determines the fate of the packet.


There are currently three independent tables (which tables are present at any time depends on the kernel configuration options and on which modules are loaded).

The tables are:


Filter:

This is the default table and contains the built-in chains INPUT (for incoming packets), FORWARD (for packets being routed through) and OUTPUT (for locally generated packets).


NAT:

This table is consulted when a packet that creates a new connection is encountered. It consists of three built-in chains:

PREROUTING(for altering packets as soon as they come in), OUTPUT(for altering locally generated packets before routing) and POSTROUTING(for altering packets as they are about to go out).


Mangle:

This table is used for specialized packet alteration. It has all the chains that the above two tables have.

We will discuss some options that are useful on the command line.

-A use this option if you want to append a rule to a chain.

-I use this option if you want to insert a rule at the top of a chain.

-D use this option if you want to delete a rule from a chain.

-R use this option if you want to replace a rule.

-L use this option to list all the rules.

-F use this option to flush all the rules.

-p use this option to specify the protocol (tcp or udp).

-j use this option to specify the action (e.g. ACCEPT or REJECT).

--dport use this option to specify the destination port number.

-d/-s use this option to specify the destination or source address; the address may be a hostname or an IP address.

Here we will discuss some examples to understand the concept of iptables.

# iptables -A INPUT -p tcp -s source_address --dport 22 -j REJECT

The above rule will reject incoming TCP packets from the given source address destined to port 22 (replace source_address with a hostname or IP address).

# iptables -A INPUT -p udp -s ! source_address --dport 20 -j REJECT

This rule will REJECT all UDP packets destined to port 20 that do not come from the given source address (the ! negates the match).
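Rules like the ones above can be composed mechanically from the options already discussed. Here is a small illustrative shell helper (hypothetical; it only prints the command instead of executing it, since running iptables requires root):

```shell
# Hypothetical helper: build an iptables append-rule command string
# from a chain, protocol, destination port and target action.
build_rule() {
    chain=$1; proto=$2; dport=$3; action=$4
    echo "iptables -A $chain -p $proto --dport $dport -j $action"
}

# Compose (but do not run) a rule rejecting inbound SSH traffic:
build_rule INPUT tcp 22 REJECT
# prints: iptables -A INPUT -p tcp --dport 22 -j REJECT
```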

Understanding Logs in Linux
Whatever else you do to secure a Linux system, it must have comprehensive, accurate and carefully watched logs. Logs serve several purposes. First, they help us troubleshoot virtually all kinds of system and application problems. Second, they provide valuable early warning signs of system abuse. And third, when all else fails (whether that means a system crash or a system compromise), logs provide us with crucial forensic data. Syslog accepts log data from the kernel (by way of the klogd daemon) and from any and all local processes. It is flexible as well, allowing you to determine what gets logged and where. A preconfigured syslog installation is part of the base operating system in virtually all variants of UNIX and Linux.

The syslog daemon receives log messages from the kernel and acts based on each message's type (facility) and priority. The mapping of messages to actions is listed in /etc/syslog.conf.


Each line in this file specifies one or more facility/priority selectors followed by an action. A selector consists of a facility (or facilities) and a single priority.

For example, consider the following line from the file:

mail.notice /var/log/mail

This means that the facility (service type) is mail and the priority is notice; logs matching this selector will be written to the /var/log/mail file.

Another example:

*.* /var/log/new

In the above line:

the * before the dot stands for any facility,

the * after the dot stands for any priority,

and the path on the right is the file where the messages will be written.
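A few more selector lines of the kind found in a typical /etc/syslog.conf (the log file paths here are illustrative):

```text
# kernel messages of priority warning and above
kern.warning                    /var/log/kern-warn.log

# emergency messages go to every logged-in user
*.emerg                         *

# everything at info or above, except mail and authpriv, to one file
*.info;mail.none;authpriv.none  /var/log/messages
```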

Types of facilities:

Facilities are simply categories. Supported facilities in Linux are:

auth: used for many security events
authpriv: used for the access control related messages
daemon: used for system processes and other daemons
kern: used for kernel messages
mark: periodic timestamp messages generated by syslogd itself
user: the default facility when none is specified by an application or in a selector
local7: boot messages

*: a wildcard standing for any and all facilities.
Types of priorities:

Unlike facilities, which have no relationship to each other, priorities are hierarchical. The possible priorities in Linux, in increasing order of urgency, are:

debug, info, notice, warning, err, crit, alert, emerg
In practice, most log messages are written to files. If you list the full path of a file as a line's action in syslog.conf, messages that match that line will be appended to that file. If the file doesn't exist, syslog will create it.
For example:
Open the configuration file using vim
# vim /etc/syslog.conf

At the top of the file write:

kern.* /var/log/iptables.log

Save the file and exit.

Restart the syslog service:

# /etc/init.d/syslogd restart

Add an iptables LOG rule and then check the log file:

# iptables -A INPUT -j LOG --log-level 4

# tailf /var/log/iptables.log