Wednesday, 7 November 2012


Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources.

You probably know a little about virtualization if you have ever divided your hard drive into different partitions. A partition is the logical division of a hard disk drive to create, in effect, two separate hard drives.

Operating system virtualization is the use of software to allow a piece of hardware to run multiple operating system images at the same time. The technology got its start on mainframes decades ago, allowing administrators to avoid wasting expensive processing power.

In 2005, virtualization software was adopted faster than anyone imagined, including the experts. There are three areas of IT where virtualization is making inroads: network virtualization, storage virtualization and server virtualization.
Network virtualization is a method of combining the available resources in a network by splitting up the available bandwidth into channels, each of which is independent from the others, and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts, much like your partitioned hard drive makes it easier to manage your files.

Storage virtualization is the pooling of physical storage from multiple network storage devices into what appears to be a single storage device that is managed from a central console. Storage virtualization is commonly used in storage area networks (SANs).
Server virtualization is the masking of server resources (including the number and identity of individual physical servers, processors, and operating systems) from server users. The intention is to spare the user from having to understand and manage complicated details of server resources while increasing resource sharing and utilization and maintaining the capacity to expand later.

Virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and workload management.


A hypervisor, also called a virtual machine manager, is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host's processor, memory, and other resources all to itself. However, the hypervisor is actually controlling the host processor and resources, allocating what is needed to each operating system in turn and making sure that the guest operating systems (called virtual machines) cannot disrupt each other. 

In computing, a hypervisor is one of many hardware virtualization techniques that allow multiple operating systems, termed guests, to run concurrently on a host computer. It is so named because it is conceptually one level higher than a supervisory program. The hypervisor presents the guest operating systems with a virtual operating platform and manages their execution. Multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are very commonly installed on server hardware, with the function of running guest operating systems that themselves act as servers.

Hardware Virtualization:

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can then be run on the virtual machine.
In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the actual machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

Different types of hardware virtualization include:

Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.
Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.
Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator: both are computer programs that imitate hardware, but an emulator typically imitates hardware the host does not actually have (often a different architecture), while a hypervisor multiplexes the host's own hardware among its guests.

Tuesday, 6 November 2012

Voice Over IP

VoIP (voice over IP) is an IP telephony term for a set of facilities used to manage the delivery of voice information over the Internet. VoIP involves sending voice information in digital form in discrete packets rather than by using the traditional circuit-committed protocols of the public switched telephone network. A major advantage of VoIP and Internet telephony is that it avoids the tolls charged by ordinary telephone service.

Voice over IP commonly refers to the communication protocols and transmission techniques involved in the delivery of voice communications and multimedia sessions over the Internet Protocol. Internet telephony refers to communication services (voice, fax, SMS and voice messaging applications) that are transported over the Internet rather than the public switched telephone network (PSTN).

The steps involved in originating a VoIP telephone call are signaling and media channel setup, digitization of the analog voice signal, encoding, packetization, and transmission as IP packets over a packet-switched network.

On the receiving side, similar steps reproduce the original voice stream. VoIP is one of the technologies used by IP telephony to transport phone calls.

Voice Over IP has been implemented in various ways using different protocols:
Media Gateway Control Protocol
Session Initiation Protocol
Real Time Transport Protocol
Session Description Protocol

In addition to IP, VoIP uses the Real-time Transport Protocol (RTP) to help ensure that packets get delivered in a timely way. On public networks, it is currently difficult to guarantee quality of service; better service is possible with private networks managed by an enterprise or by an Internet telephony service provider.

Using VoIP, an enterprise positions a "VoIP device" at a gateway. The gateway receives packetized voice transmissions from users within the company and then routes them to other parts of its intranet (local area or wide area network) or, using a T-carrier system or E-carrier interface, sends them over the public switched telephone network.


The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over IP networks. RTP is used extensively in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications, television services and web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is originated and received on even port numbers and the associated RTCP communication uses the next higher odd port number.
RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol which assists in setting up connections across the network.


The Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling communication sessions such as voice and video calls over Internet Protocol (IP). The protocol can be used for creating, modifying and terminating two-party (unicast) or multi-party (multicast) sessions. Sessions may consist of one or several media streams.
Other SIP applications include video conferencing, streaming multimedia distribution, instant messaging, presence information, file transfer and online games.
SIP is an application-layer protocol designed to be independent of the underlying transport layer; it can run on the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), or the Stream Control Transmission Protocol (SCTP). It is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP).


The Session Description Protocol (SDP) is a format for describing streaming media initialization parameters. SDP is intended for describing multimedia communication sessions for the purposes of session announcement, session invitation, and parameter negotiation. SDP does not deliver media itself but is used for negotiation between end points of media type, format, and all associated properties. The set of properties and parameters are often called a session profile. SDP is designed to be extensible to support new media types and formats.

SDP started off as a component of the Session Announcement Protocol (SAP), but found other uses in conjunction with Real-time Transport Protocol (RTP), Real-time Streaming Protocol (RTSP), Session Initiation Protocol (SIP) and even as a standalone format for describing multicast sessions.
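As an illustration, a minimal SDP body for a simple audio session might look like the following sketch; the user name, addresses and port here are invented for the example, not taken from any real deployment:

```
v=0
o=alice 2890844526 2890844526 IN IP4 192.0.2.10
s=Example audio session
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
```

The m= line offers a single audio stream on UDP port 49170 (an even port, matching the RTP port convention mentioned above) using RTP payload type 0, i.e. G.711 u-law audio.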

Monday, 5 November 2012

KVM Virtual Machine


KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware (32- or 64-bit) containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

KVM is open source software.

Installation of KVM on Linux:

If you want to install KVM on your machine, check your machine's architecture first using the command:

# uname -m

This will typically display i386, i686 or x86_64.

Your machine supports KVM in this setup only if the command displays the last option, x86_64.
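Besides the architecture, the CPU must advertise the virtualization extensions themselves. A small sketch (assuming a Linux system with /proc/cpuinfo available) that checks for the relevant CPU flags:

```shell
# Check whether the CPU advertises hardware virtualization support.
# vmx = Intel VT, svm = AMD-V; KVM needs one of these flags.
if grep -E -q 'vmx|svm' /proc/cpuinfo; then
    echo "virtualization extensions found"
else
    echo "no virtualization extensions found"
fi
```

Note that the flags may also be hidden if virtualization is disabled in the BIOS, so a negative result is worth double-checking there.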

To install KVM, we need to install extra packages on the system. We use yum for this purpose. If you don't have yum configured, refer to my previous post installing extra packages.

After yum is configured, we need to install the following packages using yum:

# yum install kvm qemu* virt-manager libvirt* -y

If the packages installed correctly, restart the system and then restart the libvirt service:

# /etc/init.d/libvirtd restart

Now configure the network with a bridged configuration. For configuring the bridge, refer to one of my previous posts on network configuration.

Once the bridge is configured, run the following command to install a virtual machine:

# virt-manager
virt-manager is the GUI utility that helps us install virtual machines graphically and manage them.

Running this command will open a pop-up window for installing the new VM.

Follow the steps and install the new VM as:

Give the VM a name and specify whether you want to install from a DVD or an ISO image.

You also need to specify the storage. You can create a new partition on the host machine and specify that partition here.

Specify the bridge interface that you created above in the respective fields.

After installation, configure the network and proceed accordingly.

Sunday, 4 November 2012



In our previous post, we created an LVM volume. But what is the use of LVM if we can't resize a volume? One of the most useful features of LVM is that a volume can be resized without halting the rest of the system. Resizing includes both extending and reducing the size.

The most important point when extending or reducing a volume is that we need to unmount the device before resizing the filesystem:

# umount /dev/vg_name/lv_name

Extending LVM:

Suppose we have a logical volume lv_name of size 9 GB, but our requirement is 10 GB, so we need to extend the logical volume by 1 GB. There are two possible circumstances:

1. The volume group has 1 GB of free space.
2. The volume group doesn't have enough free space.

1. If the volume group has 1 GB of free space, we can simply extend the volume:

# e2fsck -f /dev/vg_name/lv_name

This command checks the filesystem for errors and repairs any inconsistencies it finds.

# lvextend -L +1G /dev/vg_name/lv_name

or

# lvextend -l +100 /dev/vg_name/lv_name

The first command grows the volume by 1 GB; the second grows it by 100 physical extents. For more information, refer to the previous post on creating LVM.

Now run the following command so the filesystem can use the extended space. We can't simply reformat the volume, because our data would be lost, so we grow the filesystem in place:

# resize2fs -f /dev/vg_name/lv_name 10G

The logical volume is now resized and ready to mount.
After mounting, you will be able to use the extra space you added.

2. Volume group doesn't have extra space.

If the volume group doesn't have free space, we go through the following procedure.
Create a new partition and set its partition type (hex code 8e), choosing the size according to your need.
Create a physical volume from that partition; this is illustrated in my post on creating LVM.
Add the new physical volume to the existing volume group:

# vgextend vg_name /dev/sda9

where /dev/sda9 is your newly created physical volume.
Now go through the procedure in step one.

Reducing LVM:

After unmounting the volume, we go through the following procedure.

Suppose we have a 10 GB logical volume, but 5 GB of space is more than sufficient for us, so we reduce the volume as follows:

# e2fsck -f /dev/vg_name/lv_name

# resize2fs -f /dev/vg_name/lv_name 5G

# lvreduce -L 5G /dev/vg_name/lv_name

# e2fsck -f /dev/vg_name/lv_name

Now remount the volume and use the space. The e2fsck command checks the filesystem for errors, and resize2fs shrinks the filesystem to the given size before lvreduce shrinks the volume itself.

LVM (Logical Volume Manager)


LVM is the Logical Volume Manager for the Linux kernel. It manages disk drives and similar mass-storage devices; the term volume refers to a disk drive or partition. LVM helps manage large hard-disk farms by letting you add disks, replace disks, and copy and share contents from one disk to another without disrupting service (hot swapping). It also supports backups by taking snapshots of existing data, and it allows logical volumes to be resized online.

Creating a logical volume:

If you want to create a logical volume on your machine, follow the steps below; you can modify them according to your needs.

1. First of all, you need to create a partition. You can build the LVM from a single partition or from more than one partition:

# fdisk /dev/sda

/dev/sda may vary according to hard disk type.

Press n for a new partition; suppose sda6 is the new partition created. We also need to change the partition type: press t, then type 8e (Linux LVM) and press Enter. Now save the changes using w.

To see the new partition created, use

# fdisk -l

Now reboot the system (or run partprobe) so the kernel re-reads the changed partition table.

2. Once the partition is created, we go through three steps to create the LVM.

Step 1: Creating the Physical Volume.

From the partition created, we will create the physical volume using the command:

# pvcreate /dev/sda6

If you are creating physical volumes from more than one partition:

# pvcreate /dev/sda6 /dev/sda7

This way we can create physical volumes from multiple partitions. You can see the created physical volumes using the command:

# pvdisplay

or

# pvscan

Step 2: Creating Volume Group

From physical volumes, we need to create the volume group:

# vgcreate vg_name /dev/sda6

# vgcreate vg_name /dev/sda6 /dev/sda7

These commands are used to create volume groups; they create a volume group with the default physical extent size of 4 MB. We can change this size using the -s option:

# vgcreate -s 16M vg_name /dev/sda6

where 16M is the physical extent size. This means that the smallest unit of allocation in the LVM will be 16 MB.

The extent size must be a power of two.
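As a quick sanity check of the extent arithmetic (assuming the 16 MB extent size chosen above), a shell snippet can compute how many extents a given volume will occupy:

```shell
# Number of 16 MB physical extents needed for a 10 GB logical volume.
pe_size_mb=16
lv_size_mb=$((10 * 1024))             # 10 GB expressed in MB
extents=$((lv_size_mb / pe_size_mb))
echo "$extents"                       # prints 640
```

This is the same number you would pass to lvcreate with the -l (lowercase) option to get a 10 GB volume.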

To see the created volume groups, use the command:

# vgdisplay

or

# vgscan

These commands display, among other details, the physical extent size.

Step 3: Creating the Logical volume

Now the logical volume can be created from the volume group; again, we can do it in two ways:

# lvcreate -L 10G -n lv_name vg_name

or

# lvcreate -l 4000 -n lv_name vg_name

The first command creates a 10 GB logical volume from the volume group; the second creates a volume of 4000 extents which, with the 16 MB extent size used above, is 16 x 4000 = 64,000 MB.

But before specifying the size of the logical volume, make sure your volume group is large enough.

Now the logical volume is created; you need to format it and mount it on a directory:

# mkfs.ext4 /dev/vg_name/lv_name

# mount /dev/vg_name/lv_name /directory_name

For mounting permanently, make entry in /etc/fstab as directed in my post creating partitions.



iptables allows us to control how incoming traffic is handled. It is an administrative tool for IPv4 packet filtering and NAT, used to set up, maintain and inspect the tables of IPv4 packet filter rules in the Linux kernel. Several different tables may be defined. Each table contains a number of built-in chains and may also contain user-defined chains. Each chain is a list of rules which can match a set of packets, and each rule specifies what to do with a packet that matches. This is called a target, which may be a jump to a user-defined chain in the same table.


A firewall rule specifies criteria for a packet and a target. If the packet doesn't match, the next rule in the chain is examined; if it does match, the next step is specified by the value of the target, which can be the name of a user-defined chain or one of the special values ACCEPT, DROP, QUEUE or RETURN.

ACCEPT means to let the packet through. DROP means to drop the packet on the floor. QUEUE means to pass the packet to userspace. How the packet can be received by a userspace process differs by the particular queue handler. Packets with a target of QUEUE will be sent to queue number '0' in this case. RETURN means stop traversing this chain and resume at the next rule in the previous chain. If the end of the built-in chain is reached or a rule in a built-in chain with target RETURN is matched, the target specified by the chain policy determines the fate of the packet.


There are currently three independent tables; which tables are present at any time depends on the kernel configuration options and on which modules are loaded.

The tables are:


filter:

This is the default table and contains the built-in chains INPUT (for incoming packets), FORWARD (for packets being routed through) and OUTPUT (for locally generated packets).


nat:

This table is consulted when a packet that creates a new connection is encountered. It consists of three built-in chains:

PREROUTING (for altering packets as soon as they come in), OUTPUT (for altering locally generated packets before routing) and POSTROUTING (for altering packets as they are about to go out).


mangle:

This table is used for specialized packet alteration. It has all the chains that the above two tables have.

We will discuss some options that are useful on the command line.

-A use this option if you want to append a rule to a chain.

-I use this option if you want to insert a rule at the top of a chain.

-D use this option if you want to delete a rule from a chain.

-R use this option if you want to replace a rule.

-L use this option to list all the rules.

-F use this option to flush all the rules.

-p use this option to specify the protocol (tcp or udp).

-j use this option to specify the action (e.g. ACCEPT, REJECT).

--dport use this option to specify the destination port number.

-d/-s use this option to specify the destination or source address; the address may be a hostname or an IP address.

Here we will discuss some examples to understand the concept of iptables.

# iptables -A INPUT -p tcp -s <source-ip> --dport 22 -j REJECT

The above rule rejects incoming TCP packets from the given source address destined for port 22 (<source-ip> is a placeholder for the address you want to block).

# iptables -A INPUT -p udp ! -s <source-ip> --dport 20 -j REJECT

This rule rejects UDP packets destined for port 20 from every source except the given address (note the negating !).
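Rules like these can also be kept in iptables-save format and loaded with iptables-restore; the sketch below uses a made-up source network (192.0.2.0/24) purely as a placeholder:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
# Reject SSH (TCP port 22) coming from the example network
-A INPUT -p tcp -m tcp -s 192.0.2.0/24 --dport 22 -j REJECT
# Reject UDP port 20 traffic from any source except the example network
-A INPUT -p udp -m udp ! -s 192.0.2.0/24 --dport 20 -j REJECT
COMMIT
```

Keeping the rules in a file like this makes them easy to review and to restore after a reboot.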

Understanding Logs in Linux

Whatever else you do to secure a Linux system, it must have comprehensive, accurate and carefully watched logs. Logs serve several purposes. First, they help us troubleshoot virtually all kinds of system and application problems. Second, they provide valuable early warning signs of system abuse. And third, when all else fails (whether that means a system crash or a system compromise), logs provide crucial forensic data. Syslog accepts log data from the kernel (by way of the klogd daemon) and from any and all local processes. It is flexible as well, allowing you to determine what gets logged and where. A preconfigured syslog installation is part of the base operating system in virtually all variants of UNIX and Linux.

The syslog daemon receives log messages from the kernel and from local processes and acts based on each message's facility and priority. The mapping of messages to actions is listed in /etc/syslog.conf.


Each line in this file specifies one or more facility/priority selectors followed by an action. A selector consists of a facility (or facilities) and a single priority.

For example, consider the following line from that file:

mail.notice /var/log/mail

This means that the facility is mail and the priority is notice; messages matching this selector are written to the file /var/log/mail.

Another example:

*.* /var/log/new

In this line:

the * before the dot stands for all facilities,

the * after the dot stands for all priorities,

and the path on the right is the file to which matching messages will be written.

Types of facilities:

Facilities are simply categories. Supported facilities in linux are :

auth: used for many security events
authpriv: used for the access control related messages
daemon: used for system processes and other daemons
kern: used for kernel messages
mark: periodic timestamp messages generated by syslogd itself
user: the default facility when none is specified by an application or in a selector
local7: boot messages

* : a wildcard standing for any and all facilities.
Types of priorities:

Unlike facilities, which have no relationship to each other, priorities are hierarchical. The possible priorities in Linux, in increasing order of urgency, are:

debug, info, notice, warning, err, crit, alert, emerg.
In practice, most log messages are written to files. If you list the full path of a file as a line's action in syslog.conf, messages that match that line will be appended to that file. If the file doesn't exist, syslog will create it.
For example:
Open the configuration file using vim
# vim /etc/syslog.conf

At the top of the file write:

kern.* /var/log/iptables.log

save the file and exit.

restart the syslog service.

# /etc/init.d/syslogd restart

Run an iptables rule that logs, and check the log file:

# iptables -A INPUT -j LOG --log-level 4

# tailf /var/log/iptables.log

Friday, 2 November 2012

File Permissions


Working with files and directories in Linux is difficult if you don't know about the Linux file permission system. In this post, we will learn about file permissions and advanced file permissions.

There are three types of file permissions: read (r), write (w) and execute (x).

To see which items a directory contains, we use the command:

# ls directory_name

and to see the file permissions as well, we use:

# ls -l directory_name

For example:

drwxrwxr-x 3 john john 4096 Oct 20 17:00 Music/

In this case, d stands for directory, r stands for read permission, w stands for write permission and x stands for execute permission.

The three rwx groups stand for three classes of users:

the first for the user (owner), the second for the group and the third for others.

In the example above, the owning user is john, the group is john, and the permissions are applied to the directory Music.

The main command for changing file permissions in Linux is:

# chmod 777 filename

Here the permission is 777: the first 7 is for the user, the second 7 for the group and the last 7 for others.

Now what is the meaning of 7?

Each permission has a numeric value: 4 for read, 2 for write and 1 for execute.

This means that if we want to give full permissions, we add all three: 4+2+1=7.

If we don't want to give write permission, we use 4+1=5.

Similarly, different combinations can be made:

# chmod 751 filename

In this way, we can customize the permissions.
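The octal arithmetic can be verified on a harmless scratch file; the following sketch assumes GNU coreutils (mktemp and stat) are available:

```shell
# Apply octal modes to a temporary file and read them back.
f=$(mktemp)

chmod 751 "$f"          # 7=rwx (user), 5=r-x (group), 1=--x (others)
stat -c '%a %A' "$f"    # prints: 751 -rwxr-x--x

chmod 640 "$f"          # 6=rw- (user), 4=r-- (group), 0=--- (others)
stat -c '%a %A' "$f"    # prints: 640 -rw-r-----

rm -f "$f"
```

The %a format prints the octal mode and %A the familiar rwx string, so you can see both notations side by side.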

Also, there is another useful permission mechanism known as FACL (file access control lists).

Using this tool we can assign special permissions to a single user. We can allow or deny individual users access to particular items.

This tool is used with the command:

# getfacl filename

This command shows the special permissions on filename.

To apply permissions, we can use following command:

# setfacl -m u:username:rwx filename

# setfacl -m u:username:r-- filename

To remove these permissions, use the command:

# setfacl -x u:username filename

For more information, consult the manuals using the man command.

Wednesday, 31 October 2012

Apache Server


Apache server, HTTP server, or web server: all refer to the same thing. The Apache server played a key role in the initial development of the World Wide Web. The name Apache was chosen out of respect for the Native American Apache tribe and its superior skills in warfare strategy. Apache is developed and maintained by an open community of developers under the Apache Software Foundation.

Setting up a web server does not mean creating a website; rather, we are creating the platform that will run a website. Today most Apache servers run on cloud machines.

The web server package is called httpd on Red Hat and apache2 on Ubuntu. When we install httpd on our machine and start the service, eight Apache child processes run concurrently by default. When traffic hits the web server, requests are served by these child processes; the more traffic on your server, the more child processes are spawned and the more resources are used.

One of the major problems that may arise when troubleshooting Apache is SELinux. If the file contexts are not set correctly, your Apache server may refuse to work.

Following is the simple guide to run your test site on redhat:

Install the package for Apache:

# yum install httpd* -y

When done, create a test file in the following location and add some test content:

# vim /var/www/html/index.html

add the content:

This is a test site.

save the file and exit. 

Restart the service of apache to run the changes.

# /etc/init.d/httpd restart 

Now open a local browser and browse to the site by the name localhost.

Enter the URL: http://localhost

The webpage will open and display the above content.

You can also open the website using another name. If you know your machine's IP address, map it to a hostname by opening the hosts file:

# vim /etc/hosts

Add a line of the form "ip-address hostname", then save and quit.

Now open the website with the URL http://hostname (using the hostname you chose). This opens the website you have created.

Installing RPM packages

Installing Extra Packages

When you work with an operating system, you often need to install additional packages or software. A major advantage of Linux is that most of its software is free; you can download it from the Internet at no cost. Red Hat based systems use Red Hat Package Manager (RPM) packages for installation. When installing packages, remember your computer's architecture and install only packages that support your system; for example, do not install a 64-bit package on a 32-bit architecture.

To install a package using the command line:

# rpm -ivh package_name

To upgrade a package (this replaces the installed package with the new one):

# rpm -Uvh package_name

To remove an installed package:

# rpm -ev package_name

To see the list of packages installed.

# rpm -qa 

To see which file comes from which rpm.

# rpm -qf /etc/init.d/network


To install a package without fulfilling the dependencies:

# rpm -ivh package.rpm --nodeps

To see what files are installed by a given package:

# rpm -ql rpmname

However, if we install packages one by one, it is a very long process, since we must deal with the dependencies of the packages ourselves. So we have another tool, known as 'yum'.

yum is a utility that helps install packages automatically. It automatically resolves the dependencies and installs the selected packages.

Redhat provides some of the packages along with the distribution. These packages are inside any ISO distribution of redhat.

Copy those packages to a location of your choice:
# cp -rv /media/Packages/ /var/www/html/Packages/

# cd /var/www/html/Packages/

Install the package:

# rpm -ivh createrepo*.rpm

Now run the command:

# createrepo -v /var/www/html/Packages/

Now create a repository file:

# vim /etc/yum.repos.d/redhat.repo

enter the following text:
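A minimal stanza pointing at the local package directory created above might look like the following sketch (the repo id and name here are arbitrary placeholders):

```
[local-packages]
name=Local Packages
baseurl=file:///var/www/html/Packages
enabled=1
gpgcheck=0
```

With gpgcheck disabled, yum will accept the unsigned local packages; on a production system you would normally keep signature checking on.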


Now run the command:

# yum list all

If it shows the list of packages, yum is configured on your machine.
After that use following command to install the packages:

# yum install package_name -y

# yum install php* -y

In this way you can install packages.

Creating Partitions


You can create partitions using the command line. But first you should know the current partition table; for that, use:

#fdisk -l

# df -h

These commands show the existing partitions and where they are mounted. In Linux, mounting a partition is necessary because without mounting you can't use the partition's space. There are two ways of mounting partitions: permanent and temporary.

To create the partition use the following command:

# fdisk /dev/sda

Press m for more options and n for a new partition, then press Enter to accept the default starting sector and enter +2G as the last sector. This will create a 2 GB partition. Then press w to write the partition table.

After that, reboot the machine so the new partition is recognized.

This creates the new partition, but it is not yet usable. To use it, first create a new directory:

# mkdir /movies

Now mount the new partition on the directory.

For temporary mount:

# mount /dev/sda5 /movies

where sda5 is the newly created partition (the number may vary on your system). Now check the mount using:

# df -h

To mount the partition permanently, add an entry to the file /etc/fstab:

# vim /etc/fstab

/dev/sda5 /movies ext4 defaults 0 0

A permanently mounted partition survives a reboot, whereas a temporary mount does not.

Network Configuration


Linux is a capable networking operating system and highly secure when handled carefully. We can configure networking on a Linux system in the following ways:

1. Static IP allocation.

2. Dynamic (NetworkManager) IP allocation.

3. Bridge configuration.

We will go through these methods one by one.

1. Static IP allocation:

Now open the file:

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
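A typical static configuration for this file might look like the sketch below; the addresses are examples only and must be replaced with values from your own network:

```
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
# Example addresses -- substitute your own
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1
```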


Save the file and quit. Make sure that you know the gateway address. Don't forget to restart the network service; this is necessary so the new settings take effect.

# /etc/init.d/network restart

Using this technique we can assign a permanent IP address to a machine. We can also assign a temporary IP address as below:

# ifconfig eth0 <ip-address> netmask <netmask>

This temporarily assigns the given IP address to the specified interface. We can also bring interfaces up or down manually using the commands below:

# ifup eth0
# ifdown eth0

2. Dynamic (DHCP) IP allocation via NetworkManager:

Open the file:

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
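The contents are again omitted in the post; a typical DHCP configuration on a RHEL-family system looks like this:

```
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=yes
```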


Save the file and quit.

In this case, restart the NetworkManager service:

# /etc/init.d/NetworkManager restart

However, if the network still does not come up, check the following:

1. Check the network cable; it may be unplugged.
2. Check which interface the cable is plugged into, since there may be two Ethernet cards. In that case, run:

# ethtool eth0
# ethtool eth1
# mii-tool

The above commands show whether a link is present.

3. If no link is present, check your cable. It may be broken.

3. Network configuration for a bridged network:

First stop the NetworkManager service:

# /etc/init.d/NetworkManager stop;chkconfig NetworkManager off

Now edit the file ifcfg-br0 as;

# vim /etc/sysconfig/network-scripts/ifcfg-br0
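The bridge definition is missing from the post; on a RHEL-family system it typically looks like this (addresses are placeholders):

```
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```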

Now open the file ifcfg-eth0 as;

# vim /etc/sysconfig/network-scripts/ifcfg-eth0
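Here too the contents are omitted; the physical interface is typically stripped of its own address and attached to the bridge:

```
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br0
```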


Don't forget to restart the network service:

# /etc/init.d/network restart

Directory Structure

To understand the basics of Linux, you need to know its directory structure. The structure is not complicated: it is a tree whose root is "/". The / directory contains a number of child directories, listed below:

/etc contains the system's configuration files.
/var is a standard subdirectory of the root directory holding variable data: logs, the document root of a website, etc.
/boot contains the system's boot files and is a sensitive part of the system.
/root is the home directory of the root user.
/dev contains the device files for the hardware devices.
/bin contains the essential binaries of the system.
/proc is a virtual filesystem exposing kernel and process information.
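You can confirm this layout on any Linux machine; ls -ld lists the directories themselves without descending into them:

```shell
# Show some of the standard top-level directories
# (present on virtually every Linux system).
ls -ld /etc /var /dev /bin
```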

Basic Commands

As a Linux user, you should know the basic Linux commands. We will discuss simple commands here; for more help on any command, see its man page:

# man commandname

To change directories, use the cd command. For example:

# cd /var/

This command takes you inside the /var directory.

For more information about the command use man command as:

# man cd

To list the contents of a directory, use the following command:

# ls /var/

You can pass various options to commands, for example:

# ls -l /var/

To create an empty file, use:

# touch filename

Or create and edit a file with vim:

# vim filename

Press i to enter insert mode and write the file's contents. To save, press Escape, type :wq, and press Enter. The file will be saved.

To create a directory, use the following command (note your current location before creating it):

# mkdir directoryname

To move and copy the files:

# mv /var/filename /tmp/filename

#cp /var/filename /tmp/filename

Here I have written /var/filename: you must specify the complete path of the file to move, as well as the destination. The same applies to the copy command.

To reboot or shut down your system, use the following commands:

# init 6 (reboot)

# init 0 (shutdown)

To see the history of commands:

# history
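The commands above can be practiced safely in a throwaway directory; mktemp -d creates one anywhere on the system:

```shell
dir=$(mktemp -d)                          # scratch directory, safe to delete
mkdir "$dir/docs"                         # create a directory
touch "$dir/docs/notes.txt"               # create an empty file
cp "$dir/docs/notes.txt" "$dir/copy.txt"  # copy the file
mv "$dir/copy.txt" "$dir/moved.txt"       # rename (move) the copy
ls "$dir"                                 # lists: docs, moved.txt
rm -rf "$dir"                             # clean up
```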

Tuesday, 30 October 2012

Customize the CMD prompt

When you log in to a Linux system for the first time, you will see the following prompt:

[root@localhost ~]#

This means you are logged in as the root user on the host localhost, and you are in root's home directory (~).

If you are logged in with a normal user's account, the prompt is:

[joy@localhost ~] $

What does this actually mean? The prompt is built from the following escape sequences:

\d : the date Weekday Month Date format
\h : the hostname up to the first ‘.’
\A : the current time in 24-hour HH:MM format
\u : the username of the current user
\w : the current working directory, with $HOME abbreviated with a tilde
\$ : if the effective UID is 0, a #, otherwise a $

We can also change this default prompt. There is a variable called PS1; we edit that variable to customize the look of the prompt. To see its current value, enter the command below:

[root@host1 ~] # echo $PS1

This shows the default value; assigning a customized value to this variable is all that is needed. Here is an example:

[root@host1 ~] # export PS1='\e[0;36m[\u@\h \W]\$ \e[m '
[root@host1 ~]#

Here 0;36 is the color specification (cyan); see the color table below.

There are a number of escape sequences that help customize the prompt. The full list:

\a : an ASCII bell character (07)
\d : the date in "Weekday Month Date" format (e.g., "Tue May 26")
\D{format} : the format is passed to strftime(3) and the result is inserted into the prompt string; an empty format results in a locale-specific time representation. The braces are required
\e : an ASCII escape character (033)
\h : the hostname up to the first '.'
\H : the hostname
\j : the number of jobs currently managed by the shell
\l : the base name of the shell’s terminal device name
\n : newline
\r : carriage return
\s : the name of the shell, the base name of $0 (the portion following the final slash)
\t : the current time in 24-hour HH:MM:SS format
\T : the current time in 12-hour HH:MM:SS format
\@ : the current time in 12-hour am/pm format
\A : the current time in 24-hour HH:MM format
\u : the username of the current user
\v : the version of bash (e.g., 2.00)
\V : the release of bash, version + patch level (e.g., 2.00.0)
\w : the current working directory, with $HOME abbreviated with a tilde
\W : the base name of the current working directory, with $HOME abbreviated with a tilde
\! : the history number of this command
\# : the command number of this command
\$ : if the effective UID is 0, a #, otherwise a $
\nnn : the character corresponding to the octal number nnn
\\ : a backslash
\[ : begin a sequence of non-printing characters, which could be used to embed a terminal control sequence into the prompt
\] : end a sequence of non-printing characters

Create a colored prompt:

You may want a colored prompt for better visibility. In this example the hostname has been dropped to shorten the prompt, and the prompt is turned red while the commands you type stay in the default color:

sahil@ub:~$ export PS1='\e[0;31m[\u:\w]\$ \e[m '

This colors the prompt itself but not the commands that you enter.

List of Color codes:
Color Code
Black 0;30
Red 0;31
Green 0;32
Brown 0;33
Blue 0;34
Purple 0;35
Cyan 0;36

Replace digit 0 with 1 for a lighter color.

Make Changes permanent:

All of the changes you make will be lost when you close the terminal or log out. To make them permanent:

The .bashrc file in each user's home directory controls the default prompt. On Ubuntu, you can switch to a color prompt by uncommenting the line:

#force_color_prompt=yes

Ubuntu or CentOS
Alternatively, place your custom prompt in the user's .bashrc file:

export PS1='\e[0;31m[\u:\w]\$ \e[m '
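One refinement worth adding to the examples above: wrapping the color codes in \[ and \] tells bash that they print no characters, so the shell computes the prompt width correctly when you edit long command lines. A minimal sketch:

```shell
# Red [user:directory]$ prompt; \[ \] bracket the non-printing color codes.
export PS1='\[\e[0;31m\][\u:\w]\$ \[\e[m\] '
# Show the raw value the shell will expand at prompt time:
printf '%s\n' "$PS1"
```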

Monday, 29 October 2012


MediaWiki is free wiki software developed by the Wikimedia Foundation and others. It is written in the PHP programming language and uses a back-end database. It is best known as the software behind Wikipedia, and it runs all of the Foundation's projects, including Wikipedia, Wiktionary and Wikinews.

The first version of the software was deployed to serve the needs of Wikipedia. Since then, it has been deployed by many companies as a content management system for internal knowledge management.

The software is optimized to correctly and efficiently handle projects of all sizes, including the largest wikis, which can have terabytes of content and hundreds of thousands of hits per second. Because Wikipedia is one of the world's largest websites, achieving scalability through multiple layers of caching and database replication has also been a major concern for developers. Wikipedia and other Wikimedia projects continue to define a large part of the requirement set for MediaWiki.

The software is highly customizable, with more than 700 configuration settings and more than 1800 extensions available for enabling various features to be added or changed. More than 600 automated and semi-automated bots and other tools have been developed to assist in editing MediaWiki sites.

Creating a website using MediaWiki:

Download the latest version of MediaWiki from the website:

The downloaded file is a tar archive.

Now, for users on a Unix-like operating system, untar the file into the web server's document root:

# cd /var/www/html/
# tar -xvf "name of the file"

Rename the extracted folder to the name you want published on the website:

# mv mediawiki.1.9 wiki/

Now open the URL of your machine in the browser.

The first step of the installation begins in the browser; continue as directed.

It will ask for the database details, the name of the site, etc. Give appropriate names as required.

Finish the installation.

If you later want to change server-related settings, edit the file:

# vim /var/www/html/wiki/LocalSettings.php

Restart the database and Apache services and continue building the website.
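The unpack-and-rename steps above can be sketched end to end. Here mediawiki-1.19.0 is a hypothetical release name, so substitute the file you actually downloaded; the sketch fabricates a stand-in tarball so it is self-contained:

```shell
cd "$(mktemp -d)"                      # stand-in for /var/www/html
# Fabricate a release tarball so the sketch is runnable anywhere:
mkdir mediawiki-1.19.0 && touch mediawiki-1.19.0/index.php
tar -czf mediawiki-1.19.0.tar.gz mediawiki-1.19.0 && rm -r mediawiki-1.19.0
# The actual steps from the post:
tar -xzf mediawiki-1.19.0.tar.gz       # extract the release
mv mediawiki-1.19.0 wiki               # publish it under /wiki
ls wiki                                # → index.php
```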

Sunday, 28 October 2012

Eucalyptus Cloud Computing

Eucalyptus is open-source cloud computing software designed for private as well as public clouds. It pools existing virtualized infrastructure into cloud resources for compute, network and storage.
Today I am going to show you how to install and operate the Eucalyptus software to build cloud VM instances.
VM instances are the end product of the software, reached through a number of installation steps. Each instance has its own IP address; we can enter the machine using tools such as SSH and work on it, and we can also clone it and reuse it whenever required.
Since I work with open-source projects, I will set up Eucalyptus on Red Hat Enterprise Linux 5.6; however, the steps are similar on other operating systems.

Installation Steps:

1. Install RHEL 5.6 on a machine; the suggested machine should have more resources than a normal desktop. Disable features such as the firewall (iptables) at boot time, and ensure that yum is working on the system.

2. Download the Java modules from the website. The following modules are sufficient:

a) jdk-6u29-linux-x64-rpm.bin

b) jre-6u29-linux-x64-rpm.bin

Install them by making each binary executable and running it:

# chmod +x jdk-6u29-linux-x64-rpm.bin
# ./jdk-6u29-linux-x64-rpm.bin
3. Install ntp on the machine and sync the system clock:

# yum install ntp

# ntpdate <ntp-server>

4. Now install the following packages; they are dependencies of the main packages. Resolve the dependencies and install:

# yum install -y ant ant-nodeps dhcp bridge-utils perl-convert-ASN1.noarch scsi-target-utils httpd

5. Download the main packages from the following links:


Extract the downloaded tar files and create a repository from them: copy the packages into a directory and make that repository available to yum. Now install the main Eucalyptus tools:

# yum install eucalyptus-cloud eucalyptus-cc eucalyptus-walrus eucalyptus-sc euca2ools

6. Restart the services of eucalyptus:

# /etc/init.d/eucalyptus-cc restart; chkconfig eucalyptus-cc on

# /etc/init.d/eucalyptus-cloud restart; chkconfig eucalyptus-cloud on

Now install a virtual machine on the same system; you can use a KVM virtual machine. If you don't know how to do this, refer to my post on KVM.

After you successfully install the virtual machine, set up yum on it in the same way as on the base machine.

On the node controller (VM):

Sync the clock as before:

# yum install ntp -y

# ntpdate <ntp-server>

Now install xen on the virtual machine as:

# yum install xen -y

When it completes, reboot the system into the Xen kernel and modify the following:

# sed --in-place 's/#(xend-http-server no)/(xend-http-server yes)/' /etc/xend/xend-config.sxp

# sed --in-place 's/#(xend-address localhost)/(xend-address localhost)/' /etc/xend/xend-config.sxp

Restart the service of xend as:

# /etc/init.d/xend restart

Make sure that all security features (for example, the iptables firewall) are disabled. Now install the node controller on the VM:

# yum install eucalyptus-nc -y

When it completes, open the configuration file of libvirtd:

# vim /etc/libvirt/libvirtd.conf

and uncomment the required lines.
save and exit.

Now restart the node controller service:

# /etc/init.d/eucalyptus-nc restart; chkconfig eucalyptus-nc on

This document describes the steps to install Eucalyptus on a single machine. If the components are installed on different machines, however, we need to register them with each other so they can communicate. To register the components:

We assume that all the components are installed and running.

# $EUCALYPTUS/usr/sbin/euca_conf --register-walrus <walrus-ip>

# $EUCALYPTUS/usr/sbin/euca_conf --register-cluster cluster_name <cc-ip>

# $EUCALYPTUS/usr/sbin/euca_conf --register-sc cluster_name <sc-ip>

# $EUCALYPTUS/usr/sbin/euca_conf --register-nc <nc-ip>

where the first three IPs belong to the base machine and <nc-ip> is the IP of the virtual machine. When all the components are registered successfully, log in to the web console (by default at https://<front-end-ip>:8443).

This will prompt you for a username and password, which by default are set to admin and admin.

When you log in, you will be forced to change the admin password, and your email address will be requested.

Now download the credentials from the graphical menu.

# mkdir /.euca

# unzip euca2-admin* -d /.euca/

The above command unzips the credentials into that directory. Run this command to load your credentials when required:

# . /.euca/eucarc

Now download the 64-bit image from the following URL:

# wget

Extract the downloaded image as:

To make a VM image executable, a user or admin must add a root disk image and a kernel/ramdisk pair (the ramdisk may be optional) to Walrus and register the uploaded data with Eucalyptus.
Each piece is added to Walrus and registered with Eucalyptus separately, using three EC2 commands. The following example uses the test image that Eucalyptus provides. Unpack it to any directory, add the kernel to Walrus, and register it with Eucalyptus (WARNING: your bucket names must not end with a slash!):

# tar zxvf euca-centos-5.3-x86_64.tar.gz
# cd euca-centos-5.3-x86_64
# euca-bundle-image -i xen-kernel/vmlinuz- --kernel true
# euca-upload-bundle -b centos-kernel-bucket -m /tmp/vmlinuz-
# euca-register centos-kernel-bucket/vmlinuz-
# euca-bundle-image -i xen-kernel/initrd- --ramdisk true
# euca-upload-bundle -b centos-ramdisk-bucket -m /tmp/initrd-
# euca-register centos-ramdisk-bucket/initrd-
Next, add the root filesystem image to Walrus:

# euca-bundle-image -i centos.5-3.x86-64.img --kernel eki-8D7316E7 --ramdisk eri-87EE16CF
# euca-upload-bundle -b centos-image-bucket -m /tmp/centos.5-3.x86-64.img.manifest.xml
# euca-register centos-image-bucket/centos.5-3.x86-64.img.manifest.xml

=> Now, configure the DHCP server.

First copy /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample to /etc/dhcpd.conf:

# cp /usr/share/doc/dhcp-3.0.5/dhcpd.conf.sample /etc/dhcpd.conf
# vim /etc/dhcpd.conf

Configure DHCP in this file.

=> Restart the DHCP service:

# /etc/init.d/dhcpd restart

Once the image is added, you can inspect the setup with:

# euca-describe-images
# euca-describe-instances
# euca-describe-availability-zones
# euca-describe-keypairs

Adding keypair:

We add a keypair so we can SSH into an instance using key-based authentication instead of a password:
# euca-add-keypair mykey > mykey.private

Change the permissions of mykey.private:

# chmod 0600 mykey.private
Now run an instance using the following command:

# euca-run-instances -k mykey -n 1 <emi-id>

# euca-describe-instances

This shows the list of instances, with their IP addresses.

Now SSH to the instance's IP using the following command:

# ssh -i .euca/mykey.private -l root -v <instance-ip>

You will get command-line access to the VM you created.

Cloud Computing

Cloud computing is the use of computing resources that are delivered as a service over the network. There are three main service models in cloud computing:
1. Infrastructure as a service
2. Platform as a service
3. Software as a service

1. Infrastructure as a service

In this service model, cloud providers offer computers, as physical or (more often) virtual machines, and other resources. The virtual machines run as guests under hypervisors such as KVM or Xen.

Other resources in an IaaS cloud include virtual machine images in an image library, raw and file-based storage, load balancers, firewalls, and IP addresses.

To deploy their applications, cloud users install operating system images on the machines along with their application software. In this model, the cloud user is responsible for patching and maintaining the operating systems and applications. Cloud providers typically bill IaaS services on a utility computing basis: the cost reflects the amount of resources allocated and consumed.

2. Platform as a service

In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming language execution environment, a database, and a web server. Application developers can develop and run their software on the platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so the cloud user does not have to allocate resources manually.

3. Software as a service

In this model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. The users do not manage the cloud infrastructure and platform on which the application runs, which eliminates the need to install and run the application on their own computers, simplifying maintenance and support. What makes a cloud application different from other applications is its scalability: tasks can be cloned onto multiple virtual machines at run time to meet changing demand, with load balancers distributing the work across the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of users, cloud applications can be multitenant, meaning any machine may serve more than one cloud user organization.


Open source is a term from the computing community. An open-source product is free of cost, and its source code is shared with the world. Open source is one way to provide people with free products: anyone can modify the software, and if users like your modifications, they can be rolled into a new version of the software.

Open source is a boon to ordinary people as well as to big organizations. For larger organizations, however, it often ends up being a paid product, since they pay for commercial support.