Ahoy there! This is my personal blog which I use as my memory extension and a medium to share stuff that could be useful to others.

Linux Archives

How to build AMP from source on RHEL 5.7

Typically, building a LAMP stack on RHEL is done via yum installs. However, I wanted specific options built in and I wanted to install the software in specific locations. Hence, I opted to compile from source. It ain’t scary, but it took me a few iterations to get everything sorted out, and this article describes what I did:

My LAMP System:

  • L – RHEL 5.7 (kernel 2.6.18-274.3.1.el5)
  • A – Apache 2.2.20
  • M – MySQL 5.5.15
  • P – PHP 5.3.8

STEP 1: Install Apache HTTP


  • Create a user for Apache. This user will be used to launch the httpd child processes (assuming that the root user launches the parent process, which listens on port 80 or any port < 1024). I created a user called apache as shown below (command executed as the root user):

    useradd -c "Apache HTTP" -s /bin/bash -m apache
  • Select a location to install apache and ensure that the user created in the above step has appropriate privileges. I executed the following commands as the root user:

    mkdir /opt/apache-2.2.20
    chown -R apache:apache /opt/apache-2.2.20


As the apache user, I executed the following:

tar -xvzf httpd-2.2.20.tar.gz
cd httpd-2.2.20
./configure --prefix=/opt/apache-2.2.20 --enable-so
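The configure line only prepares the build. A sketch of the remaining standard Apache source-install steps (still as the apache user, assuming the default build targets):

```shell
make
make install
# Start the server (root is needed only to bind ports below 1024):
/opt/apache-2.2.20/bin/apachectl start
```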

STEP 2: Install MySQL


  • Create a user for MySQL. This user will be used to launch the mysqld process. I created a user called mysql as shown below (command executed as the root user):

    useradd -c "MySQL Admin" -s /bin/bash -m mysql
  • Select a location to install mysql and ensure that the user created in the above step has appropriate privileges. I executed the following commands as the root user:

    mkdir /opt/mysql-5.5.15
    chown -R mysql:mysql /opt/mysql-5.5.15
  • You may have to install some packages to build MySQL. I installed packages as per the following command (executed as the root user):

    yum install gcc gcc-c++.x86_64 cmake ncurses-devel libxml2-devel.x86_64


As the mysql user, I executed the following:

tar -xvzf mysql-5.5.15.tar.gz
cd mysql-5.5.15
cmake . -DCMAKE_INSTALL_PREFIX=/opt/mysql-5.5.15 -DSYSCONFDIR=/opt/mysql-5.5.15
make install
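Note that make install both compiles and installs. Before starting the server for the first time, MySQL 5.5 also needs its system tables initialized; a sketch of the usual follow-up (still as the mysql user; the data directory path is my assumption):

```shell
cd /opt/mysql-5.5.15
scripts/mysql_install_db --user=mysql --basedir=/opt/mysql-5.5.15 --datadir=/opt/mysql-5.5.15/data
```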

STEP 3: Install PHP


  • Select a location to install php and ensure that the appropriate user (web server user e.g. apache) created in the above step has appropriate privileges. I executed the following commands as the root user:

    mkdir /opt/php-5.3.8
    chown -R apache:apache /opt/php-5.3.8
  • As I needed a few packages for the phpMyAdmin application and other bespoke PHP applications, I did the following (I used a combination of yum and rpm, as I did not find all the packages in my yum repositories):

    # As root user
    rpm -ivh libmcrypt-2.5.7-1.2.el5.rf.x86_64.rpm
    rpm -ivh libmcrypt-devel-2.5.7-1.2.el5.rf.x86_64.rpm
    rpm -ivh mhash-0.9.9-1.el5.rf.x86_64.rpm
    yum install php53-mbstring.x86_64 bzip2 bz2 libbz2 libbz2-dev autoconf
    tar -xvzf mcrypt-2.6.8.tar.gz
    cd mcrypt-2.6.8
    ./configure --disable-posix-threads --prefix=/opt/mcrypt


As the apache user, I executed the following:

tar -xvzf php-5.3.8.tar.gz
cd php-5.3.8
./configure --prefix=/opt/php-5.3.8 --with-apxs2=/opt/apache-2.2.20/bin/apxs --with-config-file-path=/opt/php-5.3.8 --with-mysql=/opt/mysql-5.5.15 --with-bz2 --with-zlib --enable-zip --enable-mbstring --with-mcrypt
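As with Apache, the configure step is followed by the standard build and install; a sketch (the php.ini copy assumes the stock PHP 5.3 source layout, which ships php.ini-production and php.ini-development templates):

```shell
make
make install
# Copy a template to the path given to --with-config-file-path:
cp php.ini-production /opt/php-5.3.8/php.ini
```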

Comprehensive Perl Archive Network (CPAN) is a one-stop shop for all your Perl module requirements. While installing Foswiki, I had a requirement to install the HTML::Tree Perl module and this is the procedure which I used successfully:

STEP 1: Download the Perl module from CPAN.

I downloaded the gzipped, tarred module HTML-Tree-4.1.tar.gz from CPAN

STEP 2: Unpack the Perl module

I extracted the gzipped, tarred Perl module as follows and a directory HTML-Tree-4.1 was created :

tar xvzf HTML-Tree-4.1.tar.gz

STEP 3: Build the Perl Module

The HTML-Tree-4.1 directory (as with all Perl modules) contains a README, which provided the usual Module::Build installation instructions of ./Build, ./Build test and ./Build install. I did not have the Module::Build module and its dependencies and was put off by having to fetch all that, but I had root privileges. So, I did the following as the root user to install the HTML::Tree Perl module:

perl Makefile.PL
make install

Cannot grep UTF-16 Unicode files


Problem:

Microsoft SQL Server error logs contained I/O errors. After transferring four months’ worth of logs to a UNIX machine to analyze them with grep/awk/sed, the grep command returned no output when searching for strings that were clearly present when viewing the file in the vi editor.

Background & Analysis:

On the UNIX host, I checked the file type of the SQL Server error log as follows:

$> file ERRORLOG.1
ERRORLOG.1: Little-endian UTF-16 Unicode English character data, with very long lines,
with CRLF line terminators

So, the grep command couldn’t parse the UTF-16 Unicode file; it had to be converted to an encoding that grep could parse. The iconv program performs this character-encoding conversion.


Solution:

Change the file’s character encoding from UTF-16 to UTF-8 and then perform the grep as follows:

$> iconv -f UTF-16 -t UTF-8 ERRORLOG.1 | grep "SQL Server has encountered.*I/O requests"

Root Cause:

The grep program cannot parse files with certain character encodings like UTF-16.
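You can reproduce this quickly (sketch assumes GNU grep and glibc iconv; the sample file name is arbitrary):

```shell
# Build a small UTF-16 file: each ASCII character gains a NUL byte, so
# the literal byte sequence "I/O" no longer exists in the file.
printf 'error: disk I/O failure\n' | iconv -f UTF-8 -t UTF-16 > /tmp/utf16_sample.log
grep -c 'I/O' /tmp/utf16_sample.log                              # no matches
iconv -f UTF-16 -t UTF-8 /tmp/utf16_sample.log | grep -c 'I/O'   # 1 match
```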



(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated. Also, feel free to leave comments.



Using Mutt to send email

Mutt is a popular email client (MUA) which is common on Linux systems.

Given below are some how-tos on basic uses of mutt. As with all UNIX utilities, the "man pages" are your best bet for learning the tool; I’ve just documented some popular uses of mutt. Refer to the "man pages" for a more comprehensive understanding of mutt. The commands below have been tested on Red Hat Enterprise Linux 4.0 AS Update 7 with mutt v1.4.1i, unless otherwise stated.

HOW-TO 1: Send email with blank/empty body

mutt -s "Test email" testmail@abc.com < /dev/null
# where:
# -s => Subject
# testmail@abc.com => recipient's email address

HOW-TO 2: Send email with body read from a file

mutt -s "Test email" testmail@abc.com < email_body.txt
# where:
# -s => Subject
# testmail@abc.com => recipient's email address
# email_body.txt => file containing message body

HOW-TO 3: Send email with a customized sender name and email address

# The .muttrc file is Mutt's configuration file. Its default location is the $HOME directory.
# If you locate it elsewhere, specify its location with the '-F' flag.
# Add the following to the .muttrc file:
set realname="Joe Bloggs"
set from="noreply@jb.com"
set use_from=yes
# where:
# realname => Sender's name as it appears in the recipient's mail inbox.
# from => the "reply-to" address

After configuring .muttrc, send emails as per how-tos 1 and 2.

HOW-TO 4: Send attachment(s)

mutt -s "Test email" -a file1 -a file2 testmail@abc.com < /dev/null
# where:
# -s => Subject
# testmail@abc.com => recipient's email address
# file1 => first attachment
# file2 => second attachment

HOW-TO 5: Send HTML email

I know that the technical purists out there abhor HTML email due to potential issues with accessibility and security, but there’s no denying that HTML-formatted emails are far more interesting to look at than plain text and are better at drawing your attention to specific information (ask the marketing guys and senior executives!). HTML-formatted email is supported by Mutt versions 1.5 and higher. Here’s how you may send an HTML-formatted email using mutt v1.5.21:

mutt -e "set content_type=text/html" -s "Test email" testmail@abc.com < welcome.html
# where:
# -s => Subject
# testmail@abc.com => recipient's email address
# -e => command to execute
# content_type => email body MIME type

The MIME type multipart/alternative ensures your emails are received properly by both plain-text and HTML clients, but it does not work well with mutt at present.


Sendmail is a popular Mail Transfer Agent (MTA) and the default MTA on Red Hat Enterprise Linux (RHEL). Typically, within enterprises, you will need a Mail User Agent (MUA), an MTA and an SMTP relay to send outbound emails from the Linux command line or shell scripts.

So, let’s assume that your Mail Administrator provides you with the IP addresses of primary and secondary SMTP relay hosts. Given below are steps to configure sendmail (on RHEL 4) to use the SMTP relays to send email (you must use the root user’s privileges):

STEP 1: Verify sendmail packages

In order to configure sendmail, you will require the sendmail and sendmail-cf packages. Given below is an example of how to check for these packages:

rpm -qa | grep sendmail


STEP 2: Modify sendmail.mc

Do not edit /etc/mail/sendmail.cf directly. Instead, edit /etc/mail/sendmail.mc (far more readable than sendmail.cf) and use the m4 macro processor to generate sendmail.cf.

Edit /etc/mail/sendmail.mc and add the following line:
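For reference, the standard sendmail.mc macros for routing outbound mail through a primary relay with a fallback are shown below; the addresses are placeholders (from the 192.0.2.0/24 documentation range), so substitute your relay IPs:

```
define(`SMART_HOST', `[192.0.2.10]')dnl
define(`confFALLBACK_MX', `192.0.2.11')dnl
```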



(1) Pay attention to the quotes. Do not prefix the above line with the letters dnl, as dnl (delete through newline) denotes a comment in sendmail.mc

(2) If you wish to configure only one SMTP relay host, then add the following:


STEP 3: Start/Restart sendmail

When sendmail is started (if not running) or restarted (if running), then the sendmail.mc file will be processed by the m4 macro processor and a corresponding sendmail.cf will be generated. On RHEL, you may start sendmail as follows:

/sbin/service sendmail start

STEP 4: Use an MUA to test outbound mail

In order to test your sendmail configuration and the SMTP relays, use an MUA to send emails. Given below is an example that uses mutt to send emails:

mutt -s "test email" test@abc.com < /dev/null

Unlike the Oracle client, which provides the required drivers and tools (e.g. sqlplus) to execute SQL statements against an Oracle database on a variety of platforms, Microsoft does not provide such drivers and clients for non-Microsoft platforms. For example, there is no Microsoft driver which you can install on RHEL 4 to allow you to configure an SQL interface to Microsoft’s SQL Server.

The goal of ODBC is to solve this very problem by providing a standard software interface for accessing a variety of database management systems on a variety of platforms.

Given below are steps to configure an SQL interface with MS SQL Server on a Red Hat Enterprise Linux 4 (RHEL 4) host:

STEP 1: Verify the installation of unixODBC

unixODBC is provided by default with recent versions of most Linux distributions. Refer to the unixODBC home page for more details on unixODBC. To check for the packages:

rpm -qa | grep unixODBC

If the unixODBC packages are not installed, then download and install them using rpm.

STEP 2: Install FreeTDS

Refer to the FreeTDS home page for more details on FreeTDS. Download the latest stable release and install it using rpm.

STEP 3: Configure a DSN

In order for the unixODBC software to interface with MS SQL Server, the relevant database access details must be provided in a Database Source Name (DSN). To configure a DSN, edit /etc/odbc.ini and add the relevant details. Given below is a sample DSN in /etc/odbc.ini :
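For reference, a typical FreeTDS-backed DSN entry in /etc/odbc.ini looks like the following; the DSN name matches the isql examples later in this post, while the server address, database name and TDS version are placeholders to adapt:

```
[MY_DSN]
Driver      = /usr/lib64/libtdsodbc.so.0
Description = MS SQL Server via FreeTDS
Server      = 192.0.2.25
Port        = 1433
Database    = mydb
TDS_Version = 8.0
```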


In the sample DSN above, /usr/lib64/libtdsodbc.so.0 is the absolute path to the FreeTDS driver.

STEP 4: Use isql to interface with MS SQL Server

After successfully completing the 3 steps above, you are now ready to perform operations on the database. To do so, you may use the isql utility that is bundled with the unixODBC package. Given below are a few examples of using isql:

# The examples below assume you have a DSN called MY_DSN
# configured in /etc/odbc.ini and the isql utility in your PATH.
# Example 1 : Open an interactive session
isql MY_DSN username password

+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
# Example 2 : Execute an SQL statement. Assume the statement is in a file
# called test.sql. The last line in test.sql must be a blank line.
cat test.sql | isql MY_DSN username password

Refer to the isql man page for more details on using the isql utility.


Runaway process causes 100% disk utilization


Problem:

A Solaris 9 mountpoint was 100% utilized (as per “df”) and no new files could be added.

df output:

cybergavin@myhost:/dashboard> df -h /dashboard
Filesystem             size   used  avail capacity  Mounted on
                        16G    16G   2.1M   100%    /dashboard

du output:

cybergavin@myhost:/dashboard> du -sk /dashboard
1789259 /dashboard

Background & Analysis:

As you can see above, “du” and “df” report significantly different utilization figures for /dashboard. The “df” output tells me that I have very little free space (~ 2.1 MB) whereas the “du” output indicates around 14 GB of free space.

Well, first and foremost, df and du both report disk usage statistics, but they do not work in the same way. Refer to this article to understand the differences between df and du.

Secondly, the mountpoint /dashboard was mounted on a VxFS. The dmesg output showed the following:

Feb  1 09:29:00 myhost vxfs: [ID 702911 kern.notice] NOTICE: msgcnt 112748 mesg 001: V-2-1: vx_nospace -  /dev/vx/dsk/A19278-S01-7uitx-dg/dashboard file system full (1 block extent)

An explanation for the above (quite obvious) message is given in this Symantec article.

I found a runaway background process (iostat -x 2) that had been running for the past 2 months. It had been launched by a shell script; the script exited, but the process wasn’t killed. The process was redirecting its output to a file, and that file was later deleted. Consequently, the process’ stdout file descriptor (1) was never closed and the process kept writing to stdout. This caused the space occupied by the stdout output to be hidden. To determine how much space is actually being used by the process when writing to stdout, try the following command (<pid> = process id):


ls -l /proc/<pid>/fd/1
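The underlying effect is easy to reproduce on Linux: a deleted file keeps consuming space while any process holds it open, and /proc shows the descriptor pointing at a deleted file (the file name below is arbitrary):

```shell
# Open fd 3 for writing, then delete the file while the fd stays open.
exec 3> /tmp/hidden_space_demo.log
echo "data that still occupies disk space" >&3
rm /tmp/hidden_space_demo.log
# The descriptor survives the unlink; ls marks the target "(deleted)".
ls -l /proc/$$/fd/3
# Close the descriptor; only now is the space actually freed.
exec 3>&-
```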



Solution:

I killed the runaway process and the mountpoint utilization dropped significantly, to 14%. Further, the df and du outputs then correlated.

Root Cause:

A runaway process was consuming most of the disk space, and this consumption was “hidden” because the file to which the process’ stdout was being redirected had been deleted.


(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated. Also, feel free to leave comments.



UNIX shell commands return exit statuses or exit codes upon completion, whether they succeed or fail. In UNIX, an exit status of 0 indicates successful execution of a command and a non-zero exit status indicates a failure. Some shells (e.g. ksh93) document the various exit statuses and their meanings. So, checking exit statuses of commands is typically done in programs like shell scripts to decide further action to be taken by the script.

However, piped commands, or a pipeline (cmd A | cmd B), add a twist to checking exit statuses: by default, most UNIX shells adhere to the POSIX requirement of returning the exit status of the last command in the pipeline. Refer to the example below:


# Two commands piped and both commands are successful.
$ grep MemTotal /proc/meminfo | awk '{print $2}'; echo "EXIT STATUS = $?"                                                                      
# Two commands piped and first command throws an error.
# Note that the EXIT STATUS is still 0
$ grep MemTotal /proc/meminf | awk '{print $2}'; echo "EXIT STATUS = $?"
grep: /proc/meminf: No such file or directory
EXIT STATUS = 0


So, checking the exit status of a pipeline as in the above example will cause problems for your script if any of the piped commands other than the last command throw an error. Given below are two solutions (using ksh93 and bash) to solve this problem and ensure a valid exit status check for a pipeline:

SOLUTION 1: Using ksh93 and the pipefail option

The Korn Shell 1993 version ‘g’ point release (ksh93g) introduced a pipefail option which ensures that the exit status of a pipeline is that of the first command in the pipe that failed (of course, the exit status of the pipeline is 0 if all commands in the pipe succeed). Unfortunately, ksh88 is distributed as the default Korn Shell with most UNIX systems (with all Solaris versions, I believe) because ksh93 was owned by Lucent and AT&T and there were licensing restrictions. Refer to sections I (Q.14) and III (Q.8) of the Korn shell FAQs at http://kornshell.com/doc/faq.html

An example of how the pipefail option is used is shown below:


# Check ksh version
$ ksh --version
  version         sh (AT&T Research) 93s+ 2008-01-31
# Set pipefail option on (it's off by default)
$ set -o pipefail
# Now, notice the non-zero exit status due to the error in the first command
$ grep MemTotal /proc/meminf | awk '{print $2}'; echo "EXIT STATUS = $?"
grep: /proc/meminf: No such file or directory
EXIT STATUS = 2


Note: There are other solutions using ksh88 with co-processes and file descriptor manipulation, but those solutions are not script-friendly.


SOLUTION 2: Using bash and the PIPESTATUS array

The bash shell uses an array called PIPESTATUS to store the exit statuses of commands in a pipeline.


# Using the PIPESTATUS array to display exit statuses of both commands in the pipe
mrkips@mrkips-laptop:~$ grep MemTotal /proc/meminf | awk '{print $2}'; echo "EXIT STATUS = ${PIPESTATUS[0]}"
grep: /proc/meminf: No such file or directory
EXIT STATUS = 2


mrkips@mrkips-laptop:~$ grep MemTotal /proc/meminf | awk '{print $2}'; echo "EXIT STATUS = ${PIPESTATUS[1]}"
grep: /proc/meminf: No such file or directory
EXIT STATUS = 0
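Since PIPESTATUS is rewritten after every command, copy the whole array immediately if you need more than one element; a minimal bash sketch:

```shell
# Run a pipeline where the middle command fails, then inspect every stage.
true | false | true
status=("${PIPESTATUS[@]}")   # copy at once; the next command overwrites PIPESTATUS
echo "stage statuses: ${status[@]}"   # prints: stage statuses: 0 1 0
```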



Be aware of SIGPIPE

SIGPIPE is a signal sent to a process when it writes to a pipe whose read end has been closed; by default, it terminates the process. If you use any of the above methods to check the exit status of a pipeline and a SIGPIPE is received by one of the commands in the pipeline, then your exit status may not correlate with the success/failure of the pipeline.

For example, in the command,

ls -lrt | head -1

“ls -lrt” writes to a pipe and “head -1” reads from that pipe. Now, “head -1” only requires the first line from the pipe and, as soon as it gets it, it terminates, closing the read end of the pipe. The next write by “ls -lrt” to the now-closed pipe causes the kernel to send it a SIGPIPE (signal 13), causing “ls -lrt” to terminate with a non-zero exit status.
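You can observe this reliably with a command that never stops writing; in bash, an exit status of 141 (= 128 + 13) means the command died from SIGPIPE:

```shell
# head exits after one line; the next write by yes hits a closed pipe.
yes | head -1 > /dev/null
echo "${PIPESTATUS[0]}"   # prints: 141
```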



(1) The how-to above describes how I implemented something and may not be the only method of implementation.

(2) Your rating of this post will be much appreciated. Also, feel free to leave comments.


How to determine UNIX process elapsed time

At times, you may want to determine the elapsed wall-clock time or uptime for a running process. While you can determine the process’ elapsed time using certain versions of the ps utility, it can also be determined using a simple shell script with a bit of perl. Both these methods are described below with the init process (PID = 1):

SOLUTION 1: The ps utility on some variants of UNIX (e.g. Linux, but not Solaris)

ps -oetime 1


SOLUTION 2: A shell script with a bit of perl (should work on all variants of UNIX)

Download the shell script (puptime.ksh) or copy and paste from below:

#!/bin/ksh
# puptime.ksh - Gavin Satur - http://cybergav.in/2009/11/09/puptime
# Simple script to calculate the uptime (elapsed wall clock time) of a process
# Accept PID and do some basic validation
proc_pid=$1
if [ -z "$proc_pid" ]; then
   print "\nERROR : Missing input argument. SYNTAX: ksh puptime.ksh <PID>\n"
   exit 1
fi
if [ ! -d /proc/$proc_pid ]; then
   print "\nERROR : No directory for PID $proc_pid in /proc. Check if process is running!\n"
   exit 1
fi
# Calculate start time of process and current time in epoch time using a bit of perl
proc_stime=`perl -e 'use File::stat; my $filename = "$ARGV[0]"; $sb = stat($filename); printf "%s", $sb->mtime;' /proc/$proc_pid`
currtime=`perl -e 'print time;'`
# Calculate process uptime in seconds and then slice'n'dice for human-friendly output
proc_time=$(( currtime - proc_stime ))
proc_time_days=$(( proc_time / 86400 ))
proc_time_secs=$(( proc_time % 86400 ))
proc_time_hours=$(( proc_time_secs / 3600 ))
proc_time_secs=$(( proc_time_secs % 3600 ))
proc_time_minutes=$(( proc_time_secs / 60 ))
proc_time_secs=$(( proc_time_secs % 60 ))
print "\nUPTIME FOR PROCESS WITH PID $proc_pid = $proc_time_days day(s) $proc_time_hours hour(s) $proc_time_minutes minute(s) $proc_time_secs second(s) \n"

Example run:

./puptime.ksh 1
UPTIME FOR PROCESS WITH PID 1 = 0 day(s) 0 hour(s) 39 minute(s) 37 second(s)
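For a quick sanity check of the ps-based approach on Linux, ask for the elapsed time of your current shell (the etime= format suppresses the header):

```shell
# Elapsed wall-clock time of the current shell, in [[dd-]hh:]mm:ss form.
ps -o etime= -p $$
```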

How to determine the Linux distribution

If you’re given access to a Linux machine without being told which Linux distribution it runs, there are a couple of ways to determine the distribution:

OPTION 1: Use the lsb_release utility.

The Linux Standard Base (LSB) is a joint project by several Linux distribution vendors working under the Linux Foundation, to develop and promote a set of open standards that will increase compatibility among Linux distributions and enable software applications to run on any compliant system even in binary form.

The lsb_release utility, which is part of all Linux distributions that adhere to the LSB specification, will print distribution-specific information; for example, lsb_release -a prints the distributor ID, description, release and codename. Check the lsb_release manpage for details on usage.






OPTION 2: Check release files in /etc.

Linux distribution vendors typically include release files with details about the distribution in the /etc directory, e.g. /etc/lsb-release on Ubuntu and /etc/fedora-release on Fedora. Simply cat such a file to view the details.







Problem:

When executing a Korn shell script, execution fails with the following error:

/bin/ksh^M: bad interpreter: No such file or directory



Background & Analysis:

I transferred the executable Korn shell script from my backup on a Windows server to a RHEL 5.1 host. On Linux, I opened the file using the vi editor and did not find any ^M characters. I also checked the file with "set list" in vi.



Solution:

I used dos2unix on the file as follows:

dos2unix myscript.ksh

Note: Usually, an improper file transfer mode in ftp causes this problem. However, I used ascii mode and checked the file using the vi editor, but didn’t find anything abnormal. So, dos2unix is your best bet!


Root Cause:

The file had DOS-style CRLF line endings (the ^M in the error is the carriage return at the end of the interpreter line) after being transferred from a Windows machine to a UNIX host.



(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated. Also, feel free to leave comments.


Password Management in Linux/Solaris

Although using Public Key Infrastructure (PKI) with SSH is strongly recommended over password authentication, basic UNIX password authentication (using a username and password) is still widely prevalent. If you are one of those users still using password authentication, then this tutorial could provide you with essential information to help you manage your passwords and prepare to switch to PKI or other more secure methods of authentication. The examples in this tutorial are based on Fedora 11 Linux, Ubuntu 9.04 Linux and Solaris 10 (x86) and apply to all three UNIX distributions unless otherwise mentioned. The examples for Fedora 11 will most probably apply to Red Hat Enterprise Linux (RHEL) as well, since Fedora is the successor to the discontinued Red Hat Linux and shares code with RHEL.


When and where are passwords created?

Passwords for users are created when a user is created. Users are typically created with the useradd command. An example user creation is given below:

useradd -d /home/asg -m -s /bin/bash asg

The above command creates a user called asg with home directory /home/asg and the bash shell, and will work on all three UNIX distributions covered in this tutorial.

Upon creation of a user, a blank password is created, locked and assigned to that user, and all password details for that user are stored in the /etc/shadow file. Illustrative entries for the asg user are shown below (reconstructed from the locking notation and day counts described in the notes that follow):

# Fedora 11
asg:!!:14479:0:99999:7:::

# Ubuntu 9.04
asg:!:14479:0:99999:7:::

# Solaris 10
asg:*LK*:::::::

Note the differences in default password creation among the three UNIX distributions:

(1) Notation used to denote password locking. In Fedora 11, it’s "!!", in Ubuntu 9.04, it’s "!" and in Solaris 10, it’s "*LK*".

(2) In both Fedora 11 and Ubuntu 9.04, days that denote when the password was last changed (14479 days since 01/01/1970), before which a password may be changed (0 days), after which a password must be changed (99999 days) and the warning before a password expires (7 days) are assigned (defaults assigned in /etc/login.defs and /etc/default/useradd) whereas in Solaris 10, these values are not defined.

Refer to the shadow manual (man 5 shadow) to understand the fields associated with a password definition (a colon-separated line) for a user in the /etc/shadow file.


 How do you set passwords?

Passwords are set using the “passwd” command as shown below:

# For non-root users (e.g. asg), simply run:
passwd

# For root user (e.g. set the password for the asg user)
passwd asg

Note: Non-root users can set only their own passwords (unless given elevated privileges) whereas the root user can set any user’s password.

When a password is set, it will be encrypted and stored in the corresponding user’s entry (a single colon-separated line) in the /etc/shadow file.



How do you lock passwords?

By default, as seen above, passwords are locked during user creation. If you need to lock a password any time after it has been unlocked, then the following command must be run as the "root" user:

# As root user, lock the password for the asg user
passwd -l asg

When a password is locked, the corresponding user account cannot be used: you cannot log in directly as the user or even switch to the user using su.

Note: In Fedora 11 and Ubuntu 9.04, a user whose password has been locked will still have its cron jobs running normally, but in Solaris 10, such a user’s cron jobs will no longer execute.


How do you unlock passwords?

You can unlock a user’s password (as root user) in one of two ways as shown in the example below:

# Option 1: Simply set the asg user's password
passwd asg

# Option 2: Unlock the asg user's password
passwd -u asg

How do you delete passwords?

You can delete a user’s password (as root user) by using the passwd command as shown in the example below:

# Delete the asg user's password
passwd -d asg

How do you manage password expiry?

Just like perishables, passwords too can have lifetimes and expire. By default, when users are created, password expiry is not set, thereby allowing a password to be used forever without ever being changed. Well, password authentication is not a very secure authentication mechanism and so apart from setting difficult-to-guess passwords, it is important to change passwords often to reduce the risk of security being compromised. Password expiry may be set (as root user) as shown in the examples below:

# Option 1 : Using the passwd utility, set expiry of the asg user's password to 30 days from now
passwd -x 30 asg

# Option 2 (Fedora 11 and Ubuntu 9.04): Using the chage utility, set expiry of the asg user's password to 30 days from now
chage -M 30 asg

Password expiry may be unset (as root user) as shown in the examples below:

# Option 1(a) : Using the passwd utility, turn off expiry for the asg user.
passwd -x -1 asg
# Option 1(b) : Using the passwd utility, turn off expiry for the asg user.
passwd -x 99999 asg

# Option 2(a) (Fedora 11 and Ubuntu 9.04): Using the chage utility, turn off expiry for the asg user.
chage -M -1 asg
# Option 2(b) (Fedora 11 and Ubuntu 9.04): Using the chage utility, turn off expiry for the asg user.
chage -M 99999 asg

Note: If using password authentication, I recommend that password expiry is always turned off for system/shared user accounts, to prevent outages (e.g. cron jobs silently failing) caused by an unnoticed password expiry.


What is the difference between deleted, locked and expired passwords?

Some people refer to deleted passwords as disabled passwords, but I will refrain from using the term "disabled" as it only adds ambiguity. Refer to the table below to understand the differences and similarities between deleted, locked and expired passwords.


Feature                              Ubuntu 9.04     Fedora 11       Solaris 10
-----------------------------------  --------------  --------------  --------------
Login directly with user account     No  (deleted)   No  (deleted)   No  (deleted)
(username/password)                  No  (locked)    No  (locked)    No  (locked)
                                     Yes (expired)   Yes (expired)   Yes (expired)

Switch user (su) to user account     No  (deleted)   Yes (deleted)   Yes (deleted)
                                     No  (locked)    No  (locked)    No  (locked)
                                     No  (expired)   No  (expired)   No  (expired)

Cron jobs run normally               Yes (deleted)   Yes (deleted)   Yes (deleted)
(assuming password hasn't expired)   Yes (locked)    Yes (locked)    No  (locked)
                                     No  (expired)   No  (expired)   No  (expired)

NOTE: If your password has expired, you can still log in directly, provided you remember your old password when prompted to set a new password as soon as you log in.


 How do you disable password authentication for only certain users?

Password authentication is still widely used on UNIX Production systems. And some systems even still use the telnet daemon to accept connections. The telnet protocol is insecure as the password is transmitted in plain text and if you’re using telnet on UNIX Production systems, discontinue its use and switch to the SSH daemon at the earliest. If you wish to disable password authentication for all users, then setting the directive "PasswordAuthentication no" in the sshd_config file and restarting the sshd daemon will ensure no user can use password authentication. If on the other hand, like me, you are not a System Administrator and wish to disable password authentication only for specific users (your team perhaps), then you may do so as follows (Fedora 11, Ubuntu 9.04 & most Linux distributions):

(1) Ensure each user creates a public-private key pair and configures his/her public key in the appropriate home directory, in order to use PKI/SSH to login.

(2) Turn off expiry for user passwords in order to prevent any impact on cron jobs.

(3) Lock the users’ passwords in order to prevent password authentication.

(4) For shared or system users, ensure elevated privileges are provided to normal users using software like sudo or PowerBroker. If using sudo, the users may have to remember their passwords even though they’re using PKI, in order to use the sudo command (not mandatory; depends on the sudo setup).

I know the above steps seem a roundabout configuration (create passwords and then prevent password authentication), but I haven’t found any other direct way of disabling password authentication for specific users. It may be possible using Pluggable Authentication Modules (PAM), but this tutorial is aimed at users currently using basic password authentication without services like PAM, Kerberos, etc.

Note: I do not recommend the above 4-step procedure for Solaris 10 (& probably earlier versions) as step 3 effectively ensures that the users’ cron jobs stop working.



(1) UNIX manuals (man passwd, man chage, man shadow, man sshd): As always, read the manual!

(2) Steve Friedl’s Secure Linux/UNIX access with PuTTY and OpenSSH: An excellent, lucid tutorial to help you get started with using PKI and SSH.

(3) Werner Puschitz’s Securing and Hardening Red Hat Linux Production Systems: An excellent guide on securing Production RHEL systems (also covers PAM).


64-bit Ubuntu: No such file or directory


When trying to install the WebLogic 8.1 SP6 binary (platform816_linux32.bin) on 64-bit Ubuntu 9.04 Desktop, the following error was observed:

./platform816_linux32.bin: No such file or directory


Background & Analysis:

The first two (obvious) checks for such an error are:

(1) Check if the file exists in the appropriate location

(2) Check if the file has the required permissions (read/execute)

The above checks were successful and so the error was misleading.

(3) Using the file command, the following was observed:

file platform816_linux32.bin 
platform816_linux32.bin: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.0.0, stripped

The above output indicates that the WebLogic installer binary is 32-bit and uses 32-bit shared libraries.

(4) The host OS was 64-bit Ubuntu and this could be confirmed using the uname command:

uname -a
Linux mrkips-laptop 2.6.28-13-generic #45-Ubuntu SMP Tue Jun 30 22:12:12 UTC 2009 x86_64 GNU/Linux

32-bit shared libraries are not available by default on 64-bit Ubuntu.
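A quick way to confirm whether the 32-bit runtime is present is to look for the 32-bit ELF loader; the paths below are the usual Linux locations (an assumption, so verify them for your distribution):

```shell
# Check the usual locations of the 32-bit ELF dynamic loader
if [ -e /lib/ld-linux.so.2 ] || [ -e /lib32/ld-linux.so.2 ]; then
    echo "32-bit loader present"
else
    echo "32-bit loader missing"
fi
```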


Solution:
Install ia32-libs, the 32-bit shared library package for amd64 and ia64 systems, using either of the following methods:

(1) From a terminal:

sudo apt-get install ia32-libs

(2) Launch the Synaptic Package Manager from the Ubuntu Desktop and install the ia32-libs package along with any required dependencies.


Root Cause:

The 32-bit shared libraries required for the installation of the 32-bit WebLogic binary installer were not available by default on 64-bit Ubuntu 9.04 Desktop.


Notes:
(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated. Also, feel free to leave comments.



How to determine a script’s location from within the script
When developing scripts, it is sometimes necessary to determine, from within the script itself, the absolute location of the script, so that relative paths may be defined. For example, if a script writes logs and data files to different directories relative to its own location, you need to determine that location irrespective of where the script is installed or from where it’s executed. Of course, you can hard-code this value in the script, but that’s a dirty solution because you’ll need to change the variable whenever you move the script.


Solution:
(1) UNIX Shell:

if [ -n "`dirname $0 | grep '^/'`" ]; then
   # Absolute path (begins with '/')
   SCRIPT_LOCATION=`dirname $0`
else
   # Relative path (begins with './' or '../'): resolve against the current directory
   SCRIPT_LOCATION=`cd \`dirname $0\` && pwd`
fi

(2) Python:

import sys
import os.path

dirloc = os.path.dirname(sys.argv[0])
if dirloc.startswith("/"):
    # Absolute path
    SCRIPT_LOCATION = dirloc
else:
    # Relative path ('./' or '../'): resolve it against the current working directory
    SCRIPT_LOCATION = os.path.abspath(dirloc)

Both implementations above are based on the assumption that a script may be executed in any of the following 3 ways only:

(1) From anywhere within the directory hierarchy, using the absolute path (beginning with ‘/’)

(2) From anywhere below the directory containing the script, within the directory hierarchy (beginning with ‘../’)

(3) From anywhere above the directory containing the script, within the directory hierarchy (beginning with ‘./’)
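As a usage sketch of the shell variant above (the logs and data directory names are hypothetical):

```shell
#!/bin/sh
# Resolve the directory containing this script, for absolute or relative invocation
SCRIPT_LOCATION=`cd \`dirname $0\` && pwd`

# Hypothetical directories defined relative to the script's location
LOG_DIR="$SCRIPT_LOCATION/logs"
DATA_DIR="$SCRIPT_LOCATION/data"

echo "Script directory: $SCRIPT_LOCATION"
echo "Log directory:    $LOG_DIR"
```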


Notes:
(1) The How-To above describes a successful method of implementation. It may or may not be the best method of implementation. If you know of a better implementation or spot an error in the implementation above, kindly make readers aware via comments on this post.

(2) Your rating of this post will be much appreciated. 
