Ahoy there! This is my personal blog which I use as my memory extension and a medium to share stuff that could be useful to others.

Sendmail is slow

Problem:

The sendmail service takes a while (more than a minute) to start, and emails sent via sendmail take a couple of minutes to be delivered.

 

Background & Analysis:

Sendmail uses DNS for the following:

  • At startup, to obtain the canonical name for the local host.
  • To obtain the canonical name of a remote host that connects to the local host.
  • To obtain the address of the SMTP Relay to which sendmail connects.
  • When sendmail expands $[ and $] in the RHS of a rule.

Refer to this article for more details.

Solution:

STEP 1: Ensure that the local host’s canonical name can be looked up by adding the following as the first entry in /etc/hosts:

127.0.0.1       localhost.localdomain <hostname>

where <hostname> should be replaced by the local host’s hostname.
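As a quick sanity check, the entry can be parsed the same way the resolver would. This sketch simulates the hosts line (with “myhost” standing in for the real hostname) rather than reading /etc/hosts directly:

```shell
# Simulated /etc/hosts entry; "myhost" is a stand-in for the real hostname.
hosts_line="127.0.0.1       localhost.localdomain myhost"

# The first name after the address is the canonical name the resolver returns.
canonical=$(echo "$hosts_line" | awk '$1 == "127.0.0.1" {print $2}')
echo "$canonical"
```

On a live system, `getent hosts 127.0.0.1` or `hostname -f` would perform the real lookup against /etc/hosts.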

Restart the sendmail service as shown below:

sudo service sendmail restart

After implementing this step, my sendmail process started up quickly.

 

STEP 2: Ensure that sendmail does not perform a DNS lookup on the SMTP Mail Relay by surrounding the Mail Relay’s address with square brackets in /etc/mail/sendmail.cf, as in the example below:

DS[10.1.1.10]
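If the DS entry already exists without brackets, a sed one-liner can add them. This is a sketch against a temporary copy (10.1.1.10 is the example relay address from above), not the live /etc/mail/sendmail.cf:

```shell
# Work on a temporary copy; edit /etc/mail/sendmail.cf only after verifying.
cf=$(mktemp)
echo "DS10.1.1.10" > "$cf"

# Wrap the bare IP on the DS line in square brackets: DS10.1.1.10 -> DS[10.1.1.10]
sed -i 's/^DS\([0-9.][0-9.]*\)$/DS[\1]/' "$cf"
cat "$cf"
rm -f "$cf"
```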

Restart the sendmail service as shown below:

sudo service sendmail restart

After implementing this step, sendmail delivered emails quickly.

 

Root Cause:

DNS lookups performed by sendmail (at startup and during delivery) were slow, stalling both the service start and email delivery.
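One way to see which nameservers sendmail’s resolver will try (and possibly time out on) is to list them from resolv.conf. The sketch below parses simulated file content; the IP addresses are placeholders, and on a real host you would read /etc/resolv.conf itself:

```shell
# Simulated /etc/resolv.conf content; on a real host read the actual file.
resolv_conf="nameserver 10.0.0.53
nameserver 10.0.0.54"

# List the nameservers the resolver will query, in order.
echo "$resolv_conf" | awk '/^nameserver/ {print $2}'
```

An unreachable server at the top of this list makes every lookup wait for a timeout before the next server is tried.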

 

(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated as it gives me and others who read this article, an indication of whether this solution has worked for people other than me. Also, feel free to leave comments.

 


Problem:

After configuring HA for a vSphere 5 ESXi cluster, each host in the cluster displays the following error message:

HA-DS-SuppressError

Background & Analysis:

vSphere 5 introduces Datastore heartbeats in addition to network heartbeats for HA, in order to distinguish between a network-isolated host and a crashed host.

Solution:

Using the vSphere client, do the following:

STEP 1: Right-click the ESXi cluster -> Select “Edit Settings” -> Select “vSphere HA” -> Click on “Advanced Options” and enter the following option and value:

das.ignoreInsufficientHbDatastore – true  

The default value is false.

 

STEP 2: Right-click the ESXi cluster -> Select “Edit Settings” -> Select “Cluster Features” -> Deselect “Turn on vSphere HA”

 

STEP 3: Right-click the ESXi cluster -> Select “Edit Settings” -> Select “Cluster Features” -> Select “Turn on vSphere HA”

 

Root Cause:

vSphere 5 requires at least 2 shared datastores per host in order for Datastore Heartbeats to work. The maximum number of datastores allowed for heartbeats is 5.

 


 


Linux LVM Example

Requirement: Wipe away /old and add the space recovered (5 GB) to /new for the following disk layout:

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/VG01-LV01_app   5G  2.5G  2.5G  50% /old
/dev/mapper/VG02-LV02_app   5G  2.5G  2.5G  50% /new

  • /old is mounted on Logical Volume LV01 in Volume Group VG01
  • Volume Group VG01 comprises 1 Logical Volume (/dev/mapper/VG01-LV01_app) and 1 Physical Volume (/dev/sdb1)
  • /new is mounted on Logical Volume LV02 in Volume Group VG02
  • System and other partitions (/, /boot, /home, etc.) are mounted on Logical Volumes in Volume Group VG00 (not considered in example).

Implementation: Here’s how I met the above requirement on RHEL 5.

STEP 1: Backup the partition to be extended

Backup /new before extending it with space recovered from wiping out /old.

 

STEP 2: Unmount the partitions

sudo umount /old
sudo umount /new

NOTE: You cannot unmount the partitions if they’re busy. You may use lsof to determine which processes have files open on the partition and stop the associated services/processes.

 

STEP 3: Determine the number of extents used by the Logical Volume associated with /old

sudo lvdisplay /dev/mapper/VG01-LV01_app
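The extent count appears on the “Current LE” line of the lvdisplay output, and a small awk filter pulls it out. The sample line below simulates lvdisplay’s output (1599 matches the value used later); on a live system you would pipe the real command into awk:

```shell
# Sample "Current LE" line as printed by lvdisplay; on a real system use:
#   sudo lvdisplay /dev/mapper/VG01-LV01_app | awk '/Current LE/ {print $3}'
sample="  Current LE             1599"
extents=$(echo "$sample" | awk '/Current LE/ {print $3}')
echo "$extents"
```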

 

STEP 4: Remove Volumes associated with /old

sudo lvremove /dev/mapper/VG01-LV01_app
sudo vgremove VG01
sudo pvremove /dev/sdb1

 

STEP 5: Extend Volumes associated with /new

sudo pvcreate /dev/sdb1
sudo vgextend VG02 /dev/sdb1
sudo lvextend -l+1599 /dev/mapper/VG02-LV02_app /dev/sdb1

Note that the above assumes that the number of extents associated with /dev/mapper/VG01-LV01_app is 1599 (determined in STEP 3).

 

STEP 6: Check and resize the filesystem

Now that the Volume Group and Logical Volume associated with /new have been extended, the filesystem on the Logical Volume must be checked and resized as follows:

sudo e2fsck -f /dev/mapper/VG02-LV02_app
sudo resize2fs /dev/mapper/VG02-LV02_app

 

STEP 7: Mount partitions

Mount the /new partition as follows:

sudo mount /new

You should see the following disk layout for /new:

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/VG02-LV02_app  10G  2.5G  7.5G  25% /new


This article details the steps I performed to migrate (storage vMotion) a number of VMs running Windows Server 2003 Enterprise/Standard (32-bit) from a standalone ESX 3.5.0 host to an ESXi 4.1.0 cluster. I’m primarily a Linux admin, and since I ran into a few hiccups that would not have occurred on Linux systems for the same exercise, I decided to blog about it.

STEP 1: Get the VM ready to leave ESX 3.5.0

  • Shutdown the Windows Server 2003 OS on the VM hosted on the standalone ESX 3.5.0 host.

 

STEP 2: Move the VM off the ESX 3.5.0 host

  • Migrate the VM (host and datastore) to an ESXi 4.1.0 cluster host which has access to the same networks as the ESX 3.5.0 host. This step could take a while depending on your environment.

NOTE: All the remaining steps are performed on the ESXi 4.1.0 cluster host.

 

STEP 3: Upgrade the VM’s VMware Tools

  • Edit Settings and select the appropriate network for the NIC.
  • Power on the VM
  • Upgrade VMware Tools (because your VM is now on a higher version hypervisor). You may do this by either clicking the VMware Tools icon in the system tray or by using the vSphere client to mount the VMware Tools CD and install.

 

STEP 4: Some tweaks to avoid issues

With the newer version of VMware Tools, you’ll be able to use the vmxnet3 NIC driver. However, switching drivers requires removing and adding a NIC, which causes IP address clashes in Windows Server 2003 (the IP address remains associated with the old adapter, although it is no longer visible in “Network Connections”). Refer to this Microsoft KB article. To avoid this issue, I did the following:

  • Execute the command “ipconfig /all” at the command prompt and save all the network configuration for later reference.
  • Open the Network Connection settings, click on TCP/IP Properties and select the options “Obtain an IP address automatically” and “Obtain DNS server address automatically”. Basically, remove the static IP and switch to dynamic IP and DNS. This action clears any configuration associated with the static IP address from the Windows system.
  • Shutdown (not restart!) the OS.

 

STEP 5: Upgrade the VM’s Virtual Hardware

  • Right-click the powered down VM and select the option to upgrade the virtual hardware. This action will upgrade your virtual hardware from version 4 to version 7.
  • Edit settings for the VM and do the following:
    • Remove the existing NIC (which was configured when the VM was on ESX 3.5.0).
    • Add a new NIC and select the VMXNET 3 driver as well as the appropriate network.
  • Power on the VM. As soon as the VM powers on, you will be prompted to restart the OS for the new hardware to work properly. Click “Yes” to reboot.

 

STEP 6: Configure the VM’s Network Connection

  • When the VM powers up after the previous step, open the Network Connection settings and using the settings stored earlier (STEP 4), configure the vmxnet 3 adapter.

 

STEP 7: Test the VM after migration

 

NOTE:

  1. A storage vMotion had to be performed since the standalone ESX 3.5.0 and the cluster ESXi 4.1.0 hosts did not share storage. Although this migration can be done in 2 steps without downtime (first change host and then change datastore or vice versa), downtime is required for upgrading VMware Tools and the Virtual Hardware.
  2. The standalone ESX 3.5.0 host and the ESXi 4.1.0 cluster hosts did not have access to the same dedicated vMotion network. However, since all the hosts were on the same Management network, this network was used for the migrations (competing with other management traffic) and thus this exercise took a long time.

Failure in yum update

Problem:

On RHEL 6 (Santiago), a “yum update” fails with the following error:

Error: failed to retrieve repodata/filelists.xml.gz from rhel-x86_64-server-6
       error was [Errno -1] Metadata file does not match checksum
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

Trying the suggestions above did not help.

Background & Analysis:

Yum uses a cache directory to speed up its operations. Sometimes, this cache becomes corrupted.
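The failure mode can be illustrated with plain checksums: yum records a checksum for each repository metadata file, and a corrupted cached copy no longer matches it. This sketch simulates that mismatch (the file name and contents are made up for illustration):

```shell
# Record a checksum for a "metadata" file, then corrupt the file.
f=$(mktemp)
echo "repodata" > "$f"
expected=$(sha256sum "$f" | awk '{print $1}')

echo "corrupted" > "$f"    # simulate cache corruption
actual=$(sha256sum "$f" | awk '{print $1}')

# yum raises its error when the stored and actual checksums differ.
[ "$expected" != "$actual" ] && echo "Metadata file does not match checksum"
rm -f "$f"
```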

Solution:

1.  Clean the cache directory as shown below:

sudo yum clean all

2.  Rebuild the cache directory as shown below: 

sudo yum makecache

3.  Perform a yum update as shown below: 

sudo yum update

Root Cause:

Corrupted yum cache directory


 
