Ahoy there! This is my personal blog, which I use as a memory extension and a medium to share stuff that could be useful to others.

IBM HS22 Blade boot failure

Problem:

After upgrading firmware (BIOS, SAS Controller, IMM) on an IBM HS22 (MT:7870) Blade and inserting it into a BladeCenter H slot, the Blade did not boot from HDD.

Background & Analysis:

  1. The HS22 Blade was intended to replace another failed HS22 blade.
  2. The replacement HS22 Blade was first inserted into another BladeCenter H chassis and its firmware (BIOS, SAS Controller, IMM) was upgraded.
  3. The failed HS22 Blade was removed from its BladeCenter H chassis, and its HDDs, HBA and NIC were moved into the replacement HS22 Blade. The HDDs contained an installation of Red Hat Enterprise Linux 4.
  4. The replacement HS22 Blade was inserted into the same slot which previously hosted the failed HS22 Blade and powered on.

 

NOTE: Steps 3 and 4 were performed to avoid any need for Ethernet/FC reconfiguration on the BladeCenter and upstream switches.

 

Symptoms & Solutions:

SYMPTOM 1: The HS22 Blade failed to boot. The boot sequence listed 5 HDDs (HDD0 – HDD4), although the Blade had only 2 HDDs, which were meant to be configured as a RAID 1 array.

SOLUTION 1: In the system setup (press F1 after power-on), do the following:

  1. Select Boot Manager and press Enter.
  2. Select Add Boot Option and press Enter.
  3. Select Legacy Only and press Enter.
  4. Press Esc to return to Boot Manager.
  5. Select Change Boot Order and press Enter.
  6. Select the existing Boot Order and press Enter.
  7. Select Legacy Only and press the + key to promote it to a position above the local device that contains the operating system. Typically, this would be above Hard Disk 0. Press Enter.
  8. Select Commit Changes and press Enter.
  9. Press Esc to return to Boot Manager.
  10. Select Reset System and press Enter.

NOTE: The above steps are taken from IBM’s documentation.

SYMPTOM 2: The HS22 Blade failed to boot. The boot sequence no longer listed 5 HDDs, but the Blade attempted to obtain a DHCP address and PXE boot. The boot sequence also displayed a message indicating that the RAID array was INACTIVE.

SOLUTION 2:  After power on, during the boot sequence, do the following:

  1. Press Ctrl+C when prompted to enter the LSI RAID Configuration Utility.
  2. Go to Manage Array –> Activate Array.

 

Root Causes:

  1. RHEL 4 is a non-UEFI-aware operating system, so it requires the “Legacy Only” boot option, which had not been added.
  2. The RAID array was inactive and had to be activated manually.

 

(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated, as it gives me and others who read this article an indication of whether this solution has worked for people other than me. Also, feel free to leave comments.

 


Assuming you have a text file (e.g. C:\ADGroups.txt) containing multiple Active Directory (AD) Groups, one AD Group per line, you may use an MS-DOS batch script similar to the following to create all those AD Groups in your Active Directory domain.

for /F "tokens=* delims= " %%G in (C:\ADGroups.txt) do (
 rem Quote the entire distinguished name so group names containing spaces parse correctly
 dsadd group "CN=%%G,OU=Human Resources,OU=IT,DC=MYAD,DC=CYBERGAV,DC=IN"
)

NOTE:
(1) If you are typing the script above directly at the MS-DOS command prompt, replace %%G with %G.
(2) The “for” statement above ensures that AD Group names containing spaces are read correctly.
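
For illustration, suppose C:\ADGroups.txt contains the following (hypothetical) group names:

HR Payroll Team
HR Recruitment Team

The loop then invokes dsadd group once per line, e.g. dsadd group "CN=HR Payroll Team,OU=Human Resources,OU=IT,DC=MYAD,DC=CYBERGAV,DC=IN".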


vSphere Distributed Resource Scheduler (DRS) is one of several vSphere features that come in very handy for deployments.

Given below are details of a simple vSphere DRS configuration that has worked very well for me. It relies on my organization’s standards for application deployment and VM nomenclature, so it may not be suitable in other scenarios:

Objective: To distribute the VMs belonging to an application across the ESXi hosts within a cluster, such that the application runs on more than one ESXi host.

Our Standards (Pre-Requisites):

  • Every application runs on more than one VM.
  • All VMs are deployed on ESXi clusters (to use features like DRS).
  • All VMs and ESXi hosts use a nomenclature which includes a 2-digit suffix (starting from 01).

Configuring DRS for ESXi Clusters:

STEP 1: Configure Automation Level and Migration Threshold

At first, we were apprehensive about full automation and an aggressive migration threshold. However, given that we provision adequate capacity (ESXi hosts), and given the positive results of vMotion on our critical applications, the following settings have worked well.

 

[Screenshot: DRS automation level and migration threshold settings]

STEP 2: Configure DRS Groups

  • All the VMs with odd number suffixes in their names go into the VM-Nodes-Odds group.
  • All the VMs with even number suffixes in their names go into the VM-Nodes-Evens group.
  • All the ESXi hosts with odd number suffixes in their names go into the ESXi-Hosts-Odds group.
  • All the ESXi hosts with even number suffixes in their names go into the ESXi-Hosts-Evens group.
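
For illustration, here is a minimal Python sketch of this suffix-based grouping logic (purely illustrative; the actual membership is configured in the vSphere Client, and the names used below are hypothetical):

import re

def drs_group(name, kind="VM"):
    """Map a VM or ESXi host name with a 2-digit suffix to its DRS group."""
    match = re.search(r"(\d{2})$", name)
    if match is None:
        raise ValueError("%r has no 2-digit suffix" % name)
    parity = "Odds" if int(match.group(1)) % 2 else "Evens"
    return ("VM-Nodes-" if kind == "VM" else "ESXi-Hosts-") + parity

print(drs_group("app-node01"))           # VM-Nodes-Odds
print(drs_group("esxi02", kind="host"))  # ESXi-Hosts-Evens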

 

[Screenshot: DRS groups configuration]

STEP 3: Configure DRS Rules

Configure the Host-Affinity rules VM-Nodes-Odds –> ESXi-Hosts-Odds and VM-Nodes-Evens –> ESXi-Hosts-Evens, using the criterion “Should run on hosts in group”.

 

[Screenshots: DRS host-affinity rule configuration]


Problem:

When trying to deploy a VMware VM’s OVA file, the deployment fails with the following error:

[Screenshot: OVF deployment error message]

 

Background & Analysis:

  • A VM’s OVA file is an archive file containing the Open Virtualization Format (OVF) package for a VM. A typical OVA file contains a Manifest file (.mf), an OVF descriptor file (.ovf) and Hard drive images (.vmdk).
  • The error in the screenshot refers to a line in the OVF descriptor file. Line 135 of the OVF descriptor file is given below:
 <rasd:AddressOnParent>15</rasd:AddressOnParent>
  • The above line in the OVF descriptor corresponds to the following setting for the SCSI controller of one of the virtual disks used by the VM:

 

 

[Screenshot: virtual disk SCSI (3:15) controller setting]

 

It’s rather strange that a VM can run fine while using the SCSI (3:15) address, yet an OVA/OVF containing this setting cannot be deployed.

 

Solution:

In order to resolve this specific error, here’s what I implemented:

 

STEP 1: Extract the OVF package from the OVA file

Use software such as 7-Zip (it’s free) to extract the OVA archive file. Upon successful extraction, you should see the .mf, .ovf and .vmdk files.
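
Alternatively, since an OVA file is simply a tar archive of the OVF package, this step can be scripted. A minimal Python sketch (the file name myvm.ova is hypothetical):

import tarfile

# An OVA file is a tar archive containing the OVF package.
with tarfile.open("myvm.ova") as ova:   # hypothetical file name
    ova.extractall(path="extracted")    # yields the .mf, .ovf and .vmdk files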

 

STEP 2: Modify the OVF descriptor

Modify the erroneous line as follows (basically, I just switched Hard disk 5 to the SCSI (3:1) controller):

 <rasd:AddressOnParent>1</rasd:AddressOnParent>

NOTE: I used SCSI (3:1) and it worked for me. Other values may work too, but this problem made it clear that a value such as 15 does not.

 

STEP 3: Modify the Manifest file

  • The Manifest file (.mf) contains the SHA1 hashes of all the other files in the OVF package.
  • Calculate the SHA1 hash of the modified OVF descriptor (I used SHA1 Generator) and update the Manifest file with the new hash; a small Python sketch of this step is given below.
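
If you prefer to compute the hash yourself, here is a minimal Python sketch that prints the new manifest entry. It assumes the common OVF manifest format SHA1(filename)= hash; the file name myvm.ovf is hypothetical:

import hashlib

def manifest_entry(filename):
    """Compute the SHA1 manifest line for a file in an OVF package."""
    sha1 = hashlib.sha1()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha1.update(chunk)
    return "SHA1(%s)= %s" % (filename, sha1.hexdigest())

# Replace the old .ovf entry in the .mf file with this line:
print(manifest_entry("myvm.ovf"))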

 

STEP 4: Deploy the OVF

You do not need to re-archive the OVF package into an OVA; you can deploy the OVF package directly by selecting the OVF descriptor (.ovf) when deploying via the vSphere Client.

 

Root Cause:

An impermissible SCSI device address for a virtual hard disk in the VMware VM’s OVF descriptor.

 

(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated, as it gives me and others who read this article an indication of whether this solution has worked for people other than me. Also, feel free to leave comments.

 


Problem:

When trying to register a Linux client with an RHN Satellite server using the rhnreg_ks utility, the following error is displayed:

Introspect error: The name com.redhat.SubscriptionManager was not provided by any .service files

 

Background & Analysis:

  • The Linux client was running RHEL 5.8.
  • rhnreg_ks is a utility used to register a Linux client with RHN; it is part of the rhn-setup package (RPM).
  • Even though the error was displayed, the Linux client was successfully registered with RHN.

 

Solution:

I obtained the following solution from this Red Hat Bug Listing. So, here’s what I implemented:

 

STEP 1: Determine location of rhnreg.py and back it up

 sudo grep 'com.redhat.SubscriptionManager' `rpm -ql rhn-client-tools rhn-setup` 

For my Linux client, rhnreg.py was located in /usr/share/rhn/up2date_client/

Take a backup of rhnreg.py before modifying it (e.g. copy it to rhnreg.py.bak).

 

STEP 2: Modify rhnreg.py

Comment out the following:

#validity_obj = bus.get_object('com.redhat.SubscriptionManager',
#                              '/EntitlementStatus')

Add the following just below the commented lines:

validity_obj = bus.ProxyObjectClass(bus, 'com.redhat.SubscriptionManager',
                                    '/EntitlementStatus', introspect=False)

NOTE: Since you are modifying Python code, ensure that you maintain the program indentation.

 

Root Cause:

The RHN registration client writes an exception to stderr. The functionality isn’t affected, so this change has only aesthetic value; perhaps that’s why I did not find an update in the RHN RHEL 5.8 repository. Anyway, for those of you who find the exception annoying, you now know how to get rid of it.

 

(1) The solution above describes a successful problem-solving experience and may not be applicable to other problems with similar symptoms.

(2) Your rating of this post will be much appreciated, as it gives me and others who read this article an indication of whether this solution has worked for people other than me. Also, feel free to leave comments.

 
