Monday, June 30, 2008

Upgrading SLES10SP1 to SLES10SP1U1


1) Get the RPM files to be used in the upgrade to the machine in question.

st32:/var/tmp/joeuser # ls -alt (output edited for brevity)
-r--r--r-- 1 root root 45570118 Jun 30 15:32 kernel-source-2.6.16.53-0.16.x86_64.rpm
-r--r--r-- 1 root root 1620378 Jun 30 15:32 kernel-syms-2.6.16.53-0.16.x86_64.rpm
-r--r--r-- 1 root root 121758 Jun 30 15:32 perl-Bootloader-0.4.18.11-0.2.x86_64.rpm
-r--r--r-- 1 root root 17204994 Jun 30 15:32 kernel-smp-2.6.16.53-0.16.x86_64.rpm
-r--r--r-- 1 root root 16846928 Jun 30 15:32 kernel-default-2.6.16.53-0.16.x86_64.rpm
-r--r--r-- 1 root root 17682230 Jun 30 15:32 kernel-debug-2.6.16.53-0.16.x86_64.rpm

NOTE: In my example, I don't need the kernel-default-* or kernel-debug-* RPMs.

2) Upgrade the perl-Bootloader RPM.

# What's there now?
st32:/var/tmp/joeuser # rpm -qa | grep '^perl-Boot'
perl-Bootloader-0.4.15-0.6

# Upgrade it!
st32:/var/tmp/joeuser # rpm -U perl-Bootloader-0.4.18.11-0.2.x86_64.rpm

# Now, what's there?
st32:/var/tmp/joeuser # rpm -qa | grep '^perl-Boot'
perl-Bootloader-0.4.18.11-0.2

3) Upgrade the kernel RPMs.

# What's there now?
st32:/var/tmp/joeuser # rpm -qa | sort | grep '^kernel-'
kernel-smp-2.6.16.46-0.12
kernel-source-2.6.16.46-0.12
kernel-syms-2.6.16.46-0.12

# Upgrade the kernel-smp-* RPM
st32:/var/tmp/joeuser # rpm -U kernel-smp-2.6.16.53-0.16.x86_64.rpm
Setting up /lib/modules/2.6.16.53-0.16-smp
Root device: /dev/disk/by-id/scsi-SATA_Maxtor_6Y120M0_Y3PRT2QE-part2 (/dev/sda2) (mounted on / as reiserfs )
Module list: amd74xx sata_nv processor thermal fan reiserfs edd (xennet xenblk)

Kernel image: /boot/vmlinuz-2.6.16.53-0.16-smp
Initrd image: /boot/initrd-2.6.16.53-0.16-smp
Shared libs: lib64/ld-2.4.so lib64/libacl.so.1.1.0 lib64/libattr.so.1.1.0 lib64/libc-2.4.so lib64/libdl-2.4.so lib64/libhistory.so.5.1 lib64/libncurses.so.5.5 lib64/libpthread-2.4.so lib64/libreadline.so.5.1 lib64/librt-2.4.so lib64/libuuid.so.1.2 lib64/libnss_files-2.4.so lib64/libnss_files.so.2 lib64/libgcc_s.so.1

Driver modules: ide-core ide-disk scsi_mod sd_mod amd74xx libata sata_nv processor thermal fan edd
Filesystem modules: reiserfs
Including: initramfs fsck.reiserfs
Bootsplash: SuSE-SLES (800x600)
15311 blocks

# Upgrade the kernel-syms-* RPM. Use the "--nodeps" option because, currently, there would
# be a circular dependency.
# (i.e. kernel-syms depends on kernel-source, which depends on kernel-syms)
st32:/var/tmp/joeuser # rpm -U --nodeps kernel-syms-2.6.16.53-0.16.x86_64.rpm

# Upgrade the kernel-source-* RPM. Probably didn't have to use the "--nodeps" option, since
# the kernel-syms-* RPM got installed above, but did anyway.
st32:/var/tmp/joeuser # rpm -U --nodeps kernel-source-2.6.16.53-0.16.x86_64.rpm
Changing symlink /usr/src/linux from linux-2.6.16.46-0.12 to linux-2.6.16.53-0.16
Changing symlink /usr/src/linux-obj from linux-2.6.16.46-0.12-obj to linux-2.6.16.53-0.16-obj

# Reboot to see the magic take effect.
st32:/var/tmp/joeuser # init 6
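After the reboot, a quick check confirms the new kernel is actually running (the expected version string matches the kernel-smp RPM installed above):

```shell
# Report the running kernel version; after the upgrade and reboot
# this should show the new release rather than 2.6.16.46-0.12-smp
uname -r
# expected: 2.6.16.53-0.16-smp
```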

Friday, June 27, 2008

IB Fabric: Installing SLES10SP1 or SLES10SP2 Consistently


1) Disable iptables
2) Configure static address for ethernet interface
3) Ensure that these package groups are installed:
Server Base System, Common Code Base, Novell AppArmor, 32-Bit Runtime Environment, Documentation, GNOME, X-Window, C/C++ Compiler And Tools, and Development (select all packages here; this resolves with minimal to no extra dependencies).
4) Don't let installer establish a Certificate Authority (CA).
5) Set default runlevel to "3" (multi-user, networked, no X-Window)
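For steps 1 and 5, something like the following should work (a sketch; the SuSEfirewall2 service names and the /etc/inittab edit assume a stock SLES10 install):

```shell
# 1) Disable iptables (managed by SuSEfirewall2 on SLES10)
chkconfig SuSEfirewall2_setup off
chkconfig SuSEfirewall2_init off

# 5) Set the default runlevel to 3 in /etc/inittab
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab
```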

Note: This post is just very rough and will need to be embellished in the near future.

Thursday, June 26, 2008

IB Fabric: Establishing Pre-Placed SSH Keys So That All Nodes Trust Each Other


Assumptions: /etc/sysconfig/iba/hosts file reflects the compute node (CN) set.

1) Ensure that root SSH key pairs have been generated on all nodes. If not, generate the root SSH key pair using "ssh-keygen -t rsa" on all nodes.

2) Establish SSH trust between the management node (MN) and an individual CN by appending the MN root user's public SSH key (~/.ssh/id_rsa.pub) onto the CN root user's SSH authorized keys file (~/.ssh/authorized_keys2). Repeat this for the CN set.

This command may be helpful:
cat ~/.ssh/id_rsa.pub | ssh CN "cat - >> ~/.ssh/authorized_keys2"
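To cover the whole CN set in one pass, the single-host command above can be wrapped in a loop (a sketch; it assumes /etc/sysconfig/iba/hosts lists one CN hostname per line, and you will be prompted for each CN's password until trust is established):

```shell
# Append the MN root public key to every CN's authorized_keys2 file
for cn in $(cat /etc/sysconfig/iba/hosts); do
    cat ~/.ssh/id_rsa.pub | ssh "$cn" "cat - >> ~/.ssh/authorized_keys2"
done
```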

3) On the MN, use the FF /sbin/setup_ssh script to build a new known_hosts file.
a) cp /dev/null ~/.ssh/known_hosts
b) /sbin/setup_ssh -C

4) On the MN, build a new authorized_keys2 file for later propagation.
a) cat ~/.ssh/id_rsa.pub > /var/tmp/new_authorized_keys2
b) cmdall "cat ~/.ssh/id_rsa.pub" | grep -v 'ssh/id_rsa.pub' >> /var/tmp/new_authorized_keys2

5) Now, distribute the new authorized_keys2 and known_hosts files.
a) scpall /var/tmp/new_authorized_keys2 ~/.ssh/authorized_keys2
b) cmdall "chmod 644 ~/.ssh/authorized_keys2"
c) scpall ~/.ssh/known_hosts ~/.ssh/known_hosts
d) cmdall "chmod 644 ~/.ssh/known_hosts"

6) Verify that everything is synchronized correctly:
a) cmdall "md5sum ~/.ssh/authorized_keys2"
b) cmdall "md5sum ~/.ssh/known_hosts"

NOTE: (3-Feb-2009) -- I have now found that the IFS /sbin/setup_ssh script seems to generate only half of a known_hosts file (no entries for the IB interfaces). I used something like this:

/sbin/setup_ssh -s -S -i'-ib' -f /etc/sysconfig/iba/hosts

I still had to generate the authorized_keys2 file using the technique above.

NOTE: this rough procedure still needs more debugging.

Friday, June 20, 2008

Pre-placed SSH Keys


Use this link and this link

Friday, June 13, 2008

Apt Commands To Update Ubuntu Via CLI


apt-get update; apt-get upgrade

Possible CRON:
(apt-get update && apt-get -y upgrade) > /dev/null
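As a system crontab entry (/etc/crontab format, which includes a user field), that could look like the following; the 4:00 AM schedule here is just an example:

```shell
# Update the package lists and apply upgrades nightly at 4:00 AM,
# discarding all output so cron does not mail it
0 4 * * * root (apt-get update && apt-get -y upgrade) > /dev/null 2>&1
```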

Source Link

Tuesday, June 10, 2008

Linux Raid 5 Example Using mdadm(8)

mdadm(8) - manage MD devices aka Linux Software RAID

RAID 5 filesystem (~940GB of effective storage: 14 active disks plus 2 spares, ~73GB drives)

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=14 --spare-devices=2 --chunk=128 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1

mke2fs -j -b 4096 -R stride=32 /dev/md0
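The stride value passed to mke2fs follows from the RAID chunk size divided by the filesystem block size, which a quick shell calculation confirms:

```shell
# stride = RAID chunk size / ext3 block size = 128 KB / 4 KB
chunk_kb=128
block_kb=4
echo $(( chunk_kb / block_kb ))   # prints 32
```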

fstab line:
/dev/md0 /raid5 ext3 rw 0 0

After a reboot, we had to re-assemble the array:

mdadm --assemble -f /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1

# cat /proc/mdstat
Personalities : [raid5]
md0 : active raid5 sda1[0] sdo1[14] sdp1[15] sdn1[13] sdm1[12] sdl1[11] sdk1[10] sdj1[9] sdi1[8] sdh1[7] sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1]
931864960 blocks level 5, 128k chunk, algorithm 2 [14/14] [UUUUUUUUUUUUUU]
unused devices:
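To avoid re-assembling by hand after every reboot, the running array's definition can be recorded in the mdadm configuration file (a sketch; the file may be /etc/mdadm.conf or /etc/mdadm/mdadm.conf depending on the distribution):

```shell
# Capture the active array definition as an ARRAY line
# and append it to the mdadm config file
mdadm --detail --scan >> /etc/mdadm.conf
```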

Monitor example:

mdadm --monitor -m someone@somewhere.com -f /dev/md0

Thanks co-worker (you know who you are)

Note: Please see the November 2008 issue of Linux Journal (pg. 68-72) for an excellent article on Linux SW RAID using mdadm.