Linux Find SCSI Hard Disk Model, Serial Number, Size, and Total Sectors Information 
sdparm RPM packages for Red Hat, CentOS and Fedora

To list common mode parameters of a disk, enter:
# sdparm /dev/sda
To list the designators within the device identification VPD page of a disk:
# sdparm --inquiry /dev/sdb
To see all parameters for the caching mode page:
# sdparm --page=ca /dev/sdc
To set the "Writeback Cache Enable" bit in the current values page:
# sdparm --set=WCE /dev/sda
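The per-disk commands above can be wrapped in a small loop to query every SCSI disk at once. A minimal sketch (the function name is our own, and the device directory is parameterised only so the loop can be tried without real disks; on a live system call it with /dev and swap the echo for the real invocation):

```shell
#!/bin/bash
# Print the sdparm inquiry command for every whole-disk SCSI node found.
# Replace 'echo' with the real call once the list looks right.
list_inquiry_cmds() {
    local devdir=$1
    for disk in "$devdir"/sd[a-z]; do
        [ -e "$disk" ] || continue      # glob did not match anything
        echo "sdparm --inquiry $disk"
    done
}

list_inquiry_cmds /dev
```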


taken: http://www.cyberciti.biz/tips/sdparm-li ... ibute.html

[ add comment ]   |  [ 0 trackbacks ]   |  permalink
Grub bootloader setup after replacing a failed drive - software RAID1 
Afterwards we must install the GRUB bootloader on both hard drives, /dev/sda and /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

quit


taken: http://www.howtoforge.com/software-raid ... an-etch-p2

If GRUB got deleted, restore it from the GRUB shell:

find /boot/grub/stage1 (optional)
root (hdX,Y)
setup (hd0)
quit


http://www.dedoimedo.com/computers/grub ... ocId976410

[ 6 comments ]   |  [ 0 trackbacks ]   |  permalink
Replacing a failed drive - software RAID1 
Copy the partition layout of the healthy disk to a file somewhere:
sfdisk -d /dev/sda > /raidinfo/partitions.sda

Mark all the partitions on the disk which will be replaced as failed:
mdadm --manage /dev/md0 --fail /dev/sdb1

Then remove all the related partitions from the raid:
mdadm --manage /dev/md0 --remove /dev/sdb1
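
The fail/remove steps have to be repeated for every partition of the failed disk. A minimal sketch that only prints the commands for review first, assuming the common layout where partition N of the disk belongs to /dev/md(N-1); verify the mapping against /proc/mdstat before running anything:

```shell
#!/bin/bash
# Generate the mdadm fail/remove commands for each partition of a failed disk.
# Assumes /dev/<disk>N is a member of /dev/md(N-1); check /proc/mdstat first.
print_mdadm_removal() {
    local disk=$1 nparts=$2
    local i
    for i in $(seq 1 "$nparts"); do
        echo "mdadm --manage /dev/md$((i - 1)) --fail /dev/${disk}$i"
        echo "mdadm --manage /dev/md$((i - 1)) --remove /dev/${disk}$i"
    done
}

print_mdadm_removal sdb 3   # review the output, then run the commands by hand
```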

Power down system:
shutdown -h now

After the hard disk is swapped, boot the system and recreate the same partitioning on the new disk:
sfdisk /dev/sdb < /raidinfo/partitions.sda

Then add /dev/sdb1 to /dev/md0, and the other partitions as well:
mdadm --manage /dev/md0 --add /dev/sdb1

Next, install the GRUB boot loader on the replaced disk.

[ add comment ]   |  [ 0 trackbacks ]   |  permalink
DomU Nagios plugin check 
#!/bin/bash

DOMU=$1

XM_CMD="/usr/sbin/xm"
sudo $XM_CMD list | grep -w "$DOMU" 1>/dev/null
if [ $? -ne 0 ]; then
    echo "CRITICAL: The domU $DOMU seems to be down!"
    exit 2
else
    TIME=`sudo $XM_CMD list | grep -w "$DOMU" | awk '{print $6}'`
    VCPU=`sudo $XM_CMD list | grep -w "$DOMU" | awk '{print $4}'`
    MEM=`sudo $XM_CMD list | grep -w "$DOMU" | awk '{print $3}'`
    STATE=`sudo $XM_CMD list | grep -w "$DOMU" | awk '{print $5}' | sed 's/-//g'`
    if [ "$STATE" == "d" ]; then
        echo "WARNING: $DOMU seems to be dying (Time=$TIME) (MEM=$MEM) (VCPU=$VCPU) (STATE=$STATE)"
        exit 1
    elif [ "$STATE" == "p" ]; then
        echo "WARNING: $DOMU seems to be paused (Time=$TIME) (MEM=$MEM) (VCPU=$VCPU) (STATE=$STATE)"
        exit 1
    elif [ "$STATE" == "c" ]; then
        echo "CRITICAL: $DOMU seems to be crashed (Time=$TIME) (MEM=$MEM) (VCPU=$VCPU) (STATE=$STATE)"
        exit 2
    else
        echo "OK: $DOMU seems to be up (Time=$TIME) (MEM=$MEM) (VCPU=$VCPU) (STATE=$STATE)"
        exit 0
    fi
fi


The check above runs under the Nagios NRPE daemon as the nagios user; therefore we need to allow the nagios user to execute xm commands via sudo:
# #Defaults    requiretty <-- disable this!
### XEN
Cmnd_Alias XEN = /usr/sbin/xm

## Allows members of the users group to shutdown this system
# %users localhost=/sbin/shutdown -h now
%nagios ALL = NOPASSWD: XEN



There is no need for disabling requiretty globally:

Defaults requiretty

Much safer and tighter is to disable requiretty only for the user nagios runs as:

Defaults:nagios !requiretty


[ 7 comments ]   |  [ 0 trackbacks ]   |  permalink
Compact Fluorescent Bulbs 



Environmental and Health Concerns
Associated with Compact Fluorescent Lights


[ add comment ]   |  [ 0 trackbacks ]   |  permalink
The Zeitgeist Movement: Orientation Presentation 


[ add comment ]   |  [ 0 trackbacks ]   |  permalink
Monome, the interval of fourth plus Axis. 




[ add comment ]   |  [ 0 trackbacks ]   |  permalink
Adding serial console to XEN VM 
In the VM, edit /etc/inittab:
# Run agetty on COM1/ttyS0
s0:12345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100

[root@vm01 ~]# grep ttyS /etc/securetty
ttyS0

Configure grub, add console parameter:
[root@vm01 ~]# cat /etc/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-92.1.22.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-92.1.22.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet console=ttyS0,38400
initrd /initrd-2.6.18-92.1.22.el5.img
title CentOS (2.6.18-92.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet console=ttyS0,38400
initrd /initrd-2.6.18-92.el5.img

VM configuration:
[root@xen ]# cat /etc/xen/vm  | grep serial
serial = "pty"
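
With serial = "pty" in place and the getty configured, you can attach to the VM's serial console from dom0 (vm01 is just the example domain name used above):

```shell
# Attach to the domU's serial console from dom0; detach with Ctrl-]
xm console vm01
```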


[ 7 comments ]   |  [ 0 trackbacks ]   |  permalink
Linux RAID performance: RAID6 vs RAID5 vs RAID1 vs RAID0 
dom0 tests, bonnie++, ext3 filesystem, default mount options

Machine with 16GB ram:

raid6, 6x500GB disk:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32000M 72651 92 63525 13 60124 11 83520 91 354492 17 180.1 0

raid5, 3x500GB disk:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32020M 47882 61 46185 9 33790 4 83385 88 188044 2 100.8 0

raid1, 2x500GB disk:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32020M 51751 67 70976 15 41676 1 81790 86 101873 1 303.5 0

raid0, 2x500GB disk:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32100M 67954 86 148093 30 80271 7 81854 88 205836 4 384.7 0

We recently deployed Xen virtualisation on hardware which had no onboard RAID support or RAID controller. The setup was limited to six onboard SATA ports for disks (500GB/1000rpm). The original idea was to put all the VM images on a single RAID6 filesystem. After the setup was done we ran bonnie++ and found the performance was not enough to serve all the VMs together.

We then considered RAID5, distributing the VM images across two separate RAID5 filesystems. This was also not as fast as we expected.

Surprisingly, the last option, built upon three separate RAID1 filesystems, looked best.

DomU benchmarks using bonnie++ and iozone yet to be published.

DomU on a file-backed disk on RAID, in a VM with 1GB of memory; the test files are 16GB and 2GB. The 2GB file can be partially cached. As you can see, the RAM size of the VM can be a big improvement for the filesystem:
      ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
16000 18031 24 18281 4 30305 16 45792 80 80177 8 78.8 5


     ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
2G 70912 96 96401 33 52077 40 50938 89 131777 20 2198 81


[ 6 comments ]   |  [ 0 trackbacks ]   |  permalink
Graphing sar statistics using kSAR, collecting disk data with sar 
To monitor server performance meaningfully you can use the kSAR Java application, which understands the sar output. kSAR will provide you with nice graphs for almost all of the collected data, and can even produce a PDF for later presentation. kSAR seems to be a perfect tool for performance analysis.

documentation:
http://heanet.dl.sourceforge.net/source ... -4.0.4.pdf

kSar download:
http://sourceforge.net/project/showfile ... _id=179805

Within the small window choose 'Launch SSH command' and see the graphs...



It is sometimes useful to collect disk statistics. The Linux sysstat package for CentOS/RedHat and others is set by default not to collect them. Therefore 'sar -d' will report that no data is available; specifically, it will say 'Requested activities not available in file'. To watch the disk statistics for your system with sar you have to modify the /usr/lib64/sa/sa1 script, which comes with the sysstat package and is scheduled by cron to run every 10 minutes (/etc/cron.d/sysstat).

The switch you have to add to the sadc invocation is '-d':

#!/bin/sh
# /usr/lib64/sa/sa1.sh
# (C) 1999-2006 Sebastien Godard (sysstat <at> wanadoo.fr)
#
umask 0022
ENDIR=/usr/lib64/sa
cd ${ENDIR}
if [ $# = 0 ]
then
        # Note: Stats are written at the end of previous file *and* at the
        # beginning of the new one (when there is a file rotation) only if
        # outfile has been specified as '-' on the command line...
        exec ${ENDIR}/sadc -d -F -L 1 1 -
else
        exec ${ENDIR}/sadc -d -F -L $* -
fi


Then you have to clear the sar statistics for the day, and sar will start to collect the data. If you issue 'sar -d' you will get something similar to this:

09:20:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
09:30:01 AM dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:30:01 AM dev8-16 1.72 0.11 98.74 57.36 0.00 0.95 0.26 0.04
09:30:01 AM dev8-32 0.69 2.56 10.48 18.76 0.00 3.79 2.56 0.18
09:30:01 AM dev8-48 0.76 2.53 11.49 18.38 0.00 6.23 2.93 0.22
09:30:01 AM dev8-64 0.75 2.37 11.35 18.25 0.01 8.29 2.77 0.21
09:30:01 AM dev8-80 0.79 1.79 12.65 18.20 0.00 5.50 2.77 0.22
09:30:01 AM dev8-96 0.87 1.65 13.57 17.54 0.00 4.22 2.41 0.21
09:30:01 AM dev8-112 0.73 2.63 10.12 17.42 0.00 4.95 2.82 0.21
09:30:01 AM dev9-0 2.68 3.41 18.01 8.00 0.00 0.00 0.00 0.00
Average: dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: dev8-16 2.16 4.97 103.67 50.37 0.00 1.15 0.67 0.14
Average: dev8-32 1.35 9.30 15.13 18.10 0.01 5.72 2.54 0.34
Average: dev8-48 1.47 9.08 17.29 17.95 0.01 5.46 2.40 0.35
Average: dev8-64 1.40 9.27 16.22 18.23 0.01 6.19 2.56 0.36
Average: dev8-80 1.42 9.06 16.59 18.10 0.01 5.73 2.44 0.35
Average: dev8-96 1.43 8.74 16.97 17.96 0.01 5.32 2.45 0.35
Average: dev8-112 1.35 9.33 15.26 18.28 0.01 5.99 2.57 0.35
Average: dev9-0 4.83 34.11 25.74 12.38 0.00 0.00 0.00 0.00

The question you might have is how 'dev8-0' relates to an actual disk. To reveal this, just type:

[tech@xen ~]$ find /dev -name 'sd*' | xargs ls -la
brw-r----- 1 root disk 8, 0 Mar 28 03:24 /dev/sda
brw-r----- 1 root disk 8, 1 Mar 28 03:24 /dev/sda1
brw-r----- 1 root disk 8, 2 Mar 28 03:24 /dev/sda2
brw-r----- 1 root disk 8, 3 Mar 28 03:24 /dev/sda3
brw-r----- 1 root disk 8, 16 Mar 28 03:24 /dev/sdb
brw-r----- 1 root disk 8, 17 Mar 28 03:25 /dev/sdb1
brw-r----- 1 root disk 8, 18 Mar 28 03:24 /dev/sdb2
brw-r----- 1 root disk 8, 19 Mar 28 03:25 /dev/sdb3
brw-r----- 1 root disk 8, 32 Mar 28 03:24 /dev/sdc
brw-r----- 1 root disk 8, 33 Mar 28 03:24 /dev/sdc1
brw-r----- 1 root disk 8, 48 Mar 28 03:24 /dev/sdd
brw-r----- 1 root disk 8, 49 Mar 28 03:24 /dev/sdd1
brw-r----- 1 root disk 8, 64 Mar 28 03:24 /dev/sde
brw-r----- 1 root disk 8, 65 Mar 28 03:24 /dev/sde1
brw-r----- 1 root disk 8, 80 Mar 28 03:24 /dev/sdf
brw-r----- 1 root disk 8, 81 Mar 28 03:24 /dev/sdf1
brw-r----- 1 root disk 8, 96 Mar 28 03:24 /dev/sdg
brw-r----- 1 root disk 8, 97 Mar 28 03:24 /dev/sdg1
brw-r----- 1 root disk 8, 112 Mar 28 03:24 /dev/sdh
brw-r----- 1 root disk 8, 113 Mar 28 03:24 /dev/sdh1


You can see that 8, 0 are actually the major and minor device numbers assigned to the device.
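
Based on the listing above, the mapping can also be computed directly: for major number 8 (the sd driver) each whole disk owns 16 consecutive minor numbers. A small sketch of that arithmetic (the function name is our own):

```shell
#!/bin/bash
# Translate sar's devMAJOR-MINOR name into the whole-disk /dev node.
# Only major 8 (sd devices, 16 minors per disk) is handled here.
sar_dev_to_name() {
    local spec=${1#dev}
    local major=${spec%-*} minor=${spec#*-}
    if [ "$major" -eq 8 ]; then
        local letters=(a b c d e f g h i j k l m n o p)
        echo "/dev/sd${letters[minor / 16]}"
    else
        echo "unknown major $major"
    fi
}

sar_dev_to_name dev8-16   # prints /dev/sdb
```

The dev9-0 line in the sar output above is major 9, the md driver, i.e. the /dev/md0 RAID device.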

Next, you can add this device mapping to kSAR to see regular disk names within the kSAR disk graphs.



[ 18 comments ]   |  [ 0 trackbacks ]   |  permalink
