Machine with 16GB RAM:
RAID6, 6x500GB disks:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32000M 72651 92 63525 13 60124 11 83520 91 354492 17 180.1 0
RAID5, 3x500GB disks:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32020M 47882 61 46185 9 33790 4 83385 88 188044 2 100.8 0
RAID1, 2x500GB disks:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32020M 51751 67 70976 15 41676 1 81790 86 101873 1 303.5 0
RAID0, 2x500GB disks:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
32100M 67954 86 148093 30 80271 7 81854 88 205836 4 384.7 0
We recently deployed Xen virtualisation on hardware that had no onboard RAID support or RAID controller. The setup was limited to six onboard SATA ports for disks (500GB/1000rpm). The original idea was to put all the VM images on a single RAID6 filesystem. After the setup was done we ran bonnie++ and found that the performance was not enough to serve all the VMs together.
We then considered using RAID5 and distributing the VM images across two separate RAID5 filesystems. This was also not as fast as we expected.
Surprisingly, the last option, built upon three separate RAID1 filesystems, looked better.
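For reference, software arrays of the kind benchmarked above can be created with mdadm roughly like this; the partition names and /dev/md device numbers are assumptions, not the exact commands we used:
# 6-disk RAID6 (partitions /dev/sdb1..sdg1 are an assumption)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]1
# or one of the 2-disk RAID1 mirrors
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# let the initial resync finish before benchmarking
cat /proc/mdstat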
DomU benchmarks using bonnie++ and iozone are yet to be published.
DomU on a file-backed image on RAID, in a VM with 1GB of memory; the test files are 16GB and 2GB. The 2GB file can be partially cached, so as you can see, the RAM size of the VM can make a big difference to filesystem performance:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
16000 18031 24 18281 4 30305 16 45792 80 80177 8 78.8 5
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
2G 70912 96 96401 33 52077 40 50938 89 131777 20 2198 81
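For reference, results like the above come from bonnie++ invocations along these lines; the mount point, label and user are assumptions, and -s should be roughly twice the available RAM (32000MB on the 16GB dom0) so the page cache does not skew the numbers:
# minimal bonnie++ sketch (paths and label are assumptions)
bonnie++ -d /mnt/raid6 -s 32000 -m xen-dom0 -u root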
To watch server performance meaningfully you can use the kSar Java application, which understands sar output. kSar will provide you with nice graphs for almost all of the collected data and can even produce a PDF for later presentation. kSar seems to be a perfect tool for performance analysis.
documentation:
http://heanet.dl.sourceforge.net/source ... -4.0.4.pdf
kSar download:
http://sourceforge.net/project/showfile ... _id=179805
Within the small window choose 'Launch SSH command' and see the graphs...
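kSar can also load sar output from a plain text file; a hedged example of producing one (the sa file for day 28 is just an example):
# export a day's sar data as text for kSar; LC_ALL=C keeps the format parseable
LC_ALL=C sar -A -f /var/log/sa/sa28 > /tmp/sar-today.txt
In SSH mode, pointing it at a command such as 'LC_ALL=C sar -A' on the remote host works the same way.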
It is sometimes useful to collect disk statistics. The Linux sysstat package on CentOS/RedHat and others is by default set not to collect them, so 'sar -d' will report that no data is available, specifically 'Requested activities not available in file'. To watch the disk statistics for your system with sar you have to modify the /usr/lib64/sa/sa1 script, which comes with the sysstat package and is scheduled by cron to run every 10 minutes (/etc/cron.d/sysstat).
The switch you have to add to the sadc call is '-d':
#!/bin/sh
# /usr/lib64/sa/sa1.sh
# (C) 1999-2006 Sebastien Godard (sysstat <at> wanadoo.fr)
#
umask 0022
ENDIR=/usr/lib64/sa
cd ${ENDIR}
if [ $# = 0 ]
then
# Note: Stats are written at the end of previous file *and* at the
# beginning of the new one (when there is a file rotation) only if
# outfile has been specified as '-' on the command line...
	exec ${ENDIR}/sadc -d -F -L 1 1 -
else
	exec ${ENDIR}/sadc -d -F -L $* -
fi
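After modifying sa1 the current day's data file has to be recreated so that it contains the disk records; a hedged example, assuming the default CentOS layout under /var/log/sa:
# remove today's binary sa file so sadc recreates it with the disk data
rm -f /var/log/sa/sa$(date +%d)
# run one collection by hand instead of waiting for cron
/usr/lib64/sa/sa1 1 1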
Once the statistics for the day have been cleared, sar will start to collect the new data. If you issue 'sar -d' you will get output similar to this:
09:20:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
09:30:01 AM dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:30:01 AM dev8-16 1.72 0.11 98.74 57.36 0.00 0.95 0.26 0.04
09:30:01 AM dev8-32 0.69 2.56 10.48 18.76 0.00 3.79 2.56 0.18
09:30:01 AM dev8-48 0.76 2.53 11.49 18.38 0.00 6.23 2.93 0.22
09:30:01 AM dev8-64 0.75 2.37 11.35 18.25 0.01 8.29 2.77 0.21
09:30:01 AM dev8-80 0.79 1.79 12.65 18.20 0.00 5.50 2.77 0.22
09:30:01 AM dev8-96 0.87 1.65 13.57 17.54 0.00 4.22 2.41 0.21
09:30:01 AM dev8-112 0.73 2.63 10.12 17.42 0.00 4.95 2.82 0.21
09:30:01 AM dev9-0 2.68 3.41 18.01 8.00 0.00 0.00 0.00 0.00
Average: dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: dev8-16 2.16 4.97 103.67 50.37 0.00 1.15 0.67 0.14
Average: dev8-32 1.35 9.30 15.13 18.10 0.01 5.72 2.54 0.34
Average: dev8-48 1.47 9.08 17.29 17.95 0.01 5.46 2.40 0.35
Average: dev8-64 1.40 9.27 16.22 18.23 0.01 6.19 2.56 0.36
Average: dev8-80 1.42 9.06 16.59 18.10 0.01 5.73 2.44 0.35
Average: dev8-96 1.43 8.74 16.97 17.96 0.01 5.32 2.45 0.35
Average: dev8-112 1.35 9.33 15.26 18.28 0.01 5.99 2.57 0.35
Average: dev9-0 4.83 34.11 25.74 12.38 0.00 0.00 0.00 0.00
The question you might have is how 'dev8-0' relates to an actual disk. To reveal this just type:
[tech@xen ~]$ find /dev -name 'sd*' | xargs ls -la
brw-r----- 1 root disk 8, 0 Mar 28 03:24 /dev/sda
brw-r----- 1 root disk 8, 1 Mar 28 03:24 /dev/sda1
brw-r----- 1 root disk 8, 2 Mar 28 03:24 /dev/sda2
brw-r----- 1 root disk 8, 3 Mar 28 03:24 /dev/sda3
brw-r----- 1 root disk 8, 16 Mar 28 03:24 /dev/sdb
brw-r----- 1 root disk 8, 17 Mar 28 03:25 /dev/sdb1
brw-r----- 1 root disk 8, 18 Mar 28 03:24 /dev/sdb2
brw-r----- 1 root disk 8, 19 Mar 28 03:25 /dev/sdb3
brw-r----- 1 root disk 8, 32 Mar 28 03:24 /dev/sdc
brw-r----- 1 root disk 8, 33 Mar 28 03:24 /dev/sdc1
brw-r----- 1 root disk 8, 48 Mar 28 03:24 /dev/sdd
brw-r----- 1 root disk 8, 49 Mar 28 03:24 /dev/sdd1
brw-r----- 1 root disk 8, 64 Mar 28 03:24 /dev/sde
brw-r----- 1 root disk 8, 65 Mar 28 03:24 /dev/sde1
brw-r----- 1 root disk 8, 80 Mar 28 03:24 /dev/sdf
brw-r----- 1 root disk 8, 81 Mar 28 03:24 /dev/sdf1
brw-r----- 1 root disk 8, 96 Mar 28 03:24 /dev/sdg
brw-r----- 1 root disk 8, 97 Mar 28 03:24 /dev/sdg1
brw-r----- 1 root disk 8, 112 Mar 28 03:24 /dev/sdh
brw-r----- 1 root disk 8, 113 Mar 28 03:24 /dev/sdh1
You can see that 8, 0 is the major and minor device number pair assigned to /dev/sda, so dev8-0 in the sar output is /dev/sda.
Next, you can add this device mapping to kSar to see regular disk names within the kSar disk graphs.
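The md arrays use major number 9, so dev9-0 above is /dev/md0. The same major/minor mapping can also be read in one listing from /proc/partitions:
# major, minor, #blocks and name for every block device, including md arrays
cat /proc/partitions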
For the record, the sysstat version in use:
[root@xen ~]# rpm -qf /usr/bin/sar
sysstat-7.0.2-1.el5
For 2.6 kernels, make sure you have sysfs mounted and have done 'modprobe i2c_sensor'!
If the package is unable to find the i2c bus information, try running the hardware probe after the package install:
# sensors-detect
modprobe i2c-isa
modprobe i2c-ipmi
modprobe bmcsensors
# sleep 2 # optional
/usr/bin/sensors -s # recommended
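Once the modules are loaded, a quick check that readings actually come through:
# print the current voltage, fan and temperature readings
sensors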
Xen bridging: OpenSUSE Xen 3.2.0 example of bridging more interfaces.
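A hedged sketch of one common way to do this with Xen 3.x: a small wrapper around the stock network-bridge script, referenced from xend-config.sxp (the interface and bridge names below are assumptions):
#!/bin/sh
# /etc/xen/scripts/network-multi -- call network-bridge once per interface
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1
Then point xend at the wrapper in /etc/xen/xend-config.sxp:
(network-script network-multi)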
To make a Xen VM autostart after the xend service is up, link the VM config file into the auto directory:
ln -s /etc/xen/MachineFile /etc/xen/auto/MachineFile
In some cases a change in the config.sxp helps to get the machines up:
(on_xend_start start)
(on_xend_stop shutdown)
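On RHEL/CentOS-style installs the guests linked into /etc/xen/auto are started by the xendomains init script; after a restart a quick check is:
# list the guests xend has started
xm list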
#!/bin/sh
USERNAME="your-ftp-user-name"
PASSWORD="your-ftp-password"
SERVER="your-ftp.server.com"
# local directory to pick up *.tar.gz files from
FILE="/tmp/backup"
# remote server directory to upload the backup to
BACKUPDIR="/pro/backup/sql"
# log in to the remote server and upload in binary mode
ftp -ni $SERVER <<EOF
user $USERNAME $PASSWORD
binary
cd $BACKUPDIR
mput $FILE/*.tar.gz
quit
EOF
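A hypothetical cron entry to run the upload nightly, assuming the script above is saved as /usr/local/bin/ftp-backup.sh:
# /etc/cron.d/ftp-backup -- upload the SQL dumps at 03:30 every night
30 3 * * * root /usr/local/bin/ftp-backup.sh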