Connect securely to Windows Vista Remote Desktop 
A guide on connecting securely to Windows Vista Remote Desktop (external link).

Linux: LVM unsorted 
Recovering from LVM Mirror Failure

LVM Administrator's Guide

What is a Highly Available LVM (HA LVM) configuration and how do I implement it?



Create a read-only snapshot:
lvcreate --size 100m --snapshot --name snap --permission r --verbose /dev/VolGroup00/LogVol00
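
To use and then drop the snapshot (the /mnt/snap mount point is an assumption):

mkdir -p /mnt/snap
mount -o ro /dev/VolGroup00/snap /mnt/snap
# ... back up the files here ...
umount /mnt/snap
lvremove /dev/VolGroup00/snap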


# Remove the stale device-mapper mapping of the failed volume:
dmsetup remove vg01-linux_pokus_1_5G

# Inspect the device-mapper device in sysfs:
cat /sys/block/dm-0/dev
cat /sys/block/dm-0/stat
cat /sys/block/dm-0/uevent
ls /sys/block/dm-0/holders/

# Remove the leftover device nodes:
rm /dev/vg01/linux_pokus_1_5G
rm -rf /dev/vg01/
rm /dev/mapper/vg01-linux_pokus_1_5G
lvs

# Drop the missing physical volumes from the volume group
# (vgreduce, not lvreduce, takes --removemissing):
vgreduce --removemissing vg01

# Recreate the volume group and a logical volume from scratch:
pvcreate /dev/sde
pvcreate /dev/sdf
vgcreate vg01 /dev/sde /dev/sdf
lvcreate -n sbs2008_programs_and_data_90G -L 90G vg01
lvs
lvremove /dev/vg01/sbs2008_programs_and_data_90G


Display the disks in a volume group:
vgdisplay -v vg01 
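
Other useful LVM2 overview commands:

pvs    # physical volume summary
vgs    # volume group summary
lvs    # logical volume summary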


XEN VM server with Linux HA 
The setup below uses XEN and HA components to provide a smooth background for running services on VM machines. It consists of two servers interconnected by a Gbit crossover link, which is mandatory for fast and reliable live VM migration and is also used for DRBD and heartbeat traffic.

The software used here is DRBD, Linux heartbeat, and a XEN server with live VM migration functionality.

The /etc/hosts file:

192.168.50.3    xen1
192.168.50.4    xen2


Components installed for heartbeat:

heartbeat-stonith-2.1.3-3.el5.centos *
heartbeat-2.1.3-3.el5.centos *
heartbeat-gui-2.1.3-3.el5.centos *
heartbeat-devel-2.1.3-3.el5.centos
heartbeat-pils-2.1.3-3.el5.centos *
heartbeat-ldirectord-2.1.3-3.el5.centos


DRBD components:

drbd82-8.2.6-1.el5.centos
kmod-drbd82-xen-8.2.6-2
kmod-drbd-xen-8.0.13-2


The HA setup file (/etc/ha.d/ha.cf):

use_logd yes
bcast eth2
node xen1 xen2
crm on


The HA setup file (/etc/ha.d/authkeys):

auth 1
1 sha1 SOME_ID_STRING
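
Heartbeat refuses to start unless authkeys is readable by root only:

# chmod 600 /etc/ha.d/authkeys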


XENd config file (/etc/xen/xend-config.sxp):

(xend-relocation-server yes)
(xend-address 'xen1')
(xend-relocation-hosts-allow '^xen1$ ^xen2$ ^localhost$')
(xend-unix-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')


The other server's config must be identical except for xend-address:

(xend-relocation-server yes)
(xend-address 'xen2')
(xend-relocation-hosts-allow '^xen1$ ^xen2$ ^localhost$')
(xend-unix-server yes)
(xend-unix-path /var/lib/xend/xend-socket)
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 256)
(dom0-cpus 0)
(vncpasswd '')


Our DRBD config file (/etc/drbd.conf), which is the same on both XEN servers:

common {
    protocol C;
}

resource r0 {

    startup {
        become-primary-on both;
    }

    syncer {
        rate 51200;
    }

    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }

    device /dev/drbd1;
    disk /dev/loop7;
    meta-disk internal;

    on xen1 {
        address 192.168.50.3:7789;
    }
    on xen2 {
        address 192.168.50.4:7789;
    }
}
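
After the initial synchronization, both nodes can run as primary; the become-primary-on both directive does this at startup, or promote by hand:

# drbdadm primary r0
# cat /proc/drbd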


To spawn heartbeat gui:

# /usr/bin/hb_gui &


To spawn the XEN manager:

# virt-manager


To check the cluster status:

crm_mon


============
Last updated: Sun May 25 09:46:30 2008
Current DC: xen1 (3ff2a9ca-13be-41a5-a8f8-2657402e32f2)
2 Nodes configured.
1 Resources configured.
============

Node: xen1 (3ff2a9ca-13be-41a5-a8f8-2657402e32f2): online
Node: xen2 (e61ad97b-2750-4cf4-a307-2a9c48929a2a): online

vm05_r (heartbeat::ocf:Xen): Started xen2

Failed actions:
vm05_r_monitor_0 (node=xen1, call=6, rc=1): Error


The DomU resource in Linux HA was created using the HA management GUI. In the resource's 'Parameters' section, set allow_migrate=true; this allows live migration of virtual machines.

Do not forget to set the path to the virtual machine configuration file using the xmfile parameter.
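
For reference, the resulting resource definition in the CIB looks roughly like this sketch (the ids and the /etc/xen/vm05 config path are examples, not taken from this cluster):

<primitive id="vm05_r" class="ocf" provider="heartbeat" type="Xen">
    <instance_attributes id="vm05_r_attrs">
        <attributes>
            <nvpair id="vm05_r_xmfile" name="xmfile" value="/etc/xen/vm05"/>
            <nvpair id="vm05_r_allow_migrate" name="allow_migrate" value="true"/>
        </attributes>
    </instance_attributes>
</primitive>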

Apache: protecting directory content using .htpasswd 
Create the .htaccess definition file /var/www/html/directory/.htaccess:

AuthType Basic
AuthName "Please enter username and password"
AuthUserFile /var/www/html/directory/.htpasswd
Require valid-user


Create a new password file for the user and set the initial password (the file must match AuthUserFile above):

# htpasswd -c /var/www/html/directory/.htpasswd username
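
To add or change more users later, omit -c (with -c, htpasswd would recreate the file):

# htpasswd /var/www/html/directory/.htpasswd another_user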


cat /var/www/html/directory/.htpasswd
username:hAsHhaSH


Allow Apache to honour .htaccess files in /etc/httpd/conf/httpd.conf:

<Directory "/var/www/html">
#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride All
</Directory>

#
# AccessFileName: The name of the file to look for in each directory
# for additional configuration directives. See also the
# AllowOverride directive.
#
AccessFileName .htaccess

#
# The following lines prevent .htaccess and .htpasswd files from
# being viewed by Web clients.
#
<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>

<VirtualHost host.domain.cz:80>
AccessFileName .htaccess
DocumentRoot /var/www/html/directory/
ServerName domainname.dom
ServerAdmin administrator@domainname.dom
</VirtualHost>


Do not forget to edit the other AllowOverride directives as well:

#
# AllowOverride controls what directives may be placed in .htaccess files.
# It can be "All", "None", or any combination of the keywords:
# Options FileInfo AuthConfig Limit
#
AllowOverride All


Apache /var/log/httpd/error_log: unable to check htaccess file, ensure it is readable 
It has nothing to do with the .htaccess file itself; the real reason is that Apache cannot read the directory or file. Set the directory permissions to 750 and the directory group owner to apache.

Also check that the directory's SELinux context is set to the Apache content type when you are running SELinux.

# ls -lZ /directory/
# chcon -R -h -t httpd_sys_content_t /directory/path/
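
Note that chcon changes are lost on a filesystem relabel; where the semanage tool is available, the persistent variant is:

# semanage fcontext -a -t httpd_sys_content_t "/directory/path(/.*)?"
# restorecon -R /directory/path/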


'Minority Report' OS brought to life (from http://www.engadget.com/)
Engadget: One of the science advisors from the Steven Spielberg film -- along with a team of other zany visionaries -- has created an honest-to-goodness, real-world implementation of the computer systems seen in the movie.






Linux: udev rules (persistent iSCSI device mapping) 
The following udev rule runs a script that provides consistent, transparent mapping of iSCSI devices to simple /dev/short_name entries. If the target share is created with a recognized naming format (for example iqn.2008-redhat.com:iscsi1.vm07), the script maps the target to /dev/vm07.

/etc/udev/rules.d/95-iscsi.rules

KERNEL=="sd?", BUS=="scsi", ENV{ID_MODEL}=="VIRTUAL-DISK", \
ENV{ID_PATH}=="*iscsi*", RUN+="/etc/xen/scripts/iscsi \
$env{ID_PATH} %k"


/etc/xen/scripts/iscsi

#!/bin/bash
# $1 is the udev ID_PATH of the device, $2 is the kernel name (e.g. sdb).
# Cut the short name out of the iSCSI target name and create the symlink:
LINK_NAME=`/bin/echo $1 | awk -F":" '{ print $3 }' | \
awk -F"." '{ print $2 }'`
/bin/ln -s /dev/$2 /dev/$LINK_NAME


Some udev commands to query udev data and testing rules:

udevinfo -a -p $(udevinfo -q path -n /dev/sda)
udevinfo -q env -p $(udevinfo -q path -n /dev/sda)
udevtest $(udevinfo -q path -n /dev/sdf)


Linux: mirroring raw devices or targets for HA 

# yum -y install drbd82 kmod-drbd82-xen


Configuration steps follow. The DRBD site (the source of the original diagram) is here.



Prepare the raw file for the backing store and attach it via losetup (put the command in /var/raw/brd1.sh; /etc/init.d/drbd calls this script):


# losetup /dev/loop7 /var/raw/brd1.img
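
The backing file has to exist before losetup can attach it; a sparse file can be created like this (the 10 GiB size is only an example):

# dd if=/dev/zero of=/var/raw/brd1.img bs=1M count=0 seek=10240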


Install the XEN-related DRBD package and the DRBD kernel module.


# yum install kmod-drbd82-xen.x86_64 -y
# yum install kmod-drbd-xen.x86_64 -y


Configure /etc/drbd.conf:

common {
    protocol C;
}

resource r0 {

    startup {
        become-primary-on both;
    }

    syncer {
        rate 512M;
    }

    net {
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
    }

    device /dev/drbd1;
    disk /dev/loop7;
    meta-disk internal;

    on labt03.dev.xxx.com {
        address 192.168.50.3:7789;
    }
    on labt04.dev.xxx.com {
        address 192.168.50.4:7789;
    }
}


Create device metadata.


# drbdadm create-md r0


Associate the DRBD resource with its backing device.


# drbdadm attach r0


Connect the DRBD resource with its counterpart on the peer node.


# drbdadm connect r0


Check DRBD's virtual status file in the /proc filesystem (similar to /proc/mdstat).


# cat /proc/drbd


Start the initial full synchronization on the node whose data should be taken as authoritative.


# drbdadm -- --overwrite-data-of-peer primary r0


Split-brain recovery, on the node whose changes are to be discarded:


# drbdadm secondary r0
# drbdadm -- --discard-my-data connect r0
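
On the surviving node (whose data is kept), reconnect the resource if it dropped to StandAlone:

# drbdadm connect r0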


Linux: kpartx mount block device from file 
kpartx exposes the individual disk partitions (according to the partition table) as separate block devices. The example shows mapping partitions from a file.


losetup /dev/loop0 file
kpartx -a /dev/loop0


The new devices are available under /dev/mapper:


# ls -la /dev/mapper/*
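
A mapped partition can then be mounted as usual (loop0p1 is the name kpartx gives the first partition on loop0):

# mount /dev/mapper/loop0p1 /mnt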


Print the list of detected partitions:


kpartx -l device


Remove the partition mappings:


kpartx -d device


A presentation on LVM2... from which this is taken.

Adding more loop devices in /etc/modprobe.conf:
options loop max_loop=16
options netloop nloopbacks=8
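
The loop module has to be reloaded for the new limit to take effect (only possible while no loop device is in use):

# rmmod loop && modprobe loop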


Linux: XEN virtualization and live migration (2x XEN servers, 1x iSCSI target) 
The setup is two XEN servers and one iSCSI server. Both XEN servers have a dedicated 1Gbit connection to the iSCSI target. The XEN servers allow live migration of the VM machines. The platform is CentOS 5.2.

XEN node setup

# yum install xen kernel-xen virt-manager virt-install


If you don't have them installed already, add the X-related packages to the system. If you plan to access virt-manager from a Windows environment, install vnc-server.


# yum install xfs
# yum install xorg-x11*
# yum install xterm
# yum install vnc-server

# chkconfig xfs on
# service xfs start

# su - my_vnc_user
$ vncserver
$ exit
# echo "VNCSERVER="8:my_vnc_user" >> /etc/sysconfig/vncservers
# echo "VNCSERVERARGS[8]="-geometry 1024x768 -nolisten tcp -nohttpd" \
>> /etc/sysconfig/vncservers

# chkconfig vncserver on
# service vncserver start


Make the xen kernel the default boot kernel.

# cat /etc/grub.conf
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You do not have a /boot partition. This means that
# all kernel and initrd paths are relative to /, eg.
# root (hd0,0)
# kernel /boot/vmlinuz-version ro root=/dev/cciss/c0d0p1
# initrd /boot/initrd-version.img
#boot=/dev/cciss/c0d0
default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-92.1.13.el5xen)
        root (hd0,0)
        kernel /boot/xen.gz-2.6.18-92.1.13.el5
        module /boot/vmlinuz-2.6.18-92.1.13.el5xen ro root=LABEL=/1
        module /boot/initrd-2.6.18-92.1.13.el5xen.img
title CentOS (2.6.18-92.1.13.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-92.1.13.el5 ro root=LABEL=/1
        initrd /boot/initrd-2.6.18-92.1.13.el5.img
title CentOS (2.6.18-92.el5)
        root (hd0,0)
        kernel /boot/vmlinuz-2.6.18-92.el5 ro root=LABEL=/1
        initrd /boot/initrd-2.6.18-92.el5.img


Change the default kernel package type to kernel-xen.

# cat /etc/sysconfig/kernel
# UPDATEDEFAULT specifies if new-kernel-pkg should make
# new kernels the default
UPDATEDEFAULT=yes

# DEFAULTKERNEL specifies the default kernel package type
DEFAULTKERNEL=kernel-xen


Then run virt-manager remotely or virsh locally.

[x_win_client] # ssh -X xen_machine.some.org virt-manager
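
Locally, virsh covers the basics (vm03 is the example domain configured below):

# virsh list --all
# virsh start vm03
# virsh shutdown vm03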


Single host VM example

The XEN VM config file below uses full virtualisation (not paravirtualisation), ethernet bridging and a virtual disk backed by a file. The VM boots from CD-ROM (an ISO file) first.

If you want the VM to keep booting from the CD-ROM ISO, specify disk = [ "file:/var/lib/xen/images/vm03.img,hda,w", "file:/var/images/rhel45-i386-boot.iso,hdc:cdrom,r" ] and the boot order boot = "dc". The boot = "c" option makes the system boot from the virtual hard drive only.

name = "vm03"
uuid = "8a25f77a-30bc-7fc4-670c-22fa34f1c471"
maxmem = 512
memory = 512
vcpus = 1
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "dc"
pae = 1
acpi = 1
apic = 1
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
device_model = "/usr/lib64/xen/bin/qemu-dm"
sdl = 0
vnc = 1
vncunused = 1
keymap = "en-us"
disk = [ "file:/var/lib/xen/images/vm03.img,hda,w", "file:/var/images/rhel45-i386-boot.iso,hdc:cdrom,r" ]
vif = [ "mac=00:16:3e:6c:08:ff,bridge=xenbr0" ]
serial = "pty"


Now that we have a VM running, we can proceed to the iSCSI configuration. The iSCSI target provides disk space to both XEN server nodes simultaneously, which will allow us to migrate VMs between the XEN servers.

Configure iSCSI target:


# yum install scsi-target-utils -y
# yum install lsscsi -y
# yum install iscsi-initiator-utils -y


Create the hard disk files for the VMs (16777216 blocks of 1024 bytes = 16 GiB each):


dd if=/dev/zero of=vm07.img bs=1024 count=16777216
dd if=/dev/zero of=vm08.img bs=1024 count=16777216


Temporarily disable SELinux and start all the daemons:


# setenforce 0
# service iscsid start
# chkconfig iscsid on
# service tgtd restart
# chkconfig tgtd on


Create targets:


tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2008-redhat.com:iscsi1.vm07
tgtadm --lld iscsi --op new --mode target --tid 2 -T iqn.2008-redhat.com:iscsi1.vm08


Or delete a target:


tgtadm --lld iscsi --op delete --mode target --tid 1


Show what is configured:


tgtadm --lld iscsi --op show --mode target


Bind the targets to the backing files:


# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /iscsi/vm07.img
# tgtadm --lld iscsi --op new --mode logicalunit --tid 2 --lun 1 -b /iscsi/vm08.img


Or delete a binding:


tgtadm --lld iscsi --op delete --mode logicalunit --tid 2 --lun 1


Allow access to all initiators (ALL):


tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
tgtadm --lld iscsi --op bind --mode target --tid 2 -I ALL


Check again:


tgtadm --lld iscsi --op show --mode target


To make the targets survive a reboot, either put all the commands into a file executed at boot (this is not the standard way), or use the standard config file:


edit /etc/tgt/targets.conf


On the clients (XEN nodes):


# yum install iscsi-initiator-utils
# service iscsid start
# chkconfig iscsid on


Discover the targets, or discover and log in to them in one step:


iscsiadm --mode discovery --type sendtargets --portal 192.168.100.100
iscsiadm -d 255 --mode discovery -t sendtargets -p 192.168.200.100 -l
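
To log in to a single discovered target explicitly:

iscsiadm --mode node --targetname iqn.2008-redhat.com:iscsi1.vm07 --portal 192.168.100.100:3260 --login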


Scan for the new disk(s):


partprobe
fdisk -l
mkfs ...


As you need to keep the path to the iSCSI share static (even after migration from one node to another), you have to use the /dev/disk/by-id/iscsi-string-identifier path to identify the iSCSI VM raw disks, or write rules in udev. The /dev/sd[xx] entries could change on the next boot. The example is here.

Migration:

The VM machine config. Note the disk path, which points to /dev/disk/by-id/iscsi-string-identifier.


name = "vm08"
uuid = "d138c27b-6d1a-11fd-ea0a-ca8f884b5446"
maxmem = 512
memory = 512
vcpus = 1
builder = "hvm"
kernel = "/usr/lib/xen/boot/hvmloader"
boot = "c"
pae = 1
acpi = 1
apic = 1
localtime = 0
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
device_model = "/usr/lib64/xen/bin/qemu-dm"
sdl = 0
vnc = 1
vncunused = 1
keymap = "en-us"
disk = [ "phy:/dev/disk/by-id/scsi-16465616462656166313a310000000000000000000000 0000,hda,w", "file:/var/images/win-xp-64.iso,hdc:cdrom,r" ]
vif = [ "mac=00:16:3e:06:79:47,bridge=xenbr0" ]
serial = "pty"


For live migration you have to change several lines in the default /etc/xen/xend-config.sxp. The block below shows only the live-migration-related changes. The config must be edited on all nodes involved in live VM migration; the actual VM is configured on one of the VM servers only.


(xend-relocation-address '')
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$ ^xen1$ ^xen2$')


A 1Gbit crossover cable runs between xen1 and xen2 to speed up live migration and to keep some level of security while the VM's RAM is transferred between servers.


192.168.100.100 iscsi1
192.168.50.3 xen1
192.168.50.4 xen2


The actual command you have been waiting for, the live VM migration between two hosts:


[xen1-server] # xm migrate -l vm08 xen2
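
Check where the domain ended up:

[xen1-server] # xm list
[xen2-server] # xm list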


Some links below:

The Red Hat cluster manual.



