There is no need to mark the traffic with iptables and build a whole new routing setup from scratch: the system scripts have already prepared the base for eth0. With just a little tweaking, your Squid HTTP/S traffic will go through eth1 (in our demo case that is ISP2).
First, tell Squid to use the IP address bound to eth1 (the ISP2 interface; let's say its IP is 192.168.16.253). Add to squid.conf:
tcp_outgoing_address 192.168.16.253
If you do not add the line above, Squid will use the first card's IP (the IP of eth0) as its outgoing address.
When two network interfaces are present in the system, each connected to a different provider, the default routing behaviour is to send everything except the directly connected networks through the default gateway. In our case everything still goes through eth0, even though we told Squid to use the eth1 IP. This is not what we want. You could modify the main routing table, but why rewrite the whole table when the system has already prepared useful records by default? We will simply add to it: a rule which says that packets originating from the second interface's network will use the ISP2 default gateway.
We will put the additional routing information into a separate IP routing table (called ISP2). First, choose a number for the ISP2 table and write the definition to the /etc/iproute2/rt_tables file. It is simple:
echo "110 ISP2" >> /etc/iproute2/rt_tables
Then add a script which tells the system to route everything originating from the eth1 card and its network via the default gateway of ISP2. The following script will do that:
#!/bin/bash
# check if the default route for ISP2 already exists; if yes, exit
RETVAL=`/sbin/ip route show table ISP2 | grep default | wc -l`
[ $RETVAL -eq 1 ] && exit
ip route add FIRST_CARD_NETWORK/28 dev eth0 src FIRST_IFACE_IP table ISP2
ip route add SECOND_CARD_NETWORK/24 dev eth1 src SECOND_IFACE_IP table ISP2
ip route add default via DEFAULT_ROUTE_FOR_ISP2 table ISP2
# set the rule for our routing
ip rule add from SECOND_NETWORK_SUBNET/28 table ISP2
# flush cache
ip route flush cache
The most important line says "route all traffic whose source is a.b.c.d using a routing table other than the default one":
ip rule add from SECOND_NETWORK_SUBNET/28 table ISP2
The trick is that the ISP2 table contains the new default gateway.
To show the routing table for a specific ISP:
ip route show table NAME
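A filled-in sketch of the key commands, assuming eth1 sits on 192.168.16.0/24 with the address 192.168.16.253 and ISP2's gateway is 192.168.16.1 (all three values are demo placeholders):
ip route add 192.168.16.0/24 dev eth1 src 192.168.16.253 table ISP2
ip route add default via 192.168.16.1 table ISP2
ip rule add from 192.168.16.0/24 table ISP2
ip route flush cache
ip route show table ISP2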
The Maximum Data Packet Size
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Adapters
The size of the Receive Window
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The size of the Default Time To Live
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The Timestamps and Window Scaling options
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The number of duplicate ACKs
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The automatic MTU discovery
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The Selective ACK (SACK, RFC 2018) support
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The number of simultaneous connections per HTTP 1.0 server (MaxConnectionsPer1_0Server)
\Software\Microsoft\Windows\CurrentVersion\Internet Settings
The number of simultaneous connections per HTTP1.1 (MaxConnectionsPerServer)
\Software\Microsoft\Windows\CurrentVersion\Internet Settings
The maximum number of TCP connections
\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
The default SizReqBuf value
\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters
You can change the prefetch behaviour via the registry (by setting the value at the following path):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters\EnableSuperfetch
1 prefetches boot processes,
2 prefetches applications,
3 enables both.
Disabling prefetch entirely in the registry (setting the value to 0) will generate an error in the logs:
The Superfetch service terminated with the following error:
The operating system is not presently configured to run this application.
If you decide to disable prefetch completely, it is better to disable the whole Superfetch service in the Services console (%SystemRoot%\system32\services.msc).
http://www.davidnaylor.co.uk/
#!/usr/bin/bash
# Print the absolute path of the running script.
PROCID=$$                  # PID of this shell
SHRUNDIRECTORY=$PWD        # directory the script was started from
# the third field of the first pmap line is the script path as it was invoked (e.g. ./myscript.sh)
SCRIPTSHELLPATH=`pmap $PROCID | head -1 | awk '{ print $3 }'`
MYPATH="${SHRUNDIRECTORY}${SCRIPTSHELLPATH}"
# drop the leading dot of the relative part to get a clean absolute path
echo $MYPATH | gawk '{ sub(/\./, ""); print }'
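For example, saved as ./mypath.sh and started from /home/user, the script prints /home/user/mypath.sh (the file name is only an example; the trick relies on the script being invoked with a relative ./ path):
$ cd /home/user
$ ./mypath.sh
/home/user/mypath.sh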
I was surprised while reading this. It's almost universal.
We are making progress, slower in some cases but nevertheless I value your opinions to help overcome these hurdles and we should view them as challenges to improve our customer orientation, collaboration and people development.
Add a user and make them a cluster administrator:
hauser -add username
haclus -modify Administrators -add username
Change password:
hauser -update username
Display basic info (clustername, administrators, users, versions...)
haclus -display
Display system state
hasys -state
#System Attribute Value
system1 SysState RUNNING
system2 SysState RUNNING
List service groups (ClusterService and Common are running on all nodes):
# hagrp -list
Status of all service groups and their state (frozen etc.):
hastatus -summary
More commands are listed on the Veritas Cluster cheat sheet page. Other useful commands:
haconf -makerw - Allow updates to cluster config
haconf -dump -makero - Write config file to disk and lock config file
hagrp -freeze <group> [-persistent] - Prevent a service group from failing over
hagrp -unfreeze <group> - Thaw a frozen service group
hastart - Start HA on a cluster node
hastart -force - Force start HA when config is "stale"
hastop -all - stops VCS and service groups on all nodes
hastop -local - fails over services to another node, and then stops VCS
hastop -local -force - immediately stops VCS on current server, but leaves all services running
Note: make sure the configuration file is in read-only mode before you use the "-force" option, otherwise the config file will be marked as stale.
hagrp -switch <group> -to <sys> - failover a group to another server
hagrp -offline <group> -sys <sys> - offline a service group
hagrp -online <group> -sys <sys> - online a service group
hagrp -flush <group> -sys <sys> - flush a service group and enable corrective action. All resources in that group that are waiting to go online transition to not waiting. Any failovers in progress are cancelled.
hacli -cmd <command> [-sys system] - invokes a command on any system in the cluster
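A typical sequence built from the commands above: persistently freeze a group before maintenance and write the change back to the config on disk (websg is only a placeholder group name):
haconf -makerw
hagrp -freeze websg -persistent
haconf -dump -makero
Thawing afterwards works the same way with hagrp -unfreeze websg -persistent.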
http://www.makelinux.net/kernel_map
Dizzy.
The file /etc/sysconfig/networking/host_ip_table lists all IP addresses of the router's clients together with the guaranteed download bandwidth for each; the MAX_LOAD variable in the script holds the total bandwidth allocated by the provider.
Shaping can only be applied where the traffic enters the router/interface. When limiting the outgoing/upload traffic of the LAN clients (sitting, for example, on eth1), it is therefore also necessary to shape the upload, e.g. by adapting this download script as follows: set INTERFACE="eth1" and change the tc filter line [tc filter add dev ${INTERFACE} parent 1:1 protocol ip u32 match ip dst ${host_ip} flowid 1:$((10 + ${flowid}))] to match ip src instead of ip dst.
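The changed parts for the upload case would look roughly like this (a sketch of the modification described above; eth1 is just the example LAN interface):
INTERFACE="eth1"
# ... rest of the script unchanged, only the filter matches the source address:
tc filter add dev ${INTERFACE} parent 1:1 protocol ip u32 match ip src ${host_ip} flowid 1:$((10 + ${flowid}))
Below is the example client table followed by the download script itself.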
cat /etc/sysconfig/networking/host_ip_table
74.125.39.33,200
74.125.39.34,100
74.125.39.35,50
74.125.39.36,100
74.125.39.37,150
74.125.39.38,50
74.125.39.44,500
#!/bin/bash
# Shape per-client download bandwidth with HTB: one class per client IP,
# guaranteed rate taken from CLIENT_IP_TABLE, ceiling at the total line speed.
INTERFACE="eth0"
MAX_LOAD="1024" # 1 mbit in kbit
CLIENT_IP_TABLE="/etc/sysconfig/networking/host_ip_table"
echo executing: tc qdisc del dev ${INTERFACE} root
tc qdisc del dev ${INTERFACE} root
echo executing: tc qdisc add dev ${INTERFACE} root handle 1: htb default 1
tc qdisc add dev ${INTERFACE} root handle 1: htb default 1
echo executing: tc class add dev ${INTERFACE} parent 1: classid 1:1 htb rate ${MAX_LOAD}kbit
tc class add dev ${INTERFACE} parent 1: classid 1:1 htb rate ${MAX_LOAD}kbit
flowid=1
for HOST_LINE in `cat $CLIENT_IP_TABLE`
do
host_ip=`echo ${HOST_LINE} | awk -F"," '{ print $1 }'`
host_bw=`echo ${HOST_LINE} | awk -F"," '{ print $2 }'`
echo "# rule for host ${host_ip}"
echo executing: tc class add dev ${INTERFACE} parent 1:1 classid 1:$((10 + ${flowid})) htb rate ${host_bw}kbit ceil ${MAX_LOAD}kbit
tc class add dev ${INTERFACE} parent 1:1 classid 1:$((10 + ${flowid})) htb rate ${host_bw}kbit ceil ${MAX_LOAD}kbit
echo executing: tc qdisc add dev ${INTERFACE} parent 1:$((10 + ${flowid})) handle $((10 + ${flowid})): sfq perturb 10
tc qdisc add dev ${INTERFACE} parent 1:$((10 + ${flowid})) handle $((10 + ${flowid})): sfq perturb 10
echo executing: tc filter add dev ${INTERFACE} parent 1:1 protocol ip u32 match ip dst ${host_ip} flowid 1:$((10 + ${flowid}))
tc filter add dev ${INTERFACE} parent 1:1 protocol ip u32 match ip dst ${host_ip} flowid 1:$((10 + ${flowid}))
flowid=$(( $flowid + 1 ))
done
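To verify that the classes and filters are in place and to watch per-client traffic counters, the standard tc show commands can be used (eth0 as in the script above):
tc -s qdisc show dev eth0
tc -s class show dev eth0
tc filter show dev eth0 parent 1: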
$ db-control report | grep "DATA_TBS\|Table"
Tablespace Size Used Avail Use%
DATA_TBS 5.3G 3.6G 1.7G 68% (was 94%)
The actual command to extend the tablespace (run it as the oracle user) is:
$ db-control extend DATA_TBS
To enable SMTP filtering using TrendMicro VirusWall:
edit /etc/postfix/main.cf, add:
content_filter = smtp:127.0.0.1:2526
then edit /etc/postfix/master.cf and add the lines below. Due to an SELinux restriction, use port 50000 to listen for the checked mail returned from VirusWall to the Postfix queue (note that the "-o" line must start with whitespace):
localhost:50000 inet n - n - 0 smtpd
  -o content_filter=
In the TrendMicro GUI (https://hostname:9241), set VirusWall SMTP to listen on port 2526 and to return the mail to port 50000.
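After editing both files, reload Postfix and double-check that the setting took effect (a quick sanity check, nothing VirusWall-specific):
postfix reload
postconf content_filter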