Summary: This article describes how to associate more than one user account with a single local profile. This is especially useful for portable computer users who have a domain account for use in the office but a local account for use away from the office.
Steps:
1. Create a local user account on your Windows XP desktop. (Example: username).
2. Next, create a domain user account on the domain controller for the domain the Windows XP desktop is a member of. (Example: username). Remember, the local user can have the same name as the domain account, since the desktop maintains a local database of users separate from the domain controller.
3. Depending on your environment, you may wish to skip this step. Log into the Windows XP desktop as the local Administrator account. Go into Computer Management->Local Users and Groups. Add the local account and domain account you created in steps 1 and 2 to the local Administrators group. This gives users some flexibility to do various things on their desktop.
4. Now log in as the local account you created (username). This will create the default local profile (username) and add its path to the ProfileList in the registry. Log off when complete.
5. Now log in as the domain account you created (username). This will create the default local profile (username.domain) and add its path to the ProfileList in the registry.
6. Depending on your environment, you may wish to skip this step. Before you log off the domain account, go into System Properties->Advanced->User Profiles. Verify that the domain account profile is set to local and not roaming. If it is set to roaming, change it to local. Once complete, log off.
7. Reboot the machine. This clears up Windows processes that are still using the .dat files for the accounts we logged in as. Failure to do so might yield errors in later steps, specifically: "The file is in use by another process".
8. Log into the Windows XP desktop as the local Administrator account.
9. Edit the permissions on the profile to enable your domain account to access it. Start Regedt32 and go to HKEY_USERS. With HKEY_USERS selected, click the Load Hive option from the Registry menu. Select the file "C:\Documents and Settings\username\Ntuser.dat", where username is the local account name we created in step 1.
10. When prompted to enter a key name, type in your user name and press ENTER. You can now see an entry for your user name under HKEY_USERS. Select it and click Permissions from the Security menu. Add your domain account name to the list of permissions, granting the account full control. Click OK when you are finished.
11. To save this change, select your username, and then click Unload Hive from the Registry menu.
12. Next we need to alter the path that points to the profile. In Regedt32, go to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
13. Under this key, you can see a list of Security Identifiers (SIDs). To find the SID corresponding to your domain account, open each key and look at the value of ProfileImagePath. When you find the value that matches username.domain, modify the ProfileImagePath data so that it points to your local account profile path.
14. Close Regedt32 and log on with your local account. You can see your familiar profile.
15. Reboot the Windows XP desktop.
16. Log on with your domain account. You can see your familiar profile.
17. The results from steps 14 and 16 should provide you with the same desktop settings and customizations.
18. This procedure gives users desktop consistency whether they are using their domain account in the office or their local account in their home office.
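After step 13, the ProfileList entries for both accounts resolve to the same profile directory. A sketch of the intended end state, with illustrative SIDs and paths (yours will differ; the SID suffixes shown are hypothetical):

```text
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
  S-1-5-21-xxxx-1004    (local account SID)
    ProfileImagePath = %SystemDrive%\Documents and Settings\username
  S-1-5-21-yyyy-1105    (domain account SID, after the edit in step 13)
    ProfileImagePath = %SystemDrive%\Documents and Settings\username
```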
Saturday, December 01, 2007
Friday, November 30, 2007
Replacing a Failed Disk in Linux Software RAID
Overview:
1) Determine which disk has failed. Physical inspection of server or "cat /proc/mdstat" can accomplish this.
2) Remove failed disk from metadevice using mdadm.
3) Physically replace disk.
4) Partition new disk using sfdisk.
5) Add new disk back into raid metadevice using mdadm.
6) Confirm array is rebuilding by "cat /proc/mdstat".
Example:
1) Below, md3 and md4 have a failed device sdc, which contains slices sdc1 and sdc2.
#cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 5
md2 : active raid1 sdb1[1] sda1[0]
80192 blocks [2/2] [UU]
resync=DELAYED
md0 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 sdb3[1] sda3[0]
33366912 blocks [2/2] [UU]
[===>.................] resync = 16.5% (5511296/33366912) finish=46.2min speed=10027K/sec
md3 : active raid5 sde1[2] sdd1[1]
35342720 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
md4 : active raid5 sde2[2] sdd2[1]
35744384 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
2) We need to use mdadm to remove the failed slices from the array. Note this is not always needed, but we show it here for practical purposes.
#mdadm -r /dev/md3 /dev/sdc1
#mdadm -r /dev/md4 /dev/sdc2
3) Physically remove the disk(s) from the system.
4) Partition the new disk using the partition table from another member (device) of the array. This example uses /dev/sde: dump its partition table to a file, then read it into the new device using sfdisk.
#sfdisk -d /dev/sde > /tmp/partition.out
#sfdisk /dev/sdc < /tmp/partition.out
5) Add the device slices back into corresponding raid metadevice using mdadm.
#mdadm -a /dev/md3 /dev/sdc1
#mdadm -a /dev/md4 /dev/sdc2
6) Run "cat /proc/mdstat" to check the results.
#cat /proc/mdstat
Personalities : [raid1] [raid5]
read_ahead 1024 sectors
Event: 8
md2 : active raid1 sdb1[1] sda1[0]
80192 blocks [2/2] [UU]
resync=DELAYED
md0 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
resync=DELAYED
md1 : active raid1 sdb3[1] sda3[0]
33366912 blocks [2/2] [UU]
[==========>..........] resync = 52.8% (17633280/33366912) finish=25.6min speed=10228K/sec
md3 : active raid5 sdc1[3] sde1[2] sdd1[1]
35342720 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
[============>........] recovery = 60.9% (10776268/17671360) finish=11.2min speed=10233K/sec
md4 : active raid5 sdc2[3] sde2[2] sdd2[1]
35744384 blocks level 5, 64k chunk, algorithm 0 [3/2] [_UU]
Note: md4 will not start rebuilding until md3 is complete, since both raids contain slices from the same physical disk.
Wednesday, November 28, 2007
Perl One-Liner to Find Duplicate RPMs in Linux
rpm --last -qa | perl -n -e '/^(\S+)-\S+-\S+/; print "$&\n" if $SEEN{$1}; $SEEN{$1} ||= $_;' | uniq > duplicates.txt
Perl Script to Restart PVM hosts
#!/usr/bin/perl
### This script restarts PVM hosts that have failed ###
use Parallel::Pvm;
use Net::Ping;
### Set PVM Hostfile location ###
$user="schmaus";
$pvmhostfile="/home/$user/.pvmhostfile";
### Nothing to change below this line ###
open (PVMHOST, $pvmhostfile);
while ($hostname = <PVMHOST>) {
chomp($hostname);
$status = Parallel::Pvm::mstat("$hostname");
chomp($status);
if ($status ne "0") {
$alive="1";
$p = Net::Ping->new();
$alive="0" if $p->ping($hostname);
$p->close();
if ($alive ne "0") {
print "$hostname: Offline-Down\n";
} else {
Parallel::Pvm::addhosts("$hostname");
print "$hostname: Offline-Restarted\n";
}
} else {
print "$hostname: Online\n";
}
}
close(PVMHOST);
exit;
Tuesday, November 27, 2007
Nagios Remote Radius Monitor Script
This Perl script can be used as a Nagios plugin to monitor the state of a RADIUS server.
#!/usr/bin/perl
use Authen::Radius;

my $numberargs = $#ARGV + 1;
if ($numberargs ne 5) {
    print "Usage: check_radius <host> <port> <secret> <username> <password>\n";
    exit(0);
}
my $hostname = $ARGV[0];
my $port = $ARGV[1];
my $radiussecret = $ARGV[2];
my $username = $ARGV[3];
my $password = $ARGV[4];
my $radius = new Authen::Radius(Host => "$hostname:$port", Secret => $radiussecret);
if (!defined $radius) {
    print "No Response from Host\n";
    exit(2);
} else {
    my $r = $radius->check_pwd($username, $password);
    if ($r) {
        print "Service OK\n";
        exit(0);
    } else {
        print "Service Critical\n";
        exit(2);
    }
}
exit(2);
Wednesday, January 24, 2007
Script to Check StorEdge 3510 Disk Status
#!/usr/local/bin/zsh
#################################################################
# This script checks 3510 status #
#################################################################
EMAILADDR="sysadmin@software.umn.edu"
FAILSTAT="0"
FAILSTAT1="0"
for DISKS in `/usr/local/sccli/sbin/sccli -l|/usr/local/bin/cut -d" " -f1 -`
do
FAILSTAT1="0"
SCCLI=`/usr/local/sccli/sbin/sccli $DISKS show logical-drives 2>&1 > /tmp/lds.out`
LDS=`/usr/local/bin/cat /tmp/lds.out 2>&1|grep ld|grep -v Good`
if [ $? = 1 ]; then
FAILSTAT="0"
else
FAILSTAT1="1"
fi
SCCLI=`/usr/local/sccli/sbin/sccli $DISKS show enclosure-status 2>&1 > /tmp/sccli.out`
for TESTS in Topology Fan PS Temp Voltage DiskSlot
do
CHECK=`/usr/local/bin/cat /tmp/sccli.out 2>&1|grep $TESTS|grep -v OK`
if [ $? = 1 ]; then
FAILSTAT="0"
else
FAILSTAT1="1"
fi
done
if [ $FAILSTAT1 = 1 ]; then
MERGE=`/usr/local/bin/cat /tmp/sccli.out>/tmp/report.out`
MERGE=`echo>>/tmp/report.out`
MERGE=`/usr/local/bin/cat /tmp/lds.out>>/tmp/report.out`
SYSTEM=`hostname`
SUBJECTLINE="StorEdge Error -- HOST: $SYSTEM DISK: $DISKS"
/bin/mailx -s "$SUBJECTLINE" $EMAILADDR < /tmp/report.out
fi
done
exit
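The health test above hinges on grep filters over sccli output. A sketch of the logical-drive filter against canned lines (the sample output lines are made up for illustration):

```shell
# Keep lines for logical drives ("ld") that are NOT in the Good state;
# any output at all means a drive needs attention.
printf 'ld0 RAID5 Good\nld1 RAID5 Degraded\n' | grep ld | grep -v Good
```

This prints `ld1 RAID5 Degraded`, and grep's exit status drives the `$?` test in the script.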
Script to Check Solaris DiskSuite States
This Korn shell script checks for errors on metadevices with DiskSuite on Solaris.
#!/bin/ksh
#########################################################################
# check_disks - check metadevices for errors and alert the user.        #
#                                                                       #
# USAGE: check_disks [ -m (address) ]                                   #
#########################################################################
TZ=CST5CDT
SUBJECT="MetaDisk Errors on `uname -n`"
GREP_CMD="/bin/egrep"
GREP_ARG="-s"
METASTAT_CMD=/usr/opt/SUNWmd/sbin/metastat
METADB_CMD=/usr/opt/SUNWmd/sbin/metadb
DISKSUITE_PRESENT=1
MAIL_OUTPUT=''
MAIL_RECIP=''
DEFAULT_MAIL_RECIP=''
OUTPUT_MSG=""
RESULT=''
OUTPUT_CODE=0
#########################################################################
# FUNCTION DEFINITIONS                                                  #
#########################################################################
function error_out {
    RETURN_CODE=$2
    ERROR_MSG=$1
    print - $ERROR_MSG
    exit $RETURN_CODE
}
#########################################################################
# MAIN PROGRAM                                                          #
#########################################################################
while getopts :m: c
do
    case $c in
    m ) MAIL_OUTPUT="YES"   # any non-null string will do
        MAIL_RECIP=$OPTARG
        if [[ -z $MAIL_RECIP ]]
        then
            MAIL_RECIP=$DEFAULT_MAIL_RECIP
        fi
        ;;
    ? ) error_out "Usage: check_disks [ -m mail_addr ]" 1
        ;;
    esac
done
if [ ! -x $GREP_CMD ]
then
    GREP_CMD="/usr/xpg4/bin/egrep"
fi
if [ ! -x $GREP_CMD ]
then
    print "ERROR: egrep not executable or not found"
    exit 1
fi
PATH=$PATH:/usr/sbin
METASTAT_BIN=$(/bin/which metastat)
if [ ! -x $METASTAT_CMD ]
then
    if [ -x $METASTAT_BIN ]
    then
        METASTAT_CMD=$METASTAT_BIN
        DISKSUITE_PRESENT=1
    else
        DISKSUITE_PRESENT=0
    fi
fi
METADB_BIN=$(/bin/which metadb)
if [ ! -x $METADB_CMD ]
then
    if [ -x $METADB_BIN ]
    then
        METADB_CMD=$METADB_BIN
        DISKSUITE_PRESENT=1
    else
        DISKSUITE_PRESENT=0
    fi
fi
if [[ $DISKSUITE_PRESENT -eq 0 ]]
then
    error_out "DiskSuite has not been found." 3
fi
#################################################################
# DISKSUITE SECTION                                             #
#################################################################
if [ $DISKSUITE_PRESENT -ne 0 ]
then
    $METASTAT_CMD | $GREP_CMD $GREP_ARG aint
    if [ $? -eq 0 ]
    then
        OUTPUT_MSG="${OUTPUT_MSG}Disk requires maintenance on `uname -n`\n"
    fi
    $METASTAT_CMD | $GREP_CMD $GREP_ARG "In use"
    if [ $? -eq 0 ]
    then
        OUTPUT_MSG="${OUTPUT_MSG}Disk hot spared on `uname -n`"
    fi
    $METADB_CMD | $GREP_CMD $GREP_ARG "[A-Z]"
    if [ $? -eq 0 ]
    then
        OUTPUT_MSG="${OUTPUT_MSG}Metadb problems on `uname -n`\n"
    fi
fi
#################################################################
# OUTPUT SECTION                                                #
#################################################################
if [[ ! -z $MAIL_OUTPUT ]]
then
    if [[ ! -z "$OUTPUT_MSG" && ! -z $MAIL_RECIP ]]
    then
        print $OUTPUT_MSG | mailx -s "$SUBJECT" $MAIL_RECIP
        OUTPUT_CODE=-1
    fi
else
    if [[ ! -z "$OUTPUT_MSG" ]]   # or else just print on stdout
    then
        print $OUTPUT_MSG
        OUTPUT_CODE=-1
    fi
fi
exit $OUTPUT_CODE
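The maintenance check above greps metastat output for "aint", which matches both "maintenance" and "Maintenance" regardless of how the first letter is capitalized. A sketch of that pattern against canned metastat lines (device names and states are made up for illustration):

```shell
# Any metadevice line mentioning maintenance trips the alert.
printf 'd10: Mirror\n    State: Needs maintenance\nd20: Mirror\n    State: Okay\n' |
grep aint && echo "Disk requires maintenance"
```

Because grep exits 0 when it finds a match, the `&&` fires and the alert message is printed, mirroring the `$?` test in the script.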