Wednesday, October 02, 2013

Perl & Netapp Data Protection Manager Suspend & Resume Datasets

Suppose you're using Netapp Data Protection Manager and you need to suspend or resume many datasets at once. Doing this through the UI is cumbersome, since you have to handle each dataset individually. The two little Perl scripts below let you suspend or resume all of them automatically. Both should be run on your DFPM host:

To suspend the datasets:


chomp(@list = `dfpm dataset list`);
foreach $list (@list) {
                $list =~ s/^\s+//;
                ($id) = split(/\s+/,$list);
                print "$id\n";
                $cmd = `dfpm dataset suspend $id`;
                print "$cmd\n";
}

To resume the datasets:

chomp(@list = `dfpm dataset list`);
foreach $list (@list) {
                $list =~ s/^\s+//;
                ($id) = split(/\s+/,$list);
                print "$id\n";
                $cmd = `dfpm dataset resume $id`;
                print "$cmd\n";
}
 
Now you could merge both scripts and add an argument option, so that passing suspend or resume executes the corresponding command.
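Merged, the wrapper might look like the following minimal Python sketch. The helper names and the sample listing are hypothetical; on a real DFPM host each built command would be handed to the shell instead of printed.

```python
import sys

def dataset_ids(listing):
    """Pull the first column (the dataset ID) out of `dfpm dataset list` output."""
    return [line.split()[0] for line in listing.splitlines() if line.strip()]

def build_commands(action, listing):
    """Build one `dfpm dataset suspend|resume <id>` command per dataset."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return ["dfpm dataset %s %s" % (action, i) for i in dataset_ids(listing)]

if __name__ == "__main__":
    action = sys.argv[1] if len(sys.argv) > 1 else "suspend"
    # Hypothetical listing; on the DFPM host this would come from `dfpm dataset list`.
    sample = "101   dataset_alpha\n102   dataset_beta"
    for cmd in build_commands(action, sample):
        print(cmd)
```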

Saturday, June 23, 2012

HDS Device Manager HiRDB Cluster Setup Error

I was recently setting up a Hitachi Device Manager and Tuning Manager cluster on Windows 2008.  During the configuration phase I ran into the following error when trying to set up the HiRDB database on the shared cluster disk.
 
HOSTNAME c:\Program Files (x86)\HiCommand\Base\bin\hcmdsdbclustersetup /createcluster /databasepath D:\database /exportpath E:\empty2 /auto
 
KAPM06577-E An attempt to acquire a value has failed.
 
This error relates to the C:\Program Files (x86)\HiCommand\Base\conf\cluster.conf file.  The hcmdsdbclustersetup command references this file, which you create before you run the cluster setup command.  Specifically, it looks for the mode line, which can be either online or standby.  If that line is absent, contains a typo, or the file is missing completely, the cluster setup command will fail.

In my case I had a typo in the file.  Once I corrected the typo the command completed successfully.
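For reference, a working cluster.conf contains a mode line like the one below. This is a hypothetical sketch based only on the behavior described above; the value must be exactly online or standby for the node's role.

```
mode=online
```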

Wednesday, May 09, 2012

Netapp Interconnect Troubleshooting


In a Netapp metro cluster configuration, there may sometimes be a need to reset the cluster interconnect and/or check its status for troubleshooting.  If the interconnect is experiencing issues, it could prevent the filer from failing over.   The following commands provide some insight on how to view the interconnect and reset it.

filer> priv set diag
Warning: These diagnostic commands are for use by NetApp
         personnel only.

filer*> ic ?
Usage:
        ic status
        ic stats performance
        ic stats error [ -v ]
        ic ood
        ic ood stats
        ic ood stats performance show [ -v ] [nv_vi_|vi_if_]
        ic ood status
        ic ood stats error show
        ic dumpq
        ic nvthruput [-bwmsrndkc:g:] [-c count]
        ic reset nic [0|1]
        ic reset link
        ic dump hw_stats
        ic ifthruput [chan]

filer*> ic status
        Link 0: up
        Link 1: up
        cfo_rv connection state :                       CONNECTED
        cfo_rv nic used :                               1
        cfo_rv2 connection state :                      CONNECTED
        cfo_rv2 nic used :                              1

To reset one of the interconnect links from the filer side:
   
filer*> ic reset nic 0

Hidden Options In Netapp


If you have ever wondered what all the hidden options were on a Netapp filer, you are not alone.  To view all the options both visible and hidden you only need to do the following:

filer>priv set advanced
filer*>registry walk options

A list of all the options both visible and hidden will be displayed.

Monday, March 26, 2012

Using Netapp DFM and Perl to Manage Netapp Filer /etc Files

Sometimes you have to manage the /etc/passwd and /etc/group files on your Netapp filer, and seemingly the only options are to use rdfile and wrfile, a text editor like vi via an NFS mount, or Notepad++ via a CIFS share.   None of these appeals to me when trying to create something that a less technical person could use to manipulate these files.

Below is a rough framework that could be used to build a full-fledged file manipulator for Netapp files under the /etc directory.  The example below looks at the /etc/passwd file, but it could be expanded to manipulate any file on the filer through DFM.   Further, you could use Win32::GUI or Perl/Tk to provide a GUI for the script instead of running it from the command line.

The breakdown of the script is as follows:


Standard Perl interpreter line.  In this example we are on Windows.

     #!/Perl64/bin/perl

This section of the script is a basic variable assignment of what I want my new line to be in the /etc/passwd file.  However you could have an input prompt here and/or have it read from a file.
     $newentry = "Your passwd entry\n";

This line grabs the existing /etc/passwd file and loads it into a perl array called rpasswd.  It is using the DFM command set to run rdfile on the filer.

     chomp (@rpasswd = `dfm run cmd -t 120 faseslab1 rdfile /etc/passwd`);

This section cleans up the rpasswd values, keeping only the lines that match Stdout in the DFM output and pushing them into a second array called passwd.

    foreach $line (@rpasswd) {
                     if ($line =~ /^Stdout:\s+/) {
                                     $line =~ s/^Stdout:\s+//g;
                                     push(@passwd,$line);
                     }
     }

This line  places the new entry into the passwd array.

     push (@passwd,$newentry);

This line backs up the existing passwd file using DFM and mv command.

     $result = `dfm run cmd -t 120 faseslab1 mv /etc/passwd /etc/passwd.bak`;

This loop writes out the new passwd file, appending each line of the passwd array via DFM and wrfile.

     foreach $line (@passwd) {
                     $result = `dfm run cmd -t 120 faseslab1 wrfile -a /etc/passwd $line`;
     }
     exit;

 Again, this is a rough example, but it gives you the idea of what can be done using Perl and DFM.
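The "Stdout:" filtering is the step most worth getting right. Here is the same idea as a standalone Python sketch; the helper name and the sample DFM output are hypothetical.

```python
def stdout_lines(dfm_output):
    """Keep only the payload of lines that DFM prefixes with 'Stdout:'."""
    lines = []
    for line in dfm_output.splitlines():
        if line.startswith("Stdout:"):
            lines.append(line[len("Stdout:"):].lstrip())
    return lines

# Hypothetical output from `dfm run cmd -t 120 filer rdfile /etc/passwd`.
sample = "Stdout: root::0:1::/:\nStderr:\nStdout: pcuser::65534:65534::/:"
print(stdout_lines(sample))
```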

Monday, August 15, 2011

Determine Core Count on Solaris 10 Hosts

This question comes up time and time again: how does one get the correct core count on a Solaris 10 host?  Below is the one-line answer:

echo "`hostname` has `kstat cpu_info |grep core_id|uniq|wc -l` cores"
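Note that uniq only collapses adjacent duplicates, so the one-liner relies on kstat listing each core's virtual CPUs together. The same count can be sketched in Python using a set instead; the sample kstat output below is hypothetical.

```python
def core_count(kstat_output):
    """Count distinct core_id values in `kstat cpu_info` output."""
    cores = set()
    for line in kstat_output.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0] == "core_id":
            cores.add(fields[1])
    return len(cores)

# Hypothetical output: four virtual CPUs spread over two cores.
sample = "core_id 0\nchip_id 0\ncore_id 0\ncore_id 1\ncore_id 1"
print(core_count(sample))
```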

Thursday, August 04, 2011

Solaris 10 & What Process is Using Port

The other day I had a request to find out what process was using a specific port on a Solaris 10 server.  I came up with this little gem to do the work and provide the PID of the process using the port.

get_port.sh

#!/bin/bash
if [ $# -lt 1 ]
then
        echo "Usage: ./get_port.sh port#"
        exit
fi

echo "Searching for PID using port... "
for i in `ls /proc`
do
        pfiles $i | grep AF_INET | grep $1
        if [ $? -eq 0 ]
        then
                echo The port is in use by PID: $i
        fi
done

Tuesday, July 26, 2011

Sun Cluster 3.2 & SCSI Reservation Issues



If you have worked with luns and Sun Cluster 3.2, you may have discovered that if you ever want to remove a lun from a system, it may not be possible because of the scsi3 reservation that Sun Cluster places on the disks.  The example scenario below walks you through how to overcome this issue and proceed as though Sun Cluster is not even installed.

Example:  We had a 100GB lun off of a Hitachi disk array that we were using in a metaset controlled by Sun Cluster. We had removed the resource from the Sun Cluster configuration and removed the device with cfgadm/devfsadm; however, when the storage admin attempted to remove the lun id from the Hitachi array zone, the Hitachi array indicated the lun was still in use.  From the Solaris server side it did not appear to be in use, but Sun Cluster had set scsi3 reservations on the disk.

Clearing the Sun Cluster scsi reservation steps:

1) Determine what DID device the lun is mapped to using /usr/cluster/bin/scdidadm -L
2) Disable failfast on the DID device using /usr/cluster/lib/sc/scsi -c disfailfast -d /dev/did/rdsk/DID
3) Release the DID device using  /usr/cluster/lib/sc/scsi -c release -d /dev/did/rdsk/DID
4) Scrub the reserve keys from the DID device using  /usr/cluster/lib/sc/scsi -c scrub -d /dev/did/rdsk/DID
5) Confirm reserve keys are removed using /usr/cluster/lib/sc/scsi -c inkeys -d /dev/did/rdsk/DID
6) Remove lun from zone on machine or whatever procedure you were trying to complete.
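Steps 2 through 5 above can be sketched as a small generator of the scsi invocations. This is a hypothetical helper, with d4 standing in for whatever DID device scdidadm reported.

```python
def clear_reservation_cmds(did):
    """Build the Sun Cluster scsi commands for steps 2-5 against one DID device."""
    dev = "/dev/did/rdsk/%s" % did
    scsi = "/usr/cluster/lib/sc/scsi"
    ops = ("disfailfast", "release", "scrub", "inkeys")
    return ["%s -c %s -d %s" % (scsi, op, dev) for op in ops]

for cmd in clear_reservation_cmds("d4"):
    print(cmd)
```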

Configuring Persistent Bindings on Solaris 10

If you have tape devices attached to your Solaris 10 host and often find that after a reboot the tape devices are no longer in the same order they were before, you can use the following Perl script to configure the /etc/devlink.tab file so the tape device names persist.  The script is below:


#!/usr/bin/perl
###############################################################
# This script maps fiber-attached tape drives to persistently #
# bind to the same device names across reboots.               #
# (C) 2011 Benjamin Schmaus                                   #
###############################################################
use strict;
my($junk,$path,$devices,$dev,$file);
my(@devices,@file);
my $date = `date +%m%d%Y`;
$file = `/usr/bin/cp /etc/devlink.tab /etc/devlink.tab.$date`;
@file = `cat /etc/devlink.tab`;
@file = grep !/type=ddi_byte:tape/, @file;
open (FILE,">/etc/devlink.tab.new");
print FILE @file;
close (FILE);
@devices = `ls -l /dev/rmt/*cbn|awk {'print \$9 \$11'}`;
open (FILE,">>/etc/devlink.tab.new");
foreach $devices (@devices) {
                chomp($devices);
                ($dev,$path) = split(/\.\.\/\.\./,$devices);
                $dev =~ s/cbn//g;
                $dev =~ s/\/dev\/rmt\///g;
                $path =~ s/:cbn//g;
                ($junk,$path) = split(/st\@/,$path);
                print FILE "type=ddi_byte:tape;addr=$path;\trmt/$dev\\M0\n";
}
close (FILE);
$file = `/usr/bin/mv /etc/devlink.tab.new /etc/devlink.tab`;      
exit;
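For reference, each entry the script appends to /etc/devlink.tab has the shape below. The addr value here is hypothetical; it is whatever followed st@ in the /dev/rmt symlink target, and the separator before rmt/ must be a tab.

```
type=ddi_byte:tape;addr=w500104f000abc123,0;	rmt/0\M0
```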

Saturday, April 16, 2011

Comparing DAS, NAS, iSCSI, SAN


Purpose:

The purpose of this document is to briefly explain the different types of storage options available and their advantages and disadvantages.

Storage Use Considerations Factors:

  • Available budget
  • Data security requirements
  • Network infrastructure
  • Data availability requirements, etc.

Storage Types:

  • Direct Attached Storage (DAS),
  • Network Attached Storage (NAS),
  • Storage Area Networks (SAN).

Direct Attached Storage:

Direct Attached Storage is a system of hard drives addressed directly via system buses within the computer (IDE, SCSI); the network interface is managed by the operating system. As these buses can only bridge short distances within the decimeter range, DAS solutions are limited to the respective computer casing. Depending on the bus type, DAS systems are also restricted to a relatively small number of drives - Wide-SCSI achieves a maximum of 16 directly addressable drives. Due to these limitations and the need for more flexible storage, the importance of DAS is declining. Although DAS capacity in terms of terabytes is still growing by 28% annually, the need for storage is increasingly being covered by networked storage like NAS and iSCSI systems.

Network Attached Storage:

NAS systems are generally computing-storage devices that can be accessed over a computer network (usually TCP/IP), rather than directly being connected to the computer (via a computer bus such as SCSI). This enables multiple computers to share the same storage space at once, which minimizes overhead by centrally managing hard disks. NAS devices become logical file system storage for a local area network. NAS was developed to address problems with direct attached storage, which included the effort required to administer and maintain “server farms”, and the lack of scalability, reliability, availability, and performance. They can deliver significant ease of use, provide heterogeneous data sharing and enable organizations to automate and simplify their data management.



NAS Application Uses (Low Performance):

  • File/Print server
  • Application specific server
  • Video Imaging
  • Graphical image store
  • Centralized heterogeneous file sharing
  • File system mirroring
  • Snap shot critical data
  • Replacement of traditional backup methods
  • Medical imaging
  • CAD/CAM
  • Portable centralized storage for offsite projects
  • Onsite repository for backup data

Advantages of NAS:

  • Heterogeneous OS support. Users running different types of machines (PC, Apple iMac, etc.) and running different types of operating systems (Windows, Unix, Mac OS, etc.) can share files.
  • Easy to install and manage. NAS appliances are “plug-and-play” meaning that very little installation and configuration is required beyond connecting them to the LAN.
  • NAS appliances can be administrated remotely, i.e. from other locations.
  • Less administration overhead than that required for a Unix or Windows file server.
  • Leverages existing network architecture since NAS are on LANs.
  • NAS server OSs are smaller, faster, and optimized for the specialized task of file serving and are therefore undemanding in terms of processing power.
  • A NAS appliance is a standalone file server and can free up other servers to run applications. Compared to iSCSI an additional host server is not necessary.
  • Compared to iSCSI, NAS appliances already include integrated mechanisms for backup, data synchronization and data replication.

Disadvantages of NAS:

  • Heavy use of NAS will clog up the shared LAN negatively affecting the users on the LAN. Therefore NAS is not suitable for data transfer intensive applications.
  • Somewhat inefficient since data transfer rides on top of standard TCP/IP protocol.
  • Cannot offer any storage service guarantees for mission critical operations since NAS operates in a shared environment.
  • NAS is shared storage. As with other shared storage, system administrators must enforce quotas without which a few users may hog all the storage at the expense of other users.
  • NAS is less flexible than a traditional server.
  • Most database systems such as Oracle or Microsoft Exchange are block-based and are therefore incompatible with file-based NAS servers (except for SQL).

Storage Area Network:

Storage Area Networks (SAN), which also include iSCSI, are distinguished from other forms of network storage by using a block based protocol and generally run over an independent, specialized storage network. Data traffic on these networks is very similar to those used for internal disk drives, like ATA and SCSI. With the exception of SAN file systems and clustered computing, SAN storage is still a one-to-one relationship. That is, each device (or Logical Unit Number (LUN)) on the SAN is “owned” by a single computer (or initiator). SANs tend to increase storage capacity utilization, since multiple servers can share the same growth reserve. Other benefits include the ability to allow servers to boot from the SAN itself. This allows for a quick and easy replacement of faulty servers since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server.


iSCSI/SAN Application Uses (High Performance):

  • Offer power users disk space on demand
  • Databases (Oracle, MS-SQL, MySQL)
  • Video Imaging
  • Graphical image store
  • File system mirroring
  • Snap shot critical data
  • Replacement of traditional backup methods
  • Medical imaging
  • CAD/CAM
  • Onsite repository for backup data
 
Advantages of iSCSI:

  • Ease of scaling disk storage. With iSCSI the disks are remote from the server, so adding a new disk just requires the use of a disk manager; if replacing the whole server, the data is simply re-mapped to the new server using iSCSI. With iSCSI you can easily create huge storage pools with volumes in the range of several tera- or petabytes.
  • In comparison to NAS, which provides a file-level interface, iSCSI provides a block-level interface and is therefore compatible with database applications such as Oracle or Microsoft Exchange that also use a block-level interface. It also leverages existing network architecture, since iSCSI runs over the LAN.
  • iSCSI storage appliances can be seamlessly integrated into existing SAN environments, since it also runs on block level storage.
  • iSCSI can provide significant benefits for providing failover for high availability configurations. iSCSI allows IP based replication, mirroring and clustering of data and offers integrated MPIO (Multi-Path I/O).
  • iSCSI can also be configured as a particularly flexible DAS system – the local SCSI bus is, so to speak, extended over the network.
  • As iSCSI is an underlying technology to the OS and uses the native file system of the applications, it is fully compatible with all file systems.



Disadvantages of iSCSI:

  • The demands of accommodating SCSI commands and SCSI data packets in TCP/IP packets require extensive hardware resources: CPU performance should be at least that of a 3 GHz Pentium processor, Gigabit Ethernet (GbE) should accordingly be used as a network interface, and the RAM requirement is also significant.
  • For sharing iSCSI targets with multiple initiators, additional server or specific (client) software for shared data access is required. Known providers of SAN data sharing software are Adic, Sanbolic, IBM, Polyserve, Dataplow and SGI.
  • As iSCSI is an underlying technology to the OS and application, anything that an organization currently has can be used. On the other hand this means that extra licenses for OS and software applications might be needed.
  • Compared to NAS, an iSCSI target is not a standalone device, and an additional host server is necessary.
  • Sharing a centralized storage pool disk among heterogeneous OSes requires additional sharing software.
  • In iSCSI appliances, mechanisms for backup, data synchronization and data replication are not integrated and must be configured. Compared to NAS, iSCSI behaves like a local hard drive on the network.
 
Advantages of SAN:

  • A SAN will have all the advantages of iSCSI.
  • A SAN has higher throughput since it runs over dedicated fibre channel topology and not the LAN.
  • A SAN will reduce the overhead that iSCSI places on system resources.

Disadvantages of SAN:

  • Implementing a SAN infrastructure will cost more than NAS or iSCSI due to the additional equipment needed to build out the fibre channel topology.

Summary:

  • Direct Attached Storage will probably not allow us to feasibly grow our disk capacity in the future.
  • Network Attached Storage (NAS) is the obvious choice for a storage solution wherever the main focus is on storing and archiving files and shared access to these over a network – even from different client operating systems. Small and medium-sized businesses, typing pools, legal or agency offices, and even end users with large amounts of multimedia files will find an affordable storage solution for their needs in NAS.
  • For storing database systems on a network - other than SQL-based database systems - NAS is not a feasible solution. For requirements of this type the industry has developed Storage Area Network (SAN) technology, which can often be implemented using iSCSI components. An IP-based SAN allows administrators to use their familiar management tools and security mechanisms and rely on their existing know-how. However, iSCSI only makes sense in connection with a fast LAN infrastructure: at a throughput of approximately 120 MByte/s, the performance of 1 Gbit Ethernet will be sufficient for database applications for approximately 100 users (data volume: approx. 15 MByte/s). Only high-end storage systems will require a 10 GbE infrastructure. iSCSI remains somewhat inefficient, since data transfer rides on top of the standard TCP/IP protocol; a fibre channel SAN, in contrast, uses protocols designed especially for data transfer (though the advantage disappears if a server on the LAN is used to provide a file interface to the SAN).

Mapping Global Zone Name Inside Solaris 10 Zone

Log into global zone where local zone(s) reside.

Using zonecfg, configure the following for each zone:
# zonecfg -z (zone)
add fs
set type=lofs
set options=ro
set special=/etc/nodename
set dir=/etc/globalname
end
verify
commit
exit

Create mount point within local zone directory structure:
# touch /zones/(zone)/root/etc/globalname

Mount lofs file system manually:
# mount -F lofs -o ro /etc/nodename /zones/(zone)/root/etc/globalname (or path where the root of the zone resides)

Confirm local zone can access file:
# zlogin (zone) cat /etc/globalname

Change /tmp size on Solaris 10 Zone


This post describes the steps to change the /tmp size on an existing Solaris 10 zone.

First log into global zone where local zone(s) reside using zlogin:

# zlogin (zone)
As root, edit /etc/vfstab:
# vi /etc/vfstab
Find the line in vfstab for the /tmp filesystem:
swap - /tmp tmpfs - yes size=512mb (this could be any value, 512 is example)
Change the value of size=512mb to the requested value (in MB):
swap - /tmp tmpfs - yes size=2048mb
Save the vfstab and exit back to the global zone.

To make the changes take effect the zone must be stopped and booted:

# zoneadm -z (zone) halt
# zoneadm -z (zone) boot

Log back into the local zone and confirm the changes by reviewing df -h output:
# zlogin (zone)
# df -h | grep /tmp
swap                   2.0G     0K   2.0G     0%    /tmp

Sunday, October 17, 2010

Chinook CT80c for Apple IIc

I have seen a few discussions on the internet about the elusive Chinook CT80c for the Apple IIc, but in all those discussions there never seem to be any photographs of the device.   I figured now was the time to post some shots of the device that finally provided an external hard drive for the closed-system Apple IIc.

The Chinook CT80c, like its smaller cousins the CT20c and CT40c, was an external hard drive that connected to an Apple IIc or IIc+ computer using Apple's Smartport protocol.  The drive could be daisy-chained along with 3.5" UniDisk drives and 5.25" drives, albeit the 5.25" drives needed to be last in the chain.

This is an external view of the CT80c.  The case was a sturdy all-aluminum shell.  There were two LEDs, one for power and one for hard drive activity.  My hard drive activity light burned out, so I replaced it with a yellow LED.
This is the back side of the CT80c.  I originally had serial number 0100102, but that drive was DOA.  Chinook gladly sent me another drive before I even returned the first.
This is the hard drive side of the inside of the CT80c.   They used Conner drives, and this one was a CP3100, which is actually a 104MB drive, not an 80MB drive.   So I actually ended up with 20MB more space once I partitioned it.
This is the circuit board side of the CT80c.  Notice that it had a 6502 processor, a Hyundai 8K x 8-bit CMOS SRAM (120ns), and a Zilog Z0538010PSC SCSI processor.
Here is a closer shot of the circuit board.

Monday, February 22, 2010

Expire List of Tapes in Netbackup

When working in a large Netbackup environment, there often comes a time when you need to expire a large amount of tapes all at once.

This was exactly the scenario I faced when a company I consulted at changed their retention periods.  The change of retention periods from 1 year to 1 month meant they wanted to free up all the tapes containing images that were more than a month old.

The solution to the problem was a script that basically does the following:

1) Reads in a list of media IDs from a file.
2) Determines which media server each media ID is assigned to.
3) Expires the media ID from the Netbackup catalog.

The script is here:

#!/usr/bin/perl
$master = "hostname of master server";
open DATA, "/home/schmaubj/media.list" or die "Cannot open media list: $!";
while (<DATA>) {
 $mediaid = $_;
 chomp($mediaid);
 open DATA2, "/usr/openv/netbackup/bin/admincmd/bpmedialist -U -m $mediaid|";
 while (<DATA2>) {
  $line = $_;
  chomp($line);
  if ($line =~ /Server Host/) {
   ($junk,$mhost) = split(/=/,$line);
   chomp($mhost);
   $mhost =~ s/ *$//;
   $mhost =~ s/^ *//;
  }
 }
 close (DATA2);
 print "Media ID: $mediaid\n";
 print "Media Server Host: $mhost\n";
 print "Expiring now...\n";
 $expire = `/usr/openv/netbackup/bin/admincmd/bpexpdate -force -d 0 -m $mediaid -host $mhost -M $master`;
}
close (DATA);

Sunday, February 21, 2010

Netbackup Auto Discovery of VM's

When I first started working with Netbackup and VMWare VCB backups, the biggest complaint I had was that Netbackup could not auto discover new VMs and automatically insert them into a given policy. Along with that, I often found that if I ran more than one VCB backup against my Netbackup proxy server, and two or more of those VMs resided on the same VMWare datastore, I would often get a snapshot error as the jobs competed for a snapshot on the datastore. These issues made Netbackup VCB backups problematic and management intensive for large environments of 50+ VMs.

Pondering this issue, I came up with an elegant yet simple solution: create multiple policies, one for each datastore in my VMWare cluster. Each policy would be configured to run one client at a time, and all the policies would run at the same time. You would still end up with multiple VCB backup jobs running against the proxy server, but no two concurrent jobs would touch the same datastore, since each policy is datastore specific. This eliminated the snapshot errors and also allowed the policies to be auto-updated.

As mentioned, I was able to have my policies auto-updated with new VM clients without any intervention. This was made possible by writing a script that uses the VMWare Perl APIs and makes calls into both VMWare and Netbackup. Here is how the setup works:

1) Create a Netbackup policy for each datastore in your VMWare environment. Make sure the policies all have a naming convention. Example: NB_VCB_datastore1, NB_VCB_datastore2, NB_VCB_datastore3, etc...
2) Use the script below and run from the Unix/Linux master server or any Unix/Linux server that has the Netbackup admincmd directories installed and is allowed access to the Netbackup master server. Make sure you have the VMWare Perl API libraries installed with Perl on the master server.

Usage: autovcb -v (vc server) -u (username) -p (password) -d (datacenter in vc) -os (Windows|Solaris|Linux|Suse|Redhat) -pp (netbackup policy prefix) -m (master server) -dsp (datastore name|ALL) [-pre (datastore match prefix)] [-update]


Note: The Netbackup policy prefix is the naming convention value you used before the datastore in the policy name. In my example it is "NB_VCB_". Also, be aware that forward and reverse DNS must be functional on the VM hosts to ensure proper inclusion in the policy.

The script allows you to match only some datastores or all datastores. You can do a full discovery and populate, or just an update of the policies. You can also choose to add only clients of a specific OS.

This script was tested in a Netbackup 6.5.4 and VMWare ESX 3.5 environment.

The script is here:

#!/usr/bin/perl
#################################################################################
# Script that autodiscovers VM's and places the client name into preexisting    #
# policies based on the datastore names discovered.                             #
# (C) Benjamin Schmaus May, 2009                                                #
#################################################################################
use strict;
use warnings;
use Socket;
use VMware::VIRuntime;
use VMware::VILib;
use VMware::VIM25Stub;
use Getopt::Long;

### Main Logic ###
my $bpplclients = '/usr/openv/netbackup/bin/admincmd/bpplclients';
my ($help,$prefix,$vcserver,$username,$password,$dc,$os,$polpr,$datastorepol,$update,$master);
my ($datastorepoltmp,$listds,$esxhost,$vmname,$adhostname,$ipaddress,$toolstat,$guestos,$datastore,$hostname,$junk,$junk2,$junk3,$client,$datastorefix,$polname,$gueststate);
my @dslist = ();
options();
print "update is set to: $update\n";
my $policy = "$polpr$datastorepol";
my $url = "https://$vcserver/sdk/vimService";
Vim::login(service_url => $url, user_name => $username, password => $password);
my $datacenter_views = Vim::find_entity_views(view_type => 'Datacenter',filter => { name => $dc });
my $dcds = Vim::find_entity_view(view_type => "Datacenter", filter => {'name' => $dc} );
my $ds = $dcds->datastore;

if ($datastorepol eq "ALL") {
 getlistds();
 print "\n";
}
if ($update eq "0") {
 polrmclients();
 print "\n";
}

if ($datastorepol eq "ALL") {
 print "Discovering new clients...\n";
 # go through all policies and add new clients for all datastores
 foreach (@dslist) {
  $datastorepoltmp = $_; 
  print "\tDatastore: $datastorepoltmp\n";
  print "\tPolicy: $polpr$datastorepoltmp\n";
  polgetclients();
  print "\t\n";
 }
} else {
 print "Discovering new clients...\n";
 # go through all policies and add new clients for given datastore
 $datastorepoltmp = $datastorepol;
 print "\tCurrent Datastore: $datastorepoltmp\n";
 print "\tCurrent Policy: $polpr$datastorepoltmp\n";
 polgetclients();
 print "\t\n";
}
print "\n\n";
Vim::logout();
exit;


sub polgetclients {
 foreach (@$datacenter_views) {
  my $datacenter = $_->name;
  my $host_views = Vim::find_entity_views(view_type => 'HostSystem',begin_entity => $_ );
  foreach (@$host_views) {
   $esxhost = $_->name;
   #print "\tESX Host Server: $esxhost\n";
   my $vm_view = Vim::find_entity_views(view_type => 'VirtualMachine',begin_entity => $_ , filter => { 'guest.guestState' => 'running' });
   foreach (@$vm_view) {
    $vmname = $_->name;
    $adhostname = $_->summary->guest->hostName;
    $ipaddress = $_->summary->guest->ipAddress;
    $toolstat = $_->summary->guest->toolsStatus->val;
    $guestos = $_->summary->guest->guestFullName;
    $datastore = $_->summary->config->vmPathName;
    $gueststate = $_->guest->guestState;
    ($datastorefix,$junk) = split(/\] /,$datastore);
    $datastorefix =~ s/\[//g;
    $datastorefix =~ s/ /_/g;
    # Reset each pass so a failed lookup does not reuse the previous VM's hostname.
    $hostname = '---';
    if ($ipaddress) {
     $hostname = gethostbyaddr(inet_aton($ipaddress), AF_INET);
     if (!$hostname) { $hostname = '---'; }
    }
    $policy = "$polpr$datastorefix";
    if ($datastorefix eq $datastorepoltmp) {
     if ($toolstat =~ /toolsOk/) {
      if ($guestos =~ /$os/) {
       if ($hostname ne "---" && $hostname ne "localhost") {
        my $addtopolicy = `$bpplclients $policy -M $master -add $hostname VMware Virtual_Machine > /dev/null 2>&1`;
        if ($? eq "0") {
         print "\t\tAdded $hostname\n";
        } else {
         print "\t\tSkipped $hostname: Already in policy\n";
        }
       } else {
        print "\t\tSkipped $vmname: Reverse DNS does not exist\n";
       }
      } else {
       print "\t\tSkipped $vmname: OS does not match guestos\n";
      }
     } else {
      print "\t\tSkipped $vmname: Tools not okay - $toolstat\n";
     }
    }
   }
  }
 }
}

sub polrmclients {
 if ($datastorepol eq "ALL") {
  foreach (@dslist) {
   $policy = "$polpr$_";
   print "Removing clients from $policy...\n";
   my @polrm = `$bpplclients $policy -noheader`;
   foreach (@polrm) {
    ($junk,$junk2,$client) = split(/ /,$_);
    chomp($client);
    my $remove = `$bpplclients $policy -M $master -delete $client > /dev/null 2>&1`;
    if ($? eq "0") {
     print "\t\tRemoved $client\n";
    } else {
     print "\t\tSkipped $client: Not in policy\n";
    }
   }
  }
 } else { 
  $policy = "$polpr$datastorepol";
  my @polrm = `$bpplclients $policy -noheader`;
  print "Removing clients from $policy...\n";
  foreach (@polrm) {
   ($junk,$junk2,$client) = split(/ /,$_);
   chomp($client);
   my $remove = `$bpplclients $policy -M $master -delete $client > /dev/null 2>&1`;
    if ($? eq "0") {
     print "\t\tRemoved $client\n";
    } else {
     print "\t\tSkipped $client: Not in policy\n";
    }
  }
 }
}

sub getlistds {
 print "Getting list of datastores...\n";
 my $counter = 0;
 foreach (@$ds) {
  my $ds_ref = Vim::get_view(mo_ref => $_);
  my $ds = $ds_ref->info->name;
  # Match every datastore when no prefix was given ("---"),
  # otherwise only datastores whose name matches the prefix.
  if (($prefix eq "---") || ($ds =~ /$prefix/)) {
   $dslist[$counter] = $ds;
   $dslist[$counter] =~ s/ /_/g;
   $dslist[$counter] =~ s/\(|\)//g;
   print "\tFound datastore: $dslist[$counter]\n";
   ++$counter;
  }
 }
}

sub options {
        $vcserver="";$username="";$password="";$dc="";$os="";$polpr="";$master="";$update="0";$prefix="";$datastorepol="";
        GetOptions ('h|help'=>\$help,'v|vcserver=s'=>\$vcserver,'u|username=s'=>\$username,'p|password=s'=>\$password,'d|datacenter=s'=>\$dc,'os=s'=>\$os,'pp|polpr=s'=>\$polpr,'m|master=s'=>\$master,'dsp=s'=>\$datastorepol,'pre|prefix=s'=>\$prefix,'update'=>\$update);
        if ($help) {
                print "Usage: autovcb -v <vcserver> -u <username> -p <password> -d <datacenter> -os <os> -pp <policyprefix> -m <master> -dsp <(datastore name|ALL)> [-pre <prefix>] [-update]\n";
                exit;
        }
        if (($vcserver eq "") || ($username eq "") || ($password eq "") || ($dc eq "") || ($os eq "") || ($polpr eq "") || ($master eq "") || ($datastorepol eq "")) {
  print "Missing required parameters - Type -help for options\n";
                exit;
        }
 if (($prefix) && ($datastorepol ne "ALL")) {
  print "Cannot use prefix option when datastore is not set to ALL - Type -help for options\n";
  exit;
 }

        if (($os ne "Windows") && ($os ne "Solaris") && ($os ne "Linux") && ($os ne "Suse") && ($os ne "Redhat")) {
  print "Incorrect OS specified - Type -help for options\n";
                exit;
        }
}
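The datastore names found by getlistds are normalized before they become policy-name suffixes: spaces become underscores and parentheses are stripped. A quick way to preview what suffix a given datastore name will produce (the datastore name here is made up):

```shell
# Spaces -> underscores, parentheses stripped, matching the two
# substitutions getlistds performs on each name:
echo "Prod Datastore (SAN01)" | sed -e 's/ /_/g' -e 's/[()]//g'
# -> Prod_Datastore_SAN01
```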

Tuesday, February 16, 2010

VMWare Tool Check

If you have ever worked in a large VMWare environment, you know what a pain it can be to determine whether the VMWare tools are up to date. That is why I wrote the following Perl script. The script enumerates through the VMWare hosts and reports whether the VMWare tools on each guest are up to date.


The link to the script is here:

http://home.comcast.net/~schmaustech/source/vmtoolchk.zip

Perl & VMWare Snapshots & Netbackup

In VMWare, Virtual Center gives you the ability to take snapshots. These can come in handy before you install a new application, perform an OS upgrade or make any other significant change to the VM.

Netbackup also creates a snapshot when you do a VCB backup or incremental backup against a VM. Sometimes these snapshots are left behind when a backup of the VM fails. The result is that you have to go into Virtual Center, find the VM and remove the snapshot. This can be time consuming, which led me to develop a tool in Perl as a solution.

The Perl solution I came up with was inspired by the sample snapshot script that ships with the VMWare Perl API SDK. That script, however, was not specific to my needs, so I wrote one that deletes snapshots based on a pattern match against the snapshot's name.

Basically the script connects to a Virtual Center server and enumerates through the hosts associated with that Virtual Center. While enumerating through the hosts, it checks for snapshots and compares the pattern entered against the name of each snapshot. If they match, it removes the snapshot. Netbackup VCB backup snapshots always have names starting with "VCB_", so if you use "VCB_" as your pattern you can remove all leftover backup snapshots on all your hosts. You can extend this to any snapshots if you use a naming convention when you create them.
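To make the selection rule concrete, here is a toy shell sketch of the same match: any snapshot name matching the pattern is flagged for removal, everything else is left alone (the snapshot names are made up):

```shell
# Same rule the script applies: pattern match on the snapshot name.
for snap in VCB_vm01 pre-upgrade VCB_vm02 weekly; do
    case "$snap" in
        VCB_*) echo "would remove: $snap" ;;
        *)     echo "keeping: $snap" ;;
    esac
done
```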

The script is here:

#!/usr/bin/perl
#################################################################################
# Script that removes VM snapshots whose names match a given pattern,           #
# enumerating every host in a Virtual Center datacenter                         #
# (C) Benjamin Schmaus May, 2009                                                #
#################################################################################
use strict;
use warnings;
use Socket;
use VMware::VIRuntime;
use VMware::VILib;
use VMware::VIM25Stub;
use Getopt::Long;
my ($help,$vcserver,$username,$password,$dc,$removesnapshot,$children,$snapshotname);
$children="0";
options();
my $url = "https://$vcserver/sdk/vimService";
Vim::login(service_url => $url, user_name => $username, password => $password);
my $datacenter_views = Vim::find_entity_views(view_type => 'Datacenter',filter => { name => $dc });
list_snapshot();
exit;

sub list_snapshot {
 my $datacenter_views = Vim::find_entity_views(view_type => 'Datacenter',filter => { name => $dc });
    foreach (@$datacenter_views) {
  my $datacenter = $_->name;
  my $host_views = Vim::find_entity_views(view_type => 'HostSystem',begin_entity => $_ );
  foreach (@$host_views) {
   my $esxhost = $_->name;
   my $vm_view = Vim::find_entity_views(view_type => 'VirtualMachine',begin_entity => $_ , filter => { 'guest.guestState' => 'running' });
   foreach (@$vm_view) {
          my $mor_host = $_->runtime->host;
          my $hostname = Vim::get_view(mo_ref => $mor_host)->name;
    my $ref = undef;
    my $nRefs = 0;
          my $count = 0;
          my $snapshots = $_->snapshot;
          if(defined $snapshots) {
             Util::trace(0,"\nSnapshots for Virtual Machine ".$_->name . " under host $hostname\n");
             printf "\n%-47s%-16s %s %s\n", "Name", "Date","State", "Quiesced";
             print_tree ($_->snapshot->currentSnapshot, " ", $_->snapshot->rootSnapshotList);
     if(defined $_->snapshot) {
                                         ($ref, $nRefs) = find_snapshot_name ($_->snapshot->rootSnapshotList, $snapshotname);
                                 }
     if ($snapshotname =~ /$removesnapshot/) {
      print "Snapshot name: $snapshotname matches for delete\n";

            if (defined $ref && $nRefs == 1) {
               my $snapshot = Vim::get_view (mo_ref =>$ref->snapshot);
               eval {
                   $snapshot->RemoveSnapshot (removeChildren => $children);
                    Util::trace(0, "\nRemove Snapshot ". $snapshotname . " For Virtual Machine ". $_->name . " under host $hostname" ." completed successfully\n");
       };
               if ($@) {
                   if (ref($@) eq 'SoapFault') {
                       if(ref($@->detail) eq 'InvalidState') {
                          Util::trace(0,"\nOperation cannot be performed in the current state of the virtual machine");
                       } elsif(ref($@->detail) eq 'HostNotConnected') {
                          Util::trace(0,"\nhost not connected.");
                       } else {
                          Util::trace(0, "\nFault: " . $@ . "\n\n");
                       }
                   } else {
                       Util::trace(0, "\nFault: " . $@ . "\n\n");
                   }
               }
            } else {
               if ($nRefs > 1) {
                   Util::trace(0,"\nMore than one snapshot exists with name" ." $snapshotname in Virtual Machine ". $_->name ." under host ". $hostname ."\n");
               }
               if($nRefs == 0 ) {
                   Util::trace(0,"\nSnapshot Not Found with name" ." $snapshotname in Virtual Machine ". $_->name ." under host ". $hostname ."\n");
               }
      }
     }
          #} else {
            # Util::trace(0,"\nNo Snapshot of Virtual Machine ".$_->name ." exists under host $hostname\n");
          }
   }
  }
 }
}

sub options {
        $vcserver="";$username="";$password="";$dc="";$removesnapshot="";
        GetOptions ('h|help'=>\$help,'v|vcserver=s'=>\$vcserver,'u|username=s'=>\$username,'p|password=s'=>\$password,'d|datacenter=s'=>\$dc,'sm|snapmatch=s'=>\$removesnapshot);
        if ($help) {
                print "Usage: snapper.pl -v <vcserver> -u <username> -p <password> -d <datacenter> -sm <snapshot pattern>\n";
                exit;
        }
        if (($vcserver eq "") || ($username eq "") || ($password eq "") || ($dc eq "") || ($removesnapshot eq "")) {
                print "Missing required parameters - Type -help for options\n";
                exit;
        }

}

sub print_tree {
 my ($ref, $str, $tree) = @_;
    my $head = " ";
    foreach my $node (@$tree) {
        # Mark the current snapshot in the tree listing.
        $head = ($ref->value eq $node->snapshot->value) ? "*" : " " if (defined $ref);
        my $quiesced = ($node->quiesced) ? "Y" : "N";
        $snapshotname = $node->name;
        printf "%s%-48.48s%16.16s %s %s\n", $head, $str.$node->name,
              $node->createTime, $node->state->val, $quiesced;
        print_tree ($ref, $str . " ", $node->childSnapshotList);
    }
    return;
}

sub find_snapshot_name {
 my ($tree, $name) = @_;
    my $ref = undef;
    my $count = 0;
    foreach my $node (@$tree) {
        if ($node->name eq $name) {
           $ref = $node;
           $count++;
        }
        my ($subRef, $subCount) = find_snapshot_name($node->childSnapshotList, $name);
        $count = $count + $subCount;
        $ref = $subRef if ($subCount);
    }
    return ($ref, $count);
}

Sunday, February 14, 2010

Netapp Aggregate/Volume Report

Getting information from a Netapp is easy with the web interface and the command line interface. However, sometimes I want information in an Excel format, and I want it delivered to me in an automated, easily readable fashion.

This was the case when I created the following Perl-based script, which gathers all the volume/aggregate usage information from a Netapp filer or group of Netapp filers and sends that information via SMTP in an easy-to-read Excel format.

The link to the script is here:

http://home.comcast.net/~schmaustech/source/netapp-aggr.zip

Netapp Coverage Report for Symantec Netbackup

Anyone who has used Symantec Netbackup should be familiar with the coverage report script provided in the goodies directory. The script provides the ability to see what filesystems are being backed up on the clients in your Netbackup policies. The problem with the script is that it relies on accessing clients that have the Netbackup client installed on them. This works great except when it comes to NDMP backup policies for Netapps.

The reality is that the supplied coverage report does not provide details on which volume paths are covered and which are missed when using Netbackup to back up a Netapp filer. However, after being approached to write a solution to the issue, I came up with a script that provides that missing coverage information.

The script I wrote is Perl based and at this point needs to be run from a Unix/Linux master server (it may work on Windows but was never tested). The script, when edited and provided with the proper parameters, will go out and gather the path information from one or more Netapp filers and then compare those hosts and paths to what you have in your Netbackup policies for NDMP backups. The results are then emailed off in an Excel formatted spreadsheet.
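At its core the reconcile step is a set comparison between the paths the filers report and the paths present in the NDMP policies. The same idea in miniature with comm(1), using made-up volume paths:

```shell
# Filer reports three volumes; the policies only cover two of them.
printf '%s\n' /vol/vol1 /vol/vol2 /vol/vol3 | sort > /tmp/filer.$$
printf '%s\n' /vol/vol1 /vol/vol3 | sort > /tmp/policy.$$
echo "UNCOVERED:"
comm -23 /tmp/filer.$$ /tmp/policy.$$   # lines only in the filer list
# -> /vol/vol2
rm -f /tmp/filer.$$ /tmp/policy.$$
```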

The script is here:

#!/usr/bin/perl
#########################################################
# Netapp Netbackup Coverage Report                      #
#########################################################
use strict;
use Net::SSH::Perl;
use Spreadsheet::WriteExcel;
my @hosts = ("netapp1","netapp2");
my (@output);my $hcount = 0;my $user="root";my $password="password";
my (@netpaths,@tmpnetpaths,$host,$tmparray,$output,$name,$path,$comment,$coverage);
my $subject = "Netapp_Backup_Coverage_Report";
my @emails = ('benjamin.schmaus@schmaustech.com');
my (%saw,$scon,$errors,$exit,$netpaths,@tmparray);
my ($include,$policyname,$junk,@ppaths,$ppaths,$loop);
my ($emails,$mailit,$message);
my $num="2";

### Setup Excel Worksheet ###
my $workbook  = Spreadsheet::WriteExcel->new('/tmp/ncoverage.xls');
my $worksheet = $workbook->add_worksheet();
my $format = $workbook->add_format();
my $header = $workbook->add_format();
create_excel();

foreach $host (@hosts) {
 print "Gathering paths from $host...\n";
 $scon = Net::SSH::Perl->new ("$host",protocol => 2);
 $scon->login("$user","$password");
 ($output[$hcount],$errors,$exit) = $scon->cmd("cifs shares;logout telnet");
 $hcount++;
}
foreach $output (@output) {
 @tmparray = split(/\n/,$output);
 foreach $tmparray (@tmparray) {
  if ($tmparray =~ /\/vol/) {
   ($name,$path,$comment) = split(' ',$tmparray);
   $path =~ tr/[A-Z]/[a-z]/;
   push(@tmpnetpaths,$path);
  }
 }
}
print "Sorting paths from Netapps...\n";
undef %saw;
@saw{@tmpnetpaths} = ();
@netpaths = sort keys %saw;
foreach $host (@hosts) {
 print "Gathering backup selections from Netbackup for $host...\n";
        open DATA, "/usr/openv/netbackup/bin/admincmd/bppllist -allpolicies -U -byclient $host|";
        while (<DATA>) {
                chomp();
  $_ =~ s/\s//g;
  if ($_ =~ /PolicyName/) {
                        ($junk,$policyname) = split(/:/);
  }
  if ($_ =~ /\/vol\//) {
                 $_ =~ s/\s//g;
   $_ =~ s/Include://g;
   $_ =~ s/\/*$//g;
   $_ =~ tr/[A-Z]/[a-z]/;
   push (@ppaths,"$policyname:$_");
  }
        }
        close (DATA);
}
#### Reconcile time ####
print "Reconcile paths...\n";
foreach $netpaths (@netpaths) {
 $path = $netpaths;
 $loop ="0";
 foreach $ppaths (@ppaths) { 
  ($policyname,$include) = split(/:/,$ppaths);
  if ($netpaths =~ /$include/) {
   $coverage="COVERED";
   cell();
   $loop ="1";
  }
 }
 if ($loop eq "0") {
  $coverage="UNCOVERED";
  $policyname="NONE";
  cell();
 }
}

$workbook->close();


### Mail Off Results ###
print "Mailing off results.\n";
mailit();
exit;

### Setup Excel Format Subroutine ###
sub create_excel {
        $format->set_bold();
        $format->set_size(16);
        $format->set_align('center');
        $header->set_bold();
        $header->set_align('center');
        $worksheet->set_column(0, 0, 40);
        $worksheet->set_column(1, 1, 20);
        $worksheet->set_column(2, 2, 20);
        $worksheet->write(1, 0,  'Netapp Path', $header);
        $worksheet->write(1, 1,  'Netbackup Policy', $header);
        $worksheet->write(1, 2,  'Status', $header);
        $worksheet->merge_range('A1:F1','Netapp Backup Coverage Report',$format);
}

### Mail Subroutine ###
sub mailit {
        $message = `echo "Netapp Backup Coverage Report">/tmp/ncr-body.txt`;
        $message = `echo "">>/tmp/ncr-body.txt`;
        $message = `/usr/bin/uuencode /tmp/ncoverage.xls ncoverage.xls > /tmp/ncr-attachment.txt`;
        $message = `cat /tmp/ncr-body.txt /tmp/ncr-attachment.txt > /tmp/ncr.txt`;
        foreach $emails (@emails) {
                $mailit = `/usr/bin/mailx -s $subject $emails < /tmp/ncr.txt`;
        }
}

sub cell {
 #print "NETPATH: $path\n";
 $worksheet->write($num,0,$path);
        $worksheet->write($num,1,$policyname);
        $worksheet->write($num,2,$coverage);
        $num = $num + 1;
}
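One idiom in the script worth calling out: the `undef %saw; @saw{@tmpnetpaths} = (); sort keys %saw` sequence is the classic Perl trick for sorting a list while dropping duplicates. The shell equivalent of that step is just `sort -u`:

```shell
# Duplicate share paths collapse into a sorted, unique list:
printf '%s\n' /vol/vol1 /vol/vol2 /vol/vol1 | sort -u
# -> /vol/vol1
#    /vol/vol2
```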

Saturday, November 29, 2008

Integrating SAMBA/WINBIND on AIX 4.3.3 with Microsoft Active Directory

Overview: This document is a road map on how you can integrate SAMBA with your Active Directory environment. This configuration will allow your Samba server to appear as a member of Active Directory. It will also allow your telnet sessions to use Active Directory for authentication.

AIX Setup:
Verify your system has all the BOS sub packages from the AIX install CDs.

Install rpm package manager (rpm.rte) with installp:

installp -qacXgd rpm.rte rpm.rte

Install the following rpms (http://www-1.ibm.com/servers/aix/products/aixos/linux/download.html)
If they are all in the same directory, you can do this by doing the following:

rpm -ivh --nodeps *.rpm

Packages Required:

autoconf-2.53-1.aix4.3.noarch.rpm
automake-1.5-1.aix4.3.noarch.rpm
bash-2.05a-1.aix4.3.ppc.rpm
bison-1.34-2.aix4.3.ppc.rpm
db-3.3.11-3.aix4.3.ppc.rpm
flex-2.5.4a-6.aix4.3.ppc.rpm
gawk-3.1.0-2.aix4.3.ppc.rpm
gettext-0.10.39-2.aix4.3.ppc.rpm
glib-1.2.10-2.aix4.3.ppc.rpm
glib-devel-1.2.10-2.aix4.3.ppc.rpm
glib2-2.2.1-3.aix4.3.ppc.rpm
glib2-devel-2.2.1-3.aix4.3.ppc.rpm
gzip-1.2.4a-7.aix4.3.ppc.rpm
libtool-1.4.2-1.aix4.3.ppc.rpm
m4-1.4-14.aix4.3.ppc.rpm
make-3.79.1-3.aix4.3.ppc.rpm
openldap-2.0.21-4.aix4.3.ppc.rpm
openldap-devel-2.0.21-4.aix4.3.ppc.rpm
pkgconfig-0.15.0-1.aix4.3.ppc.rpm
rpm-3.0.5-30.aix4.3.ppc.rpm
sed-3.02-8.aix4.3.ppc.rpm
tar-1.13-4.aix4.3.ppc.rpm

Update PATH and LD_LIBRARY_PATH:

PATH=/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/local/bin:/usr/local/sbin:/usr/local/samba/bin:/usr/local/samba/sbin
LD_LIBRARY_PATH=/usr/lib:/usr/local/lib:/lib

Download binutils and gcc binaries:

binutils.2.9.1.tar.gz (http://sunsite.lanet.lv/ftp/unix/aix-binaries/uclapub/binutils/RISC/4.2/exec/)

gcc.3.3.4.tar.Z (http://aixpdslib.seas.ucla.edu/packages/gcc.html)

Download source code for the following:

krb5-1.3.5.tar.gz (http://web.mit.edu/kerberos/www/dist/)
openldap-2.2.18.tar.gz (http://www.openldap.org/software/download/)
samba-3.0.8pre2.tar.gz (http://www.samba.org)

Install binutils:

gzip -d binutils.2.9.1.tar.gz
cp binutils.2.9.1.tar /
tar -xvf binutils.2.9.1.tar
rm /binutils.2.9.1.tar
**Note** Untar the binutils from the / directory so the files are placed into the proper locations.


Install gcc:

gzip -d gcc.3.3.4.tar.Z
cp gcc.3.3.4.tar /
tar -xvf gcc.3.3.4.tar
rm /gcc.3.3.4.tar
**Note** Untar the gcc package from the / directory so the files are placed into the proper locations.


Build and install Kerberos:

gzip -d krb5-1.3.5.tar.gz
tar -xvf krb5-1.3.5.tar
cd krb5-1.3.5
./configure --enable-dns --enable-dns-for-kdc --enable-dns-for-realm
make
make install

Build and install OpenLDAP:

gzip -d openldap-2.2.18.tar.gz
tar -xvf openldap-2.2.18.tar
cd openldap-2.2.18
./configure --disable-slurpd --disable-bdb --disable-slapd --without-threads
make
make install

Build and install Samba:

gzip -d samba-3.0.8pre2.tar.gz
tar -xvf samba-3.0.8pre2.tar
cd samba-3.0.8pre2
./configure --with-winbind --with-ldap --with-ads --with-krb5=/usr/local
make
make install

Configure Kerberos:

Edit /etc/krb5.conf to reflect the following (substitute DOMAIN.COM with your domain):

[logging]
default = FILE:/var/log/krb5/libs.log
kdc = FILE:/var/log/krb5/kdc.log
admin_server = FILE:/var/log/krb5/admin.log

[libdefaults]
ticket_lifetime = 24000
default_realm = DOMAIN.COM
forwardable = true
proxiable = true
dns_lookup_realm = false
dns_lookup_kdc = false

[realms]
DOMAIN.COM = {
default_domain = domain.com
kdc = dc.domain.com:88
admin_server = dc.domain.com:749
}

[domain_realm]
.domain.com = DOMAIN.COM
domain.com = DOMAIN.COM

[kdc]
profile = /var/kerberos/krb5kdc/kdc.conf

[pam]
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false


Configure Samba:

Edit /usr/local/samba/lib/smb.conf to reflect the following (substitute DOMAIN with your domain):
**Note** That the shares are examples and may be different.

[global]
workgroup = DOMAIN
netbios name = HOSTNAME
server string = HOSTNAME
security = ADS
realm = DOMAIN.COM
password server =
wins server =
client use spnego = yes
client signing = yes
encrypt passwords = yes
printcap name = cups
disable spoolss = Yes
show add printer wizard = No
idmap uid = 15000-20000
idmap gid = 15000-20000
winbind separator = +
winbind use default domain = Yes
winbind enum users = yes
winbind enum groups = yes
template homedir = /home/%U
template shell = /bin/bash
use sendfile = Yes
printing = cups
ldap suffix = "dc=DOMAIN, dc=com"
winbind cache time = 0
#Uncomment to allow these options
#log level = 8
#log file = /var/log/samba.log
#max log size = 5000000
#debug timestamp = yes
browseable = yes
obey pam restrictions = yes
auth methods = winbind

[homes]
comment = User Home
path = /home/%U
force group = %U
read only = No
browseable = No

[alpha]
comment = OSCAR Alpha Code (Read/Write)
path = /apps/oscar/alpha
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
read only = No
browseable = Yes

[beta]
comment = OSCAR Beta Code (Read Only)
path = /apps/oscar/beta
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
read only = Yes
browseable = Yes

[scripts]
comment = OSCAR Scripts (Read Only)
path = /apps/oscar/scripts
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
read only = Yes
browseable = Yes

[logs]
comment = OSCAR Logs (Read Only)
path = /apps/logs
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
force user = oscar
force group = dev
read only = Yes
browseable = Yes

[archive]
comment = OSCAR Archive (Read Only)
path = /apps/archive
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
force user = oscar
force group = dev
read only = Yes
browseable = Yes

[apps]
comment = OSCAR
path = /apps
valid users = @dev, @REDHAT
admin users = @dev, @REDHAT
read only = No
browseable = Yes

[public]
comment = test
path = /usr/local/source
read only = No
browseable = Yes

**Note** Do not start Samba yet!

Active Directory Integration:

Obtain a kerberos ticket from your AD server by issuing the command:

kinit Administrator

You will then be asked for a password. Put in the Administrator password for your Domain.

To verify the ticket was issued do the following:

klist

The results should appear as follows:

# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: Administrator@DOMAIN.COM

Valid starting Expires Service principal
11/03/04 14:26:23 11/04/04 00:26:22 krbtgt/DOMAIN.COM@DOMAIN.COM
renew until 11/04/04 14:26:23


Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached

Once you have obtained kerberos ticket you can join the computer to the domain:

net ads join

Now start the Samba and Winbind:

/usr/local/samba/sbin/smbd -D
/usr/local/samba/sbin/nmbd -D
/usr/local/samba/sbin/winbindd

Winbind and Active Directory Authentication:

First you will need to copy the WINBIND file from where it was created when you compiled Samba to /usr/lib/security:

cp /path/to/samba-3.0.8pre2/nsswitch/WINBIND /usr/lib/security

Next you will need to add a stanza to the file /usr/lib/security/methods.cfg:

WINBIND:
program = /usr/lib/security/WINBIND
options = authonly

Finally you will need to edit /etc/security/users and make sure under the default stanza that SYSTEM is set to WINBIND:

default:
admin = false
login = true
su = true
daemon = true
rlogin = true
sugroups = ALL
admgroups =
ttys = ALL
auth1 = SYSTEM
auth2 =
tpath = nosak
umask = 022
expires = 0
SYSTEM = "WINBIND"
logintimes =
pwdwarntime = 0
account_locked = false
loginretries = 0
histexpire = 0
histsize = 0
minage = 0
maxage = 0
maxexpired = -1
minalpha = 0
minother = 0
minlen = 0
mindiff = 0
maxrepeats = 8
dictionlist =
pwdchecks =

Test your authentication by issuing a telnet to the aix box and login using your Active Directory credentials.

Friday, August 01, 2008

Netbackup Schedules in 5.x or 6.x

Anyone who has managed Netbackup in a large environment knows how difficult it can be to schedule backup jobs. It is not the creation of the schedule that is the issue, but rather the allocation of resources and knowing when you should schedule a job to run. The issue becomes more cumbersome over time due to new policies being created and older jobs taking longer to run.

I faced this challenge myself and found that the built-in bpschedreq -predict and nbpemreq -predict functions were just not adequate. Not only were they cumbersome, but some of the time they did not even display jobs that I knew were going to run.

To resolve this issue, I wrote the nbpol script. The script lets you peer into a specific date and see which schedules will kick off and at what time. It can also print a histogram summarizing where your volume of jobs runs. The syntax is as follows:

Usage: nbpol -y (year) -m (month) -d (day) -t (am|pm|all) (-graph)


Below is the actual perl code:

#!/usr/bin/perl
#########################################################################################
# nbpol: Script to determine which policies will run on a given day when policies #
#  are calendar based.        #
# written: Benjamin Schmaus         #
# date: 070108          #
#########################################################################################
use DateTime;
use Getopt::Long;
use CGI;
use Time::Local;
use POSIX qw(strftime);
$first = "0";$last = "0";
$schd1 = "SCHED ";$schd2 = "SCHED";
$counter = "0";
$hour = "0";$minute = "0";$second = "0";
$arraycount = 0;
@gac = 0;
options();
@policies = `/usr/openv/netbackup/bin/admincmd/bppllist -l`;
datetime2();
printf "%-30s %-20s %-15s %-15s\n","Policy","Schedule","Start Time","Duration";
printf "%-30s %-20s %-15s %-15s\n","------","--------","----------","--------";
foreach $policies (@policies) {
 $counter = 0;
 chomp($policies);
 open DATA, "/usr/openv/netbackup/bin/admincmd/bpplsched $policies -l|";
  while (<DATA>) {
  $line = $_;
  chomp($line);
  if ($line =~ /$schd1/ && $first eq "0") { first(); }
  if ($line =~ /$schd2/ && $first eq "1") { checks(); }
 }
 close DATA;
 for ($out = 0; $out < $counter; $out++) {
  $policies =~ s/\s*$//g;
  if ($schedcaldayoweek[$out] =~ /$datum/) {
   parseit();
   if ($windowl > 0) {
    if (($time eq "am") && ($starttime < 12)) {
     printf "%-30s %-20s %-15s %-15s\n",$policies,$schedule2,$starttime,$windowl;
     $gac[int($starttime)] = $gac[int($starttime)] + 1;
    } elsif (($time eq "pm") && ($starttime > 11.99)) {
     printf "%-30s %-20s %-15s %-15s\n",$policies,$schedule2,$starttime,$windowl;
     $gac[int($starttime)] = $gac[int($starttime)] + 1;
    } elsif ($time eq "all") {
     printf "%-30s %-20s %-15s %-15s\n",$policies,$schedule2,$starttime,$windowl;
     $gac[int($starttime)] = $gac[int($starttime)] + 1;
    }
   }
  }
 }
}
if ($graph eq "1") {
 graphit();
}
exit; 

sub graphit {
 print "\n\n";
 print "Hr\tNumber of Jobs\n";
 print "--\t---------------\n";
 for ($loop = 0; $loop < 24; $loop++) {
  print "$loop\t";
  for ($loop2 = 0; $loop2 < $gac[$loop]; $loop2++) {
   print "*";
  }
  print "\n";
 }
}

sub options {
 $help="";$year="" ;$month="";$day="";$time="";$graph="";
 GetOptions ('h|help'=>\$help,'y|year=s'=>\$year,'m|month=s'=>\$month,'d|day=s'=>\$day,'t|time=s'=>\$time,'graph'=>\$graph);
 if ($help) {
  print "Usage: nbpol -y <year> -m <month> -d <day> -t <am|pm|all> [ -graph ]\n";
  exit;
 }
 if (($year eq "") || ($month eq "") || ($day eq "") || ($time eq "")) {
  print "Usage: nbpol -y <year> -m <month> -d <day> -t <am|pm|all> [ -graph ]\n";
  exit;
 }
}

sub parseit {
 $field2 = ($dow*2);
 $field1 = ($dow*2)-1;
 @schedtmp = split(/[ \t]+/,$schedule[$out]);
 $schedule2 = $schedtmp[1];
 @schedwintmp = split(/[ \t]+/,$schedwin[$out]);
 $starttime = ($schedwintmp[$field1]/(60*60));
 $starttime =~ s/(^\d{1,}\.\d{2})(.*$)/$1/;
 $windowl = ($schedwintmp[$field2]/(60*60)); 
 $windowl =~s/(^\d{1,}\.\d{2})(.*$)/$1/;
}

sub datetime2 {
 $dt = DateTime->new(year=>$year, month=>$month,day=>$day,hour=>$hour,minute=>$minute,second=>$second,nanosecond=>00,time_zone=>'America/Chicago',);
 print "$dt\n";
 $dow = $dt->day_of_week;      ##### 1-7 (Monday is 1) - also dow, wday
 $wod = $dt->weekday_of_month();  ##### 1-5 weeks
 if ($dow eq "7") { $dow = "1"; } else { $dow = $dow +1; }
 $datum = "$dow,$wod";
 chomp($datum);
}


sub first {
 $first = "1";
 $schedule[$counter] = "$line";
}

sub checks {
 if ($line =~ /SCHEDCALENDAR/) {
  $schedcalendar[$counter] = "SCHEDCALENDAR enabled";
 }
        if ($line =~ /SCHEDCALDAYOWEEK/) {
                $schedcaldayoweek[$counter] = "$line";
        }
        if ($line =~ /SCHEDWIN/) {
                $schedwin[$counter] = "$line";
        }
        if ($line =~ /SCHEDRES/) {
                $schedres[$counter] = "$line";
        }
        if ($line =~ /SCHEDPOOL/) {
                $schedpool[$counter] = "$line";
        }
        if ($line =~ /SCHEDRL/) {
                $schedrl[$counter] = "$line";
        }
        if ($line =~ /SCHEDFOE/) {
                $schedfoe[$counter] = "$line";
  $first = "0";
  $counter = $counter+1;
        }
}
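The arithmetic in parseit can be sanity-checked by hand. Each SCHEDWIN entry holds a pair of values per day, the window start and the window duration, both in seconds, which the script divides by 3600 to get hours. Assuming a window that opens 64800 seconds after midnight and stays open for 28800 seconds:

```shell
# 64800 s / 3600 = 18.00 (a 6 PM start); 28800 s / 3600 = 8.00 hours long.
awk -v start=64800 -v dur=28800 \
    'BEGIN { printf "start: %.2f\nduration: %.2f\n", start/3600, dur/3600 }'
# -> start: 18.00
#    duration: 8.00
```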

Wednesday, May 14, 2008

How to get tomorrow's date in BASH shell

Sometimes when you write a script, you need to get tomorrow's date. This can come in handy if you want to know whether tomorrow is the first day of the next month, and therefore run a monthly job that should always execute on the last day of the month.

TOMDATE=$(TZ=CDT-24 /bin/date +%d)
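Wrapped in a test, that makes an end-of-month guard for a cron script. Note the TZ offset trick assumes the machine's clock is in CDT to begin with; on GNU systems `date -d tomorrow +%d` is a less fragile alternative:

```shell
# If tomorrow is the 1st, today is the last day of the month.
TOMDATE=$(TZ=CDT-24 date +%d)
if [ "$TOMDATE" = "01" ]; then
    echo "Running end-of-month job"
fi
```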

What Perl modules are installed?


Sometimes the package manager of our favorite Linux distribution does not have the Perl modules we need for a development project. In that case we have to resort to using CPAN, which is analogous to pip for Python modules.

perl -MCPAN -e shell
cpan> install package-name

Now that process works very well and even pulls in dependent modules first. However, this leads to one pesky problem: how do you know which modules have already been installed, and at what version?

Well, a friend of mine pointed out this gem, which is not readily documented on Google:

perl -MCPAN -e 'print CPAN::Shell->r '

This command will tell you which modules are installed on the system it is run on, and at what version.
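For checking a single module rather than the whole list, loading it with -M from the command line works too; if the module is missing, perl exits with an error (List::Util here is just an example module):

```shell
# Print the module's version, or report that it is not installed.
perl -MList::Util -e 'print "List::Util $List::Util::VERSION\n"' \
    || echo "List::Util is not installed"
```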

The other option of course, and I did this at the University of Minnesota, is to download the source Perl modules and make RPM packages. However I find using the CPAN shell to be more convenient.

Saturday, December 01, 2007

Sharing Local Profile in Windows XP

Summary: This article describes how to associate more than one user account with a single local profile. This is especially useful for portable computer users who have a domain account they use while in the office, but use a local account when they are away from the office.

Steps:

1. Create a local user account on your Windows XP desktop. (Example: username).

2. Next create a domain user account on the Domain controller that the Windows XP desktop is a member of. (Example: username). Remember, the local user can have the same name as the domain account since the desktop maintains a local database of users separate from the domain controller.

3. Depending on your environment, you may wish to skip this step. Log into the Windows XP desktop as the local Administrator account. Go into Computer Management->Users and Groups. Add the local account and domain account you created in steps 1 and 2 to the local Administrators group. This allows our users some flexibility in being able to do various things on their desktop.

4. Now log in as the local account you created in step 1 (username). This will create the default local profile (username) and add the path to the ProfileList in the registry. Log off when complete.

5. Now log in as the domain account you created in step 2 (username). This will create the default local profile (username.domain) and add the path to the ProfileList in the registry.

6. Depending on your environment, you may wish to skip this step. Before you log off as the domain account, go into System Properties->Advanced->User Profiles. Verify that the domain account profile is set to local and not roaming. If it is set to roaming, you will need to change that to local. Once complete logoff.

7. Reboot the machine. This clears up Windows processes that are still using the .dat files for the accounts we logged in as. Failure to do so might yield errors in later steps, specifically: "The file is in use by another process".

8. Log into the Windows XP desktop as the local Administrator account.

9. Edit the permissions on the profile to enable your domain account to access it. Start Regedt32 and go to HKEY_USERS. With HKEY_USERS selected, click the Load Hive option from the Registry menu. Select the file "C:\Documents and Settings\username\Ntuser.dat", where username is the local account name that we created in step 1.

10. When prompted to enter a key name, type in your user name and press ENTER. You can now see an entry for your user name under HKEY_USERS. Select it and click Permissions from the Security menu. Add your domain account name to the list of permissions, granting the account full control. Click OK when you are finished.

11. To save this change, select your username, and then click Unload Hive from the Registry menu.

12. Next we need to alter the path that points to the profile. In Regedt32, go to the following key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList

13. Under this key, you can see a list of Security Identifiers (SIDs). To find the SID corresponding to your new local account, open each key and look at the value for the ProfileImagePath. When you find the value that matches username.domain, modify the ProfileImagePath data so that it points to your local account profile path.

14. Close Regedt32 and log on with your local account. You can see your familiar profile.

15. Reboot the Windows XP desktop.

16. Now log on with your domain account. You can see your familiar profile.

17. The results from steps 14 and 16 should provide you with the same desktop settings and customizations.

18. This procedure gives users desktop consistency whether they are using their domain account in the office or their local account in their home office.