Monday, May 25, 2015

UCS Vmedia Policy with XML & Perl


In Cisco UCS there is a concept called a VMedia policy.  This policy lets you point at a bootable ISO image hosted on another server over HTTP and make it available as a boot device for a Cisco UCS blade in UCSM.  The following script is a rough Perl framework that uses the UCSM XML API to configure such a policy.  It could be extended to take inputs for the variables I pre-populate below; as written, it simply demonstrates how the framework works.


#!/usr/bin/perl
use strict;
use LWP::UserAgent;
use HTTP::Request::Common;
my $ucs = "https:///nuova";       # UCSM XML API endpoint - put your UCSM IP/hostname between https:// and /nuova
my $username = "admin";           # Admin or other user able to manage UCSM
my $password = "password";        # Password for user above
my $server = "ls-servername0001"; # Server name as defined in UCS convention
my $server2 = "servername0001";   # Server name from friendly view
my $policyname = "$server2";      # Policy name (derived from server name in this example)
my $mntename = "$server2";        # Mount name (derived from server name in this example)
my $type = "cdd";                 # Policy mount type (in this case CD-ROM)
my $image = "$server2.iso";       # Name of ISO image
my $imagepath = "/";              # Image path within the URL of remote host serving ISO
my $mountproto = "http";          # Protocol used to access ISO image
my $remotehost = "";              # Remote host IP serving ISO image
my $serverdn = "org-root/org-corp/$server";     # Server DN within UCSM  

###  Everything below remains constant###
### Get Cookie ###

my ($xmlout,@xmlout,$cookie);
# aaaLogin request - the response contains an outCookie used for all subsequent calls
my $login = qq(<aaaLogin inName="$username" inPassword="$password" />);
my $userAgent = LWP::UserAgent->new;
my $response = $userAgent->request(POST $ucs, Content_Type => 'text/xml', Content => $login);

(@xmlout)= split(/\s+/,$response->content);

### Process Cookie ###

foreach $xmlout (@xmlout) {
        if ($xmlout =~ /outCookie/) {
                $cookie=$xmlout;
                $cookie =~ s/outCookie=\"|\"//g;
                print "$cookie\n";
        }
}

###Setup Vmedia Policy String###

# Creates the vMedia policy object via configConfMo.  The class name
# (cimcvmediaMountConfigPolicy) and the mnt-cfg-policy- naming convention are
# taken from the UCSM 2.2+ vMedia schema - verify against your UCSM version.
my $crpolicy = qq(<configConfMo cookie="$cookie" inHierarchical="false" dn="org-root/org-corp/mnt-cfg-policy-$policyname">
  <inConfig>
    <cimcvmediaMountConfigPolicy dn="org-root/org-corp/mnt-cfg-policy-$policyname" name="$policyname"/>
  </inConfig>
</configConfMo>);

###Configure Mount Policy Within Vmedia Policy String###

# Adds the ISO mapping (mount entry) under the policy.  Attribute names follow
# the cimcvmediaConfigMountEntry class (mappingName, deviceType, mountProtocol,
# remoteIpAddress, imagePath, imageFileName) - adjust to your UCSM schema.
my $mountentry = qq(<configConfMo cookie="$cookie" inHierarchical="false" dn="org-root/org-corp/mnt-cfg-policy-$policyname/cfg-mnt-entry-$mntename">
  <inConfig>
    <cimcvmediaConfigMountEntry dn="org-root/org-corp/mnt-cfg-policy-$policyname/cfg-mnt-entry-$mntename" mappingName="$mntename" deviceType="$type" mountProtocol="$mountproto" remoteIpAddress="$remotehost" imagePath="$imagepath" imageFileName="$image"/>
  </inConfig>
</configConfMo>);

###Add Vmedia Policy String###

# Attaches the vMedia policy to the service profile.  The vmediaPolicyName
# attribute on lsServer is assumed here - confirm it against your UCSM object model.
my $addvmedia = qq(<configConfMo cookie="$cookie" inHierarchical="false" dn="$serverdn">
  <inConfig>
    <lsServer dn="$serverdn" vmediaPolicyName="$policyname"/>
  </inConfig>
</configConfMo>);

###Execute Setup Vmedia Policy String###

$response = $userAgent->request(POST $ucs, Content_Type => 'text/xml', Content => $crpolicy);
(@xmlout)= split(/\s+/,$response->content);
printxml();
print "\n";

###Execute Configure Mount Policy Within Vmedia Policy String###

$response = $userAgent->request(POST $ucs, Content_Type => 'text/xml', Content => $mountentry);
(@xmlout)= split(/\s+/,$response->content);
printxml();
print "\n";

###Execute Add Vmedia Policy String###

$response = $userAgent->request(POST $ucs, Content_Type => 'text/xml', Content => $addvmedia);
(@xmlout)= split(/\s+/,$response->content);
printxml();
print "\n";
exit;

###Parse XML response###

sub printxml {
        foreach $xmlout (@xmlout) {
                if ($xmlout =~ /=/) {
                        $xmlout =~ s/\'|\"|\/|\>//g;
                        $xmlout =~ s/=/ = /g;
                        print "$xmlout\n";
                }
        }
}
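
UCSM only allows a limited number of concurrent XML API sessions per user, so it is good practice to release the cookie when the script is finished.  A minimal sketch, reusing the $userAgent, $ucs and $cookie variables from the script above (it would go just before the exit call):

### Log out of the UCSM XML API session ###
my $logout = qq(<aaaLogout inCookie="$cookie" />);
$response = $userAgent->request(POST $ucs, Content_Type => 'text/xml', Content => $logout);
(@xmlout) = split(/\s+/,$response->content);
printxml();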

Sunday, May 24, 2015

Deleting Duplicate Hosts In Satellite or Spacewalk


Sometimes when you register hosts to Satellite or Spacewalk you end up with duplicate registrations: an old profile that will no longer check in and just sits there as an orphan, and a new one that checks in and gets updates and packages.  I saw this behavior a lot in environments using Vagrant and/or OpenStack, where people would continuously launch the same host with the same hostname and register it to Satellite.

The script below can be used to clean out those duplicate hosts and can be set up to run from cron daily.  It assumes you run it as the root user and have configured root to run spacecmd without specifying a username and password on the command line.  This was tested with Satellite 5.6.
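
For reference, the usual way to let root run spacecmd without prompting is a credentials file at /root/.spacecmd/config along these lines (the server name and password are placeholders for your environment):

[spacecmd]
server=satellite.example.com
username=admin
password=secret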

#!/usr/bin/perl
### Delete duplicate hosts in Satellite or Spacewalk via Spacecmd ###
# Find system names that are registered more than once
@duphosts = `spacecmd -q system_list | uniq -d`;
foreach $system (@duphosts) {
        chomp($system);
        # Pull the list of system IDs registered under this hostname
        $spacecmd = `spacecmd system_details $system 2>&1 |grep $system |grep =|sed 's/^.*=/=/'`;
        $spacecmd =~ s/\s+//g;
        $spacecmd =~ s/=//g;
        @ids = split(/\,/,$spacecmd);
        $count=0;
        # Delete every ID except the last one listed (the surviving registration)
        foreach $ids (@ids) {
                chomp($ids);
                $count++;
                if ($count > $#ids) {
                        print "Duplicates removed for system: $system\n";
                        last;
                }
                $cmd = `spacecmd -y system_delete $ids`;
                print "$cmd\n";
                sleep (2);
        }
        print "Cleanup of $system complete...\n";
}
print "Cleanup of Satellite complete!\n";
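
To run it from cron daily, a crontab entry for root along these lines will do (the script path and log file are just examples; point them wherever you keep the script):

# Nightly duplicate-host cleanup in Satellite/Spacewalk
30 2 * * * /usr/local/bin/satellite_dedup.pl >> /var/log/satellite_dedup.log 2>&1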

Syncing Red Hat Repos With Pulp


The following is a basic installation/configuration guide for setting up Pulp to pull package channels from Red Hat's CDN (Content Delivery Network), so that you can feed them to Spacewalk, SUSE Manager, or another repository manager that would not normally be able to access Red Hat directly.

Assumptions:  This assumes you are installing Pulp on Red Hat 6.6, although I don't see why this would not work on Red Hat 7.0 or another Linux distro for that matter.  The only change would be switching the init-script commands below to their systemd equivalents.

1) Register the host with Red Hat directly to receive its updates:
      #subscription-manager register --force
      #subscription-manager refresh
      #subscription-manager subscribe --auto

2) Run an update to confirm you can access the Red Hat repos properly:
     #yum upgrade

3) Install the EPEL repo and add the Pulp repo, as you will need packages from both (the Pulp repo ships as a .repo file, so it is dropped into /etc/yum.repos.d rather than installed with rpm):
    #rpm -Uvh https://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    #wget -O /etc/yum.repos.d/rhel-pulp.repo http://repos.fedorapeople.org/repos/pulp/pulp/rhel-pulp.repo

4) Install, start and enable the MongoDB server:
    #yum install mongodb-server
    #service mongod start
    #chkconfig mongod on

5) Install, start and enable qpidd:
    #yum install qpid-cpp-server qpid-cpp-server-store
    #service qpidd start
    #chkconfig qpidd on

6) Install Pulp group of packages:
    #yum groupinstall pulp-server-qpid

7) Run Pulp database setup to populate Pulp database:
    #sudo -u apache pulp-manage-db

8) Start and enable the web service (on Red Hat 7 you would use systemctl start/enable httpd instead of the service/chkconfig commands):
    #service httpd start
    #chkconfig httpd on

9) Enable and start pulp workers, celery beat and pulp resource manager:
    #chkconfig pulp_workers on
    #service pulp_workers start
    #chkconfig pulp_celerybeat on
    #service pulp_celerybeat start
    #chkconfig pulp_resource_manager on 
    #service pulp_resource_manager start

10) Install pulp-admin packages:
      #yum groupinstall pulp-admin

11) Install Pulp consumer qpid package:
      #yum groupinstall pulp-consumer-qpid

12) Edit Pulp admin.conf, consumer.conf and agent.conf to your specifications:
      #vi /etc/pulp/admin/admin.conf
      #vi /etc/pulp/consumer/consumer.conf
      #vi /etc/pulp/agent/agent.conf
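
As an example of the kind of change you would make in those files (and assuming the stock layout of /etc/pulp/admin/admin.conf), the first edit is usually pointing the admin client at your Pulp server's hostname in the [server] section; pulp.example.com below is a placeholder:

      [server]
      host = pulp.example.com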

At this point Pulp should be ready to consume content from the Red Hat Content Delivery Network.  Let's see what setting up a sync looks like in the following steps.
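
One note before the repo commands below: pulp-admin needs an authenticated session first.  Assuming the default admin account created during setup, logging in looks like this (it will prompt for the password):

      #pulp-admin login -u admin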

1) Create a repo with the feed location and the correct certs and keys needed to access that feed (the long entitlement certificate serial number below will be different on your system):
    #pulp-admin rpm repo create --repo-id=rhel-6-server-rpms --feed=https://cdn.redhat.com/content/dist/rhel/server/6/6Server/x86_64/os --feed-ca-cert=/etc/rhsm/ca/redhat-uep.pem --feed-key=/etc/pki/entitlement/5161085288703435774-key.pem --feed-cert=/etc/pki/entitlement/5161085288703435774.pem

2) (Optional) Configure the repo you created with the number of download workers and max download speed.   This helps if you are pulling packages down over a smaller WAN link and do not want to saturate it.
    #pulp-admin rpm repo update --max-speed=14000 --repo-id=rhel-6-server-rpms
    #pulp-admin rpm repo update --max-downloads=2 --repo-id=rhel-6-server-rpms

3) Configure the repo so that it is served up via the web server so it can be consumed via HTTP:
    #pulp-admin rpm repo update --repo-id=rhel-6-server-rpms  --serve-http=true

4) Sync the repo from the source, in this example Red Hat's CDN:
    #pulp-admin rpm repo sync run --repo-id=rhel-6-server-rpms

5) (Optional) Set up the sync command from step 4 as a cron job to periodically pull down newer packages and keep your Pulp repo up to date; see the example entry below.
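
For example, a nightly sync in root's crontab could look like this (the schedule, pulp-admin path and log file are arbitrary examples):

# Nightly sync of the RHEL 6 repo from the Red Hat CDN
0 1 * * * /usr/bin/pulp-admin rpm repo sync run --repo-id=rhel-6-server-rpms >> /var/log/pulp-repo-sync.log 2>&1

Keep in mind that pulp-admin run from cron still needs a valid login session for the user running it (pulp-admin login stores a user certificate that eventually expires), so make sure root's session is current or wrap the job with a fresh login.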