Tuesday 22 March 2016

Get all IPs, MAC Addresses, and Network Adapter Names for each VM

Recently I had to write a PowerCLI script to gather all of the MAC address, IP address, and network adapter information from all VMs in a cluster.  While not super difficult, it proved a bit challenging: I couldn't just use Get-NetworkAdapter, as that only returns the network adapter name and MAC address but no IP information, and conversely the vm.guest.net object returns MAC address and IP information but not the network adapter name.  With a ton of nested loops (which takes forever to run) I was able to gather the info and export it to a CSV.

# Gather VM name, network adapter name, IP address, and MAC for every VM
$out = @()
$vms = Get-VM | Sort-Object Name
foreach ($vm in $vms)
{
    # The vSphere view object exposes guest networking info (MACs + IPs)
    $VMx = Get-View -Id $vm.Id
    $HW = $VMx.Guest.Net
    # Get-NetworkAdapter exposes adapter names + MACs
    $adapters = Get-NetworkAdapter -VM $vm
    foreach ($dev in $HW)
    {
        foreach ($ip in $dev.IpAddress)
        {
            foreach ($adapter in $adapters)
            {
                # Join the guest NIC info to the adapter name on MAC address
                if ($dev.MacAddress -eq $adapter.MacAddress)
                {
                    $out += $dev | Select-Object @{N="Name";E={$vm.Name}},
                        @{N="AdapterName";E={$adapter.Name}},
                        @{N="IP Address";E={$ip}},
                        @{N="MAC";E={$dev.MacAddress}}
                }
            }
        }
    }
}

$out | Export-Csv .\VM_MAC_IP.csv -NoTypeInformation
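
If the runtime becomes a problem, one way to speed this up (a rough sketch, not tested against the same environment) is to build a per-VM lookup of adapter names keyed by MAC address, which removes the innermost loop:

$out = foreach ($vm in Get-VM | Sort-Object Name)
{
    # Map each MAC address to its adapter name once per VM
    $adapterByMac = @{}
    Get-NetworkAdapter -VM $vm | ForEach-Object { $adapterByMac[$_.MacAddress] = $_.Name }

    # Walk the guest NIC info and emit one row per IP address
    foreach ($dev in (Get-View -Id $vm.Id).Guest.Net)
    {
        foreach ($ip in $dev.IpAddress)
        {
            [PSCustomObject]@{
                Name         = $vm.Name
                AdapterName  = $adapterByMac[$dev.MacAddress]
                'IP Address' = $ip
                MAC          = $dev.MacAddress
            }
        }
    }
}
$out | Export-Csv .\VM_MAC_IP.csv -NoTypeInformation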


Saturday 18 January 2014

SELinux and Changing Ports

As with many little Linux projects, what was intended to be a 2 minute activity turned into a 20 minute activity, this time thanks to our friend SELinux.

In the past, I've always just disabled SELinux.  What's the need, after all, when I'm usually just setting up projects in my home lab and SELinux seems like a bit of overkill?  What's more, even in the Red Hat Certified System Administrator course they have you turn it off, as managing SELinux is more of an RHCE task.

Well, as it so happens, I'm currently studying for my RHCE and figured now is as good a time as any to get some practice in, if only inadvertently.

So the task at hand: change the SSH listening port from 22 to 443 so I can safely browse the interwebs and circumvent those pesky proxies.  To do so, I log in and edit /etc/ssh/sshd_config:

#       $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options change a
# default value.

#Port 22
Port 443
#AddressFamily any
#ListenAddress 0.0.0.0

#ListenAddress ::

I add a rule in iptables to allow 443:
[username@localhost ~]$ sudo iptables -I INPUT 4 -p tcp --dport https -j ACCEPT

Then I restart sshd:
[username@localhost ~]$ sudo service sshd restart

and attempt to SSH via 443 from my laptop:
[username@laptop ~]$ ssh username@centos01 -p 443
ssh: connect to host centos01 port 443: Connection refused

... WTF? I try telnet:
[username@localhost ~]$ telnet centos01 443
Trying 10.21.4.10... 
telnet: connect to address 10.21.4.10: Connection refused
telnet: Unable to connect to remote host

Ah, so we're either blocking or not listening on 443. Let's try locally on the box:
[username@centos01 ~]$ ssh username@localhost -p 443
ssh: connect to host localhost port 443: Connection refused

So we're not listening. Weird. Perhaps this has something to do with SELinux:
[username@localhost ~]$ cat /selinux/enforce
1

Yep, we're enforcing. So I temporarily disable SELinux enforcement:
[username@localhost ~]$ echo 0 | sudo tee /selinux/enforce

And let's try that again:

[username@localhost ~]$ ssh username@localhost -p 443
The authenticity of host 'centos01 (10.5.10.10)' can't be established.
RSA key fingerprint is 94:21:69:84:1a:87:a7:94:98:64:95:f5:9e:ab:97:c4.
Are you sure you want to continue connecting (yes/no)?

Hooraw! Sure enough, it looks like SELinux is gumming up the works. So how do we allow SSH to listen on port 443? Well, a bit of Googling tells us we need a tool called semanage, but it's not installed. Right then:
[username@localhost ~]$ sudo yum provides /usr/sbin/semanage
Loaded plugins: rhnplugin
policycoreutils-python-2.0.83-19.8.el6_0.x86_64 : SELinux policy core python utilities
Repo        : rhel-x86_64-server-6
Matched from:
Filename    : /usr/sbin/semanage

policycoreutils-python-2.0.83-19.1.el6.x86_64 : SELinux policy core python utilities
Repo        : rhel-x86_64-server-6
Matched from:
Filename    : /usr/sbin/semanage
[username@localhost ~]$ sudo yum install policycoreutils-python

Alright, so we have semanage installed; now it's time to add port 443 to ssh_port_t:
[username@localhost ~]$ sudo semanage port -l | grep ssh
ssh_port_t                     tcp      22
[username@localhost ~]$ sudo semanage port -a -t ssh_port_t -p tcp 443
/usr/sbin/semanage: Port tcp/443 already defined

Balls. Okay, so apparently you can only define a TCP port in one SELinux policy. Makes sense. Where is 443 defined?
[username@localhost ~]$ sudo semanage port -l | grep 443
http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443

Ah, of course. It's defined for HTTP. Now then, let's just remove it from HTTP and add it to SSH:
[username@localhost ~]$ sudo semanage port -d -t http_port_t -p tcp 443
/usr/sbin/semanage: Port tcp/443 is defined in policy, cannot be deleted

Double balls. Alright, so we apparently have to modify the port to be included in ssh_port_t:
[username@localhost ~]$ sudo semanage port -m -t ssh_port_t -p tcp 443

[username@localhost ~]$ sudo semanage port -l | grep 443

http_port_t                    tcp      80, 81, 443, 488, 8008, 8009, 8443, 9000
pki_ca_port_t                  tcp      829, 9180, 9701, 9443-9447
pki_kra_port_t                 tcp      10180, 10701, 10443-10446
pki_ocsp_port_t                tcp      11180, 11701, 11443-11446
pki_tks_port_t                 tcp      13180, 13701, 13443-13446
ssh_port_t                     tcp      443, 1255, 22

Sweet! Done and done. Now we can re-enable SELinux enforcement and ssh to our host! Hat tip to m4ccum4ccu for his helpful blog post which I've borrowed from heavily for this one.
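
For completeness, flipping enforcement back on mirrors the earlier step; either of these should do the trick on a RHEL/CentOS 6 box:

[username@localhost ~]$ echo 1 | sudo tee /selinux/enforce
[username@localhost ~]$ sudo setenforce 1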

Wednesday 20 November 2013

How to Choose Between Hyper-V and vSphere

Here's a short whitepaper from Gartner comparing Microsoft's Hyper-V in Server 2012 with VMware's vSphere 5.5.  The paper is high-level, but it outlines cost and functionality considerations when comparing the two hypervisors.  Key findings are:

  • Hyper-V has made significant strides toward being an actual competitor to vSphere in terms of functionality and cost with the release of Server 2012.
  • Hyper-V may be suitable for small deployments where centralized management is not required.
  • Functionally, Hyper-V falls short of vSphere in SRM, non-Windows guest support (e.g. live Linux snapshotting), DRS, and Storage DRS.
  • Although Hyper-V now has technologies equivalent to VMware's HA and affinity rules, they are more complicated to implement and manage, requiring multiple tools.
  • vSphere still has a significant market lead over Microsoft, due in large part to its first-mover advantage and better hybrid cloud offerings.

Although Microsoft may be moving beyond being simply a niche player in the hypervisor space, they are still a far cry from taking significant market share from VMware.  Hyper-V has a significant OS footprint relative to that of ESXi (5GB vs 144MB respectively), requiring more patching and likely more downtime as a result.  Tools like SRM and DRS are integral to many organizations' data center and DR strategies.  Lastly, while Hyper-V offers broader hardware support than vSphere, this is really only an advantage for small organizations or home labs, as most enterprises have the resources and IT maturity to standardize hardware or purchase blade server technologies.

Sunday 3 November 2013

Dammit Apple, you ruin everything

Not one to miss out on an opportunity for free software/upgrades, I upgraded my 2011 MacBook Pro to OS X Mavericks last weekend.  The upgrade generally went pretty well, although it was slow.  OS X got some minor facelifts, including the launcher menu with an opaque background:

Aside from that, nothing has really changed for me: I don't use Apple Chat/iChat, I don't intend to ever buy ebooks from Apple, and I don't own an iPhone or use iTunes.

What has significantly changed for me is that Apple has decided to dumb down its nifty Wireless Diagnostics tool introduced with OS X Lion.  Gone are the days when I could monitor and track useful performance data for my wireless network connectivity from my MacBook.  It has since been replaced with a stripped-down, diluted utility that wraps up logs so you can send them to Apple for support...

F*ck you Apple.  Seriously.  You had such a great, useful, practical utility tucked away in your dumbed-down OS and you managed to ruin it and strip it of any meaningful function.

Perhaps there is still a way to get the rich monitoring information that was available before, but if there is, I haven't figured it out.  I'll continue to dig, but the fact that I have to do so is ridiculous; it was perfect before!

This may be the final tick in the box for me to leave OS X altogether for a more useful and practical OS that retains some semblance of respect for its users. Now where'd I leave that BSD Live CD...

Saturday 26 October 2013

Exporting and Importing Volume Groups

Well, this is cool.  I had to copy my music and movies from the disks in my HTPC to my newly-built NAS, but didn't want my home network bogged down with the rsync file copy.  Traditionally this would be pretty easy: on standard ext4/NTFS/FAT filesystems, you can just remove the disk from the originating PC, plug it into the destination PC, mount it, and you're all set.  In my case, an extra level of complexity was introduced since I used LVM to create one logical volume across two disks in the HTPC.  After a bit of Google love, I learned that LVM can actually export and import volume groups very easily:
  1. Unmount the volume group:
    $ umount /var/media
  2. Mark the volume group inactive:
    $ vgchange -an vgmedia
  3. Export the volume group:
    $ vgexport vgmedia
  4. Shut down the machine, remove the disks, and hook them up in the destination system.
  5. Import the volume group:
    $ pvscan
    $ vgimport vgmedia
  6. Activate the volume group:
    $ vgchange -ay vgmedia
  7. Mount the filesystem.
Hat tip to www.tldp.org for the how-to.  Pretty cool.  The constraining factor is that you need enough bays/ports in the destination machine to accommodate all of the disks in the volume group.  Alternatively, you can remove one or more disks from the volume group before the move if you have enough unallocated space on the other disk(s); a rough sketch of that approach follows below.
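
If you go the disk-removal route instead, here is a rough sketch.  It assumes the volume group is vgmedia, the disk you want to pull is /dev/sdb1 (a hypothetical device name), and the remaining physical volume(s) have enough free extents to hold everything:

$ pvmove /dev/sdb1
$ vgreduce vgmedia /dev/sdb1
$ pvremove /dev/sdb1

pvmove migrates all allocated extents off the disk, vgreduce drops the disk from the volume group, and pvremove wipes the LVM label so the disk can be reused elsewhere.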

Friday 25 October 2013

SAMBA Shares with no Username/Password

Setting up a NAS/share that you want all users on your network to be able to access without a username or password?  If you want to do this in SAMBA 4, you can't use the traditional global setting of:

security = share

as "share" level security is now deprecated. You'll now need to set the parameter map to guest.  Instead, use the following settings in /etc/samba/smb.conf:

security = user
map to guest = Bad Password
passdb backend = tdbsam
guest account = nobody
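
The share definition itself also needs to allow guest access. Here's a minimal sketch, assuming a hypothetical share named media backed by /srv/media (adjust the path and permissions to taste):

[media]
path = /srv/media
browseable = yes
read only = yes
guest ok = yes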

And if you're doing this, it's a good idea to lock down Samba to your local network:

interfaces = lo eth0 192.168.1.0/24
hosts allow = 192.168.1.0/24

Lastly, don't forget to configure iptables so the Samba ports only accept traffic from your local subnet:

iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport netbios-ssn -j ACCEPT
iptables -A INPUT -p udp -s 192.168.1.0/24 --dport netbios-ssn -j ACCEPT
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport microsoft-ds -j ACCEPT
iptables -A INPUT -p udp -s 192.168.1.0/24 --dport microsoft-ds -j ACCEPT

Point smbclient/Windows Explorer/Mac Finder to //IP/share_name and you're all set!

Monday 16 September 2013

One way to re-IP your NFS array with VMware

Recently I have been working on a project to replace an old NetApp FAS2040 array with a newer FAS2240. The old FC disk shelves from the 2040 will be re-purposed in the 2240, so they will be physically moved to the new array complete with all of the existing VMs in the farm.

This poses an interesting problem, though: the existing filers with their current IPs will disappear and the new filer will have a different hostname and IP. This change will cause all of the VMs to go grey because they cannot reach their disks. With a little Googling you can find a couple of scripts that are able to re-register VMs, and these can be modified to fix this issue.

To add to the fun, in this environment we have VMs with multiple disks on different NFS mounts, so we also need to fix the .vmx files so they point to the new datastores on the new filer.

So what's the plan then?

  1. Get the names of all your templates
    Get-Template | Select-Object Name | Export-Csv -NoTypeInformation -Path ./templates.csv
  2. Convert all templates to VMs
    Get-Template | Set-Template -ToVM -Confirm:$false
  3. Run this command to collect the necessary information
    get-view -viewtype virtualmachine -property name, config.files.vmpathname, parent, Runtime.Host | select name, @{n="vmxFilePath"; e={$_.config.files.vmpathname}}, parent, @{n="host"; e={$_.runtime.host}} | Export-Clixml -Path ./vms.xml
  4. Remove all VMs from the inventory
    Get-Datastore <regex to get all affected DS> | Get-VM | Remove-VM -Confirm:$false
  5. Enable SSH on a host
    Get-VMHost <hostname> | Foreach-Object {  Start-VMHostService -Confirm:$False -HostService ($_ | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} )}
  6. Get the datastore locations
    SSH to your host and run:  ls -l /vmfs/volumes/

    Save this info for later
  7. Unmount affected Datastores
    Get-Datastore <regex to match all affected DS> | foreach {Remove-Datastore -Confirm:$false -Datastore $_ -VMHost (Get-VMHost <regex to get all affected hosts>)}
  8. Mount your new datastores; there are heaps of ways to do this, so I'll leave the details to you (one NFS example is sketched below this list)
  9. Get the new datastore locations
    Just re-do point 6 above
  10. Copy this sh script over to your host; make sure you replace OLD-DATASTORE1/NEW-DATASTORE1 (and OLD-DATASTORE2/NEW-DATASTORE2) with the correct UIDs from points 6 and 9.
    #!/bin/sh
    # Rename each .vmx to .old, then rewrite the datastore UIDs into a fresh copy
    find /vmfs/volumes/ -maxdepth 3 -name '*.vmx' | while read fl; do
        echo "$fl"
        mv "$fl" "$fl.old"
        sed 's/OLD-DATASTORE1/NEW-DATASTORE1/g;s/OLD-DATASTORE2/NEW-DATASTORE2/g' "$fl.old" > "$fl"
        chmod 755 "$fl"
    done
    You can add as many datastore renames as you like; just separate the sed expressions with a semicolon. The example above handles two datastores.
  11. Disable SSH
    Get-VMHost <hostname> | Foreach-Object {  Stop-VMHostService -Confirm:$False -HostService ($_ | Get-VMHostService | Where { $_.Key -eq "TSM-SSH"} )}
  12. Register all your VMs
    Import-Clixml .\vms.xml | foreach { New-VM -VMFilePath $_.vmxfilepath -VMHost (Get-VIObjectByVIView $_.host.toString()) -Location (Get-VIObjectByVIView $_.parent.toString()) -RunAsync}
  13. Convert your templates back to templates
    Import-Csv -Path ./templates.csv | foreach {Set-VM -ToTemplate -VM $_.name -Confirm:$false}
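
For step 8, here is a minimal PowerCLI sketch for mounting a new NFS datastore on all of the affected hosts. The filer hostname, export path, and datastore name are placeholders rather than values from this environment:

    # Mount the new NFS export as a datastore on every affected host
    Get-VMHost <regex to get all affected hosts> | foreach {
        New-Datastore -Nfs -VMHost $_ -Name "NEW-DATASTORE" -NfsHost "newfiler.example.com" -Path "/vol/vol_vmware"
    }
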
With a little luck you should be all done! :)