Another post in the vBlock tip series…
VCE uses static binding on the Cisco Nexus 1000V, and this, combined with the default of 32 ports per DV Port Group, means most people will soon run out of ports on their DV Port Groups.
Who knows why 32 is the default. It seems a bit conservative to me. Maybe there is a global port limit but I haven’t been able to confirm this.
Either way, 32 doesn’t seem nearly enough ports for most network designs. The good news is the maximum is 1024, so it makes sense to increase it substantially, depending on how many VLANs you have.
As soon as your vBlock lands I would definitely review each DV Port Group and increase the max ports assigned.
Static binding is a pain in the arse – it means that any VM, whether a template or powered off, will use up a port as long as it is assigned to the DV Port Group. You may only have 5 running VMs on the VLAN, but you won’t be able to add and power on a 6th VM if 27 powered-off VMs and templates are assigned to that same DV Port Group.
For that reason alone I am not sure why VCE doesn’t just use ephemeral binding. Anyway, I am going off topic.
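The port accounting above can be sketched in a few lines of Python (a hypothetical illustration – the function name and numbers are mine, not anything VMware ships):

```python
# Illustration of static-binding port accounting on a DV Port Group.
# With static binding, every VM assigned to the port group consumes
# a port regardless of power state.

def free_ports(max_ports, running, powered_off_and_templates):
    """Ports left on a static-binding DV Port Group."""
    used = running + powered_off_and_templates
    return max(max_ports - used, 0)

# The scenario from the post: 32-port default, 5 running VMs and
# 27 powered-off VMs/templates -- no port left for a 6th running VM.
print(free_ports(32, 5, 27))   # -> 0

# After raising max-ports to 64 there is plenty of headroom.
print(free_ports(64, 5, 27))   # -> 32
```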
VMware KB 1035819 has instructions on how to increase the max ports for each VLAN (port-profile).
These are the commands I use:
- show port-profile – find the correct port-profile name
- conf t – enter configuration mode
- port-profile <DV-port-group-name> – switch the configuration context to the correct port-profile
- vmware max-ports 64 – change max ports to 64
- copy run start – copy the running config to the startup config
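If you have a lot of port-profiles to update, the session above lends itself to templating. A minimal Python sketch – the profile names below are hypothetical, and the output is just text to paste into the VSM (or feed to your own SSH tooling):

```python
# Sketch: template the Nexus 1000V commands from the post for a
# batch of port-profiles. Profile names here are hypothetical.

def max_ports_commands(profiles, max_ports=64):
    """Build the CLI session that raises max-ports on each profile."""
    lines = ["conf t"]
    for profile in profiles:
        lines.append(f"port-profile {profile}")
        lines.append(f"  vmware max-ports {max_ports}")
    lines.append("copy run start")   # persist the running config
    return lines

for line in max_ports_commands(["VLAN100-Web", "VLAN200-App"]):
    print(line)
```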
I recently encountered an issue where we ran an SRM Test Failover and afterwards it failed to clean up correctly.
When the cleanup operation fails, what I normally do is run the Force Cleanup and carry on with my life. How wrong I could be…
What happened next is I ran a planned migration and, because the Force Cleanup had not worked correctly, not all virtual machines were protected. When the storage failed over, only 3 of the 8 VMs powered up at the Recovery Site. SRM ended up in a failed state and we had to manually fail back the storage and reinstall SRM. It was a complete disaster and a big waste of a weekend.
So… this post outlines what you should do when a cleanup operation fails… As usual, I learnt the hard way…!
If a cleanup operation fails:
- Run the Force Cleanup to try and finish the cleanup operation.
- Once the Force Cleanup completes, check the following components manually to confirm that it completed successfully.
- Open the Protection Group in SRM and view the protection status of the virtual machines.
- Select refresh and confirm all VMs are still protected – their status should be ‘OK’.
- If any are not OK, select Reprotect VMs to fix the issues and recreate the placeholder VMs.
- Change to the vCenter datastore view.
- Confirm the snap datastore from the Test Failover has been removed.
- If the snap datastore still exists (in italics or normal text), manually unmount and detach it from all hosts.
- Once the datastore has been unmounted and detached from all hosts, right-click the datacenter (DC1 or DC2) and execute a ‘Rescan for Datastores’.
- On the next screen, untick ‘scan for new storage devices’
- Confirm the snap datastore has been removed.
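One way to double-check that last step: vSphere mounts a resignatured snapshot volume with a ‘snap-’ prefix in the datastore name, so any name still carrying that prefix after cleanup is suspect. A minimal sketch, assuming you have already pulled the datastore names out of vCenter – the helper and the inventory below are hypothetical:

```python
# Sketch: spot leftover snapshot datastores by name. vSphere labels a
# resignatured VMFS copy 'snap-<hex>-<original-name>', so anything
# still carrying the prefix after cleanup deserves a closer look.

def leftover_snap_datastores(datastore_names):
    """Return names that look like resignatured snapshot datastores."""
    return [name for name in datastore_names if name.startswith("snap-")]

# Hypothetical inventory after a Test Failover cleanup:
names = ["Prod-LUN01", "snap-3f2a9c1d-Prod-LUN01", "Prod-LUN02"]
print(leftover_snap_datastores(names))  # -> ['snap-3f2a9c1d-Prod-LUN01']
```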
And now you can carry on with your life…. and your planned migrations.
I’ve been pretty quiet recently… work, triathlons and dad duties have made me pretty slack…
To get me back in the rhythm – first, a fun post. If you’ve been driving around London recently you will have seen the explosion of Costa Express dispensers at almost every petrol station.
I was filling up the other day and encountered some technical difficulties while trying to get my daily Cappuccino.
What to do? Reboot, of course! I managed to grab some pics while they reset the Dell PC running Windows Vista.
Costa have no taste I tell ya! Dell… urgh… Vista… double urrgh.