Easiest way to confirm the VMware Tools vShield driver is running


1.     Launch msinfo32.exe on the server

2.     Expand Software Environment -> System Drivers

3.     Scroll down and confirm that vFileFilter exists and is running
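If you would rather check from a script than the msinfo32 UI, the same information is available from `sc query vFileFilter`. Here is a minimal Python sketch that parses the STATE line of sc.exe output; the sample text below is illustrative, not captured from a real host:

```python
import re

def driver_running(sc_output: str) -> bool:
    """Return True if `sc query` output reports the driver as RUNNING."""
    # sc.exe prints a line like:  STATE : 4  RUNNING
    m = re.search(r"STATE\s*:\s*\d+\s+(\w+)", sc_output)
    return bool(m) and m.group(1) == "RUNNING"

# Illustrative sample of `sc query vFileFilter` output on a healthy guest
sample = """SERVICE_NAME: vFileFilter
        TYPE               : 2  FILE_SYSTEM_DRIVER
        STATE              : 4  RUNNING
        WIN32_EXIT_CODE    : 0  (0x0)"""
print(driver_running(sample))  # True
```

On the server itself you could feed the function the stdout of `subprocess.run(["sc", "query", "vFileFilter"], capture_output=True, text=True)`.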

 


How to modify NAS-IP-ADDRESS attribute on Forefront UAG


By default, UAG passes 127.0.0.1 in the NAS-IP-ADDRESS attribute.

The RADIUS NAS-IP-Address attribute is useful because it allows an arbitrary IP address to be sent as RADIUS attribute 4, NAS-IP-Address, without changing the source IP address in the IP header of the RADIUS packets.
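For background, NAS-IP-Address is attribute 4 in the RADIUS packet, encoded as a simple type/length/value field (RFC 2865). This sketch only illustrates what the attribute carries on the wire; it is not how UAG builds its packets:

```python
import ipaddress

def nas_ip_address_attr(ip: str) -> bytes:
    """Encode RADIUS attribute 4 (NAS-IP-Address) per RFC 2865.

    Wire format: type byte (4), length byte (6 = whole TLV),
    then the IPv4 address in network byte order.
    """
    return bytes([4, 6]) + ipaddress.IPv4Address(ip).packed

# UAG's default of 127.0.0.1 encodes as:
print(nas_ip_address_attr("127.0.0.1").hex())  # 04067f000001
```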

Why would you want to change this from 127.0.0.1?

Some older RADIUS servers require the NAS-IP-ADDRESS attribute to match the source IP address in the IP header of the RADIUS packets, so setting this attribute to the IP address of the internal interface on the UAG fixes the problem.

  1. My repository in UAG is called ‘Identityguard’
  2. Copy \von\InternalSite\samples\repository_for_radius.inc to \von\InternalSite\inc\CustomUpdate\repositoryname.inc (in my case IdentityGuard.inc).
  3. Then, to set the NAS-IP-ADDRESS field to the UAG internal interface, set: param_ip.Value = “10.x.x.x”
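The edit in step 3 is a one-line change in the copied .inc file. A sketch of just the relevant line (the rest of the file comes from the UAG sample; the path and value are from the steps above):

```vbscript
' \von\InternalSite\inc\CustomUpdate\IdentityGuard.inc
param_ip.Value = "10.x.x.x"   ' replace with your UAG internal interface IP
```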

This is documented in Microsoft KB article KB960302.

Alternatively, if you are integrating UAG with the risk-based authentication features included with Entrust IdentityGuard Enterprise Server, you will want UAG to pass the remote access clients’ IP addresses in the NAS-IP-ADDRESS field.

Here’s how:

  1. My repository in UAG is called ‘Identityguard’
  2. Copy \von\InternalSite\samples\repository_for_radius.inc to \von\InternalSite\inc\CustomUpdate\repositoryname.inc (in my case IdentityGuard.inc).
  3. To set the NAS-IP-ADDRESS field to the client’s IP address, set: param_ip.Value = g_source_ip
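As with the internal-interface case, this is a one-line change in the copied .inc file, sketched here for the client-IP variant:

```vbscript
' \von\InternalSite\inc\CustomUpdate\IdentityGuard.inc
param_ip.Value = g_source_ip   ' g_source_ip holds the remote client's source IP
```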

Client source IP addresses will now be passed through to the IdentityGuard server.

What to do if a reprotect fails in SRM… ‘Protection group has protected VMs with placeholders which need to be repaired’


I hit this issue today when a reprotect failed after a Planned Migration. It is worth running through what I did to resolve it without resorting to a ‘Force Cleanup’ reprotect, as there is currently no KB article describing this workaround.

In my case the planned migration itself completed without issues: all VMs were powered on at the Recovery Site and the Recovery Plan finished successfully.

When it came to the reprotect, however, it failed on Step 3, ‘Cleanup storage’, with the error: ‘Protection group ‘xxx’ has protected VMs with placeholders which need to be repaired.’

The reprotect was cancelled, and when I looked at the protection group only 3 of the 11 virtual machines were in an ‘OK’ state. The other 8 showed a mix of error messages, including ‘SRM placeholder datastore could not be accessed’, insufficient space, and so on. Nothing that seemed to correlate.

I tried the reprotect again without the Force Cleanup option and it failed again, so I removed protection from all the VMs with errors and ran the reprotect once more. This time it completed successfully after a few tense minutes.

To get SRM back to a protected state, I then had to delete the placeholder VMs from disk at the Recovery Site and manually reprotect all the VMs.

 

Hope this helps…