I had to scratch around to find a way to export the SSL key for Deep Security Manager. The trick is to find the password for the keystore file that is generated during the Deep Security installation. The rest is then pretty easy. Here is how I did it:

Obtaining Keystore Password

On the DSM Server, navigate to the following directory :

  • c:\Program Files\Trend Micro\Deep Security Manager\installfiles\

In this folder find the following file :

  • genkey.bat

Open the file and look for the “-storepass” parameter. The value next to -storepass is the keystore password. Note this password down.

Exporting the DSM Certificate

The Keystore file is located in the following directory :

  • c:\Program Files\Trend Micro\Deep Security Manager

The Keystore file name is :

  • .keystore

The keytool.exe file is located in the following folder :

  • c:\Program Files\Trend Micro\Deep Security Manager\jre\bin\

To export the current SSL certificate execute the following command from the keystore file location:

  • jre\bin\keytool.exe -export -alias tomcat -keystore .keystore -file dsm.crt

When prompted enter the keystore password.

Once completed you will have the Deep Security Manager certificate.



I was busy with an Exchange 2013 design that included a DAG. I did my initial setup on my home desktop using VMware Workstation. The setup was as follows:

  • 1x AD Server – Windows 2012R2 Server
  • 2x Desktops – Windows 8.1
  • 1x PKI Server
  • 2x Exchange 2013 Servers

I then had to take this and do a design, taking into account that the client already had a cluster to which the DAG VM’s would be added. Here are some of the design considerations I took into account:

  • Virtual Machine Configuration
    • Used VMXNET 3 Adapter
    • Used “Virtual Socket” to allocate vCPU’s and not “Cores per Socket”
    • Used Memory reservations
      • Note the impact on HA when using reservations…design your HA configuration around/for this
    • Note NUMA configurations. If you have to give the VM 10 vCPU’s and you only have 6 pCPU’s per socket (2 sockets = 12 pCPU’s), the VM will not be NUMA optimized.
      • For this I had a look at the CPU Contention on the Cluster and it was low
    • Created multiple vDisks as follows:
      • OS drive (Exchange was also installed here)
      • Database Drive – DB 1 from Server 1
      • Database Drive – DB 2 From Server 2 (basically the remote server DAG Replicated DB would be on this drive for both servers)
  • ESXi Host Configuration
    • This is a tricky one, as the “best way” to configure the ESXi CPU’s is to disable Hyper-Threading. Now if you have a cluster with 10 hosts and only 2 VM’s that need Hyper-Threading disabled…it makes no sense to disable it for the whole cluster. Thus consider the “Resources -> Advanced CPU -> HT Sharing -> None” option in the VM configuration. Make sure the number of vCPU’s is not more than the number of pCPU’s on the processor (NUMA comes into play here).
  • Networking
    • I used an additional VLAN for the Replication network and had no routing for this VLAN
    • One has to evaluate the replication traffic needed, taking into account the ESXi host network card speeds, the number of network cards in the server and any load balancing that might be needed on the pNic’s. I wanted to ensure the replication traffic had enough bandwidth without impacting other VM traffic.
      • The best option here is to have a VDS and ensure that load-based teaming (“Route based on physical NIC load”) is enabled (in my case the client DID NOT have a VDS…)
      • What I did not want is for the replication traffic to “flat line” the pNic on the vSwitch. Thus I created a Port Group and enabled Traffic Shaping on it, limiting the traffic on this Replication Port Group to 750Mb (the servers had 1Gb NIC’s). Thus there “should” always be bandwidth available on a pNic for other VM traffic. If you have vCOps in the environment you can always evaluate this setting and adjust as needed later.
      • I also had a look at the pNic usage on the servers and they were in the low 50-100Mb range at the time.
    • Don’t be fooled by the VMXNET 3 in-guest indication that the speed is 10Gb…if you have 1Gb NIC’s in the server the guest will still state 10Gb. The two have nothing to do with each other. Thus your speed between ESXi hosts will be at 1Gb and not 10Gb.
  • Storage
    • All the documents one reads state the huge IO improvement in Exchange 2013. But you still need to make sure you will have enough IO. Also, in my case I already had a storage unit…so I had to make do with that.
    • I placed the Database vDisks on different RAID groups (not just different LUN’s…I ensured they were also on different RAID groups)
    • The hosts already had multipathing enabled
  • Cluster Settings
    • DRS
      • Created a DRS rule to “keep the VM’s apart” for the VM’s that are part of the DAG, plus the Witness Server. Thus there are 3 servers that are part of the rule.
    • HA
      • Disabled Guest Monitoring for the DAG VM’s
      • I disabled HA for the DAG servers. I did not want the server to auto-start in case of a host failure.
        • Since we have a DAG the DB would fail over to the other Exchange server.
        • If there were any issues on the “failed” VM when starting up, we did not want them to have any impact on the Exchange servers’ DB’s.
        • We added the following process:
          • After Host failure ensure that all DB’s are mounted on the remaining DAG member
          • Ensure users are connected to the new DB on the remaining DAG server
          • Make sure backups were successful on the remaining DAG server
          • Power up Failed DAG Server
          • Make sure Replication is working
          • Activate the DB on its original Server
  • DRP and Backup/Restore
    • Day-to-day backups were already in place
    • Daily backups were being replicated off-site
    • Point here is to ensure that this topic is not left out of the design
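The DRS anti-affinity rule mentioned above can also be created with PowerCLI instead of the vSphere Client. A minimal sketch, assuming hypothetical cluster and VM names (New-DrsRule with -KeepTogether:$false creates a “separate virtual machines” rule):

```powershell
# Connect-VIServer must already have been run against the vCenter
# Cluster and VM names below are examples - replace with your own
$cluster = Get-Cluster -Name "Prod-Cluster"
$dagVMs  = Get-VM -Name "EXCH01","EXCH02","WITNESS01"

# -KeepTogether:$false = anti-affinity (keep the VM's apart)
New-DrsRule -Cluster $cluster -Name "Separate-DAG-VMs" -KeepTogether:$false -VM $dagVMs
```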

I suppose there are many ways to skin a cat. This was the way I did it for this client, given the infrastructure that I had.

  • Can this be done differently – yes…as long as you have reasons for your decisions
  • Explain in your design what other options you looked at (like using in-guest iSCSI, perhaps)
  • State where you got the information from
  • A “Best Practice Guide” is only generic…it gives me guidelines for my design. I need to design for the client what their best practice is for implementing this solution, using external resources that still validate the design, but within the client’s guidelines/framework/limits that I was given (in my case: existing storage, vSwitches, etc.)

Here are some of the documents that I used for my design:

When ESXi 5.5 Update 2 was released I read in the release notes that the vShield Driver in VMtools was renamed. During an installation I did this week I thought to take a screenshot of the new driver. The new name is: Guest Introspection Drivers. As per the release notes it is just a name change, nothing more. Read here about this change in the release notes. Below is a screenshot of the VMtools with the new driver name:


I was busy downloading Deep Security 9.5 for a project I am working on when I noticed a new document with the title: Supported Features by Platform. I must say that this document was really good to read!

So what is this about? Well, there are a few types of agents:

  • Windows Agents
  • Windows with Agent Less (Using NSX or vShield Manager on ESXi)
  • Linux Agents
  • Linux with Agent Less

Then there are the features of the Deep Security Agent that are supported on each of these. For Anti-Malware here is the list:

  • File Scan
  • Registry Scan
  • Memory Scan
  • Smart Scan
  • Real Time Scan

Now with Agent Less protection not all of the above are possible. (Note that you can run Agent protection together with Agent Less protection; this is called the Coordinated Approach.)

Here is the Deep Security 9.5 Document that gives a great overview of what is supported by which Agent.

I was busy consolidating some standalone ESXi hosts today. Basically adding them to my new vCenter and then migrating the VM’s (powered off) to my new cluster and storage. Easy stuff. I noticed that one VM had about 13 snapshots (yea…it’s the developer’s VM…). I thought it would consolidate the snapshots anyway on the target. Well, after the migration all the snapshots were still there. This made me think about the difference between cloning and migrating a VM with snapshots.

In my lab I tested this. I created a VM with some Snapshots:


So I did some tests with this VM.

After a clone of the VM, all the snapshots were consolidated and thus no snapshots were present anymore.

After a migration of the VM, all the snapshots were still present.

I am sure I knew this at some point far, far back, but it is always good to just do a test so that you know the correct expected result.
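If you want to verify the result rather than eyeball the Snapshot Manager, a quick PowerCLI check works too. A minimal sketch (the VM name is an example):

```powershell
# Count the snapshots on the VM after the clone or migration completes
# Expectation: 0 after a clone, unchanged after a migration
$snapCount = (Get-VM -Name "DEV-VM01" | Get-Snapshot).Count
Write-Host "Snapshot count: $snapCount"
```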

I am busy with an internal project where I needed to create some VM’s that comply with the VMware Best Practice Guide. I created a VM and then added all the settings below to the VMX file. This post is just for anyone that needs to copy all the settings instead of typing them out :-)

isolation.tools.autoInstall.disable = "true"
isolation.tools.copy.disable = "true"
isolation.tools.dnd.disable = "true"
isolation.tools.setGUIOptions.enable = "false"
isolation.tools.paste.disable = "true"
isolation.tools.diskShrink.disable = "true"
isolation.tools.diskWiper.disable = "true"
isolation.tools.hgfsServerSet.disable = "true"
isolation.monitor.control.disable = "true"
isolation.tools.ghi.autologon.disable = "true"
isolation.bios.bbs.disable = "true"
isolation.tools.getCreds.disable = "true"
isolation.tools.ghi.launchmenu.change = "true"
isolation.tools.memSchedFakeSampleStats.disable = "true"
isolation.tools.ghi.protocolhandler.info.disable = "true"
isolation.ghi.host.shellAction.disable = "true"
isolation.tools.dispTopoRequest.disable = "true"
isolation.tools.trashFolderState.disable = "true"
isolation.tools.ghi.trayicon.disable = "true"
isolation.tools.unity.disable = "true"
isolation.tools.unityInterlockOperation.disable = "true"
isolation.tools.unity.taskbar.disable = "true"
isolation.tools.unityActive.disable = "true"
isolation.tools.unity.windowContents.disable = "true"
isolation.tools.unity.push.update.disable = "true"
isolation.tools.guestDnDVersionSet.disable = "true"
isolation.tools.vixMessage.disable = "true"
RemoteDisplay.maxConnections = "1"
log.keepOld = "10"
log.rotateSize = "100000"
tools.setInfo.sizeLimit = "1048576"
isolation.device.connectable.disable = "true"
isolation.device.edit.disable = "true"
tools.guestlib.enableHostInfo = "false"
RemoteDisplay.vnc.enabled = "false"
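If you have more than one VM to harden, pushing these settings with PowerCLI beats editing VMX files by hand. A minimal sketch, assuming a hypothetical VM name (New-AdvancedSetting with -Force overwrites a key if it already exists; only a few of the keys are shown, the rest follow the same pattern):

```powershell
# VM name is an example - replace with your own
$vm = Get-VM -Name "MyHardenedVM"

# A subset of the settings from the list above; add the remaining keys the same way
$settings = @{
  "isolation.tools.copy.disable"  = "true"
  "isolation.tools.paste.disable" = "true"
  "isolation.tools.dnd.disable"   = "true"
  "isolation.device.edit.disable" = "true"
  "RemoteDisplay.maxConnections"  = "1"
}

foreach ($key in $settings.Keys) {
  New-AdvancedSetting -Entity $vm -Name $key -Value $settings[$key] -Confirm:$false -Force
}
```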

I have seen a few installations of vShield Manager where the host name of the vShield Manager was not changed to reflect the correct DNS name. I suppose if you only have 1 vShield Manager in your environment it may not be a problem. But consider using Deep Security where you have multiple vCenters, with each of them having a vShield Manager. So I thought to write an article on how to change the vShield Manager hostname.

  • Open the console and login with the “admin” user and password
  • Type “enable” with the correct password
  • If you are in “enable” mode the prompt should have changed from > to #


  • Type “configure terminal”
  • Type “hostname myhostname” (example: hostname vshield01)
  • This command will stop and start the vShield Manager.
  • Type “end”
  • Type “write”  (This will save the configuration)
  • Type “exit” to log out

With regards to the DNS name: the only way I have seen to change this is by running the “setup” command.

Some other useful commands:

  • show filesystem
  • show manager log
  • show log
  • show ethernet
  • show version (Gives the build number also)
  • list (will show all the commands with how to use them)

I was busy testing some functions on a project that I am working on. I needed about 1000 VM’s to test this function on. So I wrote this simple PowerShell script that will do the following:

  • Connect to vCenter
  • Clone my VM a few times (while I sleep)
  • Start the Cloned VM’s
  • Disconnect from vCenter

The VM that I used for cloning was a no-VMDK VM that booted from an ISO image. So the cloning process was not too slow, as there were no VMDK disks to be copied. I thought I would share the script I created:

# *******************************************************
# ** Written by : Hugo Strydom                         **
# ** Email : hstrydom@virtualclouds.co.za              **
# *******************************************************

#Login to vCenter

#Get vCenter Details : vCenter name, User, password
Write-Host "Please enter the vCenter Host Name :"
Write-Host " "
$vCenterName = Read-Host "vCenter Host Name"
$Username = Read-Host "Username"
$SecurePassword = Read-Host "Password" -AsSecureString

#Convert Secure Password to Plain Text
$PASS = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
$PlainPassword = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($PASS)
#Connect to vCenter
Connect-VIServer -Server $vCenterName -User $Username -Password $PlainPassword

#Remove Password from Session
Remove-Variable PlainPassword
Remove-Variable SecurePassword
Remove-Variable PASS

# Do Cloning Tasks

#Define all the Variables for Clone VM

$DSKT = "DSKT-"                     # This defines the VM name prefix that will be used
$CloneVMName = "DSKT-001"           # This is the VM that will be used as the clone source
$ResourcePool = "VDI-Desktops"      # The Resource Pool where the new VM's will be placed
$DataStore = "DSCL-Resource"        # The Datastore where the VM's should be placed
$FolderLocation = "VDI-Desktops"    # Folder where the VM's must be placed
$x = 100                            # The start suffix number used for the VM name (thus DSKT-100 will be the first VM name)

# Cloning Loop

do {
$NewVMName = $DSKT + $x

Write-Host "Creating new VM : $NewVMName "

New-VM -name $NewVMName -VM $CloneVMName -ResourcePool $ResourcePool -Datastore $DataStore -Location $FolderLocation
Start-VM -VM $NewVMName              # This will Power On the new VM
$x = $x + 1                          # Increment $x by 1
} until ($x -eq 1000)                # You can change the 1000 to your own number. This defines the number of VM's that will be created
#Clean up tasks
Write-Host "Disconnecting from vCenter Server : $vCenterName"
Disconnect-VIServer -Server $vCenterName -confirm:$false

Before I start with this post…Disclaimer: This post is not meant to show how to hack systems. Rather, it is to be used to learn how to test the IPS systems that you have deployed. Now that we have that out of the way…

I was giving training a while ago on Trend Micro Deep Discovery. During the training I asked the question whether anyone ever tests their IPS deployments. The answer I got was that they hope it works…sadly not a good answer…
So should you test your IPS rules? My answer is YES! As a Security Professional you should be able to do some basic tests to see if the rules that you have implemented are working. Simple.
I have some basic tests I do when I do implementations of IPS rules. The most basic one I use is MS12-020. This is an RDP exploit that is present in unpatched Windows 7 SP1 and Windows 2008R2. Here is the how-to.

The How to…
I use KALI Linux for all my testing. The application inside KALI that I use is Armitage. Armitage uses the Metasploit module sets. It is really easy to install. The basics are that you download the ISO, install from the ISO, do an apt-get update and then an apt-get upgrade.
For the OS I used Windows 7 SP1 with no further updates installed. Ensure you have RDP enabled but do not select NLA (Network Level Authentication) for RDP sessions. Also ensure the firewall on the Windows side is disabled. Make sure you are not using a production machine…if you run this exploit against an unpatched machine it will Blue Screen the OS!

On the IPS side I used Trend Micro Deep Security. I ensured that I had added the rule set for MS12-020 to my VM, as seen below:
Next, from KALI I start Armitage and search for MS12_020. Make sure you double click on the Auxiliary -> dos -> windows -> rdp -> ms12_020_maxchannelids module. This will open a window. All you have to do is enter the IP address of the OS that you want to run the exploit against. Click “Launch”. You can see in the console window in the back that the exploit was run against the host.
If all goes well you should not have a OS Blue Screen and you should see a event in the IPS events of the VM as below:
As you can see, the IPS module in Deep Security has done a RESET on the connection and blocked the exploit.

Learn how to test IPS systems. It took me a while to learn the basics around using the applications and tool sets that are out there. YouTube is your friend in this matter. There are a lot of free tools that you can use…KALI Linux being the best out there that I know of. With regards to IPS systems, I use Deep Security as I know the product well (I used to work for Trend Micro). Happy Testing!!

I am sure by now everyone knows about CVE-2014-6271, aka the “Bash Bug”. This article will explain how to protect against this vulnerability by using Trend Micro Deep Security. Deep Security Agents can be deployed in two ways depending on the environment. The first is by using an in-guest agent. The other option is Agent-Less, which is only available on VMware hypervisors. Regardless of the type of agent you have, this IPS rule will protect your OS against this vulnerability. Note that Deep Security IPS agents are host based (not perimeter). Thus each host/OS will have this protection enabled if you apply this IPS rule.
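Before relying on the IPS rule, it is worth knowing whether a given host’s bash is vulnerable at all. The widely published quick check for CVE-2014-6271 is a one-liner; run it on a test system, not production:

```shell
# An unpatched bash prints "vulnerable" before "this is a test";
# a patched bash prints only "this is a test"
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```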

Adding the IPS rule to your Base Policy
In my lab I have a top level base Policy named HomeLab Policy. I added the following IPS rule to this Policy:
Once you have added the IPS rule you will see that it also adds the HTTP Protocol Decoding rule set. Once done you should have two additional rules. See below:
Things to consider

  • For the IPS rule to be enforced you must place the Policy in “Prevent” mode (Intrusion Prevention Behavior)
  • You can apply the rule to individual VM’s manually or by doing a “Recommendation Scan”.
  • In my Lab I applied the rule to my top level Policy, thus ensuring all OS’s will get this rule applied regardless of the OS type.

Clients that are using Deep Security with the IPS module can use this IPS rule to provide protection until they can install the needed patches in the OS’s.