VMware Workspace ONE, iPhone, iOS 17, Enterprise Wipe after Update. Issue known since 07.07.2023

by butsch 27. September 2023 17:50




VMware Workspace ONE, iPhone, iOS 17, Enterprise Wipe after update. Issue known since 07.07.2023, still affecting end users as of 27.09.2023

Well, it looks like some things change too fast in the modern workplace.

We wish to inform you of a current concern pertaining to Workspace ONE UEM-enrolled iOS devices that have undergone an upgrade to iOS 17. It has come to our attention that these devices are occasionally being incorrectly identified as compromised following the upgrade. The VMware development team is actively engaged in addressing this matter.


Upon the upgrade to iOS 17, Workspace ONE UEM-enrolled iOS mobile devices are intermittently detected as compromised by the Workspace ONE mobile SDK, leading to the execution of an enterprise wipe. This process removes all enterprise data from these devices and disconnects them from the system.


Our product team is diligently working towards resolving this issue. For devices operating on iOS versions lower than iOS 17, the fix is anticipated to be automatically applied and does not necessitate an SDK/Productivity Apps/UEM upgrade. We estimate that the issue will be resolved by the 29th of September 2023.

Possible Workaround:

To mitigate this problem, re-enrolling the affected devices should provide a solution. As a temporary measure, administrators can disable the 'Compromise Protection' in the SDK profiles within the UEM console. Please be aware that Compromise Protection is a vital security feature and should be re-enabled once the Workspace ONE team issues the fix for this issue.

Note: Enterprise Wipe and/or Device Wipe commands do not land on the device unless the device is unlocked or connected to a network.

Often, it is recommended to use enterprise wipe rather than deleting the device. When the Delete Device command has been completely processed, all commands associated with that DeviceID are purged from the AirWatch DB, including the Enterprise Wipe Command tied to that device's DeviceID.

Therefore, if the Delete Device operation is performed on a device that is turned off/disconnected from the network, it will not receive the Enterprise Wipe Command issued by the console immediately upon the Delete Device command. This is why it is recommended to make use of the Enterprise Wipe Command as this preserves the device records associated with this device's DeviceID.
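The queue behavior described above can be illustrated with a small toy model (a hypothetical Python sketch of a per-DeviceID command queue, not the actual Workspace ONE/AirWatch implementation):

```python
# Toy model of the AirWatch command-queue behavior described above.
# Purely illustrative -- NOT the real Workspace ONE/AirWatch code.

class CommandQueue:
    def __init__(self):
        self.pending = {}  # device_id -> list of queued commands

    def enterprise_wipe(self, device_id):
        # Enterprise Wipe stays queued until the device checks in.
        self.pending.setdefault(device_id, []).append("EnterpriseWipe")

    def delete_device(self, device_id):
        # Delete Device purges ALL commands tied to that DeviceID,
        # including a still-pending Enterprise Wipe.
        self.pending.pop(device_id, None)

    def device_checks_in(self, device_id):
        # An offline device only receives commands once it reconnects.
        return self.pending.pop(device_id, [])

q = CommandQueue()
q.enterprise_wipe("device-1")   # device is offline, wipe stays pending
q.delete_device("device-1")     # purges the pending wipe from the DB
print(q.device_checks_in("device-1"))  # -> [] : the wipe never reaches the device
```

This is why issuing the Enterprise Wipe first (and deleting the record only afterwards) preserves the pending command for an offline device.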













VMWare | M365/AZURE

VMWARE, ESX, Netapp Storage AFF-C190, NFS 3, Server OS freeze/stall during Veeam Backup or high load

by butsch 19. January 2023 00:26


SOLUTION: Performance: change the share from NFS 3 to NFS 4 > performance problem and freezes gone, OR optimize the settings for the existing NFS 3 share (fine-tune).

SOLUTION: NFS 4.1: NetApp ONTAP failover fails with NFS 4.1 > this is solved with ESX 7.0 GA and 6.7 P04 (according to a NetApp partner).

SOLUTION: If you have sensitive high-end server OS and apps, keep those on the same ESX machine as the Veeam Backup Proxy. If you have many sensitive high-end servers, either tune them all to perfection regarding NFS and/or install a Veeam Backup Proxy on EACH ESX cluster (as we understood it).


You will find in some older NetApp KBs: NFS.nfsvers: This setting controls the version of NFS that the storage system uses. Using NFSv3 is recommended for optimal performance when using VMware vSphere.


Why we wrote this collection and braindump of the last days:

Although I am not responsible for storage planning or installation, I frequently become involved in resolving performance issues that, from experience with other AV products, are initially pinned on anti-virus software (one of my areas of expertise). Recently, we encountered a case where teams from multiple departments spent weeks investigating the cause of an issue. Initially, as so often, they suspected that McAfee/Trellix ENS virus protection was to blame, as it is a common target in these situations. However, in most cases the underlying cause of performance issues is much more complex and cannot be easily explained. Factors such as certificate revocation and code-signing caches, as well as APIs between different systems and processes that interact with one another, can all contribute to performance problems.

Modern anti-virus software, like McAfee ENS, is designed to be comprehensive and can sometimes cause performance issues due to its complexity, and also because it is half a client firewall these days. This was particularly evident in our case, where the impact was mainly seen on Server 2016 with Citrix Terminal Server and ERP solutions. Despite upgrading the operating system, optimizing and trimming McAfee, and thoroughly investigating every process and signature, the source of the problem remained, and tickets went back to dispatch several times.

In the end, we found out that during the storage migration someone forgot to optimize or enhance the parameters on an NFS v3 share on NetApp. After creating a fresh new share with NFS 4, everything was fine and performed well. However, there seems to be discussion in other companies about what to use (NFS 3/4/4.1) with NetApp.

For some time, there was also a bug when upgrading ONTAP with NFS v4.1 in use.


With NFS 4.1 and the correct ONTAP version we see the following:

Veeam backup speed increased for both full and incremental runs, and the OS freezes are gone.


The tips below may help some of you. We fully understand that changing those parameters can be risky and is a balance between reliability and performance; NFS v4.1 has caused problems in the past, but ONLY when the storage people upgraded the backend, e.g. once a year.

But if it is so slow that a Veeam backup can take down such an expensive storage system, something must be wrong where these things come together.


The main argument other people gave us:

Veeam Backup for VMware can be used to backup virtual machines running on VMware vSphere using the Network File System (NFS) protocol. However, if the NFS version being used is version 3 and the storage array is a NetApp device, there is a known issue where the hosts running the virtual machines may freeze or become unresponsive during the backup process.

This can occur because NFS v3 has a number of limitations, including poor performance when dealing with large numbers of small files and poor scalability. The solution is to upgrade to NFS v4, which addresses these limitations and improves performance. Additionally, Veeam recommends checking whether there is any specific configuration on NetApp that can help mitigate this issue.


If you want to stick with NFS3:

There are several performance settings for NFS version 3 (NFSv3) on a NetApp storage system that can be configured to optimize performance when used with VMware vSphere. Some of these settings include:


NFS.tcp.enable: This setting controls whether the storage system uses the TCP protocol for NFS communication. Enabling this setting can improve performance in environments with high network latency.

NFS.udp.enable: This setting controls whether the storage system uses the UDP protocol for NFS communication. Enabling this setting can improve performance in environments with low network latency.

NFS.mount.options: This setting controls the mount options used when mounting the NFS export on the VMware host. Some options that can be used to improve performance include "rsize" and "wsize", which control the maximum read and write size of the NFS packets.

NFS.nfsvers: This setting controls the version of NFS that the storage system uses. Using NFSv3 is recommended for optimal performance when using VMware vSphere.

NFS.nolargefiles.enable: This setting controls whether or not large files are supported by the storage system. Enabling this setting can improve performance when dealing with large files.

NFS.readahead.enable: This setting controls whether or not the storage system uses readahead when reading data from an NFS export. Enabling this setting can improve performance when dealing with sequential data access.

It's important to note that these settings and their optimal values may vary depending on the specific use case and the environment in which the storage system is being used. Therefore, it is recommended to check the NetApp documentation or consult the NetApp support team for specific guidance on performance tuning for your environment.
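As a rough illustration of why "rsize"/"wsize" matter (a hypothetical back-of-the-envelope calculation, not a NetApp tool): the number of NFS READ/WRITE RPCs needed for a transfer is roughly the transfer size divided by the configured read/write size.

```python
import math

def nfs_rpc_count(transfer_bytes: int, rw_size: int) -> int:
    """Rough number of NFS READ/WRITE RPCs needed for a transfer.

    Purely illustrative: real traffic also depends on caching,
    readahead and the client's I/O pattern.
    """
    return math.ceil(transfer_bytes / rw_size)

one_gib = 1024 ** 3
# A small rsize/wsize (e.g. 64 KiB) vs. a larger 1 MiB setting:
print(nfs_rpc_count(one_gib, 64 * 1024))    # -> 16384 RPCs
print(nfs_rpc_count(one_gib, 1024 * 1024))  # -> 1024 RPCs
```

Fewer, larger RPCs generally mean less per-request overhead on sequential workloads such as a Veeam full backup.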


External Links used

Lenovo has an extremely good whitepaper:



Some external Links pro/contra NFS V4.1 on Netapp




Setup NFS 3 with ONTAP


Setup NFS 4.1 with ONTAP






Veeam Proxy




Screenshot showing on which ONTAP OS version the problem with NFS 4.1 is solved:


2019 - warning from Ingram Europe after some customers lost data after mixing NFS 3 and 4 for migration. That is the problem which should be solved in 2023.


(Old warning, so you see why so many did not change to NFS 4 even though they knew v3 had performance issues on NetApp.)

For a current reason, we would like to point out an existing problem in connection with VMware (ESX 6.x+) and datastores using NFS v4.1.

NFS is still the first choice and offers a number of advantages over iSCSI or FC LUNs, but there is a bug that rules out productive use of NFS v4.1 datastores.

This bug causes ESX hosts' access to NFSv4.1 datastores to be interrupted during actions that should be non-disruptive. These actions include:

ONTAP Upgrade

LIF Migration

Storage Takeover

Storage Giveback

MetroCluster Switchover


We recommend that customers who already use NFS v4.1 datastores in production downgrade to NFS v3. It is essential to note that NFS v3 and NFS v4 use fundamentally different locking mechanisms.

Therefore, avoid mounting one and the same datastore on different ESX hosts simultaneously as both an NFS v3 and an NFS v4 datastore, as this endangers data integrity. The safe method is to mount an NFS v3 datastore in parallel to the NFS v4 datastore and move the VMs via Storage vMotion.

ESXi6.0U3 and ESXi6.5 NFS v4.1 datastores might become inaccessible during failover

VMWare NFSv4.1 Datastores see Disruption During Failover Events Including Upgrade







February 02/2021 Windows Updates Deinstall Adobe Flash on Server and Clients W10 – Attention VMware vCenter/ESX Admins

by butsch 18. February 2021 19:38


With the February 2021 Windows updates, Adobe Flash (the MS-distributed version) is uninstalled. Manually installed Adobe Flash binaries remain on the systems. Some of our customers still need Flash for web access to vCenter/ESX.


If you cannot reach the VMware vCenter via HTML5, simply install one last Adobe Flash version manually. (https://get.adobe.com/de/flashplayer/about/)


Access vCenter 6.5:



VMware products which still need Flash:







VMWare | Hotfixes / Updates

BACKUP configuration of VMware ESXi with PowerCLI

by butsch 13. September 2020 15:47

Backup the config from a VMware ESXi free version up to 6.7.

There is a nice way to back up the free ESXi configuration settings into one file. This is useful to document the ESXi before you update it, if you don't want to make 200 screenshots.

  1. Download and install PowerCLI from VMware.
  2. Start PowerShell elevated (as Administrator).
  3. set-executionpolicy remotesigned
  4. Run from "C:\Program Files (x86)\VMware\Infrastructure\PowerCLI\Scripts".
  5. Do the backup:

set-executionpolicy remotesigned -Confirm:$False

Set-PowerCLIConfiguration -InvalidCertificateAction ignore -Confirm:$False

Connect-VIServer -Server -User root -Password **********

Get-VMHostFirmware -VMHost -BackupConfiguration -DestinationPath C:\20_SERVER_ESX_ASUS_P11C-M4L\03_ESX_CONFIG_BACKUP

Disconnect-VIServer -Server -Confirm:$False




C:\> Set-PowerCLIConfiguration -InvalidCertificateAction ignore -Confirm:$False

Scope ProxyPolicy DefaultVIServerMode InvalidCertificateAction DisplayDeprecationWarnings WebOperationTimeout


----- ----------- ------------------- ------------------------ -------------------------- -------------------

Session UseSystemProxy Multiple Ignore True 300


AllUsers Ignore



C:\> Connect-VIServer -Server -User root -Password **********

Name Port User

---- ---- ---- 443 root



C:\> Get-VMHostFirmware -VMHost -BackupConfiguration -DestinationPath C:\20_SERVER_ESX_ASUS_P11C-M4L\03_ESX_CONFIG_BACKUP

Host Data

---- ---- 20_SERVER_ESX...



C:\> Disconnect-VIServer -Server -Confirm:$False






Restore from the backup would be:


set-executionpolicy remotesigned -Confirm:$False

Set-PowerCLIConfiguration -InvalidCertificateAction ignore -Confirm:$False

Connect-VIServer -Server -User root -Password **********

Set-VMHostFirmware -VMHost -Restore -SourcePath C:\20_SERVER_ESX_ASUS_P11C-M4L\03_ESX_CONFIG_BACKUP \configBundle- -HostUser root -HostPassword **

Disconnect-VIServer -Server -Confirm:$False



You can view the config if you unpack the roughly 10 nested TAR and GZIP layers inside the backup file:
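A small helper can automate peeling those layers (a hedged sketch; the actual layer count and the file names inside your configBundle may differ):

```python
import gzip
import shutil
import tarfile
from pathlib import Path

def unpack_nested(archive: Path, dest: Path) -> None:
    """Recursively unpack nested .tgz/.tar/.gz layers into dest.

    Illustrative helper for inspecting an ESXi configBundle;
    adjust paths and names for your actual backup file.
    """
    dest.mkdir(parents=True, exist_ok=True)
    work = [archive]
    while work:
        item = work.pop()
        if tarfile.is_tarfile(item):
            # tarfile auto-detects gzip-compressed tars (.tgz)
            with tarfile.open(item) as tar:
                tar.extractall(dest)
                extracted = [dest / m.name for m in tar.getmembers() if m.isfile()]
            work.extend(extracted)
        elif item.suffix == ".gz":
            # plain gzip layer (not a tar): decompress it
            out = dest / item.name[:-3]  # strip ".gz"
            with gzip.open(item, "rb") as src, open(out, "wb") as dst:
                shutil.copyfileobj(src, dst)
            work.append(out)
        # anything else is a plain file: leave it there for inspection
```

Usage would be e.g. `unpack_nested(Path("configBundle.tgz"), Path("unpacked"))`, after which the plain config files can be browsed in the destination folder.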




VMware Link:




VMWare | Deployment

ESX: VMs with wrong CPUID mask show bluescreen after 5.X > 6.0

by butsch 1. June 2016 14:14

This is due to the NX/XD flag CPUID mask set on machines. We had seen this especially on Server 2012 R2 VMs which were installed on ESX 5.0/5.X, where the flag had to be set so the machines would run. Be sure to catch this in advance or in time, because the SRV2012 will start in Recovery Mode at some point.

2008R2 > Bluescreen

2012R2 > Boots into Recovery Console


To resolve this issue, reset the CPUID Mask settings on the affected virtual machine.

To reset the CPUID Mask settings:

  1. Using the vSphere Client, connect to vCenter Server and locate the affected virtual machine.
  2. Power off the virtual machine.
  3. Right-click the virtual machine and click Edit Settings > Options > CPUID Mask > Advanced.
  4. Click Reset All to Default to reset the CPUID Mask.
  5. Click OK > OK, then power on the virtual machine.
  6. The virtual machine now shows the correct EVC mode.

Note: If these steps do not resolve the issue, upgrade the virtual machine's virtual hardware to the latest version. For more information, see Upgrading a virtual machine to the latest hardware version (1010675).

Here is the relevant Link where the Flag was set:


Here is a script to report the flags on all machines:





VMWARE / VDI malware protection: Symantec, Trend and McAfee

by butsch 24. February 2015 21:21

Symantec Endpoint Protection still has no agentless virus scan version like Trend or McAfee with MOVE. Those use the vShield API from VMware and need no software running directly in the VM. (http://www.vmware.com/pdf/vshield_55_admin.pdf)

BUT tests have shown that even with the agent in the VM/VDI, Symantec SEP 12.X is faster in daily tracking, stable status and scanning, and only slow when the machine does virus pattern updates once a day.

Keep in mind that most anti-virus vendors only update the main definitions once a day (McAfee at 17:00 CET) and the rest is GTI/0-day releases on all three.

So even with the agent in VDI machines you get, as a rule of thumb, equal or better performance.

Also keep in mind that anti-virus APIs like the one from Microsoft have been a source of a lot of trouble, false events and finger-pointing over the last few years. You can decide where you want that:

  1. your antivirus producer and MS


  2. Between your antivirus producer and VMWARE

Worth mentioning here would also be a hypervisor-level solution, which mixes things up again.

The problem in general may no longer be so current, since NetApp and all the new companies coming out with Flash/SSD storage try to solve it on the other side.

Gartner Magic Quadrant


Mentioned products in terms of VM in those articles:


McAfee's Management for Optimized Virtual Environments (MOVE) has offered optimized anti-malware scanning in virtualized environments for two years, and now MOVE 2.5 offers agentless anti-malware scanning in VMware environments using native vShield API integration.


Symantec does not yet offer an "agentless" version for optimizing anti-malware scanning in virtualized environments (although its shared Insight cache feature can be used to improve performance).

2012 Symantec SEP 12.1 and Mcafee MOVE under VMware 5.X


2012 Symantec SEP 12.1 and Trend


Back in 2011 Trend was faster

2011 Symantec SEP 11, Trend and Mcafee





Server 2012R2, VMware 5.0.X Blue screen/BSOD, Event 41, CPUID Mask Flag

by butsch 19. January 2015 19:43



You have a blue screen and Event 41 under ESX 5.0.0 when you run Server 2012 R2, because your VMware team said it should run.

(You ask > why does it not show up under Supported OS then? They say > VMware says it's supported; the list just shows it's not.)

Well, not in a validated environment, for sure!



Event 41

The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

The computer has rebooted from a bugcheck. The bugcheck was: 0x00000109 (0xa3a01f59c38d1747, 0xb3b72be0160d1792, 0x00000000c0000103, 0x0000000000000007).

A dump was saved in: C:\Windows\MEMORY.DMP. Report Id: 011915-11203-01.


Download NirSoft BlueScreenView and run it on the server (just unpack and run the exe) to analyze the memory dump if you want to be sure.



Two ways to solve this:

  1. UPGRADE VMware 5.0.0 to 5.0.1 U-something and everything behind it > a 2-day project
  2. Change a CPUID Mask Flag

Here is what to change if you HAVE AMD servers:

  1. Shutdown Server
  2. Edit Settings
  3. Options
  4. Advanced


Before the change on AMD (attention: different settings for AMD and Intel CPUs)





Screenshot: This is what to change for AMD

Vmware KB 2060019

To work around this issue, manually create a CPUID mask for the affected virtual machines:


To manually create a CPUID mask for the affected virtual machines:

  • Power down the virtual machine.
  • Right-click the virtual machine and click Edit Settings.
  • Click the Options tab.
  • Select CPUID Mask under Advanced.
  • Click Advanced.
    • For Intel:
      • Under the Register column, locate the edx register under Level 80000001.
      • Paste this into the value field:


        For example,
        cpuid.80000001.edx = ----:0---:----:----:----:----:----:----


    • For AMD:
      • Select the AMD Override tab.
      • Change cpuid.80000001.edx.amd = -----------H-------------------- to
        cpuid.80000001.edx.amd = ----0---------------------------
  • Click OK to close the virtual machine properties.
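The mask strings above follow a simple per-bit syntax. The toy interpreter below shows how we read them ('-' = passthrough, '0' = force the bit to 0, 'H' = take the host CPU's value); this is an illustration only, not VMware code — see VMware KB 2060019 for the authoritative semantics:

```python
# Toy interpreter for a VMware-style CPUID mask string (illustrative only).
# '-' leaves a bit unchanged, '0' forces it to 0 (e.g. hiding the NX/XD bit),
# 'H' takes the bit from the host CPU.

def apply_mask(mask: str, host_bits: str) -> str:
    mask = mask.replace(":", "")          # the colons are only cosmetic separators
    out = []
    for m, h in zip(mask, host_bits):
        if m == "-":
            out.append(h)                 # passthrough
        elif m == "0":
            out.append("0")               # force the bit off
        elif m == "H":
            out.append(h)                 # explicit host value
        else:
            raise ValueError(f"unhandled mask character {m!r}")
    return "".join(out)

host = "1" * 32                            # pretend every feature bit is set
masked = apply_mask("----:0---:----:----:----:----:----:----", host)
print(masked)  # -> 11110111111111111111111111111111 (fifth bit from the left forced to 0)
```

This is why changing the single 'H' to '0' in the AMD override line above hides exactly one feature bit from the guest.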

Remark 12.02.2015

Under ESX 5.5.0 build 2068190, BOTH the Intel and AMD servers show the flag as


without the colons (:). As an example, on an Intel Xeon X5675, 3.07 GHz CPU,

on an ESX server where we could select "Server 2012 x64" as a new machine.



