Disable Dark Mode for Outlook in OSX?!

Yes, after the latest update MS Office is now also capable of supporting the OSX Dark Mode. Nice, but rather annoying for MS Outlook I must say: dark mode makes reading the different panes very challenging. I went looking for an option in the UI that would allow me to change the default, but unfortunately I couldn’t find one. I did, however, find a way to disable it through the command line. For MS Outlook in particular, this is how you do it:

defaults write com.microsoft.Outlook NSRequiresAquaSystemAppearance -bool yes
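
If you change your mind later, deleting the key restores the default behavior (a standard defaults operation, shown here for Outlook):

defaults delete com.microsoft.Outlook NSRequiresAquaSystemAppearance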

You can do this for other apps as well if you want; simply replace the identifier string of Outlook (com.microsoft.Outlook) with the string of the app you want to disable it for. If you don’t know the string, you can find it as follows:

osascript -e 'id of app "Outlook"'

In the above example I am looking for the identifier of Outlook, but this could of course be “Spotify” or anything else as well. Figured I would share this, as I found myself searching for half an hour.
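
To give a complete example: the following looks up Spotify’s identifier (at the time of writing this returns com.spotify.client, but verify the output on your system) and then applies the same setting. Note that the app typically needs to be restarted for the change to take effect.

osascript -e 'id of app "Spotify"'

defaults write com.spotify.client NSRequiresAquaSystemAppearance -bool yes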

The post Disable Dark Mode for Outlook in OSX?! appeared first on Yellow Bricks.

Posted in dark mode, microsoft, osx, outlook, Server | Comments Off on Disable Dark Mode for Outlook in OSX?!

VMware Skyline Customer Update

Happy Holidays from the VMware Skyline™ team! We made several exciting announcements last month at VMworld in Barcelona, including the GA of Version 2.0 and the availability of Skyline Log Assist™, the new Inventory Filter feature, Data Segmentation, and Enhanced User Access Permissions. Skyline Log Assist eliminates the need for customers

The post VMware Skyline Customer Update appeared first on VMware Support Insider.

Posted in Datacenter, Inside Scoop, Knowledge Base, Patches, proactive support, Resolution Paths, Support Experience, Tech Talk, Videos | Comments Off on VMware Skyline Customer Update

Top 20 Articles for vSAN, November 2018

  1. Component metadata health check fails with invalid state error
  2. “Host cannot communicate with all other nodes in vSAN enabled cluster” error
  3. vCenter Server 6.0 Update 2 on non-vSAN enabled ESXi hosts displays the message: Retrieve a ticket to register the vSAN VASA Provider
  4. Status of TLSv1.1/1.2 Enablement and TLSv1.0 Disablement across VMware products
  5. The …

The post Top 20 Articles for vSAN, November 2018 appeared first on VMware Support Insider.

Posted in KBTV, Top 20 | Comments Off on Top 20 Articles for vSAN, November 2018

Top 20 Articles for NSX, November 2018

  1. Virtual machine in ESXi is unresponsive with a non-paged pool memory leak
  2. Licensing vShield 1.x/5.0
  3. VMs running on ESXi 5.5 with vShield endpoint activated fails during snapshot operations
  4. Performing vMotion or powering on a virtual machine being protected by vShield Endpoint fails
  5. When using VMware vShield App Firewall, virtual machines fail to connect to the …

The post Top 20 Articles for NSX, November 2018 appeared first on VMware Support Insider.

Posted in KB Digest, Top 20 | Comments Off on Top 20 Articles for NSX, November 2018

Top 20 Articles for vSphere, November 2018

  1. “The transaction log for database ‘VIM_VCDB’ is full” error on a Microsoft SQL DB server
  2. ESXi 5.5 Update 3b and later hosts are not manageable after an upgrade
  3. “Host IPMI system event log status” alarm in vCenter Server
  4. Determining where growth is occurring in the vCenter Server database
  5. ESXi host disconnects intermittently from vCenter Server

The post Top 20 Articles for vSphere, November 2018 appeared first on VMware Support Insider.

Posted in KB Digest, Top 20 | Comments Off on Top 20 Articles for vSphere, November 2018

New book: VMware vSAN 6.7 U1 Deep Dive

Cormac Hogan and I have been working late nights and weekends over the past months to update our vSAN book material. Thanks Cormac, it was once again a pleasure working with you on this project! As you may know, we released two versions of a vSAN-based book through VMware Press, titled vSAN Essentials. As mentioned before, after restructuring and rewriting a lot of the content we felt that the title no longer matched the content, so we decided to rebrand it as the vSAN 6.7 U1 Deep Dive. After very thorough reviews from Frank Denneman and Pete Koehler (thanks guys!), we completed it this week, adding a great foreword by our business unit’s SVP and General Manager, Yanbing Li.

Cormac and I decided to take the self-publishing route for this book, which allows us to set a great price for the ebook and enable the Amazon Matchbook option, giving everyone who buys the paper version through Amazon the option to buy the e-book at a nice discount! As prices vary by location, I am only going to list the USD prices; please check your local Amazon website for localized pricing. Oh, and before I forget: I would like to recommend buying the ebook flavor! Why? Well:

“On average, each printed book releases 8.85 pounds of carbon dioxide into the environment. Together, the newspaper and book-printing industries cut down 125 million trees per year and emit 44 million tons of CO2.”

We appreciate all support, but we prefer the cleanest option from an environmental standpoint, which is also why we priced the ebook a lot cheaper than the paper version. Anyway, here are the links to the US store. We hope you enjoy the content, and of course, as always, an Amazon review would be appreciated! Interestingly, it seems we already reached number 1 in the Virtualization and Storage categories before this announcement. Thanks everyone, we really appreciate it!

The post New book: VMware vSAN 6.7 U1 Deep Dive appeared first on Yellow Bricks.

Posted in 6.7, 6.7 u1, cloud, Data Recovery, deep dive, deepdive, Server, Software Defined, Storage, Virtual SAN, VMware, vsan | Comments Off on New book: VMware vSAN 6.7 U1 Deep Dive

New KB articles published for the week ending 8th December, 2018

  1. VMware App Volumes: Windows Search service is started in-spite of the service disabled in AppVolumes 2.14 (Date Published: 03-12-2018)
  2. VMware ESXi: Virtual machine becomes inaccessible or hangs after a power outage or loss of connectivity to the storage (Date Published: 07-12-2018)
  3. VMware ESXi: #PF Exception 14 in world #####:Vol3JournalE (Date Published: 07-12-2018)
  4. VMware Integrated OpenStack: VIO cannot …

The post New KB articles published for the week ending 8th December, 2018 appeared first on VMware Support Insider.

Posted in KB Digest, Knowledge Base | Comments Off on New KB articles published for the week ending 8th December, 2018

VMware vCloud Director 9.5 – Cross-VDC Networking Blog Series – Design Considerations and Conclusion

Design Considerations

Let’s discuss some of the design considerations for Cross-VDC Networking inside of vCD. It is important to note that although native NSX supports up to 16 sites (or 16 vCenters), vCD 9.5 as of today supports only up to four (4) sites.

Below are applicable considerations pulled from the NSX Cross VC Design Guide.

When deploying a Cross-VC NSX solution across sites, the requirements for interconnectivity between two sites are:

  1. IP Connectivity (Layer 3 is acceptable)
  2. 1600+ MTU for the VXLAN overlay
  3. < 150 ms RTT latency

In addition, it’s important to note that since logical networking spans multiple vCenter domains, there must be a common administrative domain for both vCenter domains/sites.

The physical network can be any L2/L3 fabric supporting a 1600-byte MTU or greater. The physical network becomes the underlay transport for logical networking and forwards packets between VTEPs. The physical environment is unaware of the logical networks or the VXLAN encapsulation; encapsulation/de-encapsulation of the VXLAN header is done by the VTEPs on the respective ESXi hosts, but the physical network must support a 1600-byte MTU to be able to transport the VXLAN-encapsulated frames.
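
Both the MTU and the latency requirements can be verified from an ESXi host with vmkping, using the VXLAN netstack, the don’t-fragment flag, and a 1572-byte payload (1600 bytes minus the IP and ICMP headers); the destination VTEP IP below is a placeholder:

vmkping ++netstack=vxlan -d -s 1572 <remote-VTEP-IP>

If the ping succeeds, the path carries 1600-byte frames, and the reported round-trip times should stay below the 150 ms requirement.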

Typically, L2/L3 over dedicated fiber or a shared medium like an MPLS service from an ISP is used for connectivity between sites, with L3 connectivity being preferred for scalability and to avoid common layer 2 issues such as propagation of broadcast traffic over the DCI (data center interconnect) link or STP (spanning tree protocol) convergence issues.

Once the NSX Manager at Site-A is deployed via the standard NSX Manager installation procedure (NSX Manager is deployed as an OVF file), it can be promoted to the primary role.

Once the primary NSX Manager is configured, the Universal Control Cluster (UCC) can be deployed from it. In line with standard design guide recommendations for resiliency, the NSX Controllers should be deployed on separate physical hosts; anti-affinity rules can be leveraged to ensure that multiple NSX Controllers don’t end up on the same physical host. If NSX Controllers are deployed on the same host, resiliency is lost, because a physical host failure can bring down more than one controller, or possibly even the entire controller cluster if all controllers are on the same host.
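
Such a rule can be created in the vSphere Client or scripted. As a minimal sketch, assuming the govc CLI is available and the controllers are named ucc-01 through ucc-03 (hypothetical names, as is the cluster name), a DRS anti-affinity rule could look like this:

govc cluster.rule.create -cluster=MgmtCluster -name=ucc-anti-affinity -enable -anti-affinity ucc-01 ucc-02 ucc-03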

The controllers distribute the forwarding paths to the vSphere hosts and are completely separated from the data plane. If one controller is lost, the UCC will keep functioning normally. If two controllers are lost, the one remaining controller will go into read-only mode: new control plane information will not be learned, but data will keep forwarding.

If the entire controller cluster is lost, again, the data plane will keep functioning. Forwarding path information on the vSphere hosts does not expire; however, no new information can be learned until at least two controllers are recovered.

This can be mitigated by enabling Controller Disconnected Operation (CDO) mode, which ensures that data plane connectivity is unaffected in a multi-site environment when the primary site loses connectivity. You can enable CDO mode on the secondary site to avoid temporary data plane connectivity issues when the primary site is down or not reachable, and on the primary site to cover control plane failures.

CDO mode avoids the connectivity issues during the following failure scenarios:

  1. The complete primary site of a cross-vCenter NSX environment is down
  2. WAN is down
  3. Control plane failure

Note that CDO mode is disabled by default.

When CDO mode is enabled and a host detects a control plane failure, the host waits for the configured time period and then enters CDO mode. You can configure how long the host waits before entering CDO mode; by default, the wait time is five minutes.

NSX Manager creates a special CDO logical switch (4999) on the controller; the VXLAN Network Identifier (VNI) of this special CDO logical switch is unique from all other logical switches.

When CDO mode is enabled, one controller in the cluster is responsible for collecting all the VTEP information reported by the transport nodes and replicating the updated VTEP information to all other transport nodes. Once in CDO mode, broadcast packets such as ARP/GARP and RARP are sent to the global VTEP list. This allows VMs to be vMotioned across vCenter Servers without any data plane connectivity issues.

Universal Control VM Deployment and Placement

The Universal Control VM is the control plane for the UDLR. Similar to the DLR Control VM in non-Cross-VC NSX deployments, the Universal Control VM is deployed on the Edge cluster and peers with the NSX Edge appliances. Since Universal Control VMs are local to the vCenter inventory, NSX Control VM HA does not span vCenter domains: if deployed in HA mode, the active and standby Control VMs must be deployed within the same vCenter domain, and there is no failover or vMotion of Universal Control VMs to another vCenter domain. The Control VMs are local to their respective vCenter domain.

A deployment that does not have Local Egress enabled will have only one Universal Control VM for a UDLR. If there are multiple NSX Manager domains/sites, the Control VM will sit at only one site, the primary site, and peer with all ESGs across all sites.

In an Active/Standby vCD deployment (the Tenant Layer in our case), upon active site failure the Provider will need to manually redeploy the tenant UDLR Control VM on the standby (now active) site. Promoting the secondary site to primary is a prerequisite the Provider will have to complete upon total primary site failure.

A multi-site, multi-vCenter deployment that has Local Egress enabled (in our case the Provider Layer) will have multiple Universal Control VMs for a UDLR, one for each respective NSX Manager domain/site; this enables site-specific North/South egress. If there are multiple NSX Manager domains/sites, there will be a Control VM at each site; each Control VM will also connect to a different transit logical network, peering only with the ESGs local to its site. Upon site failure, no Control VM needs to be manually redeployed at a new primary site, because each site already has a Control VM deployed.

Stateful Services

In an Active/Passive North/South deployment model across two sites, it’s possible to deploy the ESG in HA mode within one site, where the ESG runs stateful services such as firewall and load balancing. However, HA is not deployed across sites.

An important consideration is that the stateful services need to be manually replicated at each site. This can be automated via custom scripts leveraging the NSX REST API, as sketched below. The network services are local to each site in both the Active/Passive and the Active/Active North/South egress models.
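
As an illustration of such a script, this sketch copies the NAT configuration from an edge at Site-A to its counterpart at Site-B using the NSX-V edge NAT API; the manager hostnames, edge IDs, and credentials are placeholders, and the same pattern applies to other service configurations:

# Pull the NAT config from the Site-A edge
curl -sk -u admin:password -H 'Accept: application/xml' \
  https://nsxmgr-a.example.com/api/4.0/edges/edge-1/nat/config > nat.xml

# Push the same NAT config to the Site-B edge
curl -sk -u admin:password -H 'Content-Type: application/xml' \
  -X PUT -d @nat.xml \
  https://nsxmgr-b.example.com/api/4.0/edges/edge-7/nat/config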

Graceful Restart

One item to note is that Graceful Restart is enabled on ESGs by default during deployment. In a multi-site environment using ESGs in ECMP mode, it should typically be disabled.

If it is left at the default and aggressive timers are set for BGP, the ESG will see traffic loss on failover in an ECMP environment, because graceful restart preserves forwarding state. In this case, even if the BGP keepalive/hold timers are set to 1/3 seconds, the failover can take longer. The only scenario where Graceful Restart may be desired on an ESG in an ECMP environment is when the ESG needs to act as a GR helper for a physical top-of-rack (ToR) switch that is Graceful Restart capable. Graceful Restart is utilized more in chassis architectures with dual route processor modules and less so on ToR switches.
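
Graceful Restart can be toggled per edge through the routing API. As a minimal sketch (hostname, edge ID, and credentials are again placeholders), you could fetch the BGP configuration, flip the gracefulRestart element to false, and push it back:

# Fetch the current BGP configuration of the edge
curl -sk -u admin:password \
  https://nsxmgr-a.example.com/api/4.0/edges/edge-1/routing/config/bgp > bgp.xml

# Disable graceful restart in the config, then apply it
sed 's#<gracefulRestart>true#<gracefulRestart>false#' bgp.xml > bgp-new.xml
curl -sk -u admin:password -H 'Content-Type: application/xml' \
  -X PUT -d @bgp-new.xml \
  https://nsxmgr-a.example.com/api/4.0/edges/edge-1/routing/config/bgp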

Final Cross-VDC Considerations

While Cross-VDC networking presents many new networking capabilities, there are a few things we’ve learned that are not covered as of today. These are important factors to consider when deploying Cross-VDC for your tenants.

  1. The Universal Distributed Firewall (UDFW) is not available via vCloud Director 9.5. Any DFW rules will need to be created per OrgVDC site and managed independently.
  2. Network services within the respective OrgVDC Edges will need to be managed independently. Therefore, a NAT rule that exists on Site-A does not propagate to Site-B; this is an important factor to consider during failover scenarios.
  3. Proper thought needs to be put into ingress of traffic between multiple sites. Consider using a Global Load Balancer (GLB) technology to manage availability between sites.
  4. As expected, Cross-VDC networking only works with NSX-V. NSX-T has a different interpretation of multi-site capability, and this is something we are investigating for future vCD releases.

Conclusion

While this blog series covered many aspects of Cross-VDC networking within vCloud Director, it only scratches the surface of the design considerations, use cases, and feature sets available inside of vCloud Director.

If you are interested in learning more or in discussing a potential design, please reach out to your VMware Cloud Provider field team. Thanks again for reviewing our material!

Daniel, Abhinav, and Wissam

The post VMware vCloud Director 9.5 – Cross-VDC Networking Blog Series – Design Considerations and Conclusion appeared first on VMware Cloud Provider Blog.

Posted in NSX, vCloud Director, VMware Cloud Provider | Comments Off on VMware vCloud Director 9.5 – Cross-VDC Networking Blog Series – Design Considerations and Conclusion