vCloud Director Extender 1.1.0 – GA Announcement

Author – Guy Bartram

Since vCloud Director 9, we have been shipping vCloud Director Extender (VMware’s replacement for vCloud Connector). As of 19th April 2018, vCloud Director Extender 1.1.0 is GA, with new features that make on-boarding migrations even easier for providers and tenants alike.

vCloud Extender provides a simple, tenant-controlled solution to stretch Layer 2 networks and migrate workloads (keeping the same IP configuration) from an on-premises vCenter to a cloud service provider’s vCloud Director virtual data center environment. This allows customers to meet their ‘move to cloud’ objectives, and cloud service providers to capture the opportunity to move customers to their cloud in a non-disruptive manner.

Moving workloads to cloud, especially hyperscale clouds where the hypervisors are different, often involves complex conversions to different formats, with attendant risk, re-configuration, time, and cost. VMware is providing consumers with a simple way to migrate to a VMware provider cloud without conversion, without re-configuration, in a UI they know, and at their own migration pace. Installation for tenants couldn’t be easier: deploy the vCloud Director Extender appliance and run the new standalone Setup UI. Tenant usability has also been improved, with the vCloud Extender UI plugin to vCenter providing configuration, monitoring, and historical data for migrations, DC extensions, and end-point cloud service provider connections.

The new release of vCloud Extender (1.1.0) supports a greater breadth of workload sizes, with a ‘seeded migration’ capability (via removable media or another transfer method) and compatibility with legacy on-premises vCenter Server 5.5 U3. Cloud service providers can therefore capture larger workloads from customers, without customers incurring massive bandwidth costs or long outages for large data sync operations during a migration.

Seeded migration supports warm migrations (continuous file synchronization while the source virtual machines are in a powered-on state) based on a final delta sync of blocks between the source (seed vApp) and target VM. Seed vApps can be created by customer administrators simply by exporting a VM to OVF or creating a clone of a VM. Other means of data transfer are of course also supported, as long as the target site ultimately receives the source data. When warm migrating a VM using a data seed approach, customers or cloud service providers can initiate a migration immediately or at a specific time, with a recovery point objective (RPO) defined.
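
To illustrate the seed-creation step, here is what an OVF export of a source VM could look like with VMware’s ovftool (a sketch only; the vCenter address, inventory path, and output location are placeholders):

# Export the powered-off source VM to OVF on removable media to create the seed
ovftool "vi://administrator%40vsphere.local@vcenter.example.com/DC1/vm/app-vm-01" /mnt/seed-disk/app-vm-01.ovf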

The other major function vCloud Extender provides is Layer 2 extension from the customer premises to the tenant’s virtual data center at the cloud service provider. A significant benefit of vCloud Extender is that customers do not need their own instance of NSX on-premises to get Layer 2 connectivity; without NSX, however, customers are limited to extending only VLANs to VXLAN on the cloud service provider side.

vCloud Extender works by deploying an NSX for vSphere Standalone Edge Client (v6.3) from the Extender OVA to manage the connectivity and stretch networks. In the previous version of vCloud Extender, a tenant could configure only one virtual data center with an uplink gateway for DC Extension, so customers couldn’t extend networks from other virtual data centers. The latest release supports multiple edge gateway appliance configurations, so customers can configure uplink gateways for two or more data centers, making setup and configuration a lot simpler and covering more of the customer’s estate.

Of course, migrating individual VMs, whilst great for single-purpose workloads, is not representative of typical multi-tiered applications that run across multiple VMs. vCloud Extender supports running multiple migrations of the same type using ‘jobs’.

These jobs can be:
1) Warm migration
2) Warm migration with preloaded seed
3) Cold migration (transfers the VMDK files of the source virtual machine to the target virtual machine)

Grouping VMs into jobs in this way provides a lot of flexibility in estate migrations.

When jobs are combined with the new migration scheduling feature (supporting warm, cold, and test migrations) and seeding, customers can plan for migrations in maintenance windows and be confident of completing them in the allocated window. In preparation for cutover, customers can test the consistency of a migrated VM with the new testing feature, after a warm migration but before power-on. This validates readiness for migration cutover, providing assurance that the cutover will execute to plan.

Whilst migrating workloads between sites, visibility is key: status must be monitored and troubleshooting is sometimes required. vCloud Extender provides a home dashboard for monitoring the health of migration tasks and the status and throughput of the L2 VPN tunnel.

Tenants can see the history, status, and progress of migration tasks per virtual machine and per job, as well as configure forwarding of the system logs to external syslog servers if necessary.

As expected, audit logs are available. If these are not enough and more data is required for support, tenants can now generate a support bundle in a single click, including a DB dump if necessary, to make absolutely sure you have everything needed for a support request.

Altogether, with this release of vCloud Extender and the enhanced third-party integration and automation in vCloud Director 9.1, cloud service providers gain significantly more capabilities to help them capture the market’s move to cloud, as millions of workloads are expected to migrate over the next few years.

So why wait? If you have vCloud Director already, download vCloud Extender 1.1.0 here, with documentation here and release notes here.


Top vBlog 2018 starting soon, make sure your site is included

I’ll be kicking off Top vBlog 2018 very soon, and my vLaunchPad website is the source for the blogs included in the Top vBlog voting each year, so please take a moment to make sure your blog is listed. Every year I get emails from bloggers after the voting starts wanting to be added but …


Configuration maximum changes in vSphere 6.7

A comparison using the Configuration Maximum tool for vSphere shows the following changes between vSphere 6.5 & 6.7.


Important information to know before upgrading to vSphere 6.7

vSphere 6.7 is here, and with support for vSphere 5.5 ending soon (September), many people will be considering upgrading to it. Before you rush in, though, there is some important information about this release that you should be aware of. First, let’s talk upgrade paths: you can’t just upgrade from any prior vSphere version to …


vSphere 6.7 Link-O-Rama

Your complete guide to all the essential vSphere 6.7 links from all over the VMware universe. Bookmark this page and keep checking back, as it will continue to grow as new links are added every day. Also be sure to check out the Planet vSphere-land feed for all the latest blog posts from the Top 100 …


Summary of What’s New in vSphere 6.7

Today VMware announced vSphere 6.7, coming almost a year and a half after the release of vSphere 6.5. No word on when it will be available, but historically it has come a few weeks after the announcement. Below is the What’s New document from the Release Candidate that summarizes most of the big new things …


New KB articles published for week ending 15th April 2018

vRealize Automation

End of Availability of vRealize Code Stream Management Pack for IT DevOps
Date Published: 2018/04/12

Automatic update fails after updating the vRA Site Certificate on appliance from self-signed to CA-signed
Date Published: 2018/04/11

Items in vRA show no fields after the vCAC CAFE endpoint in vRO was deleted and recreated
Date Published: 2018/04/09

VMware SDDC Manager

The vCenter Server in the management workload domain is inaccessible from a Windows or Linux jump VM after upgrading an unmanaged host
Date Published: 2018/04/12

Workload Domain deployment fails while creating NSX Controller VMs
Date Published: 2018/04/12

Workload domain creation fails after adding an additional rack to a VMware Cloud Foundation 2.2/2.3 deployment
Date Published: 2018/04/13

VMware vRealize Operations Manager

Meltdown and Spectre Effects on vRealize Operations Manager 6.7
Date Published: 2018/04/12

Using the Upgrade Assessment Tool for vRealize Operations Manager 6.7
Date Published: 2018/04/12

Upgrading the Virtual Hardware version of the vRealize Operations Manager 6.x Nodes
Date Published: 2018/04/10

VMware ESXi

Installing VMTools in SUSE Linux Enterprise 15 fails if no rc[0-6].d folders exist
Date Published: 2018/04/11


vSphere 6.7 announced!

It is that time of the year again, a new vSphere release announcement! (For those interested in what’s new for vSAN make sure to read my other post.) vSphere 6.7, what’s in a name / release? Well a bunch of stuff, and I am not going to address all of the new functionality as the list would simply be too long. So this list features what I think is worth mentioning and discussing.

  • vSphere Client (HTML-5) is about 95% feature complete
  • Improved vCenter Appliance monitoring
  • Improved vCenter Backup Management
  • ESXi Single Reboot Upgrades
  • ESXi Quick Boot
  • 4K Native Drive Support
  • Max Virtual Disks increase from 60 to 256
  • Max ESXi number of Devices from 512 to 1024
  • Max ESXi paths to Devices from 2048 to 4096
  • Support for RDMA
  • vSphere Persistent Memory
  • DRS initial placement improvements

Note that there’s a whole bunch of stuff missing from this list; for instance, there were many security enhancements, but I don’t see the point of pretending to be an expert on that topic when I know some of the top experts will have blogs out soon.

Not sure what I should say about the vSphere Client (H5) at this point. Everyone has been waiting for this, and everyone has been waiting for it to reach ~90/95% feature complete. And we are there. I have been using it extensively for the past 12 months and I am very happy with how it turned out. I think the majority of you will be very, very happy with what you will see and with the overall experience. It just feels fast(er) and seems more intuitive.

When it comes to management and monitoring of the vCenter Appliance (https://<vcenter-ip>:5480) there are a whole bunch of improvements. For me personally, the changes in the Monitoring tab are very useful, and the Services tab is as well. Now you can immediately see when a particular disk is running out of space, as shown in the screenshot below, and you can, for instance, restart a particular service in the “Services” tab.
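
If you prefer to watch the same health data from a script, the appliance also exposes it through the vSphere Automation REST API. A minimal sketch (endpoint paths from memory; hostname and credentials are placeholders):

# Log in and grab an API session token
TOKEN=$(curl -sk -X POST -u 'administrator@vsphere.local' https://vcsa.example.com/rest/com/vmware/cis/session | python -c 'import sys,json; print(json.load(sys.stdin)["value"])')
# Overall system health, plus the database storage health shown in the Monitoring tab
curl -sk -H "vmware-api-session-id: $TOKEN" https://vcsa.example.com/rest/appliance/health/system
curl -sk -H "vmware-api-session-id: $TOKEN" https://vcsa.example.com/rest/appliance/health/database-storage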

Next is vCenter Backup Management, which a lot of people have been asking for. We introduced Backup and Recovery of the appliance a while ago; very useful, but unfortunately it didn’t provide a scheduling mechanism. Sure, you could create a script that would do this for you on a regular cadence, but not everyone wants to bother with that. Now, in the Appliance Management UI, you can simply create a schedule for backups. This is one of those small enhancements which to me is a big deal! I’m sure that Emad or Adam will have a blog out soon on the topic of vCenter enhancements, so make sure to follow their blogs.
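
For the script-minded, the schedule you create should also be visible through the appliance API. Again a sketch from memory, reusing the session token from the monitoring example above:

# List the configured backup schedules (vCSA 6.7 appliance API)
curl -sk -H "vmware-api-session-id: $TOKEN" https://vcsa.example.com/rest/appliance/recovery/backup/schedules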

Another big deal is the fact that we shaved a reboot off major upgrades. As of 6.7 you now only have one reboot with ESXi. Again, a tiny thing, going from 2 back to 1, but when you have servers taking 10-15 minutes to go through the reboot process and you have dozens of servers to reboot, it makes Single Reboot ESXi Upgrades a big thing. For those on 6.5 right now, you will be able to enjoy the single reboot experience when upgrading to 6.7!

One feature I have personally been waiting for is ESXi Quick Boot. I saw a demo of this last year at our internal R&D conference at VMware and I was impressed. I don’t think many people at that stage saw the importance of the feature, but I am glad it made it into the release. So what is it? Well, basically it is a way to restart the hypervisor without going through the physical hardware reboot process. This means that you are now removing that last reboot; of course this only applies when your server hardware supports it. Note that with the first release only a limited set of servers will support it; nevertheless this is a big thing. Not just for reboots, but also for upgrades / updates. A second ESXi memory image can be created and updated, and on reboot you simply switch over to the latest and greatest instead of doing a full reboot. It will, again, save a lot of time. I looked at a pre-GA build and noticed a number of supported platforms already, which should be a good indication.
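
To check whether your hardware is on the list, there is a compatibility-check script on the host itself. The path below is from memory, so treat it as a hint rather than gospel:

# Run on an ESXi 6.7 host; reports whether Quick Boot is supported on this platform
/usr/lib/vmware/loadesx/bin/loadESXCheckCompat.py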

Of course you can also see whether the host is supported in the vSphere Client. I found it in the Web Client but not in the H5 Client; maybe I am overlooking it, that could of course be the case.

Then up next are a bunch of core storage enhancements. First, 4K Native Drive Support, very useful for those who want to use the large-capacity devices. Not much else to say about it, other than that it will also be supported by vSAN. I do hope that those using it for vSAN take the potential performance impact into account. (High capacity, low IOPS >> low IOPS per GB!) Up next is the increase of a bunch of “max values“. The number of virtual disks goes from 60 to 256 for PVSCSI. On top of that, the number of paths and devices is also going up. The number of devices doubled from 512 to 1024 per host, and so has the number of paths, going from 2048 to 4096. Some of our largest customers will definitely appreciate that!
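
If you want to verify what sector format your devices report, esxcli can show it. A quick sketch, assuming the capacity namespace I remember:

# Lists each device with its logical/physical sector sizes and format type (512n / 512e / 4Kn)
esxcli storage core device capacity list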

Then there’s also the support for RDMA, which is great for applications requiring extremely low latency and very high bandwidth! Note that when RDMA is used, most of the ESXi network stack is skipped, and when used in pass-through mode this also means that vMotion is not available. So that will only be useful for scale-out applications which have their own load balancing and high availability functionality. For those who can tolerate a bit more latency, a paravirtualized RDMA adapter will be available; you will need HW version 13 for this though.

vSphere Persistent Memory is something that I was definitely excited about. Although there aren’t too many supported server configurations, or even persistent memory solutions, it is something that introduces new possibilities. Why? Well, this will provide you performance much higher than SSD at a cost which is lower than DRAM. Think less than 1 microsecond of latency, where DRAM is in nanoseconds and flash is typically low milliseconds under load. I have mentioned this in a couple of my sessions so far: NVDIMM, which is the name commonly used for Persistent Memory, will be big. For those planning on buying persistent memory, do note that your operating system also needs to understand how to use it. There is a Virtual NVDIMM device in vSphere 6.7, and if the Guest OS has support for it then it will be able to use this byte-addressable device. I believe a more extensive blog about vSphere Persistent Memory and some of the constraints will appear on the Virtual Blocks blog soon, so keep an eye on that as well. Cormac already has his favorite new 6.7 features up on his blog, make sure to read that as well.
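
As a purely illustrative, hypothetical example of the guest-side view: in a Linux guest that understands NVDIMMs, a virtual NVDIMM surfaces as a pmem device (device names depend on the guest configuration):

# Persistent memory regions typically appear as /dev/pmem* block devices
ls -l /dev/pmem*
# If the ndctl utility is installed, it can show the NVDIMM namespaces in detail
ndctl list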

And last but not least, a significant improvement was made to the initial placement process for DRS. Some of this logic was already included in 6.5, but it only worked when HA was disabled. As of 6.7 it is also available when HA is enabled, making it much more likely that you will benefit from the 3x decrease in the time it takes for the initial placement process to complete. A big, big enhancement in the DRS space. I am sure, though, that Frank Denneman will have more to say about this.


What’s new vSAN 6.7

As most of you have seen, vSAN 6.7 just released together with vSphere 6.7. As such I figured it was time to write a “what’s new” article. There are a whole bunch of cool enhancements and new features, so let’s create a list of the new features first, and then look at them individually in more detail.

  • HTML-5 User Interface support
  • Native vRealize Operations dashboards in the HTML-5 client
  • Support for Microsoft WSFC using vSAN iSCSI
  • Fast Network Failovers
  • Optimization: Adaptive Resync
  • Optimization: Witness Traffic Separation for Stretched Clusters
  • Optimization: Preferred Site Override for Stretched Clusters
  • Optimization: Efficient Resync for Stretched Clusters
  • New Health Checks
  • Optimization: Enhanced Diagnostic Partition
  • Optimization: Efficient Decommissioning
  • Optimization: Efficient and consistent storage policies
  • 4K Native Device Support
  • FIPS 140-2 Level 1 validation

Yes, that is a relatively long list indeed. Let’s take a look at each of the features. First of all, HTML-5 support. I think this is something that everyone has been waiting for. The Web Client was not the most loved user interface that VMware produced, and hopefully the HTML-5 interface will be viewed as a huge step forward. I have played with it extensively over the past 6 months and I must say that it is very snappy. I like how we not only ported over all functionality, but also looked at whether workflows could be improved and whether the presented information/data made sense in each and every screen. This does, however, mean that new functionality from now on will only be available in the HTML-5 client, so use this going forward. Unless of course the functionality you are trying to access isn’t available yet, but most of it should be! For those who haven’t seen it yet, here’s a couple of screenshots… ain’t it pretty? 😉

For those who didn’t notice, in the above screenshot you can actually see the swap file, and the policy associated with the swap file, which is a nice improvement!

The next feature is native vROps dashboards for vSAN in the H5 client. I found this particularly useful: I don’t like context switching, and this feature allows me to see all of the data I need to do my job in a single user interface. No need to switch to the vROps UI; instead, vSphere and vSAN dashboards are now made available in the H5 client. Note that it needs the vROps Client Plugin for the vCenter H5 UI to be installed, but that is fairly straightforward.

Next up is support for Microsoft Windows Server Failover Clustering for the vSAN iSCSI service. This is very useful for those running a Microsoft cluster: create an iSCSI target and expose it to the WSFC virtual machines. (Normally people used RDMs for this.) Of course this is also supported with physical machines. Such a small enhancement, but for customers using Microsoft clustering it is a big thing, as it now allows you to run those clusters on vSAN without any issues.
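
For reference, the iSCSI target service can also be inspected from the host CLI. A sketch, assuming the esxcli vsan iscsi namespace as I remember it from the 6.5 release, so double-check on your build:

# Check whether the vSAN iSCSI target service is enabled, and list the configured targets
esxcli vsan iscsi status get
esxcli vsan iscsi target list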

Next are a whole bunch of enhancements that have been added based on customer feedback from the past 6-12 months. Fast Network Failovers was one of those. The majority of our customers have a single vmkernel interface with multiple NICs associated with it; some of our customers have a setup where they create two vmkernel interfaces on different subnets, each with a single NIC. What that last group of customers noticed is that in the previous release we waited 90 seconds (TCP timeout) before failing over to the other vmkernel interface when a network/interface had failed. In the 6.7 release we introduce a mechanism that allows us to fail over fast, literally within seconds. So, a big improvement for customers who have this kind of network configuration (which is very similar to the traditional A/B storage fabric design).
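
For those unfamiliar with that dual-vmknic design, on each host it looks roughly like this (an illustrative sketch; portgroup names and addresses are placeholders):

# Second vSAN vmkernel interface on a separate subnet / fabric
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vSAN-Fabric-B
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.2.11 -N 255.255.255.0 -t static
# Tag the new interface for vSAN traffic
esxcli vsan network ip add -i vmk2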

Adaptive Resync is an optimization of the resync function that is part of vSAN. If a failure has occurred (host, disk, or flash failure), data will need to be resynced to ensure that the impacted objects (VMs, disks, etc.) are brought back into compliance with the configured policy. Over the past 12 months the engineering team has worked hard to optimize the resync mechanism as much as possible. In vSAN 6.6.1 a big jump was already made by taking VM latency into account for resync bandwidth allocation, and this has been further enhanced in 6.7. In 6.7 vSAN can calculate the total available bandwidth and ensure Quality of Service for the guest VMs prevails, by allocating those VMs 80% of the available bandwidth and limiting resync traffic to 20%. Of course, this only applies when congestion is detected. Expect more enhancements in this space in the future.

A couple of releases ago we introduced Witness Traffic Separation for 2-node configurations, and in 6.7 we introduce support for this feature for stretched clusters as well. This is something many stretched vSAN customers have asked for. It can be configured through the CLI only at this point (esxcli), but that shouldn’t be a huge problem. As mentioned previously, what you end up doing is tagging a vmknic for “witness traffic” only. Pretty straightforward, but very useful:

esxcli vsan network ip set -i vmk<X> -T=witness
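
You can verify the tagging afterwards; the vmknic should be listed with the witness traffic type:

esxcli vsan network list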

Another enhancement for stretched clusters is Preferred Site Override. It is a small enhancement, but in the past, when the preferred site failed and then returned for duty while connected only to the witness, it could happen that the witness would bind itself directly to the preferred site. This by itself would result in VMs becoming unavailable. The Preferred Site Override functionality prevents this from happening: it ensures that VMs (and all data) remain available in the secondary site. I guess one could also argue that this is not an enhancement but much more a bug fix. And then there is the Efficient Resync for Stretched Clusters feature. This is getting a bit too much into the weeds, but essentially it is a smarter way of bringing components up to the same level within a site after the network between locations has failed. As you can imagine, one location is allowed to progress, which means that the other location needs to catch up when the network returns. With this enhancement we limit the bandwidth / resync traffic.

And as with every new release, the 6.7 release of course also has a whole new set of health checks. I think the Health Check has quickly become the favorite feature of all vSAN admins, and for a good reason; it makes life much easier if you ask me. In the 6.7 release, for instance, we validate consistency of host settings and report any inconsistency found. Also, when downloading the HCL details, we now only download the differences between the current and previous version (where in the past we would simply pull the full JSON file). There are many other small improvements around performance etc. Just give it a spin and you will see.
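
From memory (so double-check on your build), the health checks can also be triggered from the host CLI, which is handy for scripting:

# Runs the vSAN health checks and lists the result per check
esxcli vsan health cluster list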

Something that my team has been pushing hard for (thanks Paudie) is the Enhanced Diagnostic Partition. As most of you know, when you install / run ESXi there’s a diagnostic partition. Unfortunately, this diagnostic partition was a fixed size; with the current release, when upgrading (or installing greenfield), ESXi will automatically resize the diagnostic partition. This is especially useful for large-memory host configurations, and actually useful for vSAN in general. No longer do you need to run a script to resize the partition; it will happen automatically for you!
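
To see what your active diagnostic partition looks like before and after the upgrade, the usual coredump commands apply:

# Show the active diagnostic (coredump) partition, and list all available ones
esxcli system coredump partition get
esxcli system coredump partition list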

Another optimization released in vSAN 6.7 is called “Efficient Decommissioning“. This is all about being smarter about consolidating replicas across hosts/fault domains to free up a host/fault domain and allow maintenance mode to proceed. This means that if a component is striped for reasons other than policy, it may be consolidated. And the last optimization is what they refer to as “efficient and consistent storage policies”. I am not sure I understand the name, as this is all about the swap object. As of vSAN 6.7 it will be thin provisioned by default (instead of 100% reserved), and the swap object will now also inherit the policy assigned to the VM. So if you have FTT=2 assigned to the VM, then you will have not two but three components for the swap object, still thin provisioned, so it shouldn’t really change the consumed space in most cases.
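
Pre-6.7 this behavior was controlled per host via an advanced option; if memory serves it is the one below, which you can still inspect:

# 1 = thin provisioned swap objects (the vSAN 6.7 default), 0 = fully reserved
esxcli system settings advanced list -o /VSAN/SwapThickProvisionDisabled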

Then there are the two last items on the list: 4K Native Device Support and FIPS 140-2 Level 1 validation. I think those speak for themselves. 4K Native Device Support has been asked for by many customers, but we had to wait for vSphere to support it. vSphere supports it as of 6.7, so that means vSAN also supports it from day 0. The VMware VMkernel Cryptographic Module v1.0 has achieved FIPS 140-2 validation, and vSAN Encryption leverages the same module. Nice collaboration by the teams, which is now showing its big benefit.

Anyway, there’s more work to do today, so back to my desk to release the next article. Oh, and if you haven’t seen it yet, Virtual Blocks also has a blog, and there’s a nice podcast on the topic of 6.7 as well.


22 / 23 May 2018 – VMware Technical Support Summit

A while back I was asked if I could present at the VMware Technical Support Summit, and last week I received the agenda. I forgot to blog about it, so I figured I would share it with everyone. I was supposed to go to this event last year but unfortunately had a clash in my calendar. At this event, organized by our support team, you will have the ability to sit in on some extreme deep-dive sessions. Below you can find the agenda, and here’s the registration link if you are interested! Note that Joe Baguley will be doing a keynote, and Cormac Hogan and I will be doing a session on vSAN futures!
