Want to play around with Kubernetes? Try MicroK8s!

Two weeks ago I wanted to play around with Kubernetes for a day or two. I found an internal training course at VMware that allowed me to go through some labs, and I asked around for tips on getting Kubernetes up and running fast. I couldn’t be bothered with creating a multi-node Kubernetes cluster; I just wanted to play around with some of the commands and YAML files. I tried Atomic, as suggested by the lab manual, but if you ask me there were way too many steps involved to install and configure Kubernetes. The next option would have been some version hosted in a cloud of choice, but I didn’t want to incur the cost. After digging around I stumbled on MicroK8s. It sounded easy, and as my Linux preference is Ubuntu/Debian (simply what I am most familiar with) and MicroK8s comes from Canonical, I figured I would give it a try. As Kelsey Hightower suggested on Twitter yesterday (which triggered this article), it is just one command away:
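For reference, on a machine that already has snapd, that one command is the snap install shown below (a minimal sketch; the exact flags and channel may vary depending on the Kubernetes version you want):

# Install MicroK8s; --classic gives the snap the system access it needs
sudo snap install microk8s --classic

# Verify the single-node cluster is up
sudo microk8s.status
sudo microk8s.kubectl get nodes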

I downloaded the latest Ubuntu Server ISO and created a VMware Fusion VM. I stepped through the Ubuntu Server installation wizard and noticed it even provided the option to install “microk8s” directly. I selected the package, along with some additional packages I figured I would need, and clicked done. Literally within minutes I had a fresh single-node Kubernetes configuration, which for me worked straight out of the box!

After the installation is done, reboot and log in. I created an alias for kubectl, as I didn’t want to type “microk8s.kubectl” every time or install a separate version:

sudo snap alias microk8s.kubectl kubectl

I also enabled the Kubernetes dashboard from the get-go, which can be done by running the command “microk8s.enable dashboard”. There are plenty of articles out there that walk you through deploying your first container and making it highly available by specifying the number of replicas, so I am not going to do that here; I don’t want to pretend to be an expert, as I am far from that. Also check the MicroK8s documentation, it is pretty decent. My colleague Myles Gray has a very good tutorial on why containers matter, which I recommend to anyone who, like myself, just wants to know a bit more about the topic.
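If you do want to kick the tires right away, a minimal sketch of that first deployment could look like this (the nginx image and the “web” name are just examples, and I am assuming the kubectl alias from above):

# Enable the dashboard and DNS add-ons
microk8s.enable dashboard dns

# Deploy a container and scale it out by specifying the number of replicas
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Expose the deployment and check that all replicas are running
kubectl expose deployment web --port=80
kubectl get pods -o wide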

New KB articles published for the week ending 20th April, 2019

  • VMware Horizon: Remote Desktop connection with NLA is not supported in Horizon View – Date Published: 16-Apr-19
  • VMware NSX for vSphere: Can’t assign new user role in NSX Manager – Date Published: 18-Apr-19
  • NSX Controller Certificate flagged on Security/Audit Scan as bad certificate – Date Published: 15-Apr-19
  • Mellanox Onyx, NSX Hardware VXLAN Gateway, version 3.7.1200, NSX 6.4.4 …

New KB articles published for the week ending 13th April, 2019

  • Virtual Disk Development Kit: Best practice for Linux proxy VM configuration in HotAdd transport – Date Published: 10-Apr-19
  • VMware ESXi: ESXi host may crash with a PSOD – Spin count exceeded – possible deadlock with PCPU – Date Published: 11-Apr-19
  • VVOLs are inaccessible after moving to another vCenter Server or refreshing CA certificate – Date Published: 9-Apr-19

vCloud Director Embraces Terraform

This year brings focused support for HashiCorp Terraform in VMware vCloud Director (vCD). The refreshed Terraform vCloud Director provider enables administrators and DevOps engineers to define vCD infrastructure as code in Terraform configuration files, which makes it an efficient automation and integration tool.

We have already released two new provider versions this year (v2.0.0 and v2.1.0), and more releases are on the way. Most importantly, with the latest version you can automate the creation of the following resources:

  • Catalogs, with the ability to upload OVA and ISO items to them
  • Org VDC networks (routed, direct and isolated)
  • vApps and vApp-level networks
  • VMs, which can use the uploaded OVAs and networks, and mount an ISO when needed
  • Firewall rules and DNAT/SNAT rules that can enable access to the VMs
  • Independent disks and the ability to attach them to VMs
  • Various VM configuration parameters: CPU and core count, custom boot scripts, etc.

Please see the official documentation at the HashiCorp portal below for all currently available features and details. Note that the current release supports all vCD versions from v8.20 to the latest v9.7.

https://www.terraform.io/docs/providers/vcd/index.html

Open-Source and 100% written in Go

The project itself is fully open source and available on GitHub. HashiCorp hosts it in the “terraform-providers” namespace together with all the other official Terraform providers. If you’d like to contribute a feature request, report an issue, or propose a code improvement, please visit the project’s site below. There you can also see current activity and what’s in the works.

https://github.com/terraform-providers/terraform-provider-vcd

To make it trivial to set up and use, Terraform vCloud Director Provider v2 is written exclusively in the Go programming language (just like the overall Terraform platform). This means we are also able to fully integrate it with the HashiCorp build and download system. As a result, all it takes for a user to get the new provider is to prepare a .tf configuration file, define the “vcd” provider there, and execute terraform init from the console. Terraform takes care of downloading and enabling the provider for you.

`terraform init` downloads the vCloud Director provider
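The commands involved are just the standard Terraform workflow, run from the directory that holds your .tf file (a quick sketch):

# Download and enable the vcd provider referenced in the configuration
terraform init

# Preview the changes, then apply them
terraform plan
terraform apply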

It’s worth noting that, for making calls to the vCloud Director API, the provider uses a Go library (SDK). This library is also open source and available on GitHub below. Please take a look if you are developing applications written in Go and need an easy way of calling vCloud Director.

https://github.com/vmware/go-vcloud-director

Example Configuration

Here’s a small .tf configuration file that illustrates how you could automate the creation of a catalog, upload an OVA to it, and also create an independent disk. You can extend this example in a similar way to create a vApp with a VM that uses all these items.

# Configure the VMware vCloud Director Provider
provider "vcd" {
  url      = "https://${var.vcd_host}/api"
  org      = "myorg"
  vdc      = "myorgvdc"
  user     = "orgadmin"
  password = "${var.org_pass}"

  allow_unverified_ssl = "true"
}

# Catalog for OVAs and ISOs (v2.0.0+)
resource "vcd_catalog" "OperatingSystems" {
  name        = "OperatingSystems"
  description = "OS templates"

  delete_force     = "true"
  delete_recursive = "true"
}

# OVA for catalog (v2.0.0+)
resource "vcd_catalog_item" "OVA" {
  catalog     = "OperatingSystems"
  name        = "photon"
  description = "Linux VM"
  ova_path = "/home/images/ova/photon-hw11-3.0-26156e2.ova"
  show_upload_progress = true

  depends_on = ["vcd_catalog.OperatingSystems"]
}

# Independent disk (v2.1.0+)
resource "vcd_independent_disk" "TerraformDisk" {
  name         = "tf-disk"
  size         = "1024"
  bus_type     = "SCSI"
  bus_sub_type = "VirtualSCSI"
}

Next Steps

All in all, we are actively working on further extending and enhancing the functionality of the Terraform vCD provider. We are also supporting community efforts around it and doing our best to (a) listen to feedback and (b) help merge code from contributors in an efficient, safe, and high-quality way. So if you’re a Terraform user or have questions, please join us on GitHub or stop by our Slack channel for a chat!

Here are the take-away links from this post:

  • Provider documentation: https://www.terraform.io/docs/providers/vcd/index.html
  • Provider repository: https://github.com/terraform-providers/terraform-provider-vcd
  • Go SDK: https://github.com/vmware/go-vcloud-director

Hope to see you there!

Mixing versions of ESXi in the same vSphere / vSAN cluster?

I have seen this question asked a couple of times over the past months, and to be honest I was a bit surprised people asked about it. Various customers were wondering whether it is supported to mix versions of ESXi in the same vSphere or vSAN cluster. I can be brief about whether this is supported or not: yes, it is. Would I recommend it? No, I would not!

Why not? Well, mainly for operational reasons: it just makes life more complex. Just think about a troubleshooting scenario; you now need to remember which version you are running on which host and understand the “known issues” for each version. For vSAN things are even more complex, as you could have “components” running on different versions of ESXi. On top of that, it could even be the case that a certain command or esxcli namespace is not available on a particular version of ESXi.
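If you do end up with a mixed cluster, at least make it easy to see which host runs what. A quick sketch, assuming SSH access to the hosts (the host names below are placeholders):

# Print the ESXi version and build for each host
for host in esxi-01 esxi-02 esxi-03; do
  echo -n "$host: "
  ssh root@"$host" vmware -v
done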

Another concern is upgrades and updates: you need to take the current version into account when updating, and more importantly when upgrading! Also, remember that the firmware/driver combination may differ per version of vSphere/vSAN as well; this also makes life more complex and definitely increases the chance of making mistakes!

Is this documented anywhere? Well, there are two KBs which hint at it, and for vSAN specifically I have asked our team to document this in an upcoming publication. The KBs are below.

Free Spanish VMware technology ebook available now!

A couple of months ago I was asked if I wanted to write a foreword for an upcoming ebook. I have done this several times, but this one was particularly interesting. Why? Well, there are three good reasons:

  1. This book is written by 14 community experts, many of whom I have met over the past years.
  2. It is a free, but sponsored, ebook!
  3. All sponsor proceeds will go to charity.

The first book I wrote was also a book with multiple authors; that was only a handful of people, and it was painful enough as it was. There is usually an insane amount of coordination involved, so I have a lot of respect for these guys: 14 people writing a single book is not easy.

On top of that, these guys decided to cover multiple VMware technologies, ranging from NSX to VDI to vSphere. Very cool if you ask me. Oh, and before I forget… they have already managed to collect over 25,000 euros for charity. Great job guys, what an achievement. I am not going to say much more, just download the book (if you read/speak Spanish)! Thanks for letting me be part of this.

https://www.vmwareporvexperts.org/

vSphere HA virtual machine failed to failover error on VMs in a partitioned cluster

I received two questions this week about partition scenarios where, after the failure has been lifted, some VMs display the error message “vSphere HA virtual machine failed to failover”. The question that then arises is: why did HA try to restart the VM, and why did it fail? Well, first of all, this is an error that in most cases you can safely ignore. There’s a KB on the topic, which can be found here and gives a bit of detail, but let me also explain it in a bit more depth.

In a partition scenario, each partition will have its own HA master node. If no form of communication (datastore or network) is possible between the partitions, the HA master will list all the VMs that are currently not running within its partition and will try to restart those VMs. A partition is extremely uncommon in normal environments, but may happen in a stretched cluster. When a partition happens in a stretched cluster, a datastore only belongs to one location. The VMs which appear to be missing are typically running in the other location, as that location will usually have access to the particular datastore. Although the master has listed these VMs as “missing and in need of restart”, it will not be able to restart them. Why not? It either doesn’t have access to the datastore itself, or, when it does have access, the files are locked because the VMs are still running. As a result, this will unfortunately be reported as a failed failover, even though the VM was still running and there was no need for a failover. So if you hit this error during certain failure scenarios, and the VMs were running as you expected, you can safely ignore it.

vCloud Availability 3.0 Blog Series: Introduction

Until now, Disaster Recovery as a Service for cloud providers has been broken up into three separate solutions: vCloud Availability C2C, vCloud Availability DR2C, and vCloud Director Extender. Unfortunately, these solutions brought with them three disparate interfaces and infrastructures, which has led to a lot of bloat and confusion for cloud providers to sort through. The recent release of vCloud Availability 3.0 aims to sort out these issues by providing a comprehensive platform that is not only easy to deploy and manage, but also easy to use.

In this blog series, we will introduce vCloud Availability 3.0, provide details on how to implement and manage the platform, and share best practices. The focus of this particular blog is to introduce vCloud Availability 3.0, highlight key features, and set the stage for subsequent posts.

Introduction to vCloud Availability 3.0

vCloud Availability is a powerful solution built to offer simple, more secure, and cost-effective onboarding, migration, and disaster recovery services “to” or “between” multi-tenant VMware clouds. As with vCloud Availability C2C, which provides a consolidated view for cloud-based services, vCloud Availability 3.0 brings the ability to manage services for on-premises to cloud as well. No more having to bounce around multiple interfaces to manage migration and DR services. vCloud Availability 3.0 achieves this by leveraging the C2C platform as a foundation and extending it to the enterprise via an on-premises appliance.

Features

Along with the consolidated platform, vCloud Availability brings a number of key features.

  1. Simplified deployment – The deployment of both the cloud and on-premises appliances is supported by HTML5 interfaces and requires minimal effort.
  2. Fully integrated plugins – vCloud Availability provides fully integrated plugins with support for vCloud Director versions 8.20 and 9.x. The vCD plugins provide multi-tenant support and can be leveraged by the cloud provider for a fully managed service or by the customer for self-service offerings. On the vSphere side, there is native plugin support for vCenter versions 6.0U3 and newer.
  3. Policy Management – vCloud Availability provides replication policies allowing for granular tenant controls. Policy controls cover enabling and disabling incoming and outgoing replications, the maximum number of replications, maximum snapshot retention, and minimum RPO. Policies can be defined based on different levels of service or on a tenant-by-tenant basis. Only one policy can be assigned to a tenant at a time.
  4. Protection/Migration Workflows – The simplified protection workflows capture the details of the replication, such as the destination location, the retention policy, recovery point objective (RPO), compression, and quiescing. An added benefit is that protections can now be scheduled to start at a specific time, so there is no need to wait until off hours to configure the replications. They can be configured during normal business hours and set to run during off hours to maximize throughput and minimize business impact.
  5. Network Settings – A new feature is the ability to manage network settings during failover. This allows the user to reset the MAC address as well as reassign the IP address during migration or failover. These controls can be applied globally, at the host level, or for individual NICs.
  6. Security – Secure end-to-end connections for both cloud-to-cloud and enterprise-to-cloud traffic. Inbound firewall policies are no longer required on the enterprise side, which simplifies deployment and increases security.

For a more comprehensive list of the features in vCloud Availability, please check out the release notes.

Cloud Architecture

The deployment consists of three appliances for the cloud provider. The cloud management appliance provides the user interface for managing the service; it also translates all of the vCD constructs and provides the vCD plugin. The replicator exposes the host-based replication primitives of the ESXi hosts as REST endpoints and proxies the replication connections between ESXi hosts. The final appliance, the tunnel appliance, provides secure connections between locations. For lab testing and POCs, there is a combined appliance available that consolidates all three components into a single virtual machine. Providers install vCloud Availability for each instance of vCloud Director and pair the sites to allow for migration and disaster recovery services between cloud sites.

Enterprise Architecture

A number of considerations were taken into account when architecting the solution for on-premises to cloud replication, and two were deemed extremely important by both the cloud provider and the enterprise customer. The first consideration was to provide a way to deploy the solution with minimal impact and effort. In previous implementations, the secure connections required changes to the infrastructure, including the firewall. For some customers this meant a security exception, or it meant the solution was a non-starter. In vCloud Availability, this has been addressed by requiring the enterprise appliance to establish the tunnels to the cloud service provider. This means that the enterprise does not require any inbound policies to establish connections. The second consideration is permissions: the current architecture does not require any enterprise accounts to share permissions with the cloud service provider, which means no enterprise account information needs to be transmitted to or stored in the cloud.

For the enterprise, a single appliance is required, which consists of the replicator and a secure tunnel endpoint. This reduced footprint, compared to vCloud Availability DR2C, significantly reduces the total cost of ownership. The on-premises appliance also provides the vCenter plugin. Once the tunnel is established with the cloud provider, all communication and replication traffic is securely transmitted via the tunnel.

Performance

The performance of the latest release exceeds that of previous generations and will continue to be a focus moving forward. Currently, the scale and performance numbers are as follows:

  • 9,500 protected virtual machines
  • 300 tenants with active protections
  • 2000 active protections per replicator
  • 10 vCloud Availability Replicator instances per Cloud
  • 5TB maximum virtual machine (with seed)

Conclusion

The latest solution has done a tremendous job of consolidating three platforms in such a way that it is easy to deploy, manage, and use for both the cloud provider and the tenant. It also adds valuable features and performance enhancements over previous versions. As an additional resource, please check out vCloud Availability 3.0 – Lightboard Overviews by Daniel Paluszek, where he provides an additional overview of the latest solution and discusses deployment for both provider and tenant. Also check out the product page and documentation for more details.

Keep an eye out for the next blog where we will dig deeper into more of the features that make vCloud Availability such a valuable platform.

Additional Resources

  • Download vCloud Availability 3.0 here: https://my.vmware.com/en/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vcloud_availability/3_0
  • Release notes: https://docs.vmware.com/en/VMware-vCloud-Availability/3.0/rn/VMware-vCloud-Availability-30-Release-Notes.html
  • Documentation: https://docs.vmware.com/en/VMware-vCloud-Availability/
  • API reference: https://code.vmware.com/apis/441

Top 20 Articles for vSAN, March 2019

  • Component metadata health check fails with invalid state error
  • “Host cannot communicate with all other nodes in vSAN enabled cluster” error
  • vCenter Server 6.0 Update 2 displays on non-vSAN enabled ESXi hosts the message: Retrieve a ticket to register the vSAN VASA Provider
  • Status of TLSv1.1/1.2 Enablement and TLSv1.0 Disablement across VMware products
  • Best …

Top 20 Articles for NSX, March 2019

  • Virtual machine in ESXi is unresponsive with a non-paged pool memory leak
  • VMs running on ESXi 5.5 with vShield endpoint activated fails during snapshot operations
  • Performing vMotion or powering on a virtual machine being protected by vShield Endpoint fails
  • When using VMware vShield App Firewall, virtual machines fail to connect to the vSwitch/vDS/network with the …
