This has been cross-posted from my own blog, vGemba.net. Go check it out!
In Part 1 of this series we went about installing Terraform, verifying it was working and setting up Visual Studio Code. In this part we will cover some Terraform basics.
The three Terraform constructs we are going to look at are Providers, Resources, and Provisioners.
Providers are the platforms and services we can interact with through Terraform. These include AWS, Azure, vSphere, DNS, and many more; a full list is available on the Terraform website, and as you can see it's a very big list. In this series we will concentrate on the VMware vSphere provider.
Resources are the things we are going to create within the provider. In the vSphere realm this can be a Virtual Machine, Networking, Storage, etc.
Provisioners are used to execute scripts as part of creating or destroying a Resource, for example to bootstrap a newly built virtual machine.
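As a hedged illustration of that last construct, a provisioner lives inside a resource block and runs a script at create (or destroy) time. The resource name and command below are placeholders, not taken from this series:

```hcl
resource "vsphere_virtual_machine" "demo" {
  name = "demo-vm"
  # ... remaining VM arguments omitted for brevity ...

  # remote-exec runs commands on the new VM once it has been created.
  provisioner "remote-exec" {
    inline = ["echo 'demo-vm is up'"]
  }
}
```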
Set up Terraform for vSphere
Open up Visual Studio Code and create a new file called main.tf in the folder C:\Terraform. If you have added C:\Terraform to your Path environment variable you can save main.tf anywhere you like, but of course the best place for all of your Terraform files is source control…
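As a sketch of what main.tf might start out with, the vSphere provider block looks something like this; the server name and credentials here are placeholder values, not real ones:

```hcl
# Placeholder vCenter details - replace with your own.
provider "vsphere" {
  user           = "administrator@vsphere.local"
  password       = "VMware1!"
  vsphere_server = "vcenter.lab.local"

  # Typically needed in a lab with self-signed certificates.
  allow_unverified_ssl = true
}
```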
I’ve been really lucky over the last few weeks getting to do some deep-dive workshops on NSX-T, and I will be blogging a lot about the good, the bad, and the ugly over the next few weeks (really good timing for “Blogtober”, right?!)
First things first: the documentation, for the moment at least, is a little on the light side. VMware are obviously working on it, as I am starting to see more become available in the public domain, but it certainly isn’t as well documented as other GA products.
This leads on to my first topic, as I think it’s quite a big one!
I’m going to post about the new routing and switching technologies and methodologies used in NSX-T in the next few days, as they are VERY different from NSX-V, but for now let’s assume there is a need to move away from the well-known and loved Distributed Switch (start looking up the Opaque Switch). Put simply, you can’t run a vSphere Distributed Switch on a KVM host; the price for delivering a hypervisor-agnostic SDN solution is that we need to introduce a new type of virtual switch.
No big deal right?
Terraform is one of the new products that let you treat Infrastructure as Code. What does Infrastructure as Code actually mean?
Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
In the case of Terraform this means using code to *declare* what we want from vSphere, AWS, Azure, OpenStack, etc., and then Terraform creates the infrastructure to match our declared final state. This is the opposite of procedural infrastructure, where we have to describe *how* to get to our end result. Terraform does the hard work of figuring out how to create the infrastructure we have defined; we don’t have to worry about how to actually create it or the sequence of steps to get there.
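To illustrate the declarative style, here is a hedged sketch of a resource definition; we only state the end result we want (the names and sizes are invented for this example), and Terraform works out the steps to get there:

```hcl
# Declarative: describe WHAT we want, not HOW to build it.
resource "vsphere_virtual_machine" "web01" {
  name     = "web01"
  num_cpus = 2
  memory   = 4096
  # ... placement, disk and network arguments omitted for brevity ...
}
```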
A few years ago I was asked to deploy vCAC (as it was known then). Soon after, I found myself part of a team dedicated to creating a new Service Catalog for an SDDC-based Private Cloud offering. It was a huge learning curve for me and I was soon immersed in a world of cloud development, with decisions to make based on these new VMware Cloud tools. The one thing that I learnt very quickly was the importance of version control. Coming from an infrastructure background this was alien to me, but it soon became one of the most critical things I learnt about successfully developing a Private Cloud and, more importantly, maintaining it!
At the heart of our Service Catalog was vRealize Orchestrator, and as requirements for automated Catalog items grew, so did the team. This caused a lot of issues: with many developers working simultaneously on the same product, changes to the same Workflows collided and relevant changes were lost. It soon became apparent we were lacking a sensible way to ensure our final packages were bug-free and not overwritten unintentionally. Natively in vRO we can export a package containing Workflows, Actions, Configuration Files, etc., but this is not an ideal format in which to efficiently review or track changes. It was becoming impossible to keep tabs on what was happening.
As some of you will be aware, vRA6 reaches end of support by the end of 2017, and as a result I was tasked with deploying a POC of vRA / vRO 7.3 in order to check whether our current vRO code was compatible. I expected to see some challenges, as we are heavily reliant on vRO for our Service Catalog, however one specific issue I did not expect caught me out.
As part of our Private Cloud offering, we use vRO rather than vRA to request a catalog item. The overall workflow also contains many post-request actions such as deploying agents, resetting default passwords, etc. All of these rely on the successful deployment passing us the hostname after the catalog request completes. After setting up the POC and running a test deployment I noticed that although the request was successful, the overall Workflow was failing. Looking deeper, I saw some differences in the completion details in vRA7.
In vRA6, we used to get the following, where “tyler-prefix04” is the hostname of the newly created VM:
Sitting in the airport on my way home from Barcelona, the haze is starting to clear and I’ve been trying to formulate my judgement on this year’s VMworld. 2017 marked my 6th consecutive VMworld and I’ve enjoyed them all immensely, but it’s very clear to me that this year’s stands alone; I had a very different experience from any previous year.
I read someone’s comment yesterday that the more times you attend, the more the dynamic of VMworld tends to evolve and change, and I couldn’t agree more.
The first year I attended, in 2012, my schedule was insane: it was SESSION, SESSION, SESSION. I packed my entire agenda from 09:00 on day one till 17:00 on day three. I was so eager to learn, and so eager to ensure my bosses didn’t think I’d disappeared to Barcelona on a jolly, that I had a breakout in every available time slot. I went to nothing but sessions and the Solutions Exchange; I took notes and met vendors. 2013 was broadly the same. I had a fantastic trip on both occasions and I was knackered at the end; I learnt a lot but didn’t add much on a personal level.
Over the years I’ve met more people, and more people, and more people, and without really realising it I’ve built a network. I usually start out with the same intention: I pack my agenda with sessions, but I’m a lot more flexible about how the event plays out. If there’s something I really want to see I’ll make sure I go, but at the layer below that, if I’m in conversation with someone, be that a friend, peer, or vendor, I’m more inclined to let a session or two slide.
We were very excited yesterday to announce the two keynote speakers for the Scottish VMUG in Edinburgh on October 26th, Duncan Epping and Chris Wahl.
Duncan Epping (VMware) – Duncan Epping is a Chief Technologist in the Office of CTO of the Storage & Availability BU at VMware. He is a VCDX (# 007) and the author of multiple books including “Essential Virtual SAN” and the “vSphere Clustering Technical Deepdive” series. http://www.yellow-bricks.com/
Chris Wahl (Rubrik) – Chris Wahl is Chief Technologist at Rubrik, a published author, tech writer, double VCDX, PowerShell coder, and Datanauts Show host. http://wahlnetwork.com/