I use the Azure CLI for much of what I do in Azure now – true, the same things can usually be achieved through the portal or by using PowerShell, but I just prefer the Linux / Bash nature of the CLI.

One of the things that makes the CLI so nice to use is the powerful query language it has available – this language is called JMESPath. JMESPath isn’t specific to the Azure CLI though – it’s a query language for JSON (http://jmespath.org/), so it can be used whenever you need to manipulate or query JSON data.
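As a quick taste of what JMESPath looks like in the CLI, here is a sketch using the `--query` parameter (the resource group and VM size shown are placeholder values, so adjust them for your own environment):

```shell
# Project each VM in a resource group down to just its name and location
az vm list --resource-group myResourceGroup \
  --query "[].{name:name, location:location}" --output table

# Filter with a JMESPath predicate: names of VMs of a particular size
az vm list --resource-group myResourceGroup \
  --query "[?hardwareProfile.vmSize=='Standard_DS1_v2'].name" --output tsv
```

The same expressions work against any JSON document, which is what makes JMESPath worth learning beyond the Azure CLI itself.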

A few months back, I created a lab workshop focused on building virtual data centres in Azure (see here for more details). The workshop has proved very successful when run with partners, so I’ve now recorded a video with my colleague Dan Baker that takes viewers through the whole process of building the VDC environment. The video is less than an hour long and will walk you through the entire lab build, explaining each step along the way.

The video is available on YouTube, or you can view it directly below.


A question I’ve heard a few times recently is “if I have services running in an Azure Virtual Network, how do I securely connect that VNet to Azure public services, such as Blob Storage?”. Microsoft have this week announced a couple of features designed to help with this scenario, but before delving into those, let’s look at the issue we are actually trying to solve.

First, a few basics: a Virtual Network (VNet) is a private, isolated, network within Azure into which you can deploy infrastructure resources such as virtual machines, load balancers and so on:


Although these VMs can (and very often do) have direct Internet access, it is of course possible to restrict connectivity into and out of this VNet according to your requirements.
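One of the features in this space, VNet service endpoints, can be enabled entirely from the CLI. A minimal sketch, assuming the resource group, VNet, subnet and storage account already exist (all names here are placeholders):

```shell
# Enable a service endpoint for Azure Storage on an existing subnet
az network vnet subnet update \
  --resource-group myRG --vnet-name myVnet --name mySubnet \
  --service-endpoints Microsoft.Storage

# Deny public access to the storage account by default...
az storage account update \
  --resource-group myRG --name mystorageacct --default-action Deny

# ...then allow traffic only from the subnet with the endpoint enabled
az storage account network-rule add \
  --resource-group myRG --account-name mystorageacct \
  --vnet-name myVnet --subnet mySubnet
```

The net effect is that the storage account is reachable from inside the subnet but refuses connections from the public Internet.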

Azure has a number of ways to run containers, ranging from simple IaaS VMs running Docker, to Azure Container Service (a service that provisions a full container cluster using Kubernetes, Swarm or DC/OS) and Azure Container Instances. One characteristic of these services is that when a container is provisioned, it typically has an IP address allocated to it from within the local host, rather than from the Azure virtual network to which the host is connected. As an example, consider the following scenario, where we have a single Azure IaaS virtual machine running Ubuntu and Docker:

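You can see this behaviour on any Docker host: the container's address comes from the local docker0 bridge (172.17.0.0/16 by default), not from the VNet the VM sits in. A quick sketch, run on the Docker host itself:

```shell
# Start a container and inspect the address Docker assigned to it
docker run -d --name web nginx
docker inspect -f '{{.NetworkSettings.IPAddress}}' web   # e.g. 172.17.0.2

# Confirm the source of that address: Docker's default bridge network
docker network inspect bridge \
  -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'          # e.g. 172.17.0.0/16
```

Because that range is private to the host, other resources on the Azure VNet cannot reach the container directly without NAT or port publishing.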

I’ve just finished working on a new self-guided lab that focuses on the Azure ‘Virtual Data Centre’ (VDC) architecture. The basic idea behind the VDC is that it brings together a number of Azure technologies, such as hub and spoke networking, User-Defined Routes, Network Security Groups and Role Based Access Control, in order to support enterprise workloads and large scale applications in the public cloud.

The lab uses a set of Azure Resource Manager (ARM) templates to deploy the basic topology, which the user will then need to further configure in order to build the connectivity, security and more. Once the basic template build has been completed, you’ll be guided through setting up the following:

  • Configuration of site-to-site VPN
  • Configuration of 3rd party Network Virtual Appliance (in this case a Cisco CSR1000V)
  • Configuration of User Defined Routes (UDRs) to steer traffic in the right direction
  • Configuration of Network Security Groups (NSGs) to lock down the environment
  • Azure Security Center for security monitoring
  • Network monitoring using Azure Network Watcher
  • Alerting and diagnostics using Azure Monitor
  • Configuration of users, groups and Role Based Access Control (RBAC)
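To give a flavour of the UDR and NSG steps, here is a hedged CLI sketch – the lab itself drives most of this through ARM templates and its own naming, so the names, prefixes and the NVA address below are illustrative placeholders:

```shell
# UDR: steer all traffic from a spoke subnet via the NVA in the hub
az network route-table create --resource-group vdc-rg --name spoke1-rt
az network route-table route create --resource-group vdc-rg \
  --route-table-name spoke1-rt --name default-via-nva \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.1.4
az network vnet subnet update --resource-group vdc-rg \
  --vnet-name spoke1-vnet --name workload-subnet \
  --route-table spoke1-rt

# NSG: allow only HTTPS into the workload subnet
az network nsg create --resource-group vdc-rg --name workload-nsg
az network nsg rule create --resource-group vdc-rg --nsg-name workload-nsg \
  --name allow-https --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 443
az network vnet subnet update --resource-group vdc-rg \
  --vnet-name spoke1-vnet --name workload-subnet \
  --network-security-group workload-nsg
```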


It’s been some time since I last posted a blog – I’ve been spending the majority of my time settling in to my new role at Microsoft and learning as much as I can about the Azure platform. Considering my background, it seems fitting that my first “Azure related” blog post is all about… ACI! In this case however, ACI stands for Azure Container Instances – a new Azure offering announced this week and currently in preview. So what are Azure Container Instances?

A little over three years ago, I was introduced to Cisco’s Application Centric Infrastructure (ACI) for the first time. For someone who had spent many years at Cisco working on “traditional” networking platforms, this was something of a revelation – the ability to define network connectivity in a programmable manner using a policy based model was a major departure from anything I had done before. Since then, I’ve been lucky enough to work with a variety of customers around the globe, helping them to design and deploy the ACI solution. I’ve been part of a great team of people, worked closely with the INSBU team (responsible for ACI) and presented to hundreds of people at Cisco Live.

Over the last few months, I’ve spent some time thinking about what I do next: as ACI becomes more mainstream, do I continue with more of the same – expanding my skill set to include the other great products (Tetration, CloudCenter, etc.) that Cisco has in the data centre – or do I take a slightly different path? After some serious consideration, I’ve decided to go with the latter option – later this month, I’ll be joining Microsoft as a Cloud Solutions Architect, working with the Azure platform.

I’ve thoroughly enjoyed writing this blog over the last few years and want to thank everyone who has read the posts, commented or given me feedback. I’m hoping to continue blogging occasionally, so keep an eye out for the odd Azure-related post!


ACI can divide the fabric into multiple tenants, or into multiple VRFs within a single tenant. If communication is required between tenants or between VRFs, one common approach is to route traffic via an external device (e.g. a firewall or router). However, ACI can also provide inter-tenant or inter-VRF connectivity directly, without traffic ever needing to leave the fabric. For inter-VRF or inter-tenant connectivity to happen, two fundamental requirements must be satisfied:

  1. Routes must be leaked between the two VRFs or tenants that need to communicate.
  2. Security rules must be in place to allow communication between the EPGs in question (as is always the case with ACI).
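To make the first requirement a little more concrete: route leaking is typically driven by marking the provider-side bridge domain subnet as shared between VRFs, and by widening the contract scope so it can cross the VRF or tenant boundary. A hedged REST sketch – the APIC address, tenant, bridge domain, contract name and subnet are all hypothetical, and obtaining the session token is omitted:

```shell
# Mark a bridge-domain subnet as advertised externally and shared across VRFs
curl -sk -X POST \
  "https://apic.example.com/api/mo/uni/tn-TenantA/BD-BD1/subnet-[10.1.1.1/24].xml" \
  --cookie "APIC-cookie=$TOKEN" \
  -d '<fvSubnet ip="10.1.1.1/24" scope="public,shared"/>'

# Widen the contract scope so it can be consumed from another VRF or tenant
curl -sk -X POST \
  "https://apic.example.com/api/mo/uni/tn-TenantA/brc-web.xml" \
  --cookie "APIC-cookie=$TOKEN" \
  -d '<vzBrCP name="web" scope="global"/>'
```

The same settings are available in the APIC GUI as the subnet's "Shared between VRFs" flag and the contract's Scope drop-down.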


The 1.1(1j) & 11.1(1j) release of ACI introduced support for transit routing. Prior to this, the ACI fabric acted as a ‘stub’ routing domain; that is, it was not possible to advertise routing information from one routing domain to another through the fabric. I covered L3 Outsides in part 9 of this series, where I discussed how to configure a connection to a single routed domain. In this post, we’ll look at a scenario where the fabric is configured with two L3 Outsides and see how to advertise routes from one to the other. Here is the setup I’m using:


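For reference, the knob that turns a route learned from one L3 Outside into a route advertised out of the other is the "Export Route Control Subnet" flag on an external network (external EPG) subnet. A hedged REST sketch – the APIC address, tenant, L3 Outside and external EPG names are hypothetical, and the session token step is omitted:

```shell
# Flag 10.0.0.0/8 (learned via the first L3 Outside) for export
# through the second L3 Outside
curl -sk -X POST \
  "https://apic.example.com/api/mo/uni/tn-TenantA/out-L3Out-B/instP-ExtEPG-B/extsubnet-[10.0.0.0/8].xml" \
  --cookie "APIC-cookie=$TOKEN" \
  -d '<l3extSubnet ip="10.0.0.0/8" scope="export-rtctrl"/>'
```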

Everything I’ve shown so far in this blog series has been focused on using the APIC GUI to achieve what we need. This is fine for small environments or for becoming familiar with ACI, but what if you need to configure 100 tenants, each with 50 EPGs, tens of private networks and bridge domains, multiple contracts to allow traffic and an L3 Outside or two thrown in? Configuring all of that through the GUI is clearly going to be hugely time-consuming and error-prone, so we need a better way of doing things. ACI has a wide variety of programmability options that can be used to automate the provisioning of the fabric and its policies.
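As a flavour of one of those options – the REST API – here is a hedged sketch of authenticating to the APIC and creating a tenant; the APIC address, credentials and tenant name are placeholders:

```shell
# Authenticate to the APIC; the response sets an APIC-cookie session token
curl -sk -X POST "https://apic.example.com/api/aaaLogin.json" \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}' \
  -c cookies.txt

# Create a tenant using the saved session cookie
curl -sk -X POST "https://apic.example.com/api/mo/uni.json" \
  -b cookies.txt \
  -d '{"fvTenant":{"attributes":{"name":"Tenant-Demo"}}}'
```

The same two-step pattern (log in, then POST objects under `uni`) underpins most of the higher-level tooling, so it is worth seeing at least once in its raw form.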
