Reputation: 350
I am running a Kubernetes cluster on Azure without any public access (not AKS).
I would like to run kubectl commands from Azure DevOps using a Kubernetes service connection. I set one up by importing certificates and pointing it at a public IP, and the deployment worked.
But I was wondering if there is a way to reach the VMs in the resource group (perhaps through a VNet or a gateway) without exposing the machines publicly?
Upvotes: 0
Views: 321
Reputation: 18138
If you were running an Azure Kubernetes Service (AKS) cluster, it would be really easy to establish a service connection in Azure DevOps to the AKS service. Under that setup, your scripts would run securely through Azure automation within Microsoft's datacenters.
However, since you are running your own Kubernetes cluster, it's basically a set of virtual machines that just happen to be in Azure. For Azure DevOps to execute commands against your cluster, you need to provide an IP address it can reach, and since Azure DevOps and your machines are on different networks, it feels like you need to expose your cluster on a public IP address. That does indeed feel like a vulnerability.
I can think of two options:
Network Security Rule for Azure DevOps Agents - When you created the virtual machine, you had to put it into a virtual network. You can create a network security group (NSG) for that virtual network and add an inbound rule to allow connections from Azure DevOps. So yes, it would still be a public IP, but you would deny all inbound traffic except from the IP addresses you specify.
Now, the challenge here is that if you're using a hosted build agent for Azure DevOps (the agent provided by Microsoft), each time you run a build or release you get a brand-new virtual machine, so a firewall rule for a single IP address won't work. You'll need to allow a range of IP addresses.
Microsoft publishes a list of IP address ranges for its build agents every Monday, so you could in theory set up a firewall rule limited to those specific ranges. I have no idea how volatile this list is, or whether you'll need to change the ranges on a weekly basis. That sounds like work.
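For illustration, here is a minimal sketch of such a rule using the Azure CLI. The resource group, NSG name, and CIDRs below are placeholders (the real prefixes come from the weekly published list), and it assumes your Kubernetes API server listens on the default port 6443:

```bash
# Allow the Kubernetes API port only from Azure DevOps hosted-agent ranges.
# The CIDRs below are placeholders - substitute the ranges from the weekly list.
az network nsg rule create \
  --resource-group my-k8s-rg \
  --nsg-name my-k8s-nsg \
  --name allow-azure-devops-agents \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 13.107.6.0/24 40.74.28.0/23 \
  --destination-port-ranges 6443

# The NSG's built-in DenyAllInBound rule (priority 65500) blocks everything else,
# so no explicit deny rule is needed for other internet traffic.
```

Refreshing those prefixes each week could itself be scripted, but it's still maintenance you'd own.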
Self-Hosted Build Agent -- If you want to avoid the public IP address completely, then you'll want to use a "Self-Hosted Build Agent" (a machine provided by you) so that the build agent can communicate on the same private network as the cluster.
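As a rough sketch of what registering such an agent looks like on a Linux machine inside your network (the organization URL, pool name, agent name, and token below are all placeholders):

```bash
# Download and extract the agent package from the "Agent pools" page in
# Azure DevOps, then register it against your organization with a PAT.
./config.sh \
  --url https://dev.azure.com/your-organization \
  --auth pat \
  --token <personal-access-token> \
  --pool PrivateAgents \
  --agent k8s-deploy-agent

# Start listening for jobs (or install it as a service with ./svc.sh install).
./run.sh
```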
Using a self-hosted build agent means you take responsibility for maintaining that machine, but it can live on your internal network where you control the IP address. You could also set up a dedicated virtual machine in the same virtual network as the cluster. And, my personal favorite, you could run the build agent as a Docker container inside your cluster. The Docker image for the VSTS agent is located here: https://hub.docker.com/_/microsoft-azure-pipelines-vsts-agent
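A minimal sketch of running that image as a container, using the environment variables documented on the image's page; the organization name, token, and pool are placeholders, and you may want to pin a specific image tag:

```bash
# Registers a containerized agent against your Azure DevOps organization.
# VSTS_ACCOUNT is the organization name from https://dev.azure.com/<organization>.
docker run \
  -e VSTS_ACCOUNT=your-organization \
  -e VSTS_TOKEN=<personal-access-token> \
  -e VSTS_POOL=PrivateAgents \
  -it mcr.microsoft.com/azure-pipelines/vsts-agent
```

From there, your pipeline just targets that agent pool, and kubectl runs against the cluster over its private network.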
A note about build agents -- they don't require inbound firewall rules because they communicate outbound to Azure DevOps. All triggers and interactions happen on the Azure DevOps site, but the actual work is delegated down to the agent.
Upvotes: 1