kops is an open source, top-level Kubernetes project. kops is an abbreviation of Kubernetes Operations. From the README in the project:

kops helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and VMware vSphere in alpha, and other platforms planned.

kops was founded by members of the Kubernetes team in 2016. Justin Santa Barbara has said that the team did not found kops; rather, kops found them. Mike Danese, Zach Loafman, Justin, and many other people from the Kubernetes team contributed to the initial creation of kops.

kops is DevOps

Wikipedia’s DevOps Definition

DevOps (a clipped compound of “development” and “operations”) is a software engineering practice that aims at unifying software development (Dev) and Software operation (Ops). The main characteristic of the DevOps movement is to strongly advocate automation and monitoring at all steps of software construction, from integration, testing, releasing to deployment and infrastructure management.

The above definition describes the kops team’s vision for developing kops. Repeatable, tested automation is at the core of the DevOps philosophy. Through code, the kops team does the hard things to make life easy for kops users.

kops is tested, tested, and tested again. Every time a PR is pushed in the main Kubernetes repository, kops builds a cluster in AWS. The end-to-end Kubernetes tests are then run on that cluster. The kops team is working on getting GCE e2e tests up and running soon.

A key component of DevOps culture is repeatable processes, and cluster configurations can get complicated. One of the problems with CLI tools, in my opinion, is flag hell: kops create cluster has close to 40 different CLI flags. Enter kops create -f mycluster.yaml to save the day. kops, like Kubernetes, can be driven by YAML or JSON manifests that can be checked into source control. In kops 1.8.0, the team is also introducing templating features to help build those manifests.
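
For illustration, a rough sketch of that manifest-driven workflow looks like this (the state store bucket and cluster name are placeholders, and exact flags vary by kops version):

# Point kops at its state store and name the cluster (values are placeholders)
export KOPS_STATE_STORE=s3://my-kops-state-store
export NAME=mycluster.example.com

# Register the cluster from a manifest kept in source control
kops create -f mycluster.yaml

# Preview, then apply, the changes in the cloud
kops update cluster $NAME
kops update cluster $NAME --yes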

kops Features

Supported Clouds

  • AWS
  • GCE in kops 1.8.0
  • Alpha support for vSphere
  • DigitalOcean and bare metal are under development

Features

  • Deploys Highly Available (HA) Kubernetes Masters
  • Ability to generate configuration files for Terraform and AWS CloudFormation
  • Supports custom Kubernetes add-ons
  • Command line autocompletion
  • Manifest Based API Configuration
  • Rolling updates and upgrades that are on par with commercial products
  • Validate that a cluster is up and running
  • Exporting kubeconfig for installed clusters
  • CLI options for edit, create, delete, update, upgrade, rolling-update, validate, and more
  • Full support for RBAC
  • And many many more

Tutorials

I am not going to cover how to install clusters in this post, because the kops project has multiple tutorials:

Google Compute Engine support is still under development, and will be stable with the 1.8.0 release. At the time of writing this post, 1.8.0 is not released yet. Either compile master or use the 1.8.0 alpha.
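
If you do build from master, the rough shape of it is below (assuming a working Go environment; check the kops docs for the current, exact steps):

# Rough sketch of building kops from master; see the kops docs for current instructions
go get -d k8s.io/kops
cd $GOPATH/src/k8s.io/kops
make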

Warning: only use Bazel if you have a severe need for speed and want to be more productive.

Bazel is seriously amazing. The main kubernetes/kubernetes repository has had Bazel enabled for months, but I had run into problems building with Bazel on my MacBook Pro. The learning curve for Bazel is a bit steep, but once @justinsb and I got it running in kops: Holy Cow, Batman! kops tests and builds are so much faster. When cached, builds that previously took 6 minutes complete in under a second. Those numbers are no joke.

Bazel Logo

See this link for information on their logo.

What is Bazel

On February 25th, 2015, a Googler pushed the first commit to the Bazel project, and Google officially open sourced the project. Blaze is Google’s closed source build tool and is the predecessor to Bazel.

Bazel is a build tool that replaces Makefiles and other tools for building Go. Under the hood, it drives the Go toolchain, but it is not your average build tool. Just as make has many different options, Bazel provides many exciting features, including dependency management, templating with external tools, and the capability to build containers without Docker.

Why Bazel

  1. Speed, speed, and more speed: Ferrari-like performance. Because of caching, compilation and unit-test speed is ridiculous.
  2. One tool to rule them all. Bazel supports Go, Java, C++, Android, and iOS, on macOS, Linux, and Windows.
  3. Extensibility. Add plugins and call external tools.
  4. The tool scales. It handles codebases of any size and integrates into CI. The kubernetes/kubernetes repository is a huge repo with multiple containers and binaries.
  5. Build vendoring for Go. We were not able to fully use it in kops because of some challenges, but for other projects it works great. Watch out, dep, because you have a serious competitor.

Using Bazel with Go

Pre-flight Checks

  1. Install Bazel – Instructions are here.
  2. It is my understanding that you do not even need to install Golang, but it is helpful to have it anyway. Here are the install instructions for Golang. (A quick install sketch follows this list.)
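
As a quick sketch, on macOS one way to get both tools is Homebrew (this is just one option; the official install docs cover other platforms and methods):

# One possible setup on macOS using Homebrew; see the official docs for other platforms
brew install bazel
brew install go        # optional, but handy to have a local Go toolchain

# Verify the installs
bazel version
go version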

Helper Script

Create your project with your source control tool of choice. Inside your project, run the bash script provided here.

Provide the Go import path for the project. For example: create-bazel-workspace github.com/myuser/myproject

Here is the script:

#!/bin/bash

if [ -z "$1" ]; then 
  echo >&2 "please provide your project's base Go path as an argument."
  echo >&2 "For example: github.com/chrislovecnm/go-bazel-hello-world"
  exit 1 
fi

PREFIX=$1

cat > WORKSPACE <<- EOM
git_repository(
    name = "io_bazel_rules_go",
    remote = "https://github.com/bazelbuild/rules_go.git",
    tag = "0.6.0",
)

load("@io_bazel_rules_go//go:def.bzl", "go_rules_dependencies", "go_register_toolchains", "go_repository")
go_rules_dependencies()
go_register_toolchains()

go_repository(
    name = "com_github_golang_glog",
    commit = "23def4e6c14b4da8ac2ed8007337bc5eb5007998",
    importpath = "github.com/golang/glog",
)

go_repository(
    name = "com_github_spf13_cobra",
    commit = "7b1b6e8dc027253d45fc029bc269d1c019f83a34",
    importpath = "github.com/spf13/cobra",
)

go_repository(
    name = "com_github_spf13_pflag",
    commit = "f1d95a35e132e8a1868023a08932b14f0b8b8fcb",
    importpath = "github.com/spf13/pflag",
)
EOM

cat > BUILD <<- EOMB
load("@io_bazel_rules_go//go:def.bzl", "go_prefix", "gazelle")

go_prefix("${PREFIX}")

# bazel rule definition
gazelle(
  prefix = "${PREFIX}",
  name = "gazelle",
  command = "fix",
)
EOMB

echo "project WORKSPACE and BUILD files created, run gazelle to create required BUILD.bazel files"

The above script will seed a project to use cobra. I have provided an example project, located here on GitHub.
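
If you want to poke at a working setup first, you can clone the example project and build it with Bazel (assuming its checked-in BUILD files are current):

# Clone the example project, then build and test everything with Bazel
git clone https://github.com/chrislovecnm/go-bazel-hello-world.git
cd go-bazel-hello-world
bazel build //...
bazel test //...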

The Files

The above script creates two files. The “WORKSPACE” file is mandatory for every Bazel project. This file typically contains references to external build rules and project dependencies.

From my example project:

git_repository(
    name = "io_bazel_rules_go",
    remote = "https://github.com/bazelbuild/rules_go.git",
    tag = "0.6.0",
)

load("@io_bazel_rules_go//go:def.bzl", "go_rules_dependencies", "go_register_toolchains", "go_repository")
go_rules_dependencies()
go_register_toolchains()

go_repository(
    name = "com_github_golang_glog",
    commit = "23def4e6c14b4da8ac2ed8007337bc5eb5007998",
    importpath = "github.com/golang/glog",
)

go_repository(
    name = "com_github_spf13_cobra",
    commit = "7b1b6e8dc027253d45fc029bc269d1c019f83a34",
    importpath = "github.com/spf13/cobra",
)

go_repository(
    name = "com_github_spf13_pflag",
    commit = "f1d95a35e132e8a1868023a08932b14f0b8b8fcb",
    importpath = "github.com/spf13/pflag",
)

The syntax is quite similar to Groovy, and you can see the Go dependencies being added to the WORKSPACE file. Continually adding go_repository rules is one of the areas that needs some automation, and hopefully it will be addressed by work on this issue.

The next set of files is called BUILD, or BUILD.bazel.

From Bazel documentation:

By definition, every package contains a BUILD file, which is a short program written in the Build Language. Most BUILD files appear to be little more than a series of declarations of build rules; indeed, the declarative style is strongly encouraged when writing BUILD files.

Here is the BUILD file in the root directory of the example project.

load("@io_bazel_rules_go//go:def.bzl", "gazelle", "go_binary", "go_library", "go_prefix")

go_prefix("github.com/chrislovecnm/go-bazel-hello-world")

# bazel rule definition
gazelle(
    name = "gazelle",
    command = "fix",
    prefix = "github.com/chrislovecnm/go-bazel-hello-world",
)

The BUILD and BUILD.bazel files are the continual work in Bazel. Add a new import in Go, and you need to update the BUILD file. Add a new Go file or test, and yep, update BUILD.bazel. Thankfully, the Bazel authors have added Gazelle.

Gazelle Build File Generator

Once you have the project initialized, you can execute bazel run //:gazelle. This command will execute gazelle because the Gazelle rule is defined in the above BUILD file. Running Gazelle will create various BUILD.bazel files in your project.

More documentation about Gazelle is available here.
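
My day-to-day loop looks roughly like this; the //:gazelle target name matches the rule defined in the BUILD file created above:

# Regenerate BUILD.bazel files after adding Go files or imports
bazel run //:gazelle

# Then build and test everything Gazelle wired up
bazel build //...
bazel test //...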

Defining Your Binary File(s)

One thing that Gazelle did not do initially was define a rule to build a binary. I added the go_binary and go_library rules by hand after Gazelle generated the files.

load("@io_bazel_rules_go//go:def.bzl", "go_binary", "go_library")

go_binary(
    name = "go-bazel-hello-world",
    importpath = "github.com/chrislovecnm/go-bazel-hello-world/cmd",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)

go_library(
    name = "go_default_library",
    srcs = ["main.go"],
    importpath = "github.com/chrislovecnm/go-bazel-hello-world/cmd",
    visibility = ["//visibility:public"],
    deps = ["//pkg/cmd:go_default_library"],
)

go_binary(
    name = "cmd",
    importpath = "github.com/chrislovecnm/go-bazel-hello-world/cmd",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)
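
With a go_binary rule in place, Bazel can build and run the binary directly. The exact label depends on which directory the BUILD.bazel file lives in; if the rules above sit in the cmd/ package, it would look something like this:

# Build the hello-world binary (label is illustrative; adjust to your package path)
bazel build //cmd:go-bazel-hello-world

# Or build and run it in one step
bazel run //cmd:go-bazel-hello-world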

Common Bazel Commands

The example Makefile contains the usual suspects for Bazel commands: the common development workflows of build, test, gofmt, and Gazelle.

all:
	bazel build //...

test:
	bazel test //...

gofmt:
	gofmt -w -s pkg/ cmd/

gazelle:
	bazel run //:gazelle

Next Steps

Bazel is powerful. You can do cool things like running external targets and building containers without Docker, a heck of a lot faster than Docker builds them!

Cross-compiling with Bazel is complicated, and the support for making Linux binaries on macOS is limited. One solution is to use a container such as planter.

I am planning on further posts about using go-bindata and containers with Bazel once we have those fixed in kops.

We need to integrate Bazel into kops testing with Travis. One note: use caching inside your CI tool. Without caching you lose ALL of Bazel’s performance gains. Frankly, you may as well use go build.

TL;DR

  1. Install Bazel
  2. Run the script or create the WORKSPACE and BUILD files
  3. Execute bazel run //:gazelle
  4. Define your binaries with Bazel rules
  5. Enjoy the serious performance

Thanks

First off, thanks to the Bazel team for such an amazing tool. Thanks to @justinsb for his kopeio/build project. I was able to work through issues for my example by using that project as a base.

Help from @ixby, @bentheelder, and others on the #bazel Kubernetes Slack channel has been invaluable.

Thanks to @jroberts235 for recommending including a basic overview of Bazel.

I have been on a mission as of late to use more shell aliases, and Matt Tucker, at the last Kubernetes meetup in Boulder, reminded me of a huge time saver for kubectl/Kubernetes users.

Here is the fantastic time saver that needs to be in your shell profile:

alias k=kubectl

For those who are not familiar, kubectl is the command line interface tool for running commands against Kubernetes clusters. Most K8s users use kubectl a lot.

As mentioned in the kubectl cheat sheet, kubectl supports autocomplete in both the bash and zsh shells.

# setup autocomplete in bash, bash-completion package should be installed first.
source <(kubectl completion bash)
# setup autocomplete in zsh
source <(kubectl completion zsh)
# for help on setting up autocomplete
kubectl completion -h

For those who want more aliases, here are 600 for your viewing pleasure: https://github.com/ahmetb/kubectl-aliases.
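
A few illustrative examples in the same spirit, plus wiring bash completion up to the k alias (the shorthand names are my own, and the last line assumes your kubectl completion script defines the __start_kubectl helper):

# A handful of illustrative aliases; the names are my own shorthand
alias k='kubectl'
alias kg='kubectl get'
alias kgp='kubectl get pods'
alias kaf='kubectl apply -f'

# Let bash completion follow the k alias as well
# (assumes kubectl completion has already been sourced, as shown above)
complete -o default -F __start_kubectl k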

A humorous story: one of the bugs I found in the PetSet documentation was that the author documented using k instead of the kubectl command.

The Container Network Interface (CNI) is a library definition and a set of tools under the umbrella of the Cloud Native Computing Foundation project. For more information, visit their GitHub project. Kubernetes uses CNI as an interface between network providers and Kubernetes networking.

CNI Logo

Why Use CNI

Kubernetes’ default networking provider, kubenet, is a simple network plugin that works with various cloud providers. Kubenet is a very basic network provider; basic is good, but it does not have very many features. Moreover, kubenet has many limitations. For instance, when running kubenet in the AWS cloud, you are limited to 50 EC2 instances: route tables are used to configure network traffic between Kubernetes nodes, and they are limited to 50 entries per VPC. Additionally, a cluster cannot be set up in a private VPC, since that network topology uses multiple route tables. Other more advanced features, such as BGP, egress control, and mesh networking, are only available with different CNI providers.

Choosing a Provider

Which CNI provider should I use?

The above question is asked repeatedly on the #kops Kubernetes Slack channel. There are many different providers, which have various features and options. Deciding which provider to use is not a trivial decision.

I am not going to say in this blog post, “Use this provider.” There is not one provider that meets everyone’s needs, and there are many different options. The focus of this post is not to tell you which provider to use; the goal is to educate and provide data so that you can make your own decision.

CNI in kops

At last count, kops supports seven different CNI providers besides kubenet. Choosing from seven different network providers is a daunting task.

Here is our current list of providers that can be installed out of the box, sorted in alphabetical order: Calico, Canal, flannel, kopeio-networking, kube-router, romana, and Weave Net.

Any of these CNI providers can be used without kops. All of them use a daemonset installation model, where the product deploys a Kubernetes DaemonSet. Just use kubectl to install the provider on the master once the K8s API server has started. Please refer to each project’s specific documentation.
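
With kops, the provider is typically chosen at cluster-creation time with the --networking flag; without kops, you apply the provider’s daemonset manifest once the API server is up. Roughly (the manifest URL below is a placeholder; use the one from your provider’s documentation):

# With kops: pick the CNI provider when creating the cluster (other required flags omitted)
kops create cluster --networking calico $NAME

# Without kops: apply the provider's daemonset manifest once the API server is up
# (the URL is a placeholder; each project's documentation has the real one)
kubectl apply -f https://example.com/provider-daemonset.yaml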

Summary of the Providers

Calico

Mike Stowe provided a summary of both Calico and Canal.

Calico provides simple, scalable networking using a pure L3 approach. It enables native, unencapsulated networking in environments that support it, including AWS, AZ’s and other environments with L2 adjacency between nodes, or in deployments where it’s possible to peer with the infrastructure using BGP, such as on-premise. Calico also provides a stateless IP-in-IP mode that can be used in other environments, if necessary. Beyond scalable networking, Project Calico also offers policy isolation, allowing you to secure and govern your microservices/container infrastructure using advanced ingress and egress policies. With extensive Kubernetes support, you’re able to manage your policies in Kubernetes 1.8+.

Canal

Canal is a CNI provider that gives you the best of Flannel and Project Calico, providing simple, easy-to-use, out-of-the-box VXLAN networking, while also allowing you to take advantage of policy isolation with Calico policies.

This provider is a solution for anyone who wants to get up and running while taking advantage of familiar technologies that they may already be using.

flannel

Brandon Phillips’ views on flannel.

Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes. No external database (it uses the Kubernetes API), simple, performant, works anywhere, VXLAN by default, and it can be layered with the Calico policy engine (Canal). Oh, and lots of users.

Tectonic, CoreOS’s commercial Kubernetes product, uses a combination of flannel and Felix from Calico, much like Canal.

kopeio-networking

Justin Santa Barbara, the founder of kopeio, provided this:

kopeio-networking provides Kubernetes-first networking. It was purpose built for Kubernetes, making full use of the Kubernetes API, and because of that is much simpler and more reliable than alternatives that were retrofitted. The VXLAN approach is the most commonly used mode (as used in weave & flannel), but it also supports layer 2 (as used in calico), with more experimental support for GRE (the replacement for IPIP), and for IPSEC (for secure configurations). It does all of this with a very simple codebase.

kube-router

Kube-router is a purpose-built networking solution for Kubernetes, designed from the ground up. It aims to provide operational simplicity and performance. Kube-router delivers a pod networking solution, a service proxy, and a network policy enforcer as an all-in-one solution, with a single daemonset. Kube-router uses Kubernetes-native functionality like annotations and pod CIDR allocation by the kube-controller-manager, so it does not have any dependency on a data store and does not implement any custom solution for pod CIDR allocation to the nodes. Kube-router also uses standard CNI plug-ins, so it does not require any additional CNI plug-in. Kube-router is built on the standard Linux networking toolset and technologies like ipset, iptables, IPVS, and LVS.

Murali Reddy, founder of kube-router.

romana

Chris Marino summarized romana.

Romana uses standard layer 3 networking for pod networks. Romana supports Kubernetes Network Policy APIs and does not require an overlay, even when a cluster is split across network availability zones. Romana supports various network topologies, including flat layer 2 and routed layer 3 networks. Routes between nodes are installed locally and, when necessary, distributed to network devices using either BGP or OSPF. In AWS deployments, Romana installs aggregated routes into the VPC route table to overcome the 50-node limit. This lets Romana use native VPC networking across availability zones for HA clusters. The current release uses its own etcd cluster, but the next version will optionally allow the Kubernetes etcd cluster to be used as a datastore.

Weave Net

Paul Fremantle’s synopsis of Weave Net.

Weave Net supports an overlay network that can span different cloud networking configurations, simplifying running legacy workloads on Kubernetes. For example, Weave supports multicast, even when the underlying network doesn’t. Weave can configure the underlying VPC networking and bypass the overlay when running on AWS. This provider forms a mesh network of hosts that are partitionable and eventually consistent, meaning that the setup is almost zero-config and it doesn’t need to rely on etcd. Weave supports encryption and Kubernetes network policy, ensuring that there is security at the network level.

By the Numbers

Kops does not track usage numbers of the different CNI providers, and I hope never does. When making a product selection of software hosted on GitHub, I look at three different numbers:

  1. GitHub Stars – Likes on GitHub
  2. GitHub Forks – Number of copies of the repo
  3. GitHub Contributors – Number of people with merged code

The activity level of a project is critical. These are some of the metrics that I use to judge the activity level.

GitHub Stars

GitHub Stars are akin to likes on a Social Media platform.

Project Stars

GitHub Contributors

The number of contributors speaks to the number of people maintaining the code base and documentation. Active projects have a high number of contributors.

Project Contributors

GitHub Forks

The number of forks is a mix of likes and contributors. Contributors typically have to fork the repo. Other people will fork the project to build a custom copy, to push code to a feature branch that they own, or for various other reasons.

Project Forks

Support Matrix

Here is a table of different features from each of the CNI providers mentioned:

| Provider | Network Model | Route Distribution | Network Policies | Mesh | External Datastore | Encryption | Ingress/Egress Policies | Commercial Support |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Calico | Layer 3 | Yes | Yes | Yes | Etcd (1) | Yes | Yes | Yes |
| Canal | Layer 2 vxlan | N/A | Yes | No | Etcd (1) | No | Yes | No |
| flannel | vxlan | No | No | No | None | No | No | No |
| kopeio-networking | Layer 2 vxlan (2) | N/A | No | No | None | Yes (3) | No | No |
| kube-router | Layer 3 | BGP | Yes | No | No | No | No | No |
| romana | Layer 3 | OSPF | Yes | No | Etcd | No | Yes | Yes |
| Weave Net | Layer 2 vxlan (4) | N/A | Yes | Yes | No | Yes | Yes (5) | Yes |

1. Calico and Canal include a feature to connect directly to Kubernetes, and not use Etcd.
2. The kopeio CNI provider has four different networking modes: vxlan, layer 2, GRE, and IPSEC.
3. kopeio-networking provides encryption in IPSEC mode, not in the default vxlan mode.
4. Weave Net can operate in AWS-VPC mode without vxlan, but is limited to 50 nodes in EC2.
5. Weave Net does not have egress rules out of the box.

Table Details

Network Model

The network model for each provider is either encapsulated networking, such as VXLAN, or unencapsulated layer 3 networking. Encapsulating network traffic requires extra compute to process, so it is theoretically slower. In my opinion, most use cases will not be impacted by the overhead. More about VXLAN on Wikipedia.

Route Distribution

For layer 3 CNI providers, route distribution is necessary. Route distribution is typically done via BGP, and it is a nice-to-have CNI feature if you plan to build clusters split across network segments. BGP is an exterior gateway protocol designed to exchange routing and reachability information on the internet, and it can assist with pod-to-pod networking between clusters.

Network Policies

There is a kubernetes.io blog post about network policies in 1.8 here.

Kubernetes now offers functionality to enforce rules about which pods can communicate with each other using network policies. This feature became stable in Kubernetes 1.7 and is ready to use with supported networking plugins. The Kubernetes 1.8 release has added better capabilities to this feature.
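
As a minimal sketch (the labels and namespace are made up), a policy that only lets pods labeled app=frontend reach pods labeled app=db could look like this:

# Restrict ingress to the db pods to traffic from frontend pods only
# (labels and namespace are illustrative)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF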

Mesh Networking

This feature allows for “Pod to Pod” networking between Kubernetes clusters. This technology is not Kubernetes federation, but is pure networking between Pods.

Encryption

Encrypting the network control plane, so all TCP and UDP traffic is encrypted.

Ingress / Egress Policies

These network policies cover both Kubernetes and non-Kubernetes routing control. For instance, many providers will allow an administrator to block a pod from communicating with the EC2 instance metadata service on 169.254.169.254.
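
Some providers expose this through their own policy resources; with the Kubernetes API itself (egress rules and ipBlock arrived around 1.8), a sketch of such a rule might look like this, assuming your provider enforces egress policies:

# Allow egress anywhere except the EC2 metadata address
# (pod selector and namespace are illustrative; requires a CNI provider with egress support)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-service
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
EOF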

Summary

If you do not need the advanced features that a CNI provider delivers, use kubenet: it is stable and fast. Otherwise, pick one. If you do need to run more than 50 nodes on AWS, or need other advanced features, make a decision quickly (don’t spend days deciding), and test with your cluster. File bugs, and develop a relationship with your network provider. At this point in time, networking is not boring in Kubernetes, but it is getting more boring every day! Monitor, test, and monitor some more.

Thanks

I appreciate all of the feedback that I received on the pull request for this blog post, as well as all of the summaries provided by the people who work on the different projects.