2019-09-03

The Mathematics of Continuous Delivery

If you're used to traditional, i.e. fairly infrequent, software delivery, the idea of Continuous Delivery will probably seem hopelessly expensive. "Maybe it works for someone else, but it won't work for us," is a common thought. If normal feature deliveries typically take days or weeks, it seems impossible to make them every time a programmer checks in some code.

You can probably deliver an emergency bug fix fairly rapidly in a pinch, but could you continuously make all your deliveries at that speed without making a mess? Those emergency fixes are special cases, and are maybe done in a way which wouldn't be sustainable if all changes were made like that. Right?

Of course, it would be great if emergency fixes were as reliable and well tested as ordinary releases—they certainly deserve to be that, but isn't that just a dream?

Each delivery requires a substantial amount of time and resources, so surely making many deliveries must be more costly than making fewer and bigger deliveries, right? If we rush a delivery, it's likely to turn out broken. Quality must be allowed to take time, and since we must let each delivery take a reasonable amount of time, we can't make them too often, so we need to include as many changes as we can in each delivery, right?

If we imagine that we produce \(m\) deliveries each year, with \(n\) changes in each delivery, we'll deliver \(x = mn\) changes in a year, regardless of whether we make \(x\) deliveries of \(1\) change each, or \(1\) delivery of \(x\) changes. How could the former be more efficient than the latter, if each delivery has some setup time etc?

There are some obvious benefits with frequent deliveries, and particularly with frequent deployment, such as being able to use new features sooner rather than later. I've also explained, in A Use Case for Continuous Deployment, how deploying software in baby steps might reduce risks and downtime. In this text, I'm focusing on the costs of delivery.

Let's do some math!

Call the total cost of a single software delivery \(C_{delivery}\). I don't think it matters a lot if we're thinking dollars, calendar days or person-hours here.

In your normal delivery process, there are probably parts that take as much time regardless of the size of the delivery. Maybe some IT person needs to install the new version of the software on some server, for instance. Let's call that part of the cost \(C_0\).

There are probably also parts of the delivery which will be proportional to the number of changes we've made, for instance manual tests of these changes. That's \(C_1n\).

If this was all, things might be different, and you'd probably not be reading this article hoping that there might be a better way to deliver software. In all organizations I know, big releases have almost always taken longer in practice than they should have taken if everything went according to plan. The bigger the release, the less likely it is to go as planned.

A core issue here is the interaction between the different changes in the release. In a delivery with \(n\) changes, each change might impact, or be impacted by, the \(n-1\) other changes, so there are \(n(n-1)\) potential failure causes due to changes made in parallel, which is close to \(n^2\).

Even for bugs which are caused by a single change, it might not be obvious where the problem is located (unless you only made one change...). This means that for each hard-to-locate bug in a delivery of \(n\) changes, the time it takes to locate the bug is proportional to \(n\), and the number of such bugs is also proportional to \(n\). That is, more factors that are proportional to \(n^2\).

It's also my experience, both as a software developer and as someone responsible for software quality, that the time it takes to diagnose and solve a software defect is strongly related, maybe proportional, to how long ago the defect was introduced. If I mistype something and the compiler or a unit test helps me discover this at once, I'll fix it in a couple of seconds. If someone reports a defect that was created months ago, it will probably take days before it's clarified that I'm the one who should fix it, and by then I'm in the middle of something else that I need to finish. After that, it'll take time to reproduce the defect in code I haven't touched in months, I'll spend time recalling what that software was, and maybe I even have to figure out something done by someone who no longer works for us. More \(n^2\) terms, since the average age of changes when they are delivered is likely to be proportional to \(n\) too.

There are probably higher order terms too, but let us stop here, and settle for the following equation for the cost of a delivery with \(n\) changes.

\begin{equation*}
   C_{delivery} = C_0 + C_1n + C_2n^2
\end{equation*}

This means that our yearly delivery costs for \(m\) deliveries are:

\begin{equation*}
   C_{year} = mC_{delivery} = mC_0 + mC_1n + mC_2n^2
\end{equation*}

How can we minimize \(C_{year}\) for a certain \(x\)? How many deliveries \(m = x/n\) should we make?

If we look at the middle term \(mC_1n\) first, we notice that it doesn't matter for this part of the equation if we make many small deliveries, or fewer but bigger ones. For a fixed \(x = mn\), this part will be \(xC_1\) per year either way. Of course it's good if \(C_1\) is as small as possible, but that's true whether we make one big mega-delivery or many tiny deliveries.

Looking at the remaining parts, \(mC_0\) and \(mC_2n^2\), we can make some observations:

If \(C_0\) is big, it seems we want to minimize \(m\) and prefer few and big deliveries.

On the other hand, looking at the last term, the \(n^2\) factor means that \(mC_2n^2\) can become the dominating term if \(n\) is large, even if \(C_2\) is small. That seems to suggest that small deliveries are better, and that Continuous Delivery becomes more important as your product grows and gets more changes.
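
To see where the balance point lies, we can substitute \(m = x/n\) in the yearly cost and minimize over \(n\). This is just a back-of-the-envelope calculation, assuming the cost model above holds and treating \(n\) as a continuous variable:

\begin{equation*}
   C_{year} = \frac{x}{n}C_0 + xC_1 + xC_2n
   \qquad\Rightarrow\qquad
   \frac{dC_{year}}{dn} = -\frac{x}{n^2}C_0 + xC_2 = 0
   \qquad\Rightarrow\qquad
   n_{opt} = \sqrt{\frac{C_0}{C_2}}
\end{equation*}

The optimal batch size shrinks as \(C_0\) (the fixed cost per delivery) is driven down and as \(C_2\) (the surprise factor) grows, which is exactly the tension discussed below.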

Besides, \(C_0\) is about IT processes, which should be standardized, predictable and repetitive. That's ideal for automation, making \(C_0\) as small as possible, and thus \(mC_0\) reasonably small even if \(m\) grows larger.

\(C_2\) on the other hand is the surprise factor. It's the "how on earth could this cause that"-factor. While we can (and should) work to make problem solving and defect fixing systematic and controlled, it will require creativity and innovation, and we can be sure there will be some surprises and things we couldn't predict.

Hopefully, looking at deliveries—continuous or not—like this helps you figure out how to approach challenges in software delivery.

Considering \(mC_0\), it's important to control and automate the repetitive steps in building, testing, configuring, installing and monitoring software, and also to avoid handovers and enable development teams to deliver new versions of their products without red tape. There are plenty of good tools and practices for such things, but it also depends on a suitable system architecture.

Considering \(mC_1n\), manual testing and similar tasks should be performed and automated in parallel with development, so that we have good, automated regression tests etc when it's time to deliver.

The most important thing in dealing with \(mC_2n^2\) is to minimize \(n\), i.e. Continuous Delivery!

While programming is certainly a creative and sometimes unpredictable activity, software delivery is manufacturing, which should be standardized and automated, and benefits a lot from concepts such as Kaizen and Just-In-Time.

2018-11-03

Building a successful software service is a lot like building a successful chain of restaurants!

I'm an engineer with a background in electronics and product cycle times of several years, but in building modern, cloud based software services, I realize that I might need to think more like an entrepreneur in the fast food business.


Boundaries of mental models

My industrial clients are used to products which are more complicated than complex, and thus processes where design is based on thorough analysis and careful tracking of requirements. They are used to heavy investments in machinery, and high levels of precision and standardization to enable efficient, large scale production of identical products.

It's almost impossible not to be shaped by the things we have done, both as individuals, and as organizations. We obviously use our previous experiences when we venture into new fields, and in my opinion, it's one of the hallmarks of intelligence to learn from whatever we do, and then utilize what we learned in new situations.

A problem in this is to understand where the boundaries lie for the mental models we form, since misapplied models will lead us astray. We might use the same words, and think that we are in agreement about a system we're building, but suddenly someone says something casual, and you realize that they are seeing something entirely different than you do. Maybe it's a big eye opener for you, or maybe you suddenly feel that someone else is in need of a big eye opener...

Looking from a different angle

Some time ago, I heard a manager in a product development company talk about the upcoming first release of a cloud based software service as a Minimum Viable Product, and that surprised me, because to me the planned scope was way beyond what e.g. Eric Ries describes in The Lean Startup. But when he continued, I understood, because he suggested that the next bunch of features would be released in a second major release, maybe six months after the first release.

We've been working hard to implement a continuous delivery pipeline, with the capacity to put newly developed features into production in maybe an hour from development, and management still thinks in half year release cycles... Almost as slow as new car models...

If he, with his background in heavy industry, had been charged with diversifying the company and opening a chain of fast food restaurants, I'm pretty sure he'd have suggested that we first open a single, small restaurant and learn from that, instead of opening lots of big restaurants at once. If he had been informed that a dish on the menu got massive complaints from the customers, he would never have suggested that we deal with that in a "second major release" six months from now.

While planning is certainly important in the restaurant business, planning too much in detail would be like planning all the moves in a chess game before you start. That strategy will fail quickly. Same thing with the restaurants. Your ways of working need to be based on rapidly responding to feedback from a ruthless reality, with your goals constantly guiding you.

Complicated vs Complex

People get used to their tools. If their tools are gated processes, strict separation between R&D and operations, meticulous traceability tracking etc, it might feel very careless not to use these tools when making substantial investments. When they learn about "agile", they see sloppy ways of working, since their standard tools, their safety nets, are missing.

A carpenter wouldn't try to use a saw and hammer if he had to cook meals instead of building a house. The difference would be so obvious that he'd realize that his normal tools were far from optimal.

People used to traditional, plan-based development don't see that agile processes are mainly for the complex domain, and that their old tools are limited to the complicated domain, since they aren't aware of the distinction. They also don't see that a lot of their tools exist to manage complications that the agile processes eliminate if applied correctly.

For more about complicated vs complex, I'd suggest investigating the Cynefin Framework, which was introduced to a wide audience in A Leader’s Framework for Decision Making in Harvard Business Review, and has been elaborated further by Dave Snowden.

2018-04-15

Building a Smart Mirror from an old Computer Monitor and a Raspberry Pi

I recently built a Smart Mirror together with my son Tomas, and there has been some interest in how to do this, so I thought I'd post some notes. Please ask if you have questions or want clarifications. Don't ask me to build one for you, since a home built device like this is probably not legal to sell, and it's a bit much to ask someone to do a thing like that for free... It takes some effort...

Smart Mirror?

A Smart Mirror is basically a computer attached to a monitor on a wall. It's used as an information radiator. The mirror part is about having a semi-transparent mirror in front of the monitor, so that dark parts of the computer screen act as a mirror, albeit a somewhat darker one than normal.

The Smart Mirror is hanging on the wall...

This doesn't take much computing power, so a good computer for this purpose would be a cheap and tiny Raspberry Pi and any old flat screen monitor you have left over. The most difficult part of this is to make it look good enough to be acceptable on a wall in a home.

Warning!

Unless you really know what you are doing, building or installing a device like this could cause fire, electrocution, network security breaches, or a divorce, so don't try unless you understand the legal and practical consequences for the safety of yourself and others who might be affected. This text will not teach you any of the skills you need. It will only help you with some practical tips, assuming that you already have sufficient training and skills in building electrical devices, carpentry, Linux and network security.

I'm not going to tell you to unplug cables before tearing things apart, or to change default passwords. Unless such things are obvious, wait for consumer versions of smart mirrors to appear on the market. They will be cheaper and easier to use anyway... Be patient!

Parts

Your mileage may vary, but this is the material I used:
Besides these parts, you obviously need suitable tools and materials, but if you passed the criteria in the warning section, you already know that...

If you have a suitable monitor with an HDMI connector, life will be easier for you, since the Raspberry Pi has an HDMI connector. Then you can skip the Gert VGA666 passive VGA adapter.

You could also use glass and semitransparent mirror film instead of the acrylic mirror I used.

The VGA Monitor

Before you start, you should probably think about the layout on the back of the monitor:
  • Is there a suitable space for the Raspberry? How should it be oriented to make it convenient to attach power, the monitor and possible USB devices, and to swap the micro SD card if you need to do that?
  • How are the connectors you need to use located? Will cables protrude in an inconvenient way?
  • In what direction from the mirror do you expect the cables to go?
  • If you plan to make the mirror vertical, should you rotate left or right?
Once upon a time, the monitor looked like this...
Your Smart Mirror will be a lot thicker than a normal mirror, so you'll want to remove all the plastic shielding from your monitor, and get it down to the thinnest possible size, just the metal. This is probably no big deal. You need a screwdriver and a suitable amount of violence.

Be careful about the monitor adjustment buttons. In my case, it was a piece of PCB on a ribbon cable, and I attached it on the backside of the monitor with double sided tape in such a way that I can reach the buttons with my fingertips if I need to.

Another consideration is that screws for the wall mount will no longer go through the layer of plastic you removed. This might cause supplied screws to extend further into the monitor interior than intended. That could possibly lead to damage, so beware!

The Raspberry Pi

I used a Raspberry Pi 3B, but the software I use, Magic Mirror 2, supports both Pi 2 and Pi 3.

Raspberry Pi 3B
Regarding the location of the Pi in the mirror, all sides matter. The power connector is the micro USB to the left on the bottom side. Considering how I mounted my Pi, I drilled a hole in my frame to connect the USB power cable. I use WiFi, so I don't need the Ethernet port on the right side, but I used the USB connectors on the right side for keyboard and mouse during setup, so those couldn't be blocked. The micro SD card with the OS is inserted on the underside of the board from the left side; that's accessible to me if I remove the frame. The Gert VGA666 adapter sits on the GPIO port on the top. The problem I had there was that the VGA connectors in each end of the VGA cable almost got in each other's way, since the monitor VGA port and the VGA666 were so close to each other.
The mirror from behind. The left side goes up, so the Raspberry Pi in the lower right corner will also be in the lower right corner when the mirror is in place.
I found a good place in a corner of my monitor for the Pi, and used M3 screws, nuts and 5 mm brass spacers to attach it to a piece of the monitor which could be screwed loose and drilled in. I suspect a glue gun would work too; it obviously depends on what stress the Pi is exposed to, mainly from cables attached to the connectors. M3 is really a bit big for the holes in the Raspberry Pi; the right size is M2.5.

The Mirror

I used a ready made acrylic one-way mirror from Slöjddetaljer. I think my screen area inside the metal frame was 30.5 cm * 38 cm, so their 30 cm * 40 cm model was a very convenient size. The disadvantage with acrylic is that it scratches very easily, so be very careful when cleaning spots. You will obviously place the reflective layer towards the back, so front side scratches won't destroy the reflection, and you can polish them away with special polish for acrylic surfaces.

To cut off the excess piece, I scratched the mirror on both sides with a sharp knife along a metal ruler, and broke off the piece I didn't want. It's not completely trivial to break off a 2 cm * 30 cm strip in one piece. Remember to be careful not to make scratches or marks in the mirror.

The more scratch resistant approach is obviously to get a pane of 3 mm glass cut to the right size, and mount one-way mirror film on the back side.

Whether you have a glass or acrylic mirror, you want to make sure that the reflective side is on the back, and I think that the mirror should lie directly on the LED screen. You don't want reflections to go back and forth between the mirror and the screen.

The way I cut the mirror, it fit entirely inside the metal framing which surrounds the LED display, and thus lies flush to the LED. It's kept in place by the oak frame. I simply taped it to the screen with masking tape.
The mirror without frame, attached to the monitor with masking tape.

The Frame

Since the monitor and the computer hang on the VESA wall mount, the frame only needs to be strong enough to carry its own weight and stop the thin mirror from falling off the computer screen. The frame hangs on the "picture", in contrast to a traditional picture, which hangs in its frame.

You could imagine using a frame from an old painting or mirror, and just sawing it to size, but there is a problem with that: depth! A mirror glass is just a few millimeters. The minimal depth of an old painting is determined by the wooden frame which the canvas is stretched on. That's tiny compared to a computer monitor plus a wall mount.
Mirror without frame from side

With the wood I selected, I have 8 mm of oak in front of the mirror, and 47 mm of oak behind the top 8 mm, stretching back towards the wall. The monitor and wall mount are thicker than that; I measure 19 mm from the wall to the back of the frame. That's big enough to allow for ventilation (I hope), but small enough to hide most of the ugly technology. It's 73 mm from the wall to the front of the frame. That's a 73 - 19 = 54 mm thick frame. 47 + 8 = 55, so I have sanded down 1 mm while working on this, which brings us to the carpentry part...

Unfortunately, my carpentry English is limited... Some Swedish terms in italics.

The front part is a 27 mm * 8 mm "foglist" (moulding strip), which means that the corners closest to one of the wide sides are rounded. This is the front of the frame. The frame sides are made from 47 mm * 10 mm "planhyvlad list" (plain planed strip). While the bigger side pieces were untreated, the foglist was varnished, so I had to sand it down first so that it would work with wood glue and wood oil.

Gluing the front and side at an angle with the outsides flush, the front will provide a 27 - 10 = 17 mm cover. We want to cover four things:
  1. Sideways gap between oak frame and metal frame of screen.
  2. Metal frame besides screen.
  3. Gap between metal frame and mirror pane.
  4. Tape keeping the mirror in place.
17 mm should be fine for this. There was also a 21 mm wide foglist, but that would only have given 11 mm of cover, which would have been too little, and it would have made the proportions of the frame worse. Running the Raspberry in console (non GUI) mode, I notice that the frame completely covers the top and bottom rows of text, and the first and last columns. Considering this, it's possible that I should have made the frame a bit bigger, and used shims to keep the monitor in the middle. It's no problem while running the MagicMirror2 app though; the app is made with custom frames in mind...

Frame from behind. Hole for Raspberry power in near left corner. Piece of wood to keep frame in place on inside of top side. Stains from rejected experiment with antique wax also visible on inside.

This is what I did in order:

  • Sand off the varnish from the foglist.
  • Cut the foglist and planhyvlad list into four pieces each, long enough to fit on each side with some margin.
  • Match the pieces, decide which sides look better, glue them and clamp them.
  • Remove excess glue and sand the L-shaped pieces so that they are straight and smooth.
  • Saw the ends to the right length at a 45 degree angle and sand the edges.
  • Glue and clamp into the final shape.
  • Remove excess glue and sand corners slightly.
  • Measure where the hole for the power USB for the Raspberry should go, and drill that.
  • Glue a piece of wood on the inside of the top side of the frame, to keep the frame in place when it's hanging on the monitor.
  • Oil the frame twice with a night in between.
  • Mount the decorative (?) metal brackets for the corners. (I didn't want to hammer nails into the frame without pilot holes, and I was reluctant to drill freehand with thinner drills than 2 mm, so I used super glue to make sure that the thin nails for the corner brackets stuck in their holes.)

Software and Configuration

During installation, I had a USB keyboard and mouse attached to the mirror. Once I had set up remote access via VNC, that was no longer needed. It also took some work to get the VGA666 drivers to work, so initially I had another monitor attached to the HDMI port.

Operating System

The Raspberry Pi is running Raspbian installed on a micro SD card.

MagicMirror2

The main piece of software is available at https://magicmirror.builders/

Installation and configuration is described in the GitHub repo.

The application is configured in config/config.js, and third party modules are installed with git clone in the modules subdirectory. I used these modules:

Display Rotation and VGA666

If you are using a digital monitor via the HDMI connector, this is not an issue, but since the VGA666 adapter uses GPIO ports which are by default reserved for other use, you might have some things to deal with if you use a VGA monitor.

All the details you need to know are in the Raspberry Pi Forum. As I said above, you might need an HDMI monitor until you get this to work.

I added this to /boot/config.txt

# VGA 666
dtoverlay=vga666
enable_dpi_lcd=1
display_default_lcd=1
dpi_group=2
dpi_mode=16
display_rotate=1
avoid_warnings=1 

Starting MagicMirror2 on boot

I had to fiddle with Electron to get the app to run at boot, but I hope that's now set up with the default install.
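
For reference, one commonly used way to start MagicMirror2 at boot is to let the pm2 process manager launch it. This is only a sketch, and it assumes MagicMirror is installed in ~/MagicMirror and that the X display is :0:

# Install the pm2 process manager and register it as a boot service
sudo npm install -g pm2
pm2 startup    # prints a command to run once; it registers pm2 as a service

# A small start script for pm2 to manage (~/MagicMirror is an assumed path)
cat > ~/mm.sh << 'EOF'
cd ~/MagicMirror
DISPLAY=:0 npm start
EOF
chmod +x ~/mm.sh

pm2 start ~/mm.sh --name MagicMirror
pm2 save    # remember the process list across reboots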

Not blanking the screen...

By default, the Raspberry turns off the screen after a few minutes of inactivity. You don't want that...

Add in /etc/lightdm/lightdm.conf:
xserver-command=X -s 0 -dpms

In /etc/kbd/config:
BLANK_TIME=0
BLANK_DPMS=off
POWERDOWN_TIME=0

Remote Access

The easy way to get remote access to a Raspberry Pi is to enable either ssh or vnc under interfacing options in raspi-config. I used vnc.

Maintenance

To update MagicMirror2, run git pull && npm install. You might need to do the same for each module. For Raspbian, it's the usual sudo apt update followed by sudo apt upgrade.
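
If you want to run the whole routine in one go, it can be collected in a few lines of shell. This is only a sketch; it assumes MagicMirror lives in ~/MagicMirror and that every module directory is a git clone (some modules also need an npm install after pulling):

# Update MagicMirror2 itself
cd ~/MagicMirror && git pull && npm install

# Update every third party module (add npm install where a module needs it)
for d in ~/MagicMirror/modules/*/ ; do
  (cd "$d" && git pull)
done

# Update Raspbian
sudo apt update && sudo apt upgrade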

Placing the Mirror

The main consideration regarding placement compared to a normal mirror, is that you need somewhere to connect your power cords, so that you don't stumble on them.

Otherwise, besides the fact that you want a mirror placed so that you can look straight into it, it should of course be located in a place where it makes sense to see the information you radiate. For instance, weather information might be convenient where you decide whether to wear boots or loafers...

Future Work

There are some things left to do...

Dimming

Assume that you have settings so that you see yourself nicely in the mirror in daylight, and you can see the text on the screen. If the settings are fixed, things will look very different at night. Without daylight, with much less light in the room, the thing that used to be a mirror is now a black screen with very bright text that's hurting your eyes (well, almost).

There are two parts in this problem:
  1. How do we determine how bright the screen should be?
  2. How do we change the brightness of the screen once we know what we want?
Regarding part 1, I assume there are two fundamental approaches: A) a time based solution, and B) a light based solution. Since I don't have a camera connected to the Pi, and the VGA666 uses up the entire GPIO bus where I could possibly have attached a light meter, I think I'll go for the time based solution. It's also a much simpler solution, so it's a good first attempt anyway.

I guess there are basically two fundamental approaches for part 2 too. We either change system brightness settings in the drivers or in X, or we change something in the magicmirror2 application (which covers the entire screen anyway).

There are a number of ways you could adjust brightness for the HDMI output on a Raspberry Pi, e.g. xbacklight or xcontrast, but they don't seem to work with the VGA666 driver. A VGA connection has no notion of an LED backlight, even though one exists in the actual device when it's an LED screen.

The best way forward might be to cover the entire window in magicmirror2 with a black top layer with varying transparency in CSS, or something similar...

Infrastructure As Code

As it is now, I've installed the application and done all the needed configuration by hand. If something happens, e.g. the SD card breaks (which sometimes happens with these devices after power losses), I'll have to do it all over again.

It would be nice to e.g. have a git repo with an Ansible playbook containing everything needed to get us from a fresh Raspbian install to a working system.

Only One Power Cord

It would be nice to only have one power cord from the power outlet to the mirror. That assumes that we can get 1.5 A stabilized 5V from some place in the monitor without breaking it, or that we can fit an extension cord providing two power sockets behind the display in the mirror.

Interactivity?

As far as I understand, you can e.g. attach a web camera, and install additional software to get features similar to Siri or Alexa. I have no such plans...

2017-10-07

Using Xming for GUI apps in Ubuntu on Windows 10 Home Edition

To get Ubuntu GUI applications to work under Windows Subsystem for Linux (WSL) I installed the X server Xming in Windows 10. Assuming that I've actually installed some GUI apps, such as xterm or chromium-browser, I can then start them from the Ubuntu bash prompt.

First of all, Xming needs to be started in Windows, and you need to set the DISPLAY environment variable: export DISPLAY=:0
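
If you don't want to set the variable in every new shell, you can append it to your bash startup file. A tiny sketch, assuming bash and that Xming serves display :0:

# Set the X display automatically in new bash sessions
echo 'export DISPLAY=:0' >> ~/.bashrc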

You can then run xterm & to start a separate terminal window. I got a warning about Xming not finding the font it looks for, but I can live with that. The more serious problem is that I don't get my Swedish keyboard settings. To fix that, I adjusted the Xming shortcut in Windows. To find the shortcut file, I went to XMing in Windows 10 start menu, right-clicked, selected "More" and "Open path" as shown below:



I then right-clicked on the Xming shortcut in the Explorer window that appeared, and changed the path to the following:
"C:\Program Files (x86)\Xming\Xming.exe" :0 -clipboard -multiwindow -xkbmodel pc105 -xkblayout fi -xkboptions grp:ctrl_shift_toggle
I actually used Finnish keyboard settings, since I read that some Xming versions have problems with the (identical) Swedish keyboard settings. If you already had Xming running, you probably have to exit it before you restart it:


Having restarted Xming, and started an xterm again, the keyboard had all the keys in the right places! :-)


2017-09-17

Running docker in Ubuntu on Windows 10 Home Edition

With Ubuntu running in WSL on Windows 10, I want it to be as close to a "real" Ubuntu installation as possible. One of the shortcomings with WSL is that it doesn't allow you to run docker. apt install docker works as expected, and running docker will give you the familiar menu, but docker ps is sufficient to break the spell.

$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

It turns out that you can run the docker client commands in WSL ubuntu, but you can't run the docker service through WSL. The trick is to run docker in Windows, and get your client in WSL Ubuntu to talk to that.

These days, there is Docker for Windows which uses Microsoft Hyper-V in Windows 10, but since my laptop only runs Windows 10 Home Edition, I installed the older Docker Toolbox, which is based on VirtualBox. At first, I installed a brand new VirtualBox, and then installed Docker Toolbox (which also includes VirtualBox). This caused a networking problem, but after I deinstalled both Docker and VirtualBox, and ran the Docker Toolbox installer again so that it could install its preferred version of VirtualBox, it worked fine. I didn't include Kitematic, marked alpha, in the install.

The trick now, is to get the WSL Ubuntu docker client to communicate with the server in the VirtualBox. To do this, you need to define three environment variables:

export DOCKER_HOST=tcp://XXX:2376  
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/mnt/c/Users/XXX/.docker/machine/certs/

Of course, you should replace the XXX parts in the host and cert variables. DOCKER_HOST should be the same as you have in Windows. I assume you know your Windows home directory...
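
If you don't remember what the Windows side uses, Docker Toolbox ships with docker-machine, which can print the values. A sketch, assuming the machine has the default name "default" and that you run this in the Docker Quickstart Terminal on the Windows side:

# The IP address of the VirtualBox VM running the docker daemon (goes into DOCKER_HOST)
docker-machine ip default

# Prints DOCKER_HOST, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH for that machine
docker-machine env default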

Having done this much enables me to do:

$ docker run -it alpine sh
#
The obvious remaining problem is mounting volumes. Even if I run this in WSL Ubuntu, the docker service runs in a different environment, with a different view of the file system. I assume that provided directories must be available from the Windows system, and presented with paths that make sense in Windows...
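
As a hedged sketch of what that could look like: the VirtualBox VM that Docker Toolbox creates shares C:\Users with the daemon as /c/Users by default, so a directory under your Windows home directory can probably be mounted with a path in that form (XXX and project are placeholders):

# Mount a directory under C:\Users (visible to the VM as /c/Users/...) into the container
docker run -it -v /c/Users/XXX/project:/data alpine sh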

I suspect there will be more complications, but basic operation is there at least. :-)




Running Linux directly in Windows 10

Tools like Cygwin and Babun have made Windows somewhat bearable for people used to the power of Unix, and VirtualBox has enabled us to run a full Linux system in a virtual machine if we want that. Despite all this, I still have a dual boot laptop with Windows 10 and Ubuntu on separate partitions. A shell and ported applications are too weak. A virtual machine is too slow and cumbersome.

A while ago, some surprising new "apps" appeared in the Windows App Store: openSUSE Leap 42, SLES 12 and Ubuntu 16.04 LTS! They all use the Windows Subsystem for Linux (WSL), which allows you to run native ELF binaries on a Windows 10 kernel. So, it's neither ported apps like Cygwin, nor virtualization. It's more like Wine, but in the other direction, and developed by people with full access to the relevant source code.

To install WSL, you need build 16215 or later of Windows 10 on x64, which currently (September 2017) means that you need to join Windows Insider to get a pre-release of Windows 10. No big deal! I went along and installed Ubuntu.

WSL is mainly intended as a developer tool. The idea is not to run Linux production servers on a Windows 10 kernel. I hope it's good enough for me to stop dual booting my laptop, and give the whole disk to Windows.

I now have an "Ubuntu" app in the Windows 10 task bar, which provides me with a familiar bash shell, just like Cygwin and Babun do, but here I can run things like apt install docker and a lot of things I could never do in Cygwin or Babun, without the sluggishness and complications of virtualization.

To run Linux GUI programs, the simplest approach is probably to install Xming in Windows, and export DISPLAY=:0 in Ubuntu. By default that gives you a US keyboard in X, but I wrote a blog post about fixing that.

WSL does not support the docker server, but with a docker server running in Windows, you can use the docker command to run client tasks in Ubuntu. Something like docker run -it alpine sh works as intended. I wrote a separate blog entry about getting docker working.

Will I drop the Linux partition on my laptop SSD and be happy with only Windows 10 on my laptop? I don't know yet, but I'm optimistic.

2016-03-04

A Use Case for Continuous Deployment

There are a number of good reasons for using Continuous Delivery and Continuous Deployment. Yesterday, Jeff Campbell talked about "Making the Case for Continuous Delivery" at the Göteborg Continuous Delivery Meetup. His talk is online on YouTube.

He had many good points, but I thought that I'd like to complement that with a more concrete example of how life can get much simpler if we work this way. There are two fundamental ideas here:
  1. Complex problems usually get much simpler if we split them into several smaller problems, that we solve one after another.
  2. A Continuous Deployment ability enables us to rapidly get small changes into production, so that we can solve problems in several small steps, with real-world feedback between each step, in a single working day (instead of several months).
Somewhat simplified, Continuous Deployment means that as soon as developers change your software, your system is built, tested and put in production. Continuous Delivery means that as soon as developers change your software, your system is built, tested and could be put in production if that's what you want. The benefit from the case below comes from actually putting changes into production in several small steps, but it does not depend on a practice of always doing Continuous Deployment.

Our Use Case

Imagine that we have  a production system where we store information about customers. In the customer table, we've stored address information, but due to changes in business requirements, we'll soon need to keep track of both current and previous addresses. The old address fields won't suffice. We'll need a customer_address table where we can store several addresses for different date ranges for each customer.

The Traditional Solution

In traditional development, with releases maybe every month or less often, we'd develop a new version of the software which contained the new support for several addresses, as well as all other features expected for this release. The new software would not expect any address information in the customer table. The replacement for current features would simply display the address from the new table where end_date was empty, and new features showing historical addresses would use start_date and end_date to decide which address to show.

The programmer would probably not think so much about the migration of data when he changed the programs. His focus would be on the new business requirements, the future behaviour. Having implemented that, he (or a DBA) would look at data migration. The migration would go in four steps:
  1. Perform a backup of the database.
  2. Add the new table to the database schema.
  3. Run a script to copy the address for each customer to the new table, with some (more or less correct) start date set. 
  4. Drop the old address columns in the customer table.
This data migration would need to happen in some "service window", when the system is not running. Downtime would be difficult to avoid. It's quite possible that other system changes in our monthly release would cause additional database migrations.

When the service window is over, we resume operation. If all is well, we only had a few hours of downtime. Ok? YMMV. We're not home yet though... What if some other change in the system caused a blocker bug, and we need to revert to the old version of the system? We no longer have a database that supports that. In the best case, the time needed to revert to the old software will be as long as the upgrade took (reverse data migration), unless we're willing to lose the data changes done after the upgrade (restore backup).

Even if we are careful and quality-minded, a traditional approach with infrequent releases will always mean that the risk for disruptions is much bigger than with a Continuous Deployment approach, since there are many changes, and each change is a risk. The consequence of each issue that occurs is also bigger, since there are more migrations to revert etc.

The Agile Solution

With Continuous Deployment, we can afford to be much more concerned with the transition of our production environment from its current state to its future state. A small, low risk change to a production environment should be a simple thing in a Continuous Delivery environment, so we can do that often without worries.

Our first step would simply be to create the new table. That is a tiny change which has minimal impact on the running system.
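
To make the step concrete, here is a hypothetical sketch of what the new table could look like. PostgreSQL, the database and column names, and the existence of a customer (id) key are illustrative assumptions on my part, not something the approach prescribes:

# Hypothetical schema sketch; run against the production database
psql customersdb -c "
CREATE TABLE customer_address (
    customer_id integer NOT NULL REFERENCES customer (id),
    street      text,
    city        text,
    start_date  date NOT NULL,
    end_date    date,  -- NULL marks the current address
    PRIMARY KEY (customer_id, start_date)
);"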

Our second step is to change our software, so that the code which adds, updates or removes address information deals with the new table in addition to the old table. Reads still come only from the old table. This means that we duplicate data for a while. In the long run we don't want that, but in the short run, we put this in production and verify that we get exactly the content we expect in the new table. Nothing relies on the new data yet, and we haven't made any backwards incompatible changes. We can easily run the new code in parallel with the old code in the production environment, and reverting to the previous software version in case of trouble can be done with minimal impact on production.

Once we're convinced that creating and updating addresses in the new table works as expected, we launch a script which goes through all customer records and adds the old addresses to the new table. Yet another low impact, low risk operation.
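
The backfill could be as small as the following sketch (same hypothetical schema and PostgreSQL assumption as above; the fixed start date stands in for whatever "more or less correct" date the business accepts):

# Hypothetical backfill sketch: copy the current address for every customer
# that the new dual-writing code hasn't already written to customer_address
psql customersdb -c "
INSERT INTO customer_address (customer_id, street, city, start_date, end_date)
SELECT id, street, city, DATE '2001-01-01', NULL
FROM customer
WHERE id NOT IN (SELECT customer_id FROM customer_address);"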

Once we have complete data in the new address table, we deploy a version of the software which reads address data from the new table. We can verify that everything works just as expected, and if there are problems, we can always revert to the previous software version. The address information in the old table is still there. We can also monitor that no code is reading address information from the old location any longer.

Now it's time to remove the functionality from the software which updates the address fields in the old customer table. We put this in production and make sure that it works as expected.

Finally, when we're convinced that everything works as expected, we drop the address columns from the customer table. By now, we're convinced that they are no longer used by any software. If there happens to be some legacy software we can't change which expects to find the address in the customer table, we can provide that with a database view.

Conclusions

Instead of one software upgrade with a significant risk that something would go wrong, and a possibly severe impact if something went wrong, we've performed six quick, simple, very safe, low impact changes to the system. We've been able to observe how each change worked, and we've had the ability to adapt to any discoveries we made at any step in the process.

In the nominal case, we have a service window of a few hours in the traditional approach, and no impact on production at all in the agile approach.

If we only want to perform a production release on a monthly basis, but want similar confidence and low impact as the agile approach, we'd have to release a preparatory version now (with other, non-related features and/or bug-fixes in it), get the new behaviour into production in the next release a month from now (if all went as expected with the preparatory release), and get rid of the data redundancy a little more than two months from now. Isn't it better to be finished today?