From pipe dream to a small company

Or: How my wife created The Small Batch Project, a company that’s importing award-winning chocolate into Switzerland.

Let’s start with some context. Seth Godin lays out a concept to revolutionize school and education. If you care about the next generation, then Stop Stealing Dreams is a must-read (or at least watch the 20-minute TED talk).

This idea, that everyone can excel and that the best education can happen online, anytime and independent of location, sparked many discussions between my wife and me. It was a time when she was looking to quit her day job and start an adventure of her own. And then we found the altMBA, an investment of roughly 4’000 USD and one month of intense workshops. We signed up. And it all happened in August 2018.

So, did it help? My wife started the year 2018 with a pipe dream of building a company that would probably have cost more than a million to start, and likely years to execute. We all have ideas like that in our heads but never get to execute them; they are simply too big to even start. The coaching, the constant feedback from peers and the intense reading mainly simplified “mission impossible”. There are no guidelines, no instructions. Just feedback from peers and constant coaching, pushing you to make a leap and convert your dream into an executable project. All of a sudden, ideas like “start writing a blog” or “do workshops” were discussed. Zero or close to zero capital investment, easy to test, easy to pivot, and not a total loss in case of failure.

With that, she started executing her idea to import award-winning bean-to-bar chocolate from around the world to Switzerland, available online at The Small Batch Project. She visited a chocolate fair in September 2018, got a company incorporated and a Shopify-based website up and running in October 2018, ran a market stand in November 2018 and another one in December 2018, and held her first chocolate tasting event in January 2019. A few months into the adventure, she had learned a lot, made interesting connections, received broad support from her friends and got positive, reinforcing feedback from all corners, including an alumnus of the altMBA contacting her with suggestions to improve her marketing strategy.

So all the altMBA did was simplify a huge business idea into something actionable? Dare to jump? Dare to take action? Kind of. At least it did the scariest part. There was no point anymore in saying “that’s impossible”. Almost no way of going back. And most of that by following the philosophy of doing something good in this world and making it happen.

With all that, my new job title is “professional chocolate taster”.

Push Git Tag from GitLab Runner

The GitLab Runner defaults to having read-only access to the current repository, so git pushes, such as tagging a commit, won’t work. There are some suggestions out there, such as issue #23894, but I didn’t find anything more straightforward than what I’m writing here.

GitLab Personal Access Tokens are one way to enable git pushes. However, they are tied to a user and need to be changed when that user leaves the company or simply changes teams. And if you use a shared “service” user, it’ll consume a license in the Enterprise licensing model.

GitLab deploy keys are an alternative. This article shows how to use the GitLab deploy keys to push tags to a git repository.

Create and configure deploy keys

The long answer can be found in the GitLab and SSH keys documentation. The short answer is:
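For example, a dedicated key pair can be generated like this (the key comment and file name are just placeholders):

```shell
# Generate a dedicated RSA key pair for the deploy key. No passphrase here,
# because the private key gets encrypted before it is committed (see below).
ssh-keygen -t rsa -b 4096 -N '' -C 'gitlab-deploy-key' -f id_rsa -q
```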

Copy the content of the public key (by default named id_rsa.pub) to your project. It’s located at GitLab Project -> Settings -> Repository -> Deploy Keys. Once added, the SSH key’s fingerprint is displayed in the user interface. You can double-check for a match by extracting the fingerprint from your key locally via:
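A sketch, assuming the id_rsa key pair generated earlier (newer GitLab versions display SHA256 instead of MD5 fingerprints):

```shell
# The key pair from the previous step (id_rsa / id_rsa.pub by default);
# the guard re-creates it if it is missing.
[ -f id_rsa.pub ] || ssh-keygen -t rsa -b 4096 -N '' -f id_rsa -q

# Print the MD5 fingerprint; it should match what GitLab displays.
# (For newer GitLab versions showing SHA256, drop "-E md5".)
ssh-keygen -l -E md5 -f id_rsa.pub
```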

Encrypt deploy keys in your repository

I recommend encrypting the private key in your repository and decrypting it at runtime. I’m using AWS Key Management Service (KMS), but there are many alternatives available, including corresponding services at other cloud providers. Anyway, here’s how to get the private key encrypted:
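A sketch with the AWS CLI, assuming a KMS key with the alias alias/gitlab-deploy already exists (the alias and file names are placeholders):

```shell
# Encrypt the private key with KMS; the base64-encoded ciphertext is safe to
# commit to the repository.
aws kms encrypt \
  --key-id alias/gitlab-deploy \
  --plaintext fileb://id_rsa \
  --query CiphertextBlob \
  --output text > id_rsa.enc.b64

# The plaintext private key must not end up in the repository.
rm id_rsa
```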

Build file

The tricky part of the build file, besides decrypting the private key, is configuring the git push URL and the tag comment correctly. See the inline comments of the following file:
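A hedged sketch of what such a build script can look like. It assumes the KMS-encrypted key from the previous section is committed as id_rsa.enc.b64, and uses GitLab’s predefined CI variables (CI_SERVER_HOST, CI_PROJECT_PATH, CI_PIPELINE_ID); the tag naming is arbitrary:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Decrypt the deploy key at runtime. KMS infers the master key from the
# ciphertext, so no key id is needed here.
base64 -d id_rsa.enc.b64 > /tmp/id_rsa.enc
aws kms decrypt \
  --ciphertext-blob fileb:///tmp/id_rsa.enc \
  --query Plaintext \
  --output text | base64 -d > /tmp/deploy_key
chmod 600 /tmp/deploy_key

# Deploy keys authenticate over SSH, but the runner clones over HTTPS.
# Point the push URL at the SSH remote and use the decrypted key.
export GIT_SSH_COMMAND='ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no'
git remote set-url --push origin "git@${CI_SERVER_HOST}:${CI_PROJECT_PATH}.git"

# git requires an identity for annotated tags.
git config user.name "GitLab CI"
git config user.email "ci@example.com"

# Tag the current commit with a comment and push only that tag.
git tag -a "build-${CI_PIPELINE_ID}" -m "CI build ${CI_PIPELINE_ID}"
git push origin "build-${CI_PIPELINE_ID}"
```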

.gitlab-ci.yml

Eventually, the script needs to be invoked in the .gitlab-ci.yml file. Additionally, all other stages need to be excluded when tags are pushed, to avoid an infinite loop of builds where each build pushes a tag and triggers another build (yes, I did that).
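A hedged sketch of the wiring (stage and script names are hypothetical):

```yaml
stages:
  - build
  - release

build:
  stage: build
  script:
    - ./build.sh
  except:
    - tags        # skip regular stages in tag pipelines

release:
  stage: release
  script:
    - ./push-tag.sh   # the script that decrypts the key and pushes the tag
  only:
    - master
  except:
    - tags        # never tag a tag pipeline: avoids the infinite build loop
```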

Once figured out, it’s straightforward to create an SSH key, encrypt it, store it in source control, update the git remote push URL and put those components together.

Manage ACM certificates through AWS CloudFormation

Certificate management has historically been fairly manual, costly and often a matter of trial and error (or long documentation). AWS ACM-based certificates remove most of that pain.

ACM offers email- and DNS-based validation. Email adds overhead in two ways. First, you need an email address for the host being validated (and you might not have an app.your-company.com email address, forcing you to set up AWS SES). Second, you need to regularly re-validate the certificates. DNS-based validation removes those hassles and is the recommended way.

What remains is the setup. AWS CloudFormation (CF) offers creating a Certificate resource. Attaching DNS validation, however, isn’t straightforward, and the best way I could find so far was leveraging a Lambda function, which can be inlined in the CF template.

In short, the template creates the following resources:

  1. An IAM role to execute the AWS Lambda function.
  2. An AWS Lambda function that creates and deletes ACM certificates, and returns the created AWS Route53 RecordSet values that must be used for DNS validation.
  3. An AWS Route53 RecordSet matching the ACM certificate’s DNS validation settings.
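The three resources above can hang together roughly as follows. This is a hedged, abbreviated sketch: resource names, the domain, and the ValidationRecordName/ValidationRecordValue attributes are placeholders that the inlined Lambda would have to define in its custom resource response; the Lambda body itself is omitted.

```yaml
Resources:
  # (1) IAM role for the Lambda function.
  CertificateLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: manage-certificates
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - acm:RequestCertificate
                  - acm:DescribeCertificate
                  - acm:DeleteCertificate
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: "*"

  # (2) Inlined Lambda that requests/deletes the certificate and returns the
  # DNS validation record name and value in its custom resource response.
  CertificateFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Timeout: 300
      Role: !GetAtt CertificateLambdaRole.Arn
      Code:
        ZipFile: "# certificate-handling code, omitted in this sketch"

  # Custom resource backed by the Lambda above.
  Certificate:
    Type: Custom::Certificate
    Properties:
      ServiceToken: !GetAtt CertificateFunction.Arn
      DomainName: app.example.com

  # (3) Route53 record matching the certificate's DNS validation settings.
  CertificateValidationRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: !GetAtt Certificate.ValidationRecordName
      Type: CNAME
      TTL: "300"
      ResourceRecords:
        - !GetAtt Certificate.ValidationRecordValue
```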

Reading list 2018

After a slow start in 2017, I got to a few more books in 2018. I’m highly satisfied with the outcome: what I learned, the inspiration I acquired, and generally the selection I made for my limited reading time.

I started with A Second Chance: For You, For Me, And For The Rest Of Us by Catherine Hoke. It’s a fascinating story of Catherine believing in people at the bottom of their lives, often after 20 years or more in a high-security prison. She brings them back into society, not only safely, but by turning them into successful entrepreneurs of small businesses.

A Beautiful Constraint: How To Transform Your Limitations Into Advantages, and Why It’s Everyone’s Business by Adam Morgan and Mark Barden was an inspiring read on how to frame challenges differently. It taught me to avoid seeing constraints as excuses not to pursue the next adventure, and instead to view them from a different angle and leverage them to my advantage.

Start With Why by Simon Sinek is a classic based on his famous TED talk. As expected, the book doesn’t reveal anything new. That said, I found it time well spent to inhale more of this simple yet compelling idea by reading through a long list of good and bad examples.

Talking about “why” – I then moved on to understanding why the young generation needs to find purpose in everything they do. Drive: The Surprising Truth About What Motivates Us by Daniel H. Pink puts it in a usable framework. Valid for every generation. But especially good for dealing with the younger one.

The hardest read from a pure “understanding English” perspective (English is my second language) was Finite and Infinite Games by James Carse. It took me a while to digest his ideas. But ever since, I’ve been defining my infinite games and have actually started to pursue some of them.

Then Essentialism: The Disciplined Pursuit of Less by Greg McKeown was recommended to me, and I’d say it was the most influential book of 2018 for me personally. It’s largely about saying “no” to clutter and fully committing to what’s essential in your life.

Another big one was Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker. Bill Gates calls it his new favorite book of all time. It adjusted my world view towards being more optimistic about where the world is heading: fewer children dying after birth, less illiteracy worldwide, better medication for the poor and many more people getting out of poverty.

Back to reality: Plain Talk: Lessons from a Business Maverick by Ken Iverson is a convincing story of why working smarter over the course of decades outperforms chasing short-term profit and squeezing every penny out of employees. It’s about believing in people and leveraging their will and motivation.

I hesitated for a while, but then still jumped into It Doesn’t Have to Be Crazy at Work by Jason Fried and David Heinemeier Hansson. I had followed Jason Fried for a while already, and I work in an environment that isn’t crazy by many of those measures. Still, it contained a lot of useful hints on how to do better and, again, on believing in the individual.

Ok, too much philosophy, let’s do something for real. Measure What Matters: OKRs: The Simple Idea that Drives 10x Growth by John Doerr presents a 25-year-old concept that John Doerr brought to Google and many other companies. The foreword by Larry Page, as well as a recommendation by Bill Gates, gives this concept and book additional weight. While the concept is an old hat, it revamps goal setting into an easy-to-understand, easy-to-execute framework.

The year couldn’t have ended with more insight into the meaning of life than by reading Man’s Search For Meaning: The classic tribute to hope from the Holocaust by Viktor E. Frankl. If you’re searching for purpose, or simply want to be reminded of the darker times almost a century ago, Frankl’s stories from his years of imprisonment in concentration camps put everything you do into a different perspective.

Podcasts

During my commute, podcasts work better than reading. I started listening to Akimbo by Seth Godin, which complements Seth’s daily inspiration with a weekly 30-minute talk. Some of the talks from The Knowledge Project by Farnam Street really twisted my perspective on our world. And Adam Grant interviewed a set of interesting people in WorkLife.

So what’s coming in 2019?

At the time of this writing, I already completed All Marketers are Liars by Seth Godin (no, I’m not switching jobs). To move on, I’m thinking of 21 Lessons for the 21st Century (Yuval Noah Harari), Principles: Life and Work (Ray Dalio), Mandela’s Way: Lessons on Life, Love, and Courage (Richard Stengel), The Infinite Game (Simon Sinek) and many more. What are your recommendations for me? Contact me, or tweet a reply.

Avoiding AWS Access Keys

The AWS Well-Architected framework is a recommendation by AWS, summarized in an 80-page PDF document. After focusing on cost optimization in my first article, this article looks at one specific aspect of the security pillar.

Passwords are bad

Yes, passwords are bad. I don’t need to repeat that, right? Anyway, a few words on it: managing passwords is a challenge, especially when you have to manage and hand them out as a central team. First, you end up spending a lot of time resetting passwords, and potentially even managing the secrets in some “secure” store. Second, you carry a security risk by keeping passwords active after employees have left the company, or simply the headache of protecting a central credentials store.

Instead of using AWS IAM users, use AWS IAM roles. Roles are a central piece of the AWS infrastructure, and every AWS service supports them. Notably, an EC2 instance can have an attached IAM instance profile. Once you attach one, all calls to AWS services from that machine are invoked with the specified IAM role.

Custom applications

I often experience teams discussing how to securely store AWS secret keys in their development environment or in the tools they configure. The discussions are usually about how to pass the keys along to the build server and the production server. The answer is almost always: you don’t. Just ensure the EC2 machine uses an IAM instance profile (limited to the required permissions).

But wait, what about local development? You can’t assign an IAM instance profile to your own machine. Again, don’t do anything in code. Instead, rely on the well-documented credential configuration outside of your application (see, for example, the documentation for Node.js). The short version: simply configure your user’s AWS credentials (~/.aws/credentials) and auto-rotate them on a schedule (mirri.js is a good tool for that).
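For reference, the credentials file is a plain INI file. The values below are AWS’s documentation example keys, not real ones:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```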

If you use federated logins to your AWS account, an alternative is to leverage AWS STS and automatically generate a temporary key every time you need one. This eliminates key rotation completely.

External services

There is also the case where you need to grant access to external services: an external build server like Travis CI, a log collector like SumoLogic, etc. Some might offer configuring an IAM role with an enterprise subscription, but often the only way is to actually use access keys. So you’re left with rotating them regularly, and the key is to automate that rotation. Felix is a tool that supports some external services and definitely provides a baseline for how such automation can be written.

References

Two weeks after I wrote this blog post, the AWS Security team came up with a great summary of a related topic. See Guidelines for protecting your AWS account while using programmatic access.

The show man

Or why the worst managers succeed.

There’s this kind of team in every company, one that everyone knows. Not because it’s a successful team, but because it’s famous for big escalations, production problems and an architecture that evolved badly, with no way out of the mess.

And then there’s the manager of this team. Highly successful.

Why? Simple. He is the one who gets recognized by the customers.

He’s constantly visiting. Fighting fires. His weeks are turbulent, full of de-escalations, workarounds and meetings, resulting in an action plan and a promise to do better. He leaves for the weekend with a big thank-you from the customer. In the end, he was the only one visible to the customer that week. And he’s the only one who gets mentioned in the customer’s reports seen by the leadership team.

AWS Well-Architected Framework applied – Cost Optimization

The AWS Well-Architected framework is a recommendation by AWS, summarized in an 80-page PDF document. Booooring. True. So I’m taking a different approach: a hands-on, developer-focused way of thinking about it.

Context

The 80-pager talks about 5 pillars: (1) Operational Excellence, (2) Security, (3) Reliability, (4) Performance and (5) Cost Optimization. Those are very high-level aspects, but taking those guidelines and following them results in a cost-efficient, scalable and reliable cloud environment.

I just joined a new team, one that wasn’t using any deployment automation and manages many systems built before the cloud became a real thing. So there’s clearly some trickiness to managing this infrastructure, but the initial impact of cleaning up the current state is high.

In my first two weeks I’ve been focusing on the pillars cost optimization (5th pillar) and security (2nd pillar). Today, I’m only going to talk about cost optimization.

(5) Cost Optimization

Let’s start with “why?”. Why should a “normal software engineer” bother? In the end, you might just be an employee of a multi-billion-dollar company. It’s simple: it helps increase the core financial metrics of the company (be it UFCF, profitability or margins), and therefore your bonus (which likely depends on one of those metrics). The less you and your team spend to deliver the same value, the higher the contribution. While it might be a minor contribution to the overall pot of a multi-billion-dollar company, those targets often get pushed down. In my specific case, they were pushed down to our team: our AWS spending was around 60K USD / month, over 700’000 USD per year. Think of it this way: how many employees could we add to our team instead of spending that much? (Or, for a more fun comparison, how many first-class flights around the world could you take?)

In short, I believe it’s every well-paid employee’s responsibility to be cost conscious.

As specific actions, I’ve started to automate parts of the AWS infrastructure by managing IAM roles, IAM users and a handful of other resources through AWS CloudFormation templates. At the same time, I also started to auto-curate some parts around cost and security. Specifically, there are now daily scripts that terminate EC2 machines 30 days after they have been stopped and delete detached volumes. That 1-2 day activity resulted in a cost avoidance of 3’000 USD / month (5% of the spend), with measures in place to avoid these kinds of costs forever.
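The volume cleanup, for instance, can be sketched like this (a sketch, assuming AWS CLI credentials with the matching EC2 permissions; review the output of the describe call before enabling the delete):

```shell
# Find EBS volumes that are not attached to any instance ("available" state)
# and delete them one by one.
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[].VolumeId' \
  --output text |
tr '\t' '\n' |
while read -r volume_id; do
  [ -n "$volume_id" ] || continue
  echo "Deleting detached volume ${volume_id}"
  aws ec2 delete-volume --volume-id "$volume_id"
done
```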

So, now, what can everyone do, and how can everyone contribute? Let’s start with specifics that help change your mindset, along with an easy introduction to the available tools:

  • Have a look at the AWS Trusted Advisor’s Cost Optimization section. It’s fairly basic, but as a specific example, it showed me that our team had 10 TB of detached EBS volumes.
  • And simply use the AWS Cost Explorer from the AWS Billing Dashboard.
  • Way better is Cloudability, a 3rd-party tool that analyzes AWS costs and offers optimizations. The easiest start is probably to set up a weekly report from Cloudability. This helps to raise awareness. Nothing else. Just slowly helps you become cost-conscious. Then there are the simple and advanced reports, insights and optimizations, which are well documented on the Cloudability pages.
  • One of my next targets will likely be rightsizing. For example, our team still has 25’000 IOPS (guaranteed I/O operations on disk) provisioned in a test environment, resulting in 1’750 USD / month (in addition to disk space). This might be the right choice for the testing needs, but if not, let’s simply not buy guaranteed I/O.
  • And then there are reserved instances. Cloudability offers very good insight and a risk assessment of which and how many instance hours a team should buy. Also, in larger companies, reserved instances can be bought on the global account, distributing the risk among all accounts in the organization.
  • Once you’ve done the basics, look at what’s available out there: from auto-scaling to auto-spotting to leveraging more AWS managed services.

I doubt you’ll end up flying first-class around the world. But at least you’ll avoid someone asking you to row across the ocean, or to pay back part of the unnecessary spend you caused.

Two kinds of batteries

There are two kinds of batteries. Those that come fully charged and can be used once. And those that come half-charged and are meant to be used many times.

I was surprised those very same approaches exist for trust.

My world view assumed that trust starts neutral (or half-charged). You do good things together, and trust builds up. And it goes down when things go bad. Once you have charged the trust battery close to 100%, projects succeed, no matter what. If it’s down to 30% or less, projects start to fail.

Recently I learned from a close co-worker that his world view assumes full trust at the start, that is, 100% charged. Every time the other person screws up, it goes down a bit. Never up.

It’s useful for me to understand that this other world view exists. And it’s useful for people holding it to understand that many others think differently.

Principles

I started to develop a principles-oriented way of working. It has paid off in my team, and I believe it is applicable more broadly. By a principles-oriented way I mean that the principles are the inner beliefs of the team, manifesting in behavior such as “we don’t debate these; this is the very nature of how we develop our software”.

Here are my current two principles. They are easy to understand and hard to get to, but then easy to maintain, and they provide a true accelerator to the team by reducing errors in production software:

  1. No errors in production. True, we’ll always have errors. But we do everything to detect them early, fix them before the customer notices, and iterate towards very stable software.
  2. Automate everything. Again, we won’t automate 100%. But my team’s cloud account has only 3 manually created things (and those are fully documented). A specific outcome: hitting the “merge button” on a pull request automatically deploys a service to production (while we drink a cup of coffee or, more likely, already work on the next task).