Push Git Tag from GitLab Runner

The GitLab Runner defaults to having read access only to the current repository, so git pushes such as tagging a commit won’t work. There are some suggestions out there, such as issue #23894, but I didn’t find anything more straightforward than what I’m describing here.

GitLab Personal Access Tokens are one way to enable git pushes. However, they are tied to a user and need to be changed when that user leaves the company or simply switches teams. And if you use a shared “service” user, it consumes a license under the Enterprise licensing model.

GitLab deploy keys are an alternative. This article shows how to use the GitLab deploy keys to push tags to a git repository.

Create and configure deploy keys

The long answer can be found in the GitLab and SSH keys documentation. The short answer is:
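
Create a dedicated SSH key pair for the runner, for example as follows (the key type, size and comment are my own choices; the file names match the defaults the rest of this article assumes):

    # Generate an RSA key pair without a passphrase; id_rsa is the private key,
    # id_rsa.pub the public key that becomes the deploy key.
    ssh-keygen -t rsa -b 4096 -N "" -C "gitlab-deploy-key" -f ./id_rsa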

Copy the content of the public key (by default named id_rsa.pub) to your project under GitLab Project -> Settings -> Repository -> Deploy Keys, and make sure to grant the key write access, since it will be used to push. Once added, the SSH key’s fingerprint is displayed in the user interface. You can double-check that it matches by extracting the fingerprint from your private key locally, for example via:
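
    # Print the key's fingerprint (add "-E md5" if GitLab displays MD5 fingerprints)
    ssh-keygen -l -f ./id_rsa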

Encrypt deploy keys in your repository

I recommend encrypting the private key before committing it to your repository, and decrypting it at runtime. I’m using AWS Key Management Service (KMS), but there are many alternatives, including corresponding services from other cloud providers. Anyway, here’s how to get the private key encrypted:
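
A minimal sketch using the AWS CLI (the KMS key alias and file names are placeholders; note that KMS can only encrypt up to 4 KB of plaintext directly, which is enough for a typical RSA private key):

    # Encrypt the private key with a KMS key; only the ciphertext goes into git
    aws kms encrypt \
      --key-id alias/gitlab-deploy-key \
      --plaintext fileb://id_rsa \
      --query CiphertextBlob --output text | base64 --decode > id_rsa.enc
    # Don't commit the plaintext key
    rm id_rsa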

Build file

The tricky part of the build file, besides decrypting the private key, is configuring the git push URL and the tag’s committer and comment correctly. See the inline comments in the following script:
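
A trimmed sketch of such a script (the GitLab host, project path, KMS usage and tag naming are placeholders to adapt):

    #!/usr/bin/env bash
    set -euo pipefail

    # 1. Decrypt the deploy key that is stored encrypted in the repository
    aws kms decrypt \
      --ciphertext-blob fileb://id_rsa.enc \
      --query Plaintext --output text | base64 --decode > /tmp/deploy_key
    chmod 600 /tmp/deploy_key

    # 2. Use the deploy key for all git SSH operations in this script
    export GIT_SSH_COMMAND="ssh -i /tmp/deploy_key -o StrictHostKeyChecking=no"

    # 3. The runner clones via HTTPS with a read-only token, so switch the push
    #    URL to SSH, where the deploy key grants write access
    git remote set-url --push origin git@gitlab.example.com:my-group/my-project.git

    # 4. Annotated tags need a committer identity and a comment (the -m message)
    git config user.name "GitLab Runner"
    git config user.email "runner@example.com"
    git tag -a "build-${CI_PIPELINE_ID}" -m "Tagged by pipeline ${CI_PIPELINE_ID}"
    git push origin "build-${CI_PIPELINE_ID}"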

.gitlab-ci.yml

Finally, the script needs to be invoked in the .gitlab-ci.yml file. Additionally, all other stages need to be excluded when tags are pushed, to avoid an infinite loop of builds in which each build pushes a tag and each tag triggers a build (yes, I did that).
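
A minimal sketch (stage, job and script names are placeholders; the important part is excluding jobs from tag pipelines so the pushed tag doesn’t retrigger the build):

    stages:
      - build
      - tag

    build:
      stage: build
      script:
        - ./build.sh
      except:
        - tags          # don't run again for the tag pushed below

    push_tag:
      stage: tag
      script:
        - ./tag.sh      # the script from the previous section
      only:
        - master
      except:
        - tags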

Once figured out, it’s straightforward to create an SSH key, encrypt it, store it in source control, update the git remote push URL and put those components together.

Manage ACM certificates through AWS CloudFormation

Certificate management has historically been fairly manual, costly and often a matter of trial and error (or long documentation). AWS ACM-based certificates remove most of that pain.

ACM offers email- and DNS-based validation. Email adds overhead in two ways. First, you need an email address on the domain being validated (and you might not have an app.your-company.com email address, forcing you to set up AWS SES). Second, you need to regularly re-validate the certificates. DNS-based validation removes those hassles and is the recommended approach.

What remains is the setup itself. AWS CloudFormation (CF) offers a Certificate resource. Attaching DNS validation, however, isn’t straightforward, and the best way I could find so far is leveraging a Lambda function, which can be inlined in the CF template (a trimmed sketch follows the resource list below).

In short, the template creates the following resources:

  1. An IAM role to execute the AWS Lambda function.
  2. An AWS Lambda function that creates and deletes ACM certificates, and returns the created AWS Route53 RecordSet values that must be used for DNS validation.
  3. An AWS Route53 RecordSet matching the ACM certificate’s DNS validation settings.
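
A trimmed sketch of the relevant template parts (the custom resource type name and the attributes it returns, such as ValidationRecordName and ValidationRecordValue, depend entirely on how you write the Lambda function; its code and IAM role are omitted here for brevity):

    Parameters:
      DomainName:
        Type: String
      HostedZoneId:
        Type: AWS::Route53::HostedZone::Id
    Resources:
      # CertificateFunction (inline Lambda) and its IAM role omitted for brevity
      Certificate:
        Type: Custom::Certificate
        Properties:
          ServiceToken: !GetAtt CertificateFunction.Arn
          DomainName: !Ref DomainName
      CertificateValidationRecord:
        Type: AWS::Route53::RecordSet
        Properties:
          HostedZoneId: !Ref HostedZoneId
          Name: !GetAtt Certificate.ValidationRecordName
          Type: CNAME
          TTL: '300'
          ResourceRecords:
            - !GetAtt Certificate.ValidationRecordValue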

Avoiding AWS Access Keys

The AWS Well-Architected framework is a recommendation by AWS, summarized in an 80-page PDF document. After focusing on cost optimization in my first article, this article looks at one specific aspect of the security pillar.

Passwords are bad

Yes, passwords are bad. I don’t need to repeat that, right? Anyway, a few words on it: managing passwords is a challenge, especially when you have to manage and hand them out as a central team. First, you end up spending a lot of time resetting passwords, and potentially even managing the secrets in some “secure” store. Second, you carry a security risk by keeping passwords active after employees have left the company, or simply by having the headache of protecting a central credentials store.

Instead of using AWS IAM users, use AWS IAM roles. Roles are a central piece of the AWS infrastructure, and every AWS service supports them. Notably, EC2 instances can have an attached IAM instance profile. Once you attach an instance profile, all calls to AWS services from that EC2 machine are made with the specified IAM role.
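
As a sketch with the AWS CLI (role, profile and instance names are made up; ec2-trust-policy.json would contain a standard trust policy allowing ec2.amazonaws.com to assume the role):

    # Create a role EC2 is allowed to assume and give it only the needed permissions
    aws iam create-role --role-name my-app-role \
      --assume-role-policy-document file://ec2-trust-policy.json
    aws iam attach-role-policy --role-name my-app-role \
      --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

    # Wrap the role in an instance profile and attach it to the EC2 instance
    aws iam create-instance-profile --instance-profile-name my-app-profile
    aws iam add-role-to-instance-profile --instance-profile-name my-app-profile \
      --role-name my-app-role
    aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
      --iam-instance-profile Name=my-app-profile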

Custom applications

I often see teams discussing how to securely store AWS secret keys in their development environment or whatever tool they configure. The discussions usually revolve around how to pass them along to the build server and the production server. The answer is almost always: you don’t. Just ensure the EC2 machine uses an IAM Instance Profile (limited to the required permissions).

But wait, what about local development? I can’t assign an IAM Instance Profile to my machine. Again, don’t do anything in code. Instead, rely on the well-documented credential configuration outside of your application (see the example documentation for Node.js). The short version is to simply configure your user’s AWS credentials (~/.aws/credentials) and auto-rotate them on a schedule (mirri.js is a good tool for that).
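
For reference, that file is just a small INI-style file (the values below are the well-known example keys from the AWS documentation, not real credentials):

    [default]
    aws_access_key_id = AKIAIOSFODNN7EXAMPLE
    aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY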

If you use federated logins to your AWS account, an alternative is to leverage AWS STS and automatically generate a temporary key every time you need one. This eliminates key rotation completely.
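
For example, with a role you are allowed to assume (the account ID and role name are made up):

    # Request temporary credentials, valid for one hour
    aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/developer \
      --role-session-name local-dev \
      --duration-seconds 3600
    # The response contains a temporary AccessKeyId, SecretAccessKey and
    # SessionToken, which can be exported as AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.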

External services

There is also the case where you need to grant access to external services, for example an external build server like Travis CI, a log collector like SumoLogic, etc. Some might offer an option to configure an IAM role with an enterprise subscription, but often the only way is to actually use access keys. So you’re stuck with simply rotating them regularly. The key is to automate that rotation. Felix is a tool that supports some external services, and it definitely gives a baseline for how such automation can be written.
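
The underlying AWS calls for one rotation cycle are simple (the user name and key ID are placeholders; updating the key in the external service in between is exactly the part such tools automate):

    # Create a second access key for the service user
    aws iam create-access-key --user-name sumologic-collector
    # ...configure the new key in the external service and verify it works...
    # Then deactivate and finally delete the old key
    aws iam update-access-key --user-name sumologic-collector \
      --access-key-id AKIAIOSFODNN7EXAMPLE --status Inactive
    aws iam delete-access-key --user-name sumologic-collector \
      --access-key-id AKIAIOSFODNN7EXAMPLE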

References

Two weeks after I wrote this blog post, the AWS Security team came up with a great summary of a related topic. See Guidelines for protecting your AWS account while using programmatic access.

AWS Well-Architected Framework applied – Cost Optimization

The AWS Well-Architected framework is a recommendation by AWS, summarized in an 80-page PDF document. Booooring. True. So I’m taking a different approach: a hands-on, developer-focused way of thinking about it.

Context

The 80-pager talks about 5 pillars: (1) Operational Excellence, (2) Security, (3) Reliability, (4) Performance Efficiency and (5) Cost Optimization. Those are very high-level aspects, but taking those guidelines and following them results in a cost-efficient, scalable and reliable cloud environment.

I just joined a new team, one that was not using any deployment automation and that manages many systems built before the cloud became a real thing. So there’s clearly some trickiness to managing this infrastructure, but the initial impact of cleaning up the current state is high.

In my first two weeks I’ve been focusing on the cost optimization (5th) and security (2nd) pillars. Today, I’m only going to talk about cost optimization.

(5) Cost Optimization

Let’s start with “why?”. Why should a “normal software engineer” bother? In the end, you might just be an employee of a multi-billion-dollar company. It’s simple: it helps increase the core financial metric of the company (be it UFCF, profitability or margins), and therefore your bonus (which likely depends on one of those metrics). The less you and your team spend for the same value, the higher the contribution. While it might be a minor contribution to the overall pot of a multi-billion-dollar company, those targets often get pushed down. In my specific case they were pushed down to our team: our team’s AWS spending was around 60K USD / month, resulting in over 700,000 USD per year. Think about it: how many employees could we add to our team instead of spending that much? (Or, if you want a more fun comparison, how many first-class flights around the world could you take?)

In short, I believe it’s everyone’s responsibility as well-paid employees to be cost-conscious.

As specific actions, I’ve started to automate some parts of the AWS infrastructure by managing IAM roles, IAM users and a handful of other resources through AWS CloudFormation templates. At the same time, I also started to auto-curate some parts around cost and security. Specifically, there are now daily scripts that auto-terminate EC2 machines 30 days after they have been stopped, and that delete detached volumes. That 1-2 day activity resulted in a cost avoidance of 3,000 USD / month (5% of the spend), with measures in place to avoid these kinds of costs forever.
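
As an illustration, a minimal sketch of the detached-volume cleanup could look like this (run daily, e.g. via cron or a scheduled Lambda; it needs ec2:DescribeVolumes and ec2:DeleteVolume permissions, and the stopped-instance cleanup works along the same lines):

    # Delete all EBS volumes that are not attached to any instance
    for volume in $(aws ec2 describe-volumes \
        --filters Name=status,Values=available \
        --query 'Volumes[].VolumeId' --output text); do
      echo "Deleting detached volume ${volume}"
      aws ec2 delete-volume --volume-id "${volume}"
    done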

So, now, what can everyone do and how can everyone contribute? Let’s start with specifics that help change your mindset, as well as an easy introduction to the available tools:

  • Have a look at the AWS Trusted Advisor’s Cost Optimization section. It’s fairly basic, but as a specific example, it gave me the insight that our team had 10 TB of detached EBS volumes.
  • Also, simply check the AWS Cost Explorer in the AWS Billing Dashboard.
  • Way better is Cloudability, a 3rd-party tool for analyzing AWS costs and offering optimizations. The easiest start is probably to set up a weekly report from Cloudability. This helps raise awareness, nothing else, just slowly helping you become cost-conscious. Then there are the simple and advanced reports, insights and optimizations, which are well documented on the Cloudability pages.
  • One of my next targets will likely be rightsizing. For example, our team still has 25,000 provisioned IOPS (guaranteed I/O operations per second on disk) in a test environment, resulting in 1,750 USD / month (in addition to disk space). This might be the right choice for the testing needs, but if not, let’s simply not buy guaranteed I/O.
  • And then there are reserved instances. Cloudability offers very good insight and a risk assessment of which and how many instance hours a team should buy. Also, for larger companies, the reserved instances can be bought on the global account, thereby distributing the risk among all accounts in the organization.
  • Once you’ve done the basics, look at what’s available out there: from auto-scaling to auto-spotting to leveraging more AWS managed services.

I doubt you’ll end up flying first class around the world. But at least you’ll avoid being asked to row across the ocean, or having to pay back part of the unnecessary spend you caused.