Nginx is a fantastic web server and reverse proxy to use with Let’s Encrypt, but when dealing with multiple domains it can be a bit tedious to configure. I have been moving services into more FreeBSD jails as I alluded to in my previous post, among them the general Nginx proxy jail which I have serving my HTTP-based services. Using Let’s Encrypt for TLS, I found myself declaring multiple server blocks inside my virtual host configurations to handle the apex domain (e.g. dotdotvote.com), the www subdomain, and vanity domains (e.g. dotdot.vote). With the help of Membear and MTecknology in the #nginx channel on Freenode, I was able to refactor multiple largely redundant server blocks into one.
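For a taste of what the consolidated result can look like, here is a minimal sketch of a single server block listing every name under one server_name directive. The domain names come from the post above; the certificate paths and proxied upstream are assumptions for illustration, and this presumes one certificate was issued covering all three names:

```nginx
# One server block covering the apex, www, and vanity domains.
# Certificate paths and the proxied upstream are placeholders.
server {
    listen 443 ssl;
    server_name dotdotvote.com www.dotdotvote.com dotdot.vote;

    ssl_certificate     /usr/local/etc/letsencrypt/live/dotdotvote.com/fullchain.pem;
    ssl_certificate_key /usr/local/etc/letsencrypt/live/dotdotvote.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```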
Using FreeBSD's pkg(1) with an 'offline' jail
In the modern era of highly connected software, I have been trying to “offline” as many of my personal services as I can. The ideal scenario is a service running in an environment where it cannot reach other nodes on the network, or in some cases cannot even route back to the public internet. To accomplish this I have been working with FreeBSD jails quite a bit, running one service per jail in hopes of achieving a high level of isolation between them. This approach has a pretty notable problem at first glance: if you need to install software from remote sources in the jail, how do you keep it “offline”?
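A rough sketch of one way to square that circle: because chroot(2) does not confine networking, pkg(8)’s -c/--chroot flag lets the host’s network do the fetching while the packages land in the jail’s filesystem. The jail path and package name below are placeholders:

```sh
# Run from the host: fetch with the host's network access,
# install into the offline jail's filesystem.
pkg -c /usr/local/jails/webproxy install -y nginx
```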
Loving the PinePower
My current available working space is at an all-time low, which has made the dimensions of everything around me much more important. While I can never become one of those extreme minimalists that works with only their laptop on a park bench, next to their camper van (or whatever), I have been pushing myself to become more space-efficient with my electronics. This includes how they are all powered, so when I learned about the PinePower device, I ordered it immediately.
Intentionally leaking AWS keys
“Never check secrets into source control” is one of those rules that is 100% correct, until it isn’t. There are no universal laws in software, and recently I had a reason to break this one. I checked AWS keys into a Git repository. I then pushed those commits to a public repository on GitHub. I did this intentionally, and lived to tell the tale. You almost certainly should never do this, so I thought I would share what happens when you do.
Corporate dependence in free and open source projects
The relationship between most open source developers and corporations engaging in open source work is rife with paradoxes. Developers want to be paid for their work, but when a company hires too many developers for a project, others clutch their pearls and grow concerned that the company is “taking over the project.” Large projects have significant expenses, but when companies join foundations established to help secure those funds, they may also be admonished for “not really contributing to the project.” If a company creates and opens up a new technology, users and developers inevitably come to assume that the company should be perpetually responsible for the ongoing development, improvement, and maintenance of the project; to do otherwise would be “betraying the open source userbase.”
Finally a successful winter garden
Of all the bizarre things to have happened in 2020, my winter garden may be one of the more benign occurrences. I started gardening seven or eight years ago in Berkeley. The long backyard with excellent sunlight rewarded me with incredible tomato harvests summer after summer. Autumn became the time when everything would get thrashed or covered up to lie fallow through the wet winter months in Northern California. After moving to Santa Rosa, I became much more serious about my gardening but still packed it all in around October/November. For the last few winter seasons I have tried a winter garden with little success, but this year the winter garden is astounding.
Parsing Jenkins Pipeline without Jenkins
Writing and locally verifying a CI/CD pipeline is a challenge thousands of developers face, which I’m hoping to make a little bit easier with a new tool named Jenkins Declarative Parser (jdp).
Jenkins Pipeline is one of the most important advancements made in the last 10 years for Jenkins; it can however behave like a frustrating black box for many new and experienced Jenkins users. The goal with jdp is to provide a lightweight and easy-to-run utility and library for validating declarative Jenkinsfiles.
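As an example of the input such a validator consumes, here is a minimal declarative Jenkinsfile; how exactly jdp is invoked on it is not covered here, so treat any command line as an assumption:

```groovy
// A minimal declarative pipeline: the kind of file jdp aims to
// validate without needing a running Jenkins instance.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make check'
            }
        }
    }
}
```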
Parsing in Rust
In a world where everything is increasingly YAML, you might find yourself wondering: “why bother to write a parser?” For starters, I recommend reading the YAML specification if you haven’t already, but more importantly: there are so many domains which can be better modeled with domain-specific semantics and syntax. When I was younger, parsing was typically done with lex/yacc/bison/whatever and was complete drudgery, but there are a few great modern tools in the Rust ecosystem that make writing parsers fun.
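To give a taste of what those modern tools feel like, here is a minimal sketch using the nom parser-combinator crate; the crate choice and the toy key=value grammar are my own, purely for illustration:

```rust
use nom::{
    bytes::complete::tag,
    character::complete::alpha1,
    sequence::separated_pair,
    IResult,
};

/// Parse a `key=value` pair of alphabetic tokens, e.g. "agent=any".
fn key_value(input: &str) -> IResult<&str, (&str, &str)> {
    separated_pair(alpha1, tag("="), alpha1)(input)
}

fn main() {
    // On success nom returns (remaining_input, parsed_value).
    assert_eq!(key_value("agent=any"), Ok(("", ("agent", "any"))));
    println!("parsed!");
}
```

Combinators like these compose small parsers into larger ones, which is a big part of why parsing stops feeling like drudgery.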
The Five Stages of Incident Response
Training engineers to own their infrastructure can be challenging. It is important to help them recognize the five stages of incident response, because only then can system healing begin.
Noodling on Otto's pipeline state machine
Recently I have been making good progress with Otto, such that I seem to be unearthing one challenging design problem per week. The sketches of Otto pipeline syntax necessitated some internal data structure changes to ensure that the right level of flexibility was present for execution. Otto is designed as a services-oriented architecture, and I have the parser service and the agent daemon which will execute steps from a pipeline. I must now implement the service(s) between the parsing of a pipeline and the execution of said pipeline. My current thinking is that two services are needed: the Orchestrator and the Pipeline State Machine.
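To make the idea concrete, here is a rough Rust sketch of what such a pipeline state machine might look like; the state names and transitions are my guesses for illustration, not Otto’s actual design:

```rust
// Hypothetical pipeline states; names are illustrative only.
#[derive(Debug, Clone, Copy, PartialEq)]
enum PipelineState {
    Parsed,    // the parser service has produced a pipeline
    Queued,    // the Orchestrator has accepted it for execution
    Running,   // an agent is executing its steps
    Completed, // terminal: all steps succeeded
    Failed,    // terminal: a step failed
}

impl PipelineState {
    /// Advance the machine on a successful step; terminal states stay put.
    fn on_success(self) -> PipelineState {
        match self {
            PipelineState::Parsed => PipelineState::Queued,
            PipelineState::Queued => PipelineState::Running,
            PipelineState::Running => PipelineState::Completed,
            terminal => terminal,
        }
    }
}

fn main() {
    let state = PipelineState::Parsed.on_success().on_success();
    assert_eq!(state, PipelineState::Running);
    println!("{:?}", state);
}
```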