As a comprehensive operating system, FreeBSD never ceases to impress me; the recent iterations of FreeBSD Jails, as an example, have been an absolute joy to use. The introduction of the vnet(9) network subsystem has completely transformed how I think about software-defined networking. My previous exposure to the concept of software-defined networking was through both OpenStack and Docker, two very different approaches to the broad domain of “SDN”. FreeBSD’s vnet system has resonated most strongly with me and has allowed me some measure of success in deploying real production-grade virtualized networks.
Dynamically adding parameters in sqlx
Bridging data types between the database and a programming language is such a foundational feature of most database-backed applications that many developers overlook it, until it doesn’t work. For many of my Rust-based applications I have been enjoying sqlx, which strikes the right balance between “too close to the database” (working with raw cursors and buckets of bytes) and “too close to the programming language” (magic object-relational mappings). It reminds me a lot of what I wanted Ruby Object Mapper to be back when it was called “data mapper.” sqlx can do many things, but it’s not a silver bullet, and it errs on the side of “less magic” in many cases, which leaves the developer to deal with some trade-offs. Recently I found myself with just such a trade-off: mapping a Uuid such that I could do IN queries.
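As a rough sketch of how that can work, here is one way to dynamically bind a list of Uuid values with sqlx’s QueryBuilder. This assumes Postgres with sqlx’s “uuid” feature enabled; the users table and its columns are hypothetical:

```rust
use sqlx::{PgPool, Postgres, QueryBuilder, Row};
use uuid::Uuid;

/// Fetch names matching any of the given ids. The `users` table and its
/// columns are hypothetical; this assumes Postgres and sqlx's "uuid" feature.
async fn names_for(pool: &PgPool, ids: &[Uuid]) -> sqlx::Result<Vec<String>> {
    let mut builder: QueryBuilder<Postgres> =
        QueryBuilder::new("SELECT name FROM users WHERE id IN (");
    let mut separated = builder.separated(", ");
    for id in ids {
        // push_bind adds a placeholder ($1, $2, ...) and binds the value,
        // so the ids are never interpolated into the SQL string itself
        separated.push_bind(*id);
    }
    separated.push_unseparated(")");

    let rows = builder.build().fetch_all(pool).await?;
    Ok(rows.iter().map(|row| row.get("name")).collect())
}
```

For Postgres specifically, binding the whole slice against a WHERE id = ANY($1) clause is another way to sidestep generating placeholders dynamically.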
Thoughts on WebTorrent
WebTorrent is one of the most novel uses of modern browser technologies that I have recently learned about. Using WebRTC, it is able to implement a truly peer-to-peer data transport on top of support offered by existing browsers. I came across WebTorrent while researching potential future options for more scalable distribution of free and open source libraries and applications. In this post, I want to share some thoughts and observations I jotted down while considering WebTorrent.
Technically I'm microblogging now.
I am a big fan of the open web, and although I have enjoyed Twitter, the platform has regressed dramatically in form and function since I first adopted it. I remember when Twitter actively avoided building a walled garden, with fantastic APIs and RSS feeds open to the public. Much of the popularity of the platform hinged upon the incredible third-party applications and integrations developers like me built in the first five-ish years of its existence. Over time the site has strayed from open APIs and standards, and while I still enjoy Twitter, I want some more flexibility, which is why you can now subscribe to my microblog with any RSS-capable client.
Synchronizing notes with Nextcloud and Vimwiki
The quantity of things I need to keep track of or be responsible for has
exploded in the past few years, so much so that I have had to really focus on
organizing my “personal knowledgebase.” When I originally tried to spend some
time improving my information management system, I found numerous different
services offering to improve my productivity and to help me keep track of
everything. Invariably many of these tools were web apps, and for working with information quickly and productively, a <textarea/> in a web page is just about the last tool I would choose. I recently revisited Vimwiki and have been quite satisfied, both by the productivity boost and by the benefits that come with having raw text to work with. The best benefit: easy synchronization of notes with Nextcloud.
Reverse proxying a Tide application with Nginx
Every now and again I’ll encounter a silly problem, fix it, forget about it, and then later run into the exact same problem again. Today’s example is a confusing error I encountered when reverse-proxying a Tide application with Nginx. In the Tide application, I was greeted with an ever-so-descriptive error.
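For reference, the application being proxied was nothing exotic; a minimal Tide app along these lines (the route and port here are hypothetical) is representative of the setup, with Nginx’s proxy_pass pointed at the local port:

```rust
// A minimal Tide application of the sort that might sit behind Nginx;
// the route and port are hypothetical.
#[async_std::main]
async fn main() -> tide::Result<()> {
    let mut app = tide::new();
    app.at("/").get(|_| async { Ok("Hello from behind the proxy!") });
    // Bind to localhost only, since Nginx terminates the public traffic
    app.listen("127.0.0.1:8080").await?;
    Ok(())
}
```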
Multiple Let's Encrypt domains in a single Nginx server block
Nginx is a fantastic web server and reverse proxy to use
with Let’s Encrypt, but when dealing with multiple
domains it can be a bit tedious to configure. I have been moving services into
more FreeBSD jails as I alluded to in my previous
post, among them the
general Nginx proxy jail which I have serving my HTTP-based services. Using
Let’s Encrypt for TLS, I found myself declaring multiple server blocks inside
my virtual host configurations to handle the apex domain (e.g.
dotdotvote.com), the www subdomain, and vanity domains (e.g.
dotdot.vote). With the help of Membear and MTecknology in the #nginx channel on Freenode, I was able to refactor multiple largely redundant server blocks into one.
Using FreeBSD's pkg(1) with an 'offline' jail
In the modern era of highly connected software, I have been trying to “offline” as many of my personal services as I can. The ideal scenario is a service running in an environment where it cannot reach other nodes on the network, or in some cases cannot even route back to the public internet. To accomplish this I have been working with FreeBSD jails quite a bit, creating one service per jail in hopes of achieving high levels of isolation between them. This approach has a pretty notable problem at first glance: if you need to install software from remote sources in the jail, how do you keep it “offline”?
Loving the PinePower
My current available working space is at an all-time low, which has made the dimensions of everything around me much more important. While I can never become one of those extreme minimalists who work with only their laptop on a park bench, next to their camper van (or whatever), I have been pushing myself to become more space-efficient with my electronics. This includes how they are all powered, so when I learned about the PinePower device, I ordered it immediately.
Intentionally leaking AWS keys
“Never check secrets into source control” is one of those rules that is 100% correct, until it isn’t. There are no universal laws in software, and recently I had a reason to break this one. I checked AWS keys into a Git repository. I then pushed those commits to a public repository on GitHub. I did this intentionally, and lived to tell the tale. You almost certainly should never do this, so I thought I would share what happens when you do.
Corporate dependence in free and open source projects
The relationship between most open source developers and corporations engaging in open source work is rife with paradoxes. Developers want to be paid for their work, but when a company hires too many developers for a project, others clutch their pearls and grow concerned that the company is “taking over the project.” Large projects have significant expenses, but when companies join foundations established to help secure those funds, they may also be admonished for “not really contributing to the project.” If a company creates and opens up a new technology, users and developers inevitably come to assume that the company should be perpetually responsible for the on-going development, improvement, and maintenance of the project; to do otherwise would be “betraying the open source userbase.”
Finally a successful winter garden
Of all the bizarre things to have happened in 2020, my winter garden may be one of the more benign occurrences. I started gardening seven or eight years ago in Berkeley, where a long backyard with excellent sunlight rewarded me with incredible tomato harvests summer after summer. Autumn became the time when everything would get thrashed or covered up to lie fallow through the wet winter months in Northern California. After moving to Santa Rosa, I became much more serious about gardening but still packed it all in around October/November. The last few winter seasons I have tried a winter garden with little success, but this year the winter garden is astounding.
Parsing Jenkins Pipeline without Jenkins
Writing and locally verifying a CI/CD pipeline is a challenge thousands of developers face, which I’m hoping to make a little bit easier with a new tool: the Jenkins Declarative Parser (jdp).
Jenkins Pipeline is one of the most important advancements made in the last 10 years for Jenkins; it can, however, behave like a frustrating black box for many new and experienced Jenkins users. The goal with jdp is to provide a lightweight and easy-to-run utility and library for validating declarative Jenkinsfiles.
Parsing in Rust
In a world where everything is increasingly YAML, you might find yourself wondering: “why bother to write a parser?” For starters, I recommend reading the YAML specification first if you haven’t, but more importantly: there are so many domains which can be better modeled with domain-specific semantics and syntax. When I was younger, parsing was typically done with lex/yacc/bison/whatever and was complete drudgery, but there are a few great modern tools in the Rust ecosystem that make writing parsers fun.
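As a small taste of what those modern tools feel like, here is a minimal sketch using the pest crate; the grammar and rule names are purely illustrative, not taken from any real project:

```rust
use pest::Parser;
use pest_derive::Parser;

// A toy grammar for `key=value` pairs, declared inline
#[derive(Parser)]
#[grammar_inline = r#"
key   = { ASCII_ALPHA+ }
value = { ASCII_DIGIT+ }
pair  = { key ~ "=" ~ value }
"#]
struct KvParser;

fn main() {
    // Parse a single pair and walk the resulting token tree
    let mut parsed = KvParser::parse(Rule::pair, "retries=3").expect("failed to parse");
    let pair = parsed.next().unwrap();
    for inner in pair.into_inner() {
        println!("{:?}: {}", inner.as_rule(), inner.as_str());
    }
}
```

pest generates the Rule enum from the grammar at compile time, which is a big part of why it feels so much friendlier than the lex/yacc workflow of old.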
The Five Stages of Incident Response
Training engineers to own their infrastructure can be challenging. It is important to help them recognize the five stages of incident response, because only then can system healing begin.
Noodling on Otto's pipeline state machine
Recently I have been making good progress with Otto, such that I seem to be unearthing one challenging design problem per week. The sketches of Otto pipeline syntax necessitated some internal data structure changes to ensure that the right level of flexibility was present for execution. Otto is designed as a services-oriented architecture, and I have the parser service and the agent daemon which will execute steps from a pipeline. I must now implement the service(s) between the parsing of a pipeline and the execution of said pipeline. My current thinking is that two services are needed: the Orchestrator and the Pipeline State Machine.
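To make the idea a little more concrete, here is a hypothetical sketch of what such a state machine might track; none of these states, events, or names come from Otto itself:

```rust
/// Purely illustrative pipeline states; Otto's real design may differ.
#[derive(Debug, Clone, PartialEq)]
enum PipelineState {
    Queued,
    Running { current_step: usize },
    Succeeded,
    Failed { failed_step: usize },
}

/// Events the orchestrator might receive from an agent.
#[derive(Debug)]
enum Event {
    Started,
    StepCompleted(usize),
    StepFailed(usize),
    Finished,
}

/// Compute the next state; invalid transitions leave the state unchanged.
fn transition(state: PipelineState, event: Event) -> PipelineState {
    use PipelineState::*;
    match (state, event) {
        (Queued, Event::Started) => Running { current_step: 0 },
        (Running { .. }, Event::StepCompleted(n)) => Running { current_step: n + 1 },
        (Running { .. }, Event::StepFailed(n)) => Failed { failed_step: n },
        (Running { .. }, Event::Finished) => Succeeded,
        (state, _) => state,
    }
}

fn main() {
    let mut state = PipelineState::Queued;
    for event in [Event::Started, Event::StepCompleted(0), Event::Finished] {
        state = transition(state, event);
        println!("{:?}", state);
    }
}
```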
Orphan steps in Otto Pipeline
After sketching out some Otto Pipeline
ideas last week, I was fortunate
enough to talk to a couple of peers in the Jenkins community about their pipeline thoughts, which led to a concept in Otto Pipelines: orphan steps. Similar to Declarative Jenkins Pipelines, my initial sketches mandated a series of stage blocks to encapsulate behavior. Steven Terrana, author of the Jenkins Templating Engine, made a provocative suggestion: “stages should be optional.”
Sketches of syntax, a pipeline for Otto
Defining a good continuous integration and delivery pipeline syntax for Otto is one of the most important challenges in the entire project. It is one which I struggled with early in the project almost a year and a half ago. It is a challenge I continue to struggle with today, even as the puzzle pieces start to interlock for the multi-service system I originally imagined Otto to be. Now that I have started writing the parser, the pressure to make some design decisions and play them out to their logical ends is growing. I have written a snippet which compiles to the current Otto intermediate representation and will execute on the current prototype agent implementation.
Passing credentials to Otto steps
One of the major problems I want to solve with Otto is that in many CI/CD tools secrets and credentials can be inadvertently leaked. Finding a way to allow for the secure use of credentials without giving developers direct access to the secrets is something most CI/CD systems fail at today. My hope is that Otto will succeed because this is a problem being considered from the beginning. In this post, I’m going to share some of the thoughts I currently have on how Otto can pass credentials around while removing or minimizing the possibility for them to be leaked by user code.
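To illustrate the general shape of the idea (not Otto’s actual design), consider a sketch where user-authored pipeline code only ever holds an opaque credential ID, and the agent resolves it just-in-time when a step runs; every name here is hypothetical:

```rust
use std::collections::HashMap;

/// Hypothetical: pipeline code carries only an opaque identifier,
/// never the secret itself.
struct CredentialId(String);

/// The agent-side store is the only component with the secret material.
struct CredentialStore {
    secrets: HashMap<String, String>,
}

impl CredentialStore {
    /// Resolve a credential ID into an environment variable for the
    /// step's process, without ever handing the value to pipeline code.
    fn inject(&self, id: &CredentialId, env: &mut HashMap<String, String>) {
        if let Some(secret) = self.secrets.get(&id.0) {
            env.insert("OTTO_SECRET".into(), secret.clone());
        }
    }
}

fn main() {
    let store = CredentialStore {
        secrets: HashMap::from([("deploy-key".to_string(), "hunter2".to_string())]),
    };
    let mut step_env = HashMap::new();
    store.inject(&CredentialId("deploy-key".to_string()), &mut step_env);
    // The step process would be spawned with step_env; user code only
    // ever referenced the "deploy-key" identifier
    println!("{} vars injected", step_env.len());
}
```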
Taking inspiration from Smalltalk for Otto steps
I have recently been spending more time thinking about how
Otto should handle “steps” in a CI/CD
pipeline. As I mentioned in my previous post on the step libraries
concept, one of the big unanswered questions with
the prototype has been managing flow-control of the pipeline from a step. To recap, a “step” is currently defined as an artifact (.tar.gz) which describes its own parameters and entrypoint, and contains all the code/assets necessary to execute the step. The execution flow in this concept is fairly linear: an agent iterates through a sequence of steps, executing each along the way, end. In order for a step to change the state of the pipeline, this direction of flow control must be reversed. Allowing steps to communicate changes to the agent which spawned them requires a control socket.
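As a rough sketch of what that reversal could look like (not Otto’s actual protocol), a step might write a small JSON control message to a Unix socket the agent listens on; the socket path and message shape here are hypothetical:

```rust
use std::io::Write;
use std::os::unix::net::UnixStream;

/// Hypothetical control message a step could send back to its agent;
/// the socket path and message format are illustrative only.
fn main() -> std::io::Result<()> {
    let mut socket = UnixStream::connect("/tmp/otto-agent.sock")?;
    // Ask the agent to mark the current stage as unstable, reversing
    // the usual agent-to-step direction of control flow
    let message = r#"{"type": "status", "value": "unstable"}"#;
    socket.write_all(message.as_bytes())?;
    Ok(())
}
```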