Writing and locally verifying a CI/CD pipeline is a challenge thousands of developers face, which I’m hoping to make a little bit easier with a new tool named Jenkins Declarative Parser (jdp).
Jenkins Pipeline is one of the most important advancements made to Jenkins in the last 10 years, but it can behave like a frustrating black box for new and experienced Jenkins users alike. The goal with jdp is to provide a lightweight, easy-to-run utility and library for validating declarative Jenkinsfiles.
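As a sketch of the use case (jdp’s actual interface may differ), the kind of file jdp aims to validate is a minimal declarative Jenkinsfile like this one, checked without ever spinning up a Jenkins instance:

```groovy
// A minimal declarative Jenkinsfile: one agent, one stage, one step.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make'
            }
        }
    }
}
```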
Parsing in Rust
In a world where everything is increasingly YAML, you might find yourself wondering: “why bother to write a parser?” For starters, I recommend reading the YAML specification first if you haven’t, but more importantly: there are so many domains which can be better modeled with domain-specific semantics and syntax. When I was younger, parsing was typically done with lex/yacc/bison/whatever and was complete drudgery, but there are a few great modern tools in the Rust ecosystem that make writing parsers fun.
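As a taste of those tools, here is a minimal sketch using the nom parser-combinator crate (pest is another popular choice); the key=value grammar is invented purely for illustration:

```rust
use nom::{
    bytes::complete::tag,
    character::complete::alphanumeric1,
    sequence::separated_pair,
    IResult,
};

/// Parse a "key=value" pair, e.g. "retries=3", by composing combinators.
fn key_value(input: &str) -> IResult<&str, (&str, &str)> {
    separated_pair(alphanumeric1, tag("="), alphanumeric1)(input)
}

fn main() {
    let (_rest, (key, value)) = key_value("retries=3").expect("failed to parse");
    println!("{} -> {}", key, value);
}
```

Compared to the lex/yacc days, the grammar lives in ordinary Rust code, composes like any other function, and produces typed results with real error handling.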
The Five Stages of Incident Response
Training engineers to own their infrastructure can be challenging. It is important to help them recognize the five stages of incident response, because only then can system healing begin.
Noodling on Otto's pipeline state machine
Recently I have been making good progress with Otto, such that I seem to be unearthing one challenging design problem per week. The sketches of Otto pipeline syntax necessitated some internal data structure changes to ensure that the right level of flexibility was present for execution. Otto is designed as a services-oriented architecture, and I have the parser service and the agent daemon which will execute steps from a pipeline. I must now implement the service(s) between the parsing of a pipeline and the execution of said pipeline. My current thinking is that two services are needed: the Orchestrator and the Pipeline State Machine.
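For illustration, a hypothetical sketch of what a pipeline state machine might look like in Rust; the types and transitions here are invented, not Otto’s actual internals:

```rust
// Hypothetical sketch of a pipeline state machine; these types and
// transitions are illustrative, not Otto's actual internals.
#[derive(Debug)]
enum PipelineState {
    Queued,
    Running { current_step: usize },
    Completed,
}

impl PipelineState {
    /// Advance the pipeline after a step reports success.
    fn step_succeeded(self, total_steps: usize) -> PipelineState {
        match self {
            PipelineState::Queued => PipelineState::Running { current_step: 0 },
            PipelineState::Running { current_step } if current_step + 1 < total_steps => {
                PipelineState::Running { current_step: current_step + 1 }
            }
            PipelineState::Running { .. } => PipelineState::Completed,
            done => done,
        }
    }
}

fn main() {
    let mut state = PipelineState::Queued;
    // A two-step pipeline: Queued -> Running(0) -> Running(1) -> Completed
    for _ in 0..3 {
        state = state.step_succeeded(2);
        println!("{:?}", state);
    }
}
```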
Orphan steps in Otto Pipeline
After sketching out some Otto Pipeline ideas last week, I was fortunate enough to talk to a couple of peers in the Jenkins community about their pipeline thoughts, which led to a new concept in Otto Pipelines: orphan steps. Similar to Declarative Jenkins Pipelines, my initial sketches mandated a series of stage blocks to encapsulate behavior. Steven Terrana, author of the Jenkins Templating Engine, made a provocative suggestion: “stages should be optional.”
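To make the idea concrete: instead of mandating stage blocks around every step, a trivial pipeline could consist of bare “orphan” steps. A hypothetical sketch, not finalized Otto syntax:

```
// No stage wrapper; the steps stand on their own.
pipeline {
  steps {
    sh 'make'
  }
}
```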
Sketches of syntax, a pipeline for Otto
Defining a good continuous integration and delivery pipeline syntax for Otto is one of the most important challenges in the entire project. It is one which I struggled with early in the project, almost a year and a half ago. It is a challenge I continue to struggle with today, even as the puzzle pieces start to interlock for the multi-service system I originally imagined Otto to be. Now that I have started writing the parser, the pressure to make some design decisions and play them out to their logical ends is growing. The following snippet compiles to the current Otto intermediate representation and will execute on the current prototype agent implementation:
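(The original snippet is not reproduced in this excerpt; what follows is a hypothetical approximation of the flavor of that early syntax, not the actual example from the post:)

```
pipeline {
  stages {
    stage {
      name = 'Build'
      steps {
        sh 'make all'
      }
    }
  }
}
```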
Passing credentials to Otto steps
One of the major problems I want to solve with Otto is that in many CI/CD tools secrets and credentials can be inadvertently leaked. Finding a way to allow for the secure use of credentials without giving developers direct access to the secrets is something most CI/CD systems fail at today. My hope is that Otto will succeed because this is a problem being considered from the beginning. In this post, I’m going to share some of the thoughts I currently have on how Otto can pass credentials around while removing or minimizing the possibility for them to be leaked by user code.
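One common mitigation, sketched below in Rust, is to scrub known secret values from a step’s output before it is ever logged; this illustrates the class of problem rather than Otto’s actual design:

```rust
// Hypothetical sketch: mask known secret values in a step's output before
// it is logged. Illustrates one mitigation, not Otto's actual design.
fn mask_secrets(line: &str, secrets: &[String]) -> String {
    let mut masked = line.to_string();
    for secret in secrets {
        if !secret.is_empty() {
            masked = masked.replace(secret.as_str(), "****");
        }
    }
    masked
}

fn main() {
    let secrets = vec![String::from("hunter2")];
    let output = "logging in with password hunter2";
    // prints: logging in with password ****
    println!("{}", mask_secrets(output, &secrets));
}
```

Masking output is only a last line of defense, of course; the more interesting question is how to keep user code from ever holding the raw secret in the first place.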
Taking inspiration from Smalltalk for Otto steps
I have recently been spending more time thinking about how Otto should handle “steps” in a CI/CD pipeline. As I mentioned in my previous post on the step libraries concept, one of the big unanswered questions with the prototype has been managing flow-control of the pipeline from a step. To recap, a “step” is currently defined as an artifact (.tar.gz) which self-describes its parameters and entrypoint, and contains all the code/assets necessary to execute the step. The execution flow is fairly linear in this concept: an agent iterates through a sequence of steps, executing each along the way, and then it is done. In order for a step to change the state of the pipeline, this direction of flow control must be reversed. Allowing steps to communicate changes back to the agent which spawned them requires a control socket.
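For example, a step might report a status change back to its agent over a Unix domain socket; in this hypothetical sketch the environment variable, socket path, and message format are all invented for illustration:

```rust
// Hypothetical sketch: a step notifies the agent which spawned it over a
// Unix domain socket. The environment variable, socket path, and message
// format are all invented for illustration.
use std::io::Write;
use std::os::unix::net::UnixStream;

fn main() -> std::io::Result<()> {
    // Assume the agent exports the control socket path to the step.
    let path = std::env::var("OTTO_CONTROL_SOCKET")
        .unwrap_or_else(|_| String::from("/tmp/otto-agent.sock"));
    let mut stream = UnixStream::connect(path)?;
    // The step asks the agent to change the pipeline's status.
    stream.write_all(br#"{"type":"status","value":"unstable"}"#)?;
    Ok(())
}
```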
Quick and simple dot-voting with Dot dot vote
I recently launched Dot dot vote, a simple web application for running anonymous dot-voting polls. Dot-voting is a quick and simple method for prioritizing a long list of options, and I find it quite useful when planning software development projects. Every team I have ever worked with has had far more potential projects than people or time; dot-voting can help customers and stakeholders weigh in on which of the projects are most valuable to them. Dot dot vote makes it trivial to create short-lived polls which don’t require any user registrations, logins, or overhead.
Moving again with Otto: Step Libraries
I have finally started to come back to Otto, an experimental playground for some of my thoughts on what an improved CI/CD tool might look like. After setting the project aside for a number of months and letting ideas marinate, I wanted to share some of my preliminary thoughts on managing the trade-offs of extensibility. From my time in the Jenkins project, I can vouch for the merits of a robust extensibility model. For Otto, however, I wanted to implement something I would call “safer” or “more scalable”, in keeping with the original goals of Otto: