Brendan O'Leary's previous blog posts // This file contains a collection of my blog posts. // This includes posts published on boleary.dev as well as external publications. // Each blog post is separated by a line containing only '---'. // This data is intended for use by Large Language Models (LLMs) // to understand the topics, themes, and writing style of my blog content. // Each blog post entry is in Markdown format // Each entry should also contain a Title, Description, Date, and relevant Tags. // External posts include Publication name and Original URL. --- Title: Your AI Has Genie Energy (And That's a Problem) Subtitle: Why prompting AI is less like programming and more like making wishes—with all the catastrophic literalness that implies Description: AI coding assistants aren't malicious like genies, but they share the same fundamental problem: they respond to what you said, not what you meant. Here's how to make better wishes. Date: 2/4/2026 Source: boleary.dev Tags: AI, Tech My daughter has been obsessed with Amelia Bedelia books lately. If you're not familiar, Amelia is a housekeeper who takes every instruction completely literally. When told to "draw the drapes," she sketches them. Asked to "dress the chicken," she puts clothes on it. Told to "put out the lights," she hangs them on the clothesline. Reading these to my kids at bedtime, I keep thinking: *this is exactly what prompting an AI feels like.* ## The Genie Problem There's an old thought experiment about genies that philosophers love. You find a lamp, rub it, and a genie appears offering three wishes. The catch? The genie will interpret your wish as literally and maliciously as possible. Wish for "a million bucks"? Here's a million male deer trampling your house. Ask to "never feel cold again"? Your nerve endings stop working. Request "eternal life"? Enjoy outliving the heat death of the universe, conscious and alone. The genie isn't stupid. It understands exactly what you *meant*. 
It just doesn't care. It fulfills the letter of your wish while ignoring the spirit entirely. AI coding assistants aren't malicious like the genie. But they share the same fundamental problem: they respond to what you *said*, not what you *meant*. ## Amelia Bedelia, Senior Software Engineer I asked Claude to "clean up this function." It deleted most of the code. Technically cleaner! I asked it to "make this faster." It removed all the error handling. Much faster now—when it works. I told it to "add some comments." It added a comment above every single line. `// increment i` above `i++`. None of these responses were wrong. They were exactly what I asked for. The problem was that my prompts were vague enough that a helpful but literal-minded assistant could reasonably interpret them many ways. The [Greptile State of AI Coding 2025 report](https://www.greptile.com/state-of-ai-coding-2025) found that developers using AI tools shipped 76% more code year over year. But raw output isn't the same as useful output. You can generate a lot of code quickly if you're willing to accept "technically correct but missing the point" as a passing grade. ## Context Is Everything (And You Probably Forgot to Provide It) Amelia Bedelia isn't *wrong* when she draws the drapes. She's missing context that any reasonable person would have. The homeowner assumes shared understanding. Amelia assumes nothing beyond the literal words. AI assistants are the same. They have zero context about: - Your project's specific conventions - Why you made the architectural decisions you made - What "clean" or "fast" means in your codebase - The parts of the codebase you haven't shown them - What you'll regret tomorrow [Liz Fong-Jones captured this well](https://bsky.app/profile/lizthegrey.com/post/3mb65fnjiis25): "In essence a language model changes you from a programmer who writes lines of code, to a programmer that manages the context the model has access to." Managing context is the new core skill. 
Not managing the AI's feelings or convincing it to help you. Managing what it knows about your situation. ## Making Better Wishes The secret to working with genies (and AI) isn't cleverness. It's precision. Bad wish: "Make me rich." Better wish: "Add $10 million in legally obtained US dollars to my existing bank account at Chase, account number XXXX, without triggering any regulatory flags, tax complications, or negative consequences to myself, my family, or anyone else." The better version isn't just more specific—it anticipates failure modes and closes loopholes. Same with prompts: Bad prompt: "Refactor this function to be cleaner." Better prompt: "Refactor this function to use early returns instead of nested conditionals. Keep all existing error handling. Don't change the function signature or return type. Match the code style in utils/helpers.ts." The better prompt isn't longer for the sake of being longer. It specifies *what kind* of clean you mean, *what to preserve*, and *what to match*. ## Three Things That Actually Help **Say what you mean, precisely.** If you want error handling preserved, say so. If you want the code style to match existing files, say which files. If you want tests, say what kind and how many. Ambiguity is an invitation for the genie to exercise creativity in ways you won't like. **Specify what NOT to do.** Genies love loopholes. AI assistants love "helpful" additions you didn't ask for. "Don't add any new dependencies." "Don't change the API surface." "Don't refactor anything outside this function." Constraints are gifts. **Give context before asking for anything.** Before you make your wish, tell the genie about your kingdom. Before you prompt for code, give the AI your conventions, your constraints, your codebase's quirks. The [junior developer mental model](https://blog.kilo.ai/i/175061191/the-mental-model-that-works) works here too. 
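To make the earlier refactoring prompt concrete, here's the kind of before/after transformation it asks for. This is a hypothetical sketch: the `getDiscount` function, its rules, and its error messages are invented for illustration, not taken from any real codebase:

```typescript
// Before: nested conditionals (the shape of code the prompt targets)
function getDiscountBefore(user: { active: boolean; years: number } | null): number {
  if (user) {
    if (user.active) {
      if (user.years >= 5) {
        return 0.2;
      } else {
        return 0.1;
      }
    } else {
      throw new Error("inactive user");
    }
  } else {
    throw new Error("missing user");
  }
}

// After: early returns, same signature and return type,
// all existing error handling preserved
function getDiscountAfter(user: { active: boolean; years: number } | null): number {
  if (!user) throw new Error("missing user");
  if (!user.active) throw new Error("inactive user");
  if (user.years >= 5) return 0.2;
  return 0.1;
}
```

A vague "make this cleaner" could plausibly justify deleting the throws or changing the signature; the precise prompt leaves only one acceptable shape for the result.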
You wouldn't ask a new hire to "make this better" without explaining what "better" means in your codebase. Don't do it to your AI either.

## The Amelia Bedelia Upside

Reading those books to my kids, I've noticed something. Amelia Bedelia usually saves the day in the end. She makes amazing pie, or her literal interpretation accidentally solves a problem no one else could.

AI assistants are similar. Sometimes the literal interpretation *is* what you need. Sometimes "add error handling to this function" produces exactly the error handling you would have written, just faster.

The skill isn't in preventing all misunderstandings. It's in recognizing them quickly and iterating. The feedback loop between wish and result is instant now. That matters more than getting it right the first time.

## Making Your First Wish

If I had to summarize everything I've learned about prompting:

*Your AI is Amelia Bedelia with a photographic memory and the energy of a thousand junior developers. It will do exactly what you say. So say exactly what you mean.*

Include context. Be specific. Anticipate misinterpretation. And when it inevitably draws the drapes instead of closing them, laugh, clarify, and try again.

The genie has infinite patience. Use it.

---

*Your kids probably understand Amelia Bedelia better than you understand your AI assistant. Maybe that's the real lesson here.*

---

Title: The Death of Morality
Subtitle: How America abandoned moral reasoning for tribal loyalty
Description: America hasn't become immoral—it's become post-moral. We've abandoned principled reasoning for pure tribal alignment, making moral positions contingent on who's doing what rather than what's being done.
Date: 9/12/2025
Source: boleary.dev
Tags: Personal

**America has become a post-moral society.** Not immoral - that would imply we're violating principles we still believe in. Post-moral...we've abandoned moral reasoning itself.
## Tribe First, Principles Never Here's what I mean by post-morality: **Your moral position is determined by checking tribal alignment first.** Watch how this works in practice. When Elon Musk was the electric car guy fighting climate change, one tribe loved him. When he bought Twitter and started posting different opinions, that same tribe decided electric cars were actually problematic. The other tribe, which had mocked Tesla drivers as virtue-signaling liberals, suddenly discovered that EVs were pretty cool actually. The product didn't change. The climate impact didn't change. Only the tribal alignment changed - and with it, everyone's "deeply held" convictions. ## The Motivated Reasoning Machine The psychological research on this is damning. [Jonathan Haidt's work](https://www.penguinrandomhouse.com/books/73535/the-righteous-mind-by-jonathan-haidt/) shows [we don't reason our way to moral positions](https://pubmed.ncbi.nlm.nih.gov/11699120/) - we have gut reactions and then rationalize them. But we've gone beyond that. We now have entire media ecosystems dedicated to providing pre-packaged rationalizations for whatever position our tribe needs us to hold this week. **The principle never mattered - only who was doing it.** Remember when deficit spending was the ultimate evil? Until your party held power. Executive orders were tyrannical overreach? Until your president signed them. Tech monopolies were dangerous? Until they banned your opponents. Revealing powerful men who exploited women? Only if they voted wrong. Political violence was abhorrent? Unless it was your protesters burning the federal building. The grocery prices, the farmers, the gas prices - urgent crises that mysteriously vanished the moment the election ended. **This isn't just hypocrisy**. *Hypocrisy* implies you still believe in the underlying principle. This is the complete abandonment of principles in favor of pure tribal loyalty. 
## How We Rationalize Anything

We've gotten incredibly sophisticated at justifying obviously contradictory positions. We don't just flip-flop anymore - we develop elaborate theoretical frameworks for why the thing we opposed yesterday is actually good today.

Watch how quickly "free speech is not freedom from consequences" becomes "censorship is violence" depending on who's being censored. See how "lock them up" becomes "due process matters" depending on who's accused. Notice how "states' rights" becomes "federal override" depending on which policy we're discussing.

We've built an entire intellectual infrastructure designed to make our tribalism look principled. Academic theories, think pieces, and lengthy Twitter threads all devoted to explaining why our side's violence is speech but their side's speech is violence.

## The Cost: Moral Bankruptcy

When everything is justified by tribal loyalty, actual moral reasoning atrophies. We lose the ability to say "my side is wrong about this"...or even "this person who happens to also be on my side is wrong about this"...because that's tribal betrayal. We can't acknowledge good ideas from the other side because that's giving aid to the enemy.

What is so disturbing is watching the emotional reactions from friends whose political views I know well. When violence occurs, when someone is wronged, their response is entirely based on who did what to whom. Silence when the perpetrator is from their tribe. Outrage and dangerous rhetoric when the victim is one of their own.

The wrongdoing **itself** becomes secondary to the tribal affiliation of those involved. Just another chance at a "gotcha" for the other side - actual humanity and empathy be damned! Actually, let me use some of that "empathy" to make the other side hurt more....

This moral bankruptcy isn't confined to social media or dinner table arguments - it happens at the highest levels of government.
Leaders don't even pretend to be acting morally anymore, directly doing the opposite and being praised for it. And it's self-reinforcing: when we don't hold our leaders to any moral standard while still *claiming the moral high ground*, we teach everyone that principles are just weapons to use against the other side, not standards to live by. ## The Way Forward (If There Is One) How do you restart moral reasoning in a culture that's abandoned it? How do you get people to examine principles when tribal loyalty is the only principle left? As Jim VandeHei wrote after the Charlie Kirk killing, we need to ["break the fever. Or it might break us."](https://www.axios.com/2025/09/12/charlie-kirk-political-violence-america) The path forward isn't through more tribalism or better arguments for our side. It's through individual actions that chip away at this toxic dynamic. - **Stay grounded.** The most extreme voices get amplified *because* they're extreme, not because they're representative. - **Tone it down.** Stop dunking on bad takes. *Your outrage feeds the algorithm*, destroying our ability to think. - **Turn it off.** During breaking news, step away. *The misinformation-to-truth ratio is staggering*. - **Open up.** Ask questions from *curiosity*, not *judgment*. People are more complex than their tribe. - **Practice principle.** Call out your side when they're wrong. Credit the other side when they're right. These small tribal betrayals rebuild moral reasoning. The question isn't whether you engage in tribal thinking - you do. The question is whether you can occasionally step outside it and ask: would I support this if the other tribe did it? Would I oppose this if my tribe did it? Can I name a principle I'd maintain even if it cost my tribe power? If you can't, then you're not engaged in moral reasoning. You're just playing team sports with higher stakes. **The test is simple:** Would you support this if the other tribe did it? 
If not, you're not doing moral reasoning - you're playing team sports with democracy as the stakes. --- Title: Hurry up and wait Subtitle: The timeless struggle with time: A reflection on parenthood and mortality Description: Parenthood and mortality are two of the most profound struggles humans face. The passage of time, both in the daily grind and the long-term perspective, is a constant reminder of our finite existence. This blog explores how we grapple with time, from the daily challenges of parenting to the anticipation of future events. Date: 9/6/2025 Source: boleary.dev Tags: Personal One of the worst things that I think you can say to a new parent is "It goes by so fast, enjoy it!" When you have a newborn baby, time is one of those human constructs that goes out the window first. You're in survival mode—for both you and the baby. In fact, it's babies that most convince me that time is one of the things that human consciousness grapples to understand the most, yet understands the least. Sure, it is natural for us to go through the days and weeks in cycles—sleeping at night and having a routine in our days. But no one tells a newborn that. I have four kids, and I'm fairly confident that with each of them in the first few months of life, I took them to the window in the wee hours of the night, pointed outside, and said, "Look. It's nighttime. This is when you're supposed to be sleeping," only to be met with a blank stare. Or maybe some questionable intestinal sounds. And that is why it can be especially cruel to point out to new parents, who are just trying to survive hour by hour, minute by minute, on little-to-no-sleep, clothing of questionable cleanliness, and certainly no personal hygiene routine, that they will "miss this one day." Perhaps they will, but even if you're right, now is not the time. When you're fighting for survival, reminders of time aren't helpful. In fact, that can hurt your ability to see what's right in front of you. 
There's another phrase people say when talking about parenthood that I also think applies more broadly to the human desire and failure to grapple with time. "The days are long, but the years are short." This at least acknowledges the fact that days can be hard. Making meals, dealing with tantrums, band-aiding cut knees. The daily life of families makes for long days, ending up (often) in exhausted parents who are in bed minutes after the children are. And yes, as the kids grow and go off to school, play sports, spend more time with friends than at home, you can be nostalgic for the time they were more dependent on you for their every need. But two things can be true: that nostalgia and the fact that it is hard, _damn hard_, to have another human being rely on you that way. Those two clichés—"It goes by so fast" and "The days are long but the years are short"—aren't the only clichés about time. In fact, humans have been grappling with the passage of time for so long that I think it may account for some of the oldest clichés in the book. It's challenging to pin down why time is such an obsession of the human mind. Perhaps it is what comes with a consciousness that is capable of understanding its own mortality. Knowing that a resource is finite causes any animal—human or not—to hoard and obsess over it. With the consciousness of time comes another feeling. One that, for someone like me who is prone to a significant amount of anxiety, can be more devastating still. Anticipation. And yet another time cliché comes to mind whenever I think of that anticipation: hurry up and wait. Well...I can't wait. It's difficult to decide what is worse to wait for—something good or something bad. In your adult life, you find many of both. A long-awaited vacation. Results of a biopsy. Visits from friends. The next shoe to drop at work. A big (good) milestone for your child. A big (bad) milestone for your child. 
I find that all of these things can be complicated to handle as a creature acutely aware that the time we have to wait for them is, in the end, actually all we have. The pressure to both enjoy the time we have and plan for the future we know is inevitable can be daunting.

This can be especially hard when dealing with grief and loss—or anticipating grief and loss. Because what, after all, are we worrying about losing when someone is sick or waiting for news from a doctor? It is time. The thing that we share with each other. The thing that bonds us to one another. The thing that we only have so much of is... time.

Time waits for no one. Time is of the essence. Time is money. Time flies when you're having fun. Time marches on. Only time will tell. Time is running out. Make time. Kill time.

There's no time like the present? The only time we have **is the present**. And that is perhaps the most disturbing part of it all. We only have the present. Past and future are concepts, but only that. The only time we actually have is right now.

That's what makes having a baby _and_ sending that child to school hard. That's what makes the daily routine _and_ anticipating the future hard. But it's also all we have, so we better make the most of it. While we still have the time.

---

Title: The 10 Year Old Trip: One Kid, One Parent, One Perfect Tradition
Subtitle: How we accidentally discovered the best parenting hack for large families
Description: When you have four kids, individual attention becomes a rare commodity. The '10 year old trip' tradition gives each child their moment to be the only decision-maker while creating memories that last a lifetime.
Date: 8/23/2025
Source: boleary.dev
Tags: Personal

I always say we have **too many kids**. Four kids, to be exact. It's somewhat in jest, but also... it's true. When you're trying to get a family of six moving anywhere, "vacation" becomes a relative term.
We go to the beach, sure, but let's be honest—it's not a vacation for my wife and me. It's a trip. The kids have a blast, but for us? It's a lot of work. That's why when my wife stumbled across another large family's [instagram posts](https://www.instagram.com/kathrynwhitakertx/) about their "10 year old trip" tradition, it immediately resonated. Here was a family that understood the chaos, the constant compromises, the reality that individual attention becomes a rare commodity when you're outnumbered. ## The Tradition The concept is simple: when each child turns 10, they get to pick one parent and one destination (within reason, in the continental United States) for a solo trip. Just them, one parent, and complete decision-making power. Think about it—when do kids in large families ever get to make *all* the decisions? Usually, it's a negotiation between siblings, a compromise that leaves everyone partially satisfied but no one completely happy. Not on the 10 year old trip. This is their show. ## Our Adventures So Far ### Trip One: Legoland and Beyond Our oldest son chose me and Legoland in Florida. Right off the bat, he was making the big calls. When I explained that staying on the Legoland resort property was expensive, but if we stayed off-property we could add a day at Harry Potter World with the money we saved, he didn't hesitate. "Let's do Harry Potter World too!" The best part? I surprised him with a "last day before the plane home" trip to Cape Canaveral. He's obsessed with space and NASA, so watching his face light up at Kennedy Space Center was worth the extra driving. When you only have one kid to focus on, you can actually see these moments instead of managing three other meltdowns. ### Trip Two: Disney Magic My daughter, two years younger, chose my wife and—in what I had expected would be the template for most 10 year old trips—Disney World. But here's where the tradition surprised us again. Turns out, she's absolutely fearless on roller coasters. 
With just one kid to focus on, my wife could actually enjoy watching her daughter's pure joy instead of worrying about where the other three were or who needed what. They rode everything, multiple times, without the usual "but I don't want to" chorus from siblings.

### Trip Three: The Great Outdoors

This year, I'm heading to Yellowstone with our third. After two theme park adventures, we're going completely different—national parks, hiking, wildlife. And in retrospect it makes sense as a break for our son from his siblings. I'm sure he'd love Disney World, but being a little more "introverted and skinned knees from climbing trees" than his older siblings, this just fits. It's going to be fascinating to see how this plays out compared to the structured entertainment of the previous trips.

## The Real Magic

Here's what I've learned: when you're used to managing four kids, having just one is incredibly freeing. (My wife and I have a saying: "one is the new zero.") You can actually have conversations. You can be spontaneous. You can say "yes" to things you'd normally have to say "no" to because of logistics.

When we travel with all four kids, I don't call it a vacation. It's a trip. A vacation for the kids, but work for us parents. The 10 year old trip? That's _actually_ a vacation.

But beyond the logistics, there's something deeper happening. These kids get undivided attention in a way that's nearly impossible in daily life with a household of six. They get to be heard, to have their preferences matter completely, to experience what it feels like to have to and get to make all the decisions for a few days.

## Why Structure Matters

This tradition works because it gives kids something concrete to look forward to. Structure and tradition aren't just nice-to-haves in large families—they're survival tools. The more predictable rhythms you can build into their lives (daily, weekly, etc.), the better they handle the chaos that inevitably comes with having siblings.
It's the same principle they tell you about with babies: structure helps them understand and adjust to the rhythms of life. But here's the thing—every kid is different. Some take to structure naturally, others push against it. The 10 year old trip works because it honors both needs: the structure of a family tradition and the individual exploration of personal choice. ## The Bigger Picture What started as a practical solution to the "too many kids" problem has become something much more meaningful. Each child knows their turn is coming. They watch their older siblings go and start planning their own adventure. They learn that being part of a big family doesn't mean getting lost in the crowd—it means your individual moment will come, and when it does, it'll be completely yours. The tradition also teaches them about decision-making in a low-stakes environment. Want to spend the whole day at one attraction? Your call. Prefer to hop around? Also your call. These are the kinds of choices they rarely get to make unilaterally at home. ## Looking Forward As we gear up for Yellowstone, I'm curious how this outdoor adventure will compare to the theme park experiences. Will the lack of structured entertainment make the one-on-one time even more meaningful? Will hiking trails lead to deeper conversations than roller coaster lines? What I know for sure is that this tradition has become one of our family's most treasured practices. It acknowledges the reality of large family life while creating space for individual connection. It gives each child something uniquely theirs in a household where sharing is the default. And honestly? It gives us parents a chance to remember why we love spending time with our kids in the first place, without the constant referee duties that come with the full crew. The 10 year old trip isn't just about the destination—it's about the rare gift of undivided attention in a divided world. 
For families like ours, that might be the most valuable vacation of all.

---

Title: Grace Hopper Would Love AI Coding
Subtitle: From machine code to natural language: Admiral Hopper's legacy lives on.
Description: Admiral Grace Hopper spent her career fighting the assumption that humans had to adapt to machines, insisting machines should understand us. Today's AI coding assistants are exactly what Amazing Grace predicted.
Date: 8/19/2025
Source: boleary.dev
Tags: AI, Tech, History

Admiral [Grace Hopper](https://en.wikipedia.org/wiki/Grace_Hopper) carried wire pieces [in her purse](https://www.youtube.com/watch?v=si9iqF5uTFk) - 11.8 inches for a nanosecond, nearly 1,000 feet for a microsecond. These weren't random lengths. They represented the maximum distance electricity could travel in those tiny time fractions. She'd hand them to confused engineers during meetings, watching their faces change as the abstract became tangible.

*"I sometimes think we ought to hang one over every programmer's desk so they'll know exactly what they're throwing away when they throw away a microsecond."*

Would Admiral Hopper have used a Mac? Or Arch Linux btw?

But Hopper's real genius wasn't making time visible - **it was making [computers speak human](https://www.youtube.com/watch?v=1LR6NPpFxw4)**. She spent her career fighting the assumption that humans had to adapt to machines, insisting machines should understand us.

I think about those wires and Admiral Hopper when I'm prompting Claude in plain English and watching code appear. We're living through exactly what Amazing Grace predicted.

## Beyond the Model T

In 1983, when Morley Safer suggested the computer revolution was over, Hopper [disagreed](https://www.youtube.com/watch?v=1LR6NPpFxw4): *"We're only at the beginning. We've got the [Model T](https://en.wikipedia.org/wiki/Ford_Model_T). That's where we are now."*

She was right. If 1983 was the Model T, today's AI coding assistants are self-driving cars.
We've gone from her [Mark I](https://en.wikipedia.org/wiki/Harvard_Mark_I) computer doing three calculations per second to language models that take in 100,000-token prompts while understanding natural language. But again, the real revolution wasn't about speed—it was about making computers accessible to human thinking.

## The Vision She Fought For

When Hopper proposed English-like commands in the 1950s, colleagues said it was impossible. *"I was told very quickly that I couldn't do this because computers didn't understand English."*

But she had a different vision. She saw two kinds of users: *"people who liked using symbols—mathematicians and people like that"* and *"people who were in data processing who hated symbols, and wanted words."*

Her breakthrough came with [FLOW-MATIC](https://en.wikipedia.org/wiki/FLOW-MATIC), developed in the mid-1950s—the first programming language using English-like statements. Commands replaced cryptic symbols, and FLOW-MATIC can be understood even by today's programmers who may have never seen it before:

```
COMPARE PRODUCT-NO (A) WITH PRODUCT-NO (B) ;
  IF GREATER GO TO OPERATION 10 ;
  IF EQUAL GO TO OPERATION 5 ;
  OTHERWISE GO TO OPERATION 2 .
```

But like all great languages, FLOW-MATIC's legacy lives on in something even bigger. And that would be [COBOL](https://en.wikipedia.org/wiki/COBOL), introduced in 1959 as the Common Business-Oriented Language. Programming finally looked less like advanced mathematics and more like structured English.

## Programming in Plain English

Hopper kept pushing for actual English sentences that computers could execute. Today, I write to GPT-5: *"Build me a React component that displays a user's profile with their avatar, name, and bio. Make it responsive and include a follow button."*

And it works. This isn't sophisticated autocomplete. These models understand context, intent, and implied requirements—exactly what Hopper envisioned.
The progression is clear:

- Machine code and punch cards (1940s-1950s)
- Assembly language and early compilers (Hopper's era)
- High-level languages like COBOL (1960s-1970s)
- Personal computers, BASIC and the like (Hopper's "Model T" era)
- Modern specialized high- and low-level programming languages (1990s-2000s)
- Natural language programming with AI (today)

We've reached Hopper's ultimate goal: the primary interface between humans and computers is natural language itself.

## Making the Magic Concrete

Just like Hopper with her wires, let's make LLMs less of a black box.

Think of tokens as LEGO bricks. Each token represents roughly three-quarters of a word. When you prompt an LLM, you're giving an impossibly fast robot LEGO bricks and asking it to build something meaningful.

That robot sees up to 128,000 bricks at once—that's roughly 400-500 pages of text or about 25,000 lines of code. It processes 50-120 bricks per second depending on the model.

Put that in perspective: A fast typist at 100 words per minute produces about 133 tokens (LEGO bricks) per minute, or about 2.2 per second. Modern LLMs like Claude Sonnet 4 (50-60 TPS) or Gemini Flash (~100 TPS) process 25-50 times faster than a speed typist.

In one microsecond—the time electricity travels Hopper's 984-foot wire—the LLM produces only about 0.0001 of a token; even at machine speed, a single token takes on the order of 10,000 microseconds. A human typist needs 30-60 seconds to produce what the AI generates in one second.

In one full second? 50-120 tokens of coherent response. That's what would take a speed typist 25-60 seconds to type, but the AI is simultaneously understanding context, following instructions, and generating syntactically correct code.

This isn't magic. It's very fast pattern matching at incredible scale, constrained by the same physical laws Hopper's wires represented.

## Time to Pay Attention

Admiral Hopper would agree: software engineers need to pay attention.
Too many developers make the same mistake Hopper's contemporaries made—dismissing fundamental shifts because they don't fit existing mental models. They see AI coding as "just autocomplete" or "vibe coding that won't last." Even worse, they say Hopper's most dangerous phrase: *"We've always done it that way."* Hopper also understood: when you make computers more accessible to human thinking, you don't replace creativity—you amplify it. Engineers who learned COBOL didn't become obsolete when it replaced machine code. They became more productive. The same thing is happening now, faster and more dramatically. Hopper also kept a counterclockwise clock in her office to remind people there was never any fundamental reason clocks had to run clockwise—just what people were used to. There's no fundamental reason programming has to be about memorizing syntax and fighting compilers. That was just what we were used to. ## What Would She Have Thought? I imagine Hopper pulling out LEGO bricks instead of wires. *"This is a token. Now imagine a machine that understands your entire blueprint and builds whatever you describe, as fast as electricity travels down this wire."* And then pull out the LEGO Eiffel Tower. At 10,001 pieces, it would take an LLM just a minute or two to assemble. It takes humans about [15-20 hours](https://time2brick.com/?p=48). Thankfully, Chris Adams [time-lapsed his 14 hour build](https://www.youtube.com/watch?v=m6XN5GQ4BPg) to show it at LLM speed. 
The parallels run deeper still:

- **Then**: COBOL made programming readable to business people. **Now**: LLMs make programming accessible to anyone who can describe what they want.
- **Then**: She created the first compiler to translate human-readable code. **Now**: We have AI translating natural language directly into code.
- **Then**: She fought "we've always done it that way" thinking. **Now**: We need to challenge assumptions about what programming looks like.

This isn't just about making programming easier—it's about democratizing computation itself. Just as COBOL broke programming out of mathematicians' exclusive realm, AI is breaking it out of professional developers' exclusive realm. Product managers build prototypes. Teachers create educational tools. Kids make games by describing them.

## The Wire Forward

Hopper's nanosecond wire made the abstract concrete so people could reason with it and transcend it. We need to do the same with AI coding tools. Stop treating them as magical black boxes. Understand them as very fast pattern matchers working with discrete meaning units at incredible scale.

Grace Hopper proved human-computer relationships didn't have to be adversarial—they could be collaborative. She laid the groundwork for everything we're experiencing now.

> Don't let yourself say "we've always done it that way."

I wish she could have seen Claude write COBOL from casual English. [Here's a program](https://gitlab.com/-/snippets/4880775) that prints the scale of a nanosecond, microsecond, and second, made from the prompt "Can you write me a COBOL program that prints out a scaled length of a nanosecond and a microsecond in the output?"

Hopper would have smiled, pulled out a wire, and repeated what she told Safer: *"We're only at the beginning."* To developers dismissing AI coding as a fad, I imagine her adding, *"Pay attention. We've moved beyond the Model T I talked about in 1983.
This is what I've been telling you would happen for sixty years."*

**The future is human-friendly computing.** Admiral Hopper would be proud—and she'd want you to [get on board](https://kilocode.ai/).

Originally published here: https://blog.kilocode.ai/p/grace-hopper-would-love-ai-coding

---

Title: When Republics Fail
Description: Examining historical patterns of democratic collapse and how the U.S. Constitution was deliberately designed with checks and balances to prevent executive overreach, legislative inaction, and public acquiescence that doomed republics like Weimar Germany and early Republican China.
Date: 3/20/2025
Source: boleary.dev
Tags: History

import Callout from "../../../../components/Callout.astro";

History offers sobering examples of republics that collapsed into authoritarian regimes. In each case we see common patterns. This includes very well known falls like the [Weimar Republic's fall into Nazi Germany](https://en.wikipedia.org/wiki/Weimar_Republic), lesser-known instances like the [early Republic of China (1912-1916)](https://en.wikipedia.org/wiki/Republic_of_China_(1912%E2%80%931949)) under Yuan Shikai, and many older attempts at permanent republican societies.

What does it look like? Ambitious leaders exploit crises to **overreach their executive power**. Lawmaking bodies **fail to check** these abuses in a meaningful way. And then ordinary people **acquiesce**, sometimes even welcoming strongman rule during chaos.

Even though many of the examples of these failures we may think of occurred well after the Revolutionary War in the United States, the founders were keenly aware of these failure modes. They designed the U.S. Constitution with deliberate checks and balances to **prevent the fate of past republics**, drawing on lessons from history - and documenting many of those key insights in the [Federalist Papers](https://guides.loc.gov/federalist-papers/full-text).
I wanted to take a closer look at this, and writing this blog became my excuse to do so. I want to focus on the three steps I mentioned: executive overreach, legislative inaction, and public acquiescence - and then consider how the United States is specifically designed to prevent the types of republican failures we've seen in the past.

**Table of Contents**

- [Executive Overreach: When Leaders Seize Too Much Power](#executive-overreach-when-leaders-seize-too-much-power)
  - [American safeguards against executive tyranny](#american-safeguards-against-executive-tyranny)
- [Legislative Inaction: When Parliaments Fail to Check Tyranny](#legislative-inaction-when-parliaments-fail-to-check-tyranny)
  - [American safeguards in the legislature](#american-safeguards-in-the-legislature)
- [Public Acquiescence: When Citizens Surrender Liberty for Security](#public-acquiescence-when-citizens-surrender-liberty-for-security)
  - [American perspective on public vigilance](#american-perspective-on-public-vigilance)
- [How the U.S. System Was Designed to Endure](#how-the-us-system-was-designed-to-endure)
- [Lessons for today](#lessons-for-today)

## Executive Overreach: When Leaders Seize Too Much Power

A republic entrusts elected leaders with limited powers, not unchecked authority. **Executive overreach** occurs when a leader subverts these limits and accumulates power that no single branch should hold. The Founding Fathers warned that concentrating all government powers in one person or body is tyranny: “The accumulation of all powers, legislative, executive, and judiciary, in the same hands...may justly be pronounced the very definition of tyranny,” James Madison wrote in [Federalist No. 47](https://teachingamericanhistory.org/document/federalist-47-federalist-48-and-federalist-51/).

To prevent this, the U.S. Constitution creates separate branches and distributes authority - making them "co-equal branches" of the federal government. As [Federalist No.
51](https://avalon.law.yale.edu/18th_century/fed51.asp) explains:

> "Ambition must be made to counteract ambition."

Each branch must have the means and motive to resist encroachments by the others. As Madison put it, if _"men were angels, no government would be necessary"_; since they are not, the government must be obliged _"to control itself"_ through internal checks and balances.

So if that is true, how has executive overreach historically broken down these checks and balances?

**Nazi Germany (Weimar Republic's collapse):** In the early 1930s, Germany's Weimar Republic was a democracy on paper, but economic depression and political deadlock created an opening for executive abuse. After Adolf Hitler was appointed chancellor in January 1933, he moved swiftly to dismantle checks on his power. Exploiting emergency provisions, Hitler persuaded President Hindenburg to issue the Reichstag Fire Decree, suspending civil liberties. He then pushed through the [**Enabling Act of 1933**](https://en.wikipedia.org/wiki/Enabling_Act_of_1933), a law that gave Hitler's cabinet the power to enact laws without Reichstag (parliament) or presidential approval. This Act effectively let Hitler bypass the constitution's checks and rule by decree, laying the legal foundation for his dictatorship. By combining the offices of President and Chancellor after Hindenburg's death, Hitler completed his power grab and proclaimed himself _Führer_, wielding absolute authority. In short, Hitler **manipulated the democratic system to destroy democracy and create a dictatorship**. The Weimar Republic's fate shows how quickly an executive can accrue near-absolute power in a crisis, especially if constitutional safeguards are weak or ignored.

**China's First Republic:** The [**early Republic of China**](https://en.wikipedia.org/wiki/Republic_of_China_(1912%E2%80%931949)) offers a lesser-known but vivid example of executive overreach.
After the 1911 revolution ended imperial rule, a constitutional republic was proclaimed in 1912. General **Yuan Shikai** soon became president, and rather than uphold republican institutions, he systematically undermined them. In 1913, when the new National Assembly and elected parliament asserted themselves, Yuan resorted to violence and coercion. He allegedly ordered the assassination of Song Jiaoren, the leader of the majority [Kuomintang](https://en.wikipedia.org/wiki/Kuomintang) (Nationalist) Party, who was a pro-democracy advocate. Yuan then banned the Kuomintang, purged hundreds of opposition lawmakers, and ultimately dissolved Parliament outright in 1914. Ruling by decree and backed by the Beiyang Army, Yuan stripped away the checks on his authority and even proclaimed himself Emperor for life in 1915. This was the ultimate act of overreach: **a president of a republic crowning himself monarch**. Yuan's power grab quickly eroded China's first experiment in republican governance. As one historian noted, **“China's first attempt at a democratic transition utterly failed”** once Yuan dismantled the nascent constitutional system ([A Forgotten Experiment | British Journal of Chinese Studies](https://bjocs.site/index.php/bjocs/article/view/54)). Within a few years, the republic had collapsed into warlordism - a direct outcome of one man concentrating all power in his own hands.

### American safeguards against executive tyranny

The framers of the U.S. Constitution, having _just_ defeated monarchical rule, created a presidency with significant but strictly limited powers. Presidential authority is systematically checked by Congress and the Judiciary at every turn. Presidents cannot unilaterally create or modify laws—all legislation requires Congressional approval. Executive orders and emergency powers remain subject to judicial review and congressional limitation.
Congress controls funding, conducts oversight, and can impeach presidents who exceed legal boundaries. This structure embodies Madison's principle that liberty requires that no branch exercise another's powers. Unlike in failed republics, American presidents cannot dissolve legislatures or rewrite the Constitution, even during crises.

Washington himself warned in his [Farewell Address](https://avalon.law.yale.edu/18th_century/washing.asp):

> the spirit of encroachment tends to consolidate the powers of all departments in one, and thus create... a real despotism.

By design, an American executive who attempted Yuan Shikai's or Hitler's path would quickly run up against Congress, the courts, the states, a free press, and ultimately the voters. This doesn't mean a president _could never_ abuse power, but the **hurdles are far higher** because of the constitutional circuit breakers installed by the Founders.

## Legislative Inaction: When Parliaments Fail to Check Tyranny

In a healthy republic, the legislature is meant to be a robust check on the executive. If the legislative branch becomes ineffective, cowardly, or complicit, an aspiring autocrat faces little resistance in taking control. The Founders understood that tyranny can arise not only from an over-mighty executive, but also from a **failure of the legislature to assert its authority**. As Madison observed, tyranny can spring from “the intrigues of the executive” if not adequately checked by a representative assembly. In [Federalist No.
51](https://avalon.law.yale.edu/18th_century/fed51.asp), he argued each branch must have the will and means to defend its own powers, making **“the legislative authority … to be divided”** and balanced so it cannot be easily usurped. The U.S. Constitution thus vests Congress with extensive powers and internal checks (two houses with different constituencies and terms) to ensure energy and independence in the legislative branch. These measures were intended to prevent legislative paralysis or surrender in the face of executive ambition.

**Nazi Germany:** The demise of Germany's democratic legislature in the 1930s is a cautionary tale of legislative failure. The Weimar Republic's Reichstag was crippled by political infighting and economic paralysis, enabling Hindenburg to govern through emergency decrees, bypassing parliament. By 1933, the Reichstag had already surrendered much of its authority. When Hitler proposed the Enabling Act to abolish parliamentary oversight, legislators capitulated without meaningful resistance. Centrist and right-wing parties voted to transfer full legislative powers to Hitler's cabinet for four years. Though intimidation factored in—SS troops surrounded the building and opposition members were silenced—elected officials legally dissolved their own authority. This self-neutering removed the final check on Hitler's power. The Reichstag became merely ceremonial while Hitler's inner circle controlled lawmaking, a decisive step in democracy's collapse. This underscores how a legislature's **weakness or indecision** can enable authoritarianism just as surely as an ambitious executive can.

**Early Republic of China:** In China's short-lived 1912-1916 republic, parliament failed to prevent autocracy due to Yuan Shikai's direct suppression. Following the 1913 elections, the Chinese Parliament was tasked with drafting a constitution to limit presidential power. Yuan preemptively crushed this threat by forcefully dismantling the legislature.
When lawmakers demonstrated independence, Yuan expelled opposition members en masse, stripping all 438 Kuomintang representatives of their seats and eliminating the majority party. Without a quorum, parliament stalled, and by January 1914, Yuan dissolved it entirely, ending constitutional development. Unlike in Weimar Germany, Chinese legislators actively resisted—some even launching the "Second Revolution" uprising - but Yuan's military quickly crushed this opposition. The Chinese Parliament's fate demonstrates that constitutions are only as strong as the officials willing to enforce them; when legislators lack the institutional strength to check executive overreach, written protections prove inadequate.

### American safeguards in the legislature

The U.S. Founders prioritized legislative strength, recognizing Congress as the people's true representative. They established it as co-equal - initially even considering it preeminent - and equipped it with powerful checks against executive overreach. Congress alone can declare war, control spending, and confirm appointments. Its bicameral structure prevents hasty legislation and resists factional domination. Madison's principle that "ambition must counteract ambition" ensures each branch defends its authority when others encroach. We've seen this system be effective before in American history:

* Truman's steel mill seizure was blocked by the Supreme Court in [Youngstown (1952)](https://en.wikipedia.org/wiki/Youngstown_Sheet_%26_Tube_Co._v._Sawyer).
* [Andrew Johnson's impeachment in 1868](https://en.wikipedia.org/wiki/Impeachment_of_Andrew_Johnson) - the first use of this Congressional power - checked his violation of the Tenure of Office Act.
* Nixon's impoundment of funds prompted Congress to pass the [Budget and Impoundment Control Act (1974)](https://en.wikipedia.org/wiki/Congressional_Budget_and_Impoundment_Control_Act_of_1974) and assert Congressional control over the purse.

While this framework has generally succeeded, it depends on legislators actively using their powers. The Constitution provides tools but requires representatives with the courage to uphold their oaths regardless of party loyalty. This is a responsibility modern Americans must demand of their Congress. When that equilibrium fails, history shows **democracy itself** is in peril.

## Public Acquiescence: When Citizens Surrender Liberty for Security

The final (and perhaps most crucial) factor in a republic's collapse is the role of the people themselves. Even the best-designed institutions can falter if the public either actively supports an authoritarian turn or passively allows it. **Public acquiescence** can take the form of voters electing anti-democratic leaders, or simply a population that - out of fear or despair - doesn't resist the erosion of their freedoms. The Founders recognized that no constitutional safeguards can save liberty if the people do not value and defend it. Ultimately, as Madison noted, _"a dependence on the people is, no doubt, the primary control on the government"_. At the same time, the Framers understood that in times of crisis, people might trade liberty for security. This is why they created **"auxiliary precautions"** - checks and balances, independent courts, federalism with powers reserved to the states, etc. - to secure rights even when popular sentiment wavers. Still, a republic's fate rests on its citizens' vigilance.
When Benjamin Franklin was asked about the Constitutional Convention's result, he famously replied they had created _"a Republic, if you can keep it."_ This proviso underscores that public commitment is the ultimate safeguard. Franklin warned the government could:

> only end in despotism…when the **people shall become so corrupted as to need despotic government**.

If citizens grow indifferent or excessively partisan, or prefer a strongman's false comfort, no paper constitution will save the republic.

**Nazi Germany:** Weimar democracy's collapse required both Hitler's ambition and public acceptance of authoritarian rule. Though the Nazis used terror against opponents, they also enjoyed genuine popularity. Many Germans willingly exchanged democratic freedoms for promises of economic recovery and national restoration. Millions, exhausted by unemployment and political dysfunction, embraced extremist parties. Hitler explicitly promised both economic revival and "strong authoritarian government" - with many voters willing to sacrifice democracy for perceived strength and stability. Once in power, Nazi propaganda reinforced compliance by portraying Hitler's leadership as superior to democratic division. When Hitler implemented radical measures (e.g. the [Night of the Long Knives](https://en.wikipedia.org/wiki/Night_of_the_Long_Knives) or the [Nuremberg Laws](https://encyclopedia.ushmm.org/content/en/article/the-nuremberg-race-laws)), public resistance was minimal. By the time the true horrors of the Nazi regime became apparent, it was too late to effectively oppose it.

Weimar's fall demonstrates how **democratic norms quickly erode when public faith falters**. Economic crisis and national resentment created demand for a "strong hand" that the Nazis exploited. Hitler succeeded not merely through force, but through a population that initially accepted democracy's dismantling from a mixture of hope, fear, and indifference.
**Early Republic of China:** In China's case, public opinion was harder to gauge given lower literacy and limited mass politics, but evidence shows many Chinese elites and citizens accepted Yuan Shikai's authoritarian turn, fearing continued instability. Yuan's 1913-1914 crackdown faced resistance from Sun Yat-sen's supporters, but he also received significant backing from influential figures. The Progressive Party sided with Yuan against the Kuomintang, while reformers like Liang Qichao defended his harsh measures, calling rebels "more disastrous than deluge and wild animals"—prioritizing order over democracy. Tang Hualong even advocated "immediate extermination" of Yuan's opponents, revealing that even educated elites accepted strongman rule as necessary stabilization. The general public, particularly in northern provinces under Yuan's military control, failed to defend parliament. After decades of imperial decline and civil conflict, many Chinese desired unity and strong leadership at the cost of republican principles. Only when Yuan declared himself Emperor did he lose the support of provincial governors and military leaders. By then, the republic was effectively finished, collapsing into warlordism after Yuan's death. China's experience demonstrates how public acquiescence often stems from exhaustion—people may accept authoritarianism when they fear democratic instability more than dictatorial control. Without established democratic traditions, following a strongman became the path of least resistance.

### American perspective on public vigilance

The United States was founded on a paradox: profound _distrust_ of concentrated power, yet great _trust_ in the ordinary citizen. The Founders believed an informed citizenry forms the foundation of a lasting republic. Recognizing human fallibility, they designed a system encouraging participation while tempering sudden passions.
Regular elections ensure accountability, while staggered terms and independent institutions **prevent momentary majorities from permanently upending the system**. Hamilton recognized that popular majorities can make mistakes, so the Constitution filters public impulses without thwarting the public interest. Madison argued in Federalist 63 that the Senate could check the "people's momentary errors" through deliberation and long-term thinking. The Bill of Rights protects fundamental liberties **even if a fearful majority would abandon them**. These mechanisms maintain freedom in **good times and bad**—when citizens are vigilant and when they aren't. The Founders emphasized civic education, with Adams noting, "Liberty cannot be preserved without general knowledge among the people."

**The American system requires active public participation in safeguarding the republic**. Citizens must vote, question leaders, and reject blind loyalty to individuals or parties. Franklin's warning that government ends in despotism when people become corrupted implies _each generation must earn its republic anew_. Modern Americans face this responsibility. Becoming polarized, apathetic, or fearful would repeat past republics' failures. Our Constitution provides the framework, but **depends on the people** to enforce it. America's democratic culture, while stronger than 1930s Germany's or 1910s China's, requires constant civic engagement and prioritizing country over party. Throughout our history, public courage—from voting out corrupt officials to exposing wrongdoing—shows a republic that has, thus far, chosen vigilance over complacency.

## How the U.S. System Was Designed to Endure

The examples of Weimar Germany and early Republican China underscore why the U.S. Founders were almost obsessive about **checks and balances**.
They knew from their study of history (including the ancient Roman Republic's fall and the English Civil War) that giving any individual or group unchecked power was a recipe for disaster. The Constitution's **separation of powers** and **federalism** (splitting authority between state and federal governments) create multiple barriers to tyranny. As Madison noted, in the “compound republic of America,” power is divided both horizontally and vertically - a “double security” for the rights of the people. If an authoritarian movement were to capture one center of power, the others could resist. For instance, even if the executive branch veers toward autocracy, Congress can block its agenda, states can refuse unconstitutional directives, and courts can invalidate illegal orders. Conversely, congressional demagogues face presidential vetoes and judicial review. This system of **co-equal rivals, forced to cooperate but empowered to check each other** was deliberately designed.

The American system also includes crucial safety valves absent in failed republics. **Regular, free elections** provide peaceful means to remove power-abusing leaders. **Freedom of speech** enables critics to raise alarms against authoritarian behavior before it's too late. An **independent judiciary** upholds rule of law against authoritarian impulses. And our constitutional culture ensures that even ambitious politicians typically justify actions in legal terms rather than dismissing constitutional constraints outright. This culture serves as a vital safeguard: when President Nixon trampled legal boundaries during Watergate, even members of his own party chose Constitution over cult of personality, forcing his resignation rather than allowing the rule of law to crumble.
Today's far deeper partisan divisions demand even greater moral courage - we urgently need leaders willing to risk political exile by standing against their own faction when constitutional principles are at stake, before our democratic guardrails collapse entirely.

## Lessons for Today

These collapsed democracies aren't distant curiosities - they are **urgent warnings**. The toxic combination that destroyed previous republics - social division, extreme polarization, charismatic demagogues, cowed legislatures, and citizens trading freedom for false security - already casts shadows across our own landscape. America has weathered democratic crises before and faces profound challenges now. Our survival depends entirely on whether our institutions and civic culture possess the resilience to withstand mounting pressures. The constitutional architecture the Founders designed remains brilliant, but like any structure, it deteriorates without constant maintenance and without answering hard questions:

* When an executive grasps for unconstitutional power, will Congress defend its authority with unwavering resolve—even against a president of their own party? Or will they, like the Reichstag, sacrifice democratic principles on the altar of partisan loyalty?
* When confronted with genuine national crisis, will Americans recommit to constitutional principles, or will we surrender to the seductive promise of a strongman who pledges to "do whatever it takes"?
* When courts face overwhelming pressure to validate unconstitutional actions in the name of "national security" or "emergency powers," will they stand as the Youngstown Court did, or capitulate like Weimar's judges? Our judiciary's independence remains democracy's last line of defense.
We've resisted authoritarian temptations before: Washington rejected kingship by establishing the peaceful transfer of power; our military has maintained steadfast civilian subordination; and even after 9/11's trauma, we maintained public debate over security measures and implemented sunset provisions on emergency powers. Today's fierce conflicts over executive authority, congressional oversight, and democratic norms aren't signs of failure - they're **vital signs** of a republic still fighting for its principles. A dying democracy wouldn't bother with such debates; it would slip quietly into autocracy's embrace.

And today, maybe more than ever, Washington's warning against the "spirit of party" remains relevant:

> The alternate domination of one faction over another, sharpened by the spirit of revenge natural to party dissension...is itself a frightful despotism. But this leads at length to a more formal and permanent despotism.

The choice before us is stark: vigilant citizenship or democratic collapse. History awaits our answer.

**Sources and further reading:**

1. Madison, James. _Federalist No. 47_ (1788) - Warning against consolidation of power ([Federalist 47, Federalist 48, and Federalist 51 | Teaching American History](https://teachingamericanhistory.org/document/federalist-47-federalist-48-and-federalist-51/#:~:text=objection%20is%20founded,the%20federal%20Constitution%2C%20therefore%2C%20really)).
2. Madison, James. _Federalist No. 51_ (1788) - On checks and balances (“Ambition must be made to counteract ambition”) ([The Avalon Project : Federalist No 51](https://avalon.law.yale.edu/18th_century/fed51.asp#:~:text=administer%20each%20department%20the%20necessary,internal%20controls%20on%20government%20would)) ([The Avalon Project : Federalist No 51](https://avalon.law.yale.edu/18th_century/fed51.asp#:~:text=were%20to%20govern%20men%2C%20neither,oblige%20it%20to%20control%20itself)).
3.
_Weimar Republic and Nazi Germany:_ United States Holocaust Memorial Museum, _Hitler's Rise to Power_ - on Hitler using legal means to destroy democracy ( [Hitler Comes to Power: How, When, & Key Dates | Holocaust Encyclopedia](https://encyclopedia.ushmm.org/content/en/article/hitler-comes-to-power#:~:text=In%20January%201933%2C%20Hitler%20did,democracy%20and%20create%20a%20dictatorship) ); Wikipedia, _Enabling Act of 1933_ - law giving Hitler dictatorial powers ([Enabling Act of 1933 - Wikipedia](https://en.wikipedia.org/wiki/Enabling_Act_of_1933#:~:text=Gesetz%20zur%20Behebung%20der%20Not,and%20balances%20in%20the%20constitution)) ([Enabling Act of 1933 - Wikipedia](https://en.wikipedia.org/wiki/Enabling_Act_of_1933#:~:text=The%20election%20was%20a%20setback,With%20the%20Enabling%20Act%20in)); USHMM Holocaust Encyclopedia - Nazi campaign promises of authoritarian government ( [Hitler Comes to Power: How, When, & Key Dates | Holocaust Encyclopedia](https://encyclopedia.ushmm.org/content/en/article/hitler-comes-to-power#:~:text=During%20election%20campaigns%2C%20Adolf%20Hitler,The%20Nazis%20promised%20to) ). 4. 
_Republic of China 1912-1916:_ Britannica, _Yuan Shikai_ - Yuan's undermining of parliament and self-crowning as Emperor ([Yuan Shikai | Chinese President & Warlord | Britannica](https://www.britannica.com/biography/Yuan-Shikai#:~:text=war%20lords%3B%20a%20permanent%20constitution,for%20life%20and%20then%20boldly)) ([Yuan Shikai | Chinese President & Warlord | Britannica](https://www.britannica.com/biography/Yuan-Shikai#:~:text=thus%20bringing%20about%20a%20revolt,forced%20to%20abolish%20the%20newly)); British Journal of Chinese Studies - details of Yuan dissolving Parliament in 1914 ( [A Forgotten Experiment | British Journal of Chinese Studies](https://bjocs.site/index.php/bjocs/article/view/54#:~:text=pro,a%20democratic%20transition%20utterly%20failed) ) and support from elites for his crackdown ( [A Forgotten Experiment | British Journal of Chinese Studies](https://bjocs.site/index.php/bjocs/article/view/54#:~:text=these%20rebels%20Yuan%20ordered%20the,cited%20in%20Shan%2C%202018%3A%20177) ). 5. Franklin, Benjamin. Quote and Constitutional Convention notes (Sept 17, 1787) - _“A republic, if you can keep it”_ ([September 17, 1787: A Republic, If You Can Keep It (U.S. National Park Service)](https://www.nps.gov/articles/000/constitutionalconvention-september17.htm#:~:text=,%E2%80%9D)) and warning that only a virtuous people can sustain free government ([September 17, 1787: A Republic, If You Can Keep It (U.S. National Park Service)](https://www.nps.gov/articles/000/constitutionalconvention-september17.htm#:~:text=they%20are%20such%3B%20because%20I,%E2%80%9D)). --- Title: 2024 in Books Subtitle: Another year of trying to read more Description: A look back at my year in reading through 50 books and nearly 17,000 pages, featuring recommendations across fiction, non-fiction, and everything in between. 
Date: 2/2/2025
Source: boleary.dev
Tags: Learning

import Callout from "../../../../components/Callout.astro";

# 2024 in Books

Like many years before, I aimed to be an avid reader in 2024 - and this time I actually succeeded! I finished 50 books across nearly 17,000 pages this year. Looking at my [StoryGraph](https://app.thestorygraph.com/) stats, it's clear that my reading habits continue to evolve and grow.

## Statistics

A few interesting breakdowns from my reading year, courtesy of The StoryGraph:

- Read 50 books total (up 6% from 2023!)
- The pace was split pretty evenly between medium (56%) and fast (44%)
- September was my most prolific month with 2,251 pages read
- July brought my highest average rating at 5.0
- I explored works from 37 new authors this year, including David Grann, Michelle McNamara, and Bee Wilson
- 14 of the books were part of series

*Books and pages read by month*

*My 5 star books from 2024*

## Best of the Best

### Best Fiction
[**All the Light We Cannot See**](https://amzn.to/3Eh2J58) is a historical novel that follows a blind French girl and a German boy whose lives intersect amidst the devastation of World War II. It emerged as my favorite fiction read of the year. The interweaving narratives and masterful prose created an unforgettable reading experience that stayed with me long after turning the final page.

### Best Nonfiction
[**The Anxious Generation**](https://amzn.to/3Ei0JcC): *How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness* stood out as the most impactful nonfiction book I encountered this year. Its insights into modern challenges and practical wisdom make it essential reading for understanding our current moment. It has been widely (and rightfully) praised as a transformative work that courageously confronts the emotional undercurrents of our time. ## Must Reads Here's a curated selection that I think deserves special attention: ### Must Read for Parents [**The Anxious Generation**](https://amzn.to/4aMdxEy) provides crucial insights for anyone raising children in today's complex world. While potentially uncomfortable at times, it offers practical frameworks for supporting the next generation, and it lives up to the hype you've probably heard about it. ### Must Read for Spiritual Seekers [**Comedy Sex God**](https://amzn.to/40JPP7j) might seem like an unusual recommendation for spiritual reading, but its unique perspective on faith, meaning, and personal growth makes it a standout choice for anyone exploring their spiritual path. ### Must Read for Americans [**The Small and the Mighty**](https://amzn.to/4gp0Ss0) offers essential stories that reshape our understanding of American history through the lens of overlooked change-makers. It's the kind of book that makes you question what else you might not know about our shared history. ## Highly Recommended While there weren't many books I truly disliked this year, several stand out as particularly worthy of recommendation: [**C.S. Lewis's Short Stories**](https://amzn.to/3CIgtoP) Even as someone who has read much of Lewis's work, I was surprised by these stories. They showcase his mastery of narrative while carrying the same philosophical weight we expect from his other works. 
[**Magic Pill**](https://amzn.to/3WFcksL) This book flips our assumptions about health and wellness on their head. It strikes a balance between being scientifically sound and profoundly human. [**Turn the Ship Around**](https://amzn.to/4jGKqWX) Having witnessed my fair share of organizational shifts, I found this book hitting home. It’s not your run-of-the-mill business book—it's a guide to leading meaningful change while holding onto what truly matters. [**The Invisible Life of Addie LaRue**](https://amzn.to/3CHihyo) Perhaps the most inventive premise I encountered this year. The story's meditation on memory, legacy, and what makes a life meaningful is haunting in the best possible way. The fact that 568,701 other StoryGraph users shelved this book suggests I'm not alone in my appreciation. [**The Three-Body Problem**](https://amzn.to/3WJrX26) Science fiction at its finest - thought-provoking, expansive, and deeply original. While the concepts can be challenging, the payoff is more than worth the mental investment. [**Killers of the Flower Moon**](https://amzn.to/3WJrYTI) Reading this before watching the film adaptation proved to be the right choice. David Grann's meticulous research and compelling narrative style bring this tragic piece of American history to life in a way that demands attention. [**Finders Keepers**](https://amzn.to/3WIrOMs) A thriller that manages to be both a page-turner and a meditation on the nature of storytelling itself. It's rare to find a book that succeeds both as entertainment and as literary commentary. ## What's Next? Looking ahead to 2025, I'm already building my to-read list. The stats show I had a good balance of fiction and nonfiction this year, but I might try to push myself into some new genres. According to StoryGraph, my highest-rated reads tended to be reflective, informative, and adventurous with a slower pace - something to keep in mind as I choose my next books. 
I'm particularly interested in exploring more works from the authors I discovered this year. Among the 37 new-to-me authors, several have extensive backlists I'm eager to dive into. 💡 A huge "thank you" to [The StoryGraph](https://app.thestorygraph.com/) - an independent alternative to Goodreads from [Nadia Odunayo](https://nadiaodunayo.com/) that helps me track all of these great books and statistics. Remember, these are just my personal picks based on my own journey with these books. I'm really curious to know what you read in 2024 and what's on your radar for 2025—drop your suggestions in the comments below! ## All Books ## January - *The Paris Apartment* by Lucy Foley - *The Infinite Game* by Simon Sinek ⭐⭐⭐⭐⭐ - *Killers of the Flower Moon* by David Grann ⭐⭐⭐⭐ - *Dune Messiah* by Frank Herbert - *Lessons for Living* by Phil Stutz ⭐⭐⭐⭐⭐ ## February - *All the Light We Cannot See* by Anthony Doerr ⭐⭐⭐⭐ - *Lessons in Chemistry* by Bonnie Garmus ⭐⭐⭐ - *Originals: How Non-Conformists Move the World* by Adam M. Grant - *Day by Day with Saint Joseph* by Ken Lasch, Joseph Champlin - *Finders Keepers* by Stephen King ⭐⭐⭐⭐½ ## March - *End of Watch* by Stephen King - *Leaders Eat Last* by Simon Sinek ⭐⭐⭐½ - *Sex Talks* by Xander Marin, Vanessa Marin - *Radical Candor* by Kim Malone Scott ## April - *The Lincoln Lawyer* by Michael Connelly - *The Brass Verdict* by Michael Connelly ⭐⭐⭐⭐ - *The Art of Writing and the Gifts of Writers* by C.S. Lewis ## May - *I'll Be Gone in the Dark* by Michelle McNamara ⭐⭐⭐⭐ - *Masters of Greek Thought* by Robert C. Bartlett - *Dust* by Hugh Howey ⭐⭐⭐⭐½ ## June - *The Three-Body Problem* by Cixin Liu ⭐⭐⭐⭐ - *Magic Pill* by Johann Hari ⭐⭐⭐⭐½ - *Short Stories* by C.S. 
Lewis ⭐⭐⭐⭐½ - *Oath and Honor* by Liz Cheney ## July - *Last Boat Out of Shanghai* by Helen Zia - *A Long Walk to Water* by Linda Sue Park - *The Dark Forest* by Cixin Liu - *Nuclear War: A Scenario* by Annie Jacobsen - *Comedy Sex God* ⭐⭐⭐⭐⭐ ## August - *1923: The Crisis of German Democracy* by Mark William Jones - *The Invisible Life of Addie LaRue* by V.E. Schwab ⭐⭐⭐⭐¾ - *Where the Mountain Meets the Moon* by Grace Lin - *Out of the Silent Planet* by C.S. Lewis ⭐⭐⭐⭐ - *After Steve* by Tripp Mickle ## September - *Fight Club* by Chuck Palahniuk ⭐⭐⭐¾ - *The Giver of Stars* by Jojo Moyes ⭐⭐⭐½ - *First Bite: How We Learn to Eat* by Bee Wilson ⭐⭐⭐ - *The Woman in the Window* by A.J. Finn ⭐⭐⭐¾ - *Death's End* by Cixin Liu ⭐⭐⭐½ ## October - *Perelandra* by C.S. Lewis - *Turn the Ship Around!* by L. David Marquet ⭐⭐⭐⭐½ - *The Power of Myth* by Bill Moyers, Joseph Campbell ⭐⭐⭐⭐ - *If We Break* by Kathleen Buhle - *A Severe Mercy* by Sheldon Vanauken ⭐⭐⭐⭐⭐ - *The Deadline: Essays* by Jill Lepore ## November - *The Hitchhiker's Guide to the Galaxy* by Douglas Adams - *Creativity, Inc.* by Ed Catmull, Amy Wallace ## December - *Restaurant at the End of the Universe* by Douglas Adams - *The Anxious Generation* by Jonathan Haidt ⭐⭐⭐⭐⭐ - *The Small and the Mighty* by Sharon McMahon ⭐⭐⭐⭐⭐ --- Title: Americans found their X (not Twitter) Subtitle: What Xiaohongshu can teach us about international relations Description: In a delightfully ironic twist, the U.S. government's TikTok ban has pushed Americans toward Xiaohongshu (a more explicitly Chinese government-controlled app), highlighting the growing disconnect between how governments view the world and how their citizens actually live and connect in it. Date: 1/15/2025 Source: boleary.dev Tags: Tech import Callout from "../../../../components/Callout.astro"; There's something delightfully ironic about the U.S. 
government's TikTok ban pushing Americans toward Xiaohongshu - translated as "Little Red Book" but starting to be called "Red Note" in the US, an app that is more explicitly controlled by the Chinese government than TikTok ever was. The memes that have come out of this irony are top tier. ![TikTok Ban Meme 1](./butleriano_tweet.png) But beyond the irony lies a profound lesson about the growing disconnect between how governments view the world and how their citizens actually live in it. ## The Disconnect The government's reasoning for banning TikTok centers on concerns about data collection and foreign influence. And while these aren't invalid concerns in theory, the reality on the ground tells a different story. In the same month that Congress focused on banning an app primarily used for dance videos and cat memes, we've seen [unprecedented drone activity over critical infrastructure in the Northeast](https://wapo.st/42b0PwI) and learned about what's being called ["the worst telecom hack in our nation's history"](https://wapo.st/40x9sAv) - where Chinese hackers gained access to actually listen to phone calls and read text messages. Like...all of them. This stark contrast between what governments prioritize and what actually threatens security isn't just ironic - it's deeply telling about how disconnected policy decisions have become from reality. ## Beyond Politics: Human Connections in a Digital Age What's even more fascinating, and what reinforces this point for me, is watching what's happening on Xiaohongshu right now. Note: I'm going to start calling Xiaohongshu "X (not Twitter)" to piss off Elon. I'll also be calling X "Twitter (not X)." Regular Americans and Chinese citizens are coming together, sharing pictures of their everyday lives, helping each other with language practice, and creating shared cultural moments. 
Americans are captioning their posts in Mandarin while Chinese users create English language tutorials, all while building genuine human (and many many cat) connections that transcend geopolitical tensions. I've seen this before. In times of conflict, regular people find ways to connect **despite** their governments' stances. The human desire to share and support often overcomes imposed barriers. When Russia invaded Ukraine, I saw this firsthand: European colleagues offered homes to Ukrainian teammates, and Russian colleagues openly condemned their government, many seeking to leave. These human connections and reactions were profoundly different from the way governments discussed and treated each other. ## Security Theater vs. Real Threats Let's be clear: while Congress focuses on banning TikTok over theoretical concerns, we have concrete evidence of telecom security breaches that could let foreign actors monitor unencrypted communications. We've also seen unexplained drone activity over sensitive sites—real national security issues needing immediate attention. Data privacy concerns aren't unique to Chinese companies. For decades, foreign and domestic firms have mishandled user data. Remember Cambridge Analytica? That wasn't a foreign government—it was Facebook allowing data harvesting to manipulate millions of Americans for political gain. Many companies still treat data as a commodity, not something to protect. If governments truly want to protect privacy and security, they could look to Europe's GDPR. It's not perfect, but it tackles the real issue: protecting citizens' data from misuse by any entity, foreign or domestic. Instead, we get Cold War-era paranoia disguised as policy. The TikTok ban and migration to X (not Twitter) expose an uncomfortable truth: the government prioritizes domestic business interests over citizen privacy. When Meta or Google lose market share to TikTok, it's a “national security crisis.” But when those same companies mishandle data? 
Silence. This isn't just hypocrisy—it's a failure to protect citizens while shielding corporate interests. Americans see through it. The move to X (not Twitter) isn't teenage rebellion; it's a clear signal that people recognize the difference between real security measures and political theater. ## What This Really Tells Us The migration to X (not Twitter) reveals something fundamental about our increasingly connected world: the traditional model of national governance is struggling to keep pace with how people actually live and connect in the digital age. Governments still operate in a world of clear boundaries and binary choices - us vs. them, secure vs. insecure - while their citizens live in a much more nuanced reality. Note: The previous paragraph is well-written, but it significantly understates the problem. In this context, "governments" could be interpreted as "a generation of American career politicians who should be in retirement homes instead of making decisions in the U.S. Congress."

Note after the note:  

One U.S. Representative has literally [been in a nursing home for months](https://nypost.com/2024/12/22/us-news/missing-texas-rep-kay-granger-found-in-memory-care-unit-report/) 
while still being paid for a "full-time" Congressional job.  

This isn't the way—the way is TERM LIMITS FOR CONGRESS.

What interests me most is the growing gap between how governments view the world and how citizens experience it. While governments draw lines and build walls, citizens build bridges and form connections that transcend boundaries. The real challenge isn't choosing between security and openness—it's finding ways to ensure genuine security while acknowledging how people actually live and connect in our interconnected world. Until governments bridge this understanding gap—and lower the average age in Congress—we'll keep seeing well-meaning policies backfire in ironic ways. Meanwhile, millions of Americans and Chinese citizens will keep connecting and sharing their lives (and cats), one X (not Twitter) post at a time. The best response to artificial barriers? Creative ways around them. ![Cat Data Meme](./catdatameme.png) --- Title: The First Tuesday after the First Monday in November Description: Reflections on American elections and their importance to democracy Date: 11/5/2024 Source: boleary.dev Tags: Politics import Callout from "../../../../components/Callout.astro"; Every election season, we're reminded that our votes matter. But American elections are more than just a way to choose leaders—they're a foundational safeguard for the very idea of democracy itself. The principles underlying our electoral system reflect the core American ideal that "all are created equal." In a republic like ours, elections serve as the bedrock of our democratic values, helping to keep power accountable, ensuring stability, and giving each citizen a voice. Without these regular opportunities for the people to engage in self-governance, the structure of American government and its protections would erode, leaving democracy itself vulnerable to misuse and decay. Drawing on insights from the Federalist Papers, written by the Constitution's architects, we see how our elections reinforce the values they fought to enshrine. 
American elections uphold the ideals of liberty and equality, providing a counterbalance to power and offering a means to address grievances peacefully. By reflecting on the historical wisdom that guided our founders, we can better appreciate the stakes of each election and our role in preserving this experiment in democracy. ## Elections as a Reflection of the People's Will In [Federalist No. 39](https://guides.loc.gov/federalist-papers/text-31-40#s-lg-box-wrapper-25493394), James Madison argued that for a government to be truly republican, it must derive its authority directly or indirectly from the people. This is the core of a government "by the people." Elections are the embodiment of this principle, ensuring that those who govern do so only with the consent of the governed. The republic model of government contrasts with monarchies or oligarchies by making the people—not the wealthy, powerful, or elite—the source of power. Elections not only shape who holds office but also remind leaders that they are in their roles as representatives, not rulers. The cycle of elections forces each official to return to the people, to listen to their needs, and to seek their approval. In this way, every election reaffirms our nation's commitment to self-governance, asserting the right of each citizen to have a voice in shaping the direction of the country. This commitment is rooted in the belief that every American, regardless of background, has a role in the republic (if we can keep it), a principle that stands in stark contrast to other governments where only a select few are empowered. ## Elections as a Check on Power and Factionalism One of the greatest threats to any republic is factionalism, where divisions grow so deep that they threaten the stability of the state. In [Federalist No. 
9](https://guides.loc.gov/federalist-papers/text-1-10#s-lg-box-wrapper-25493272), Alexander Hamilton warned against the dangers of factions, which could lead to internal conflicts or even insurrections. When individuals or groups prioritize their own interests above the nation's well-being, the result can be devastating for democratic stability. Elections act as a "safety valve" for these pressures by giving people a voice in government without resorting to force. Elections channel frustrations into a peaceful, constructive process, allowing the people to replace officials or redirect policies through the ballot rather than violence. Our recent history shows the danger of this balance being threatened. The events of January 6th were a stark reminder of what happens when factions attempt to overturn the democratic process. 🇺🇸 When elections are undermined or dismissed, factions turn from disagreements into outright threats to the republic. Our founders understood this risk well and built a system to defuse it. By preserving, and believing in, fair, regular elections, we protect the republic from the destructive potential of unchecked factionalism. ## Elections as Safeguards for Union & National Stability In [Federalist No. 13](https://guides.loc.gov/federalist-papers/text-11-20#s-lg-box-wrapper-25493284) and [Federalist No. 41](https://guides.loc.gov/federalist-papers/text-41-50#s-lg-box-wrapper-25493405), Hamilton and Madison emphasized the importance of unity in securing peace and stability. A united government, sustained by a fair electoral process, can more effectively govern and protect its people. Disunity, on the other hand, breeds inefficiency, mistrust, and vulnerability. Madison argued that a strong federal government, balanced by the power of elections, ensures that we have a government capable of meeting the nation's needs while remaining accountable to its citizens. 
Through elections, the federal government is continually held to account, reducing the risk of overreach and keeping its powers within limits set by the people. The choice offered to us in each election is a choice to renew our commitment to unity and stability. Every vote we cast in federal elections is a vote to preserve the Union that holds the nation together, ensuring that our government remains strong yet responsive. ## The Importance of an Informed Citizenry The founders understood that an informed and engaged electorate was vital to the success of the republic. In [Federalist No. 67](https://guides.loc.gov/federalist-papers/text-61-70#s-lg-box-wrapper-25493454), Hamilton tackled the importance of honest information in shaping public opinion. He pushed back against critics who spread misinformation about the President's powers, warning that misleading the public about the Constitution's intent could erode faith in the government. Today, the challenge of misinformation is more present than ever. Misleading narratives threaten to distort public understanding of democratic institutions, creating confusion about the Constitution's limits on power and the purpose of elections. A well-functioning democracy depends on citizens who are informed about the stakes and understand the real responsibilities of each elected office. The choices we make must be based on reality, not rhetoric. This is why it's so crucial for voters to critically assess the information they encounter, ask questions, and seek reliable sources. Elections rely not only on participation but on informed participation. In each election, we have a duty to hold candidates accountable for the truth and reject those who would mislead for political gain. ## Renewing American Values Through Each Election In [Federalist No. 
1](https://guides.loc.gov/federalist-papers/text-1-10#s-lg-box-wrapper-25493264), Hamilton introduced a challenge to the people of America: to decide whether societies are capable of establishing good government by "reflection and choice," or whether we are doomed to "accident and force." Each election is a renewed opportunity to make that choice, affirming America's dedication to liberty, equality, and justice. This is what distinguishes the American democratic process. Our right to vote is both a privilege and a responsibility that honors the sacrifices of those who fought to establish this republic. It reflects our role as guardians of the Constitution, as stewards of its principles, and as defenders of its promise for future generations. As we go to the polls, we are part of a legacy stretching back to the country's founding. Each election gives us the chance to ensure that our government remains by the people, for the people, and aligned with the values that have sustained it through centuries. --- Title: Empower, Document, Lead Subtitle: Submarine Insights for Startups Description: Submarine Insights for Startups Date: 10/6/2024 Source: boleary.dev Tags: Learning, Tech import Callout from "../../../../components/Callout.astro"; Imagine the tense, claustrophobic environment of a submarine, submerged thousands of feet beneath the ocean's surface. Here, every decision is critical, where even the smallest misstep could mean the difference between life and death. On a nuclear submarine in the US Navy, the crew holds unimaginable power and responsibility, serving as a crucial component of the nation's nuclear deterrent and response capabilities. Their duty extends beyond their own safety to the security of the entire world. In this high-stakes world, precision and clarity are necessities. This was the reality [Captain L. 
David Marquet](https://en.wikipedia.org/wiki/David_Marquet?ref=boleary.dev) faced when he [took command](https://amzn.to/3NfRDyy?ref=boleary.dev) of the USS Santa Fe, a submarine known for poor performance. Confronted with a crew used to top-down orders, Marquet realized he needed to empower his team in new ways to transform the vessel. Marquet introduced a radical shift by pushing decision-making to the lowest levels, granting each crew member the autonomy to act. This autonomy required every sailor to fully understand and articulate the ship's mission. The Santa Fe's manifesto became a shared vision, aligning all actions with the mission's overarching goals. In the high-pressure environment of a startup, similar principles apply. Success hinges on aligning everyone with the mission and empowering decision-making. Like the Santa Fe, where transparency maintained clarity, startups thrive by ensuring information flows freely. Documenting decisions reduces ambiguity, accelerates decision-making, and keeps the focus on what matters. Exploring the parallels between submarine operations and startup success reveals how empowering individuals, fostering transparency, and ensuring mission clarity can transform a team. By focusing on **Mission Clarity**, **Transparency & Documentation**, and **Empowering Decision-Making**, you'll learn to navigate leadership challenges and drive agility and innovation. ## Mission Clarity and Articulation On the Santa Fe, Captain Marquet recognized that effective operation required the crew to fully understand and articulate the mission. It wasn't just about individual tasks but how they aligned with the submarine's broader goals and the Navy's strategic objectives. Central to this was the ["We Learn" Manifesto](https://davidmarquet.wordpress.com/2015/01/22/we-learn-manifesto-from-the-santa-fe/?ref=boleary.dev), guiding daily activities and mindset. It emphasized continuous improvement and learning, crucial for growth. 
The manifesto also highlighted their mission's deeper purpose: defending the Constitution and protecting liberty and prosperity: > Excerpts from the "We Learn" Manifesto ![We Learn Manifesto](./1.png) ![We Learn Manifesto](./2.png) ![We Learn Manifesto](./3.png) ![We Learn Manifesto](./4.png) By understanding why submarines were critical — underscored by their unique capabilities and historical contributions — the crew was reminded of their significant role in defending democracy. Marquet's expectation was clear: every crew member should strive to be a better submariner each day, seeing each task as an opportunity to learn and grow. This approach transformed the Santa Fe from one of the worst-performing submarines in the fleet to one of the best. 🫡 Crew members were no longer just following orders; they were actively contributing to the success of the mission. For startups, mastering this lesson is crucial. A clear, well-communicated mission empowers team members to understand not just what they do, but why it matters. Yet, clarity alone won't suffice. Your mission must be consistently reinforced, becoming a core part of daily decisions and actions. By articulating a compelling mission that emphasizes continuous learning and deeper purpose, you can inspire your team to own their roles and propel your organization toward success. 📈 Make your mission the core of your startup, hire great people, and watch them succeed. ## Empowering Decision-Making at All Levels In the hierarchical US Navy, where "just follow orders" is often assumed to be the status quo, Captain Marquet's decision to delegate decision-making on the USS Santa Fe was a radical shift. This approach showed that empowerment is essential for high performance. 
**If a nuclear submarine can implement this model successfully, a small, agile startup has every reason to follow suit.** Marquet's strategy of ["leader-leader" over "leader-follower"](https://www.shortform.com/blog/leader-leader-model/?ref=boleary.dev) increased responsibility. Empowering crew members made each sailor more accountable for their actions and outcomes, moving beyond "just following orders" to critical thinking and decisive action, directly impacting mission success. A key practice was having crew members state their intentions before acting. Instead of seeking permission, they would declare, "I intend to..." This subtle shift in language encouraged informed decision-making, enhancing responsiveness, confidence, and competence. In startups, empowering decision-making at all levels is crucial. Fast-paced environments require agility, and centralized decision-making can hinder progress. Many decisions are ["two-way door" choices](https://www.youtube.com/watch?v=rxsdOQa_QkM&ref=boleary.dev), reversible and open to course correction. In these cases, speed is often more valuable than finding the "right" answer. By delegating authority, startups leverage diverse perspectives for innovative solutions and quicker execution. Assigning a [Directly Responsible Individual (DRI)](https://blog.boleary.dev/what-i-learned-at-gitlab-that-i-dont-want-to-forget/#dris-and-decision-making) for each decision clarifies roles and accountability, enabling team members to act confidently - with the necessary support. A culture where team members can voice ideas and take initiative boosts engagement and motivation. 💡 Trust them with decisions, and they'll invest deeply and pursue excellence. ## Transparency and Documentation On the USS Santa Fe, Captain Marquet prioritized transparency by bypassing the traditional chain of command, ensuring critical information reached crew members directly. 
This allowed swift, decisive action and built trust, aligning everyone with the submarine's mission. In startups, transparency and documentation are vital, especially with distributed teams. Writing things down creates a shared understanding and serves as a single source of truth, enabling asynchronous work across time zones. This documentation fosters continuous learning and improvement. 🪟 Transparency builds trust and encourages open dialogue. When team members grasp the 'why,' engagement skyrockets and success follows. ## Navigating the Future Reflecting on insights from the USS Santa Fe, we see that empowerment, clarity, and transparency are key to a winning organizational culture. Empowering decision-making fosters ownership and accountability, driving innovation. Startups that rely solely on a founder's vision risk the same stagnation as traditional top-down models, stifling the potential for rapid growth. Instead, they should function like a dynamic orchestra, where each member contributes unique skills toward a shared vision. Steve Jobs is often seen as a "top-down" leader, but Apple's success actually lay in his ability to articulate a clear vision and attract top talent to bring it to life. Jobs wasn't a designer or an engineer or a computer expert. Apple's success was not because Jobs was the only one who understood the direction...but because he aligned designers, engineers, and experts with the mission, empowering them to innovate and excel. The best companies articulate their mission clearly and hire individuals who complement the founder's strengths. By aligning diverse talents with the mission, startups fuel growth and avoid stagnation. Clarity and effective communication align actions with broader goals, creating a cohesive team. Transparency and documentation build trust and enable continuous learning and collaboration. 
![Blog-Venn-Diagram-2-1](./Blog-Venn-Diagram-2-1.png) 🧠 The foundation for thriving teams: Align mission, talent, and transparency. In both submarines and startups, success hinges on the ability to navigate uncertainty with confidence and purpose. By adopting these principles, leaders can build resilient organizations that thrive in the face of challenges, steering their teams toward their future success. ## Further Reading - [Turn the Ship Around!](https://amzn.to/3NfRDyy?ref=boleary.dev) by L. David Marquet, Captain, US Navy (Retired) - ["We learn" Manifesto from the USS Santa Fe](https://intentbasedleadership.com/we-learn-manifesto-from-the-santa-fe/?ref=boleary.dev) - [Stephen Covey's ride on the USS Santa Fe](https://davidmarquet.wordpress.com/2012/07/16/stephen-coveys-ride-on-the-uss-santa-fe/?ref=boleary.dev) --- Title: Migrating from Ghost to Render Description: How I migrated my blog from Ghost to Astro and Render Date: 9/3/2024 Source: boleary.dev Tags: Tech import Callout from "../../../../components/Callout.astro"; Like any good technologist, I'm never one to turn down a nice little rewrite in a different technology just for "fun." Well, like anyone who has ever proposed such a change, I had my reasons, of course. Previously, I had coded [my website in VuePress](https://gitlab.com/brendan/website), but it became a little too heavy to maintain. Searching for a balance between the ease of publishing a blog and the fun of coding my website, I chose to migrate to a self-hosted version of [Ghost](https://ghost.org/), the CMS and publishing software. That solved two issues: the ability to quickly publish a blog and to allow people to subscribe to the blog and get e-mail updates. What it took from me was the ability to add my own fun code and twist to my personal site. To do anything custom, I had to fit it into the framework of Ghost and my ability to customize its theme. 
## Requirements In thinking about how I wanted to balance my ability to tinker with the ease of having a blog idea and being able to publish it quickly, I decided on the following requirements: - Ability to post a blog like this one quickly in Markdown or a similar simple format - Have that blog published to my subscribers (you should be one too!) - Be able to code my own simple solutions (like my blog length selector) - Simple and elegant design ## Finding the right fit Since I made this change about two years ago, a lot has changed in the world of writing and deploying static sites. One of my favorite new technologies is now much more mature, and my new chosen way to write static sites - Astro. I love the balance you can strike with Astro — write a simple markdown post and publish it quickly, or write an entire component library (in your favorite framework like Vue, Svelte, or React) and ship cool functionality. ## Design ### Astro Nano with Catppuccin Tailwind Theme For the design of my new site, I wanted something that was both visually appealing and easy to work with. I chose to start with the [Astro Nano](https://github.com/snowpackjs/astro/tree/main/examples/nano) template, which provided a minimalist foundation perfect for customization. To give the site a unique and cohesive look, I integrated the [Catppuccin Tailwind theme](https://github.com/catppuccin/catppuccin). Catppuccin offers a beautiful and consistent color palette that pairs wonderfully with Tailwind CSS, making it easy to maintain a clean and modern aesthetic. ### Ghost as a Backend CMS One of the main reasons I opted for Ghost was its powerful API, which makes it an excellent choice for a headless CMS. Connecting to the Ghost API was straightforward. I used the Ghost Content API to fetch blog posts and other content. This setup allows me to manage my content in Ghost while displaying it on my Astro site. 
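As a rough sketch of what that connection can look like (the site URL, API key, helper names, and selected fields below are my own placeholders, not the exact code from my repo):

```typescript
// Minimal sketch of pulling posts from the Ghost Content API at build time.
// The `site` and `key` arguments are placeholders for a real Ghost URL and
// Content API key; the field list is a subset of what Ghost returns.

interface GhostPost {
  title: string;
  slug: string;
  excerpt: string;
  published_at: string;
}

// Build the Content API URL for the posts endpoint.
export function postsUrl(site: string, key: string, limit = 5): string {
  const url = new URL("/ghost/api/content/posts/", site);
  url.searchParams.set("key", key);
  url.searchParams.set("limit", String(limit));
  return url.toString();
}

// Fetch the latest posts; Ghost wraps results in a `posts` array.
export async function latestPosts(site: string, key: string): Promise<GhostPost[]> {
  const res = await fetch(postsUrl(site, key));
  if (!res.ok) throw new Error(`Ghost API returned ${res.status}`);
  const data = await res.json();
  return data.posts as GhostPost[];
}
```

An Astro page can then `await` something like `latestPosts(...)` in its frontmatter and render the summaries statically at build time.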
With the API connection established, I focused on displaying blog post details on my Astro site. I created components to render the fetched content dynamically. This approach ensures that my site remains static and fast while still displaying the latest posts from Ghost. To keep the main site clean and focused, I decided to host the full articles on a subdomain, [blog.boleary.dev](https://blog.boleary.dev/). This setup allows visitors to browse through summaries and click through to read the full posts on Ghost. This separation of concerns ensures that my main site remains lightweight and performant while still leveraging Ghost's robust blogging features. ## Deploying to Render ### Easy (and Free!) Hosting for Static Sites When I was looking around for where to host the site, I came across [Render](https://render.com/). One of the standout features of Render is how straightforward and cost-effective it is to host a static site. Also, any product that offers "sign in with GitLab" already is a cut above in my book. To make it even sweeter, Render offers a generous free tier, making it an excellent choice for personal projects and small websites. Here's how I set up my deployment pipeline with Render: ### Automatic Deployments with GitLab 1. **Connect GitLab Repository**: Since I created my account on Render with GitLab, my GitLab repositories were already available to Render. Even better: the repositories are sorted by the time they were last updated so I didn't have to go searching for the right one. 2. **Configure Build Settings**: In the Render dashboard, I configured the build settings for my Astro site. Render automatically detected the static site setup and suggested the appropriate build commands. For Astro, the build command is typically `pnpm run build`, and the publish directory is `dist`. 3. **Automatic Deployments**: One of the best features of Render is its automatic deployment process. 
Every time I push changes to my GitLab repository, Render automatically triggers a new build and deploys the latest version of my site. 4. **Custom Domain and SSL**: Render also makes it easy to configure a custom domain and provides free SSL certificates. Setting up my custom domain (e.g., [boleary.dev](http://www.boleary.dev/)) was a breeze. ## Other Considerations Now that the general design and deployment were complete, there were just a few other loose ends to consider: ### Redirect to Ghost to Hopefully Not Miss Any Links One of my primary concerns during the transition was ensuring that no links were broken, which could negatively impact SEO and the user experience. To address this, I implemented smart 404 redirects. If a page is not found on the Astro site, it redirects to the corresponding page on Ghost. This way, I minimize the risk of dead links and ensure that visitors can always find the content they're looking for. ### Ghost Still Handles All the Subscription Items Maintaining the subscription functionality was crucial. Ghost's built-in subscription features are robust and user-friendly. By keeping Ghost as the backend, I can still manage subscriptions, send out email updates, and handle member interactions without any additional setup. ### Added Some Fun Things To make my site more engaging, I added a few fun elements. For instance, the kid emojis on my homepage dance a little, adding a playful touch. Additionally, I included a hidden Easter egg: clicking on "1x engineer" reveals a surprise. These small touches make the site more interactive and enjoyable for visitors. ## Conclusion Moving my site to Astro and hosting it on Render has been a fun and rewarding project. Using Ghost as a headless CMS and Render for easy deployments, I now have a fast and modern site that's super easy to update. If you're thinking about giving your own site a refresh, I definitely recommend checking out this setup!
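As a footnote on the 404 fallback described above: the redirect logic is simple enough to sketch. This is illustrative only - the helper is mine, not the site's actual code; `blog.boleary.dev` is the subdomain mentioned earlier:

```javascript
// Sketch of the 404 fallback: when a path isn't found on the Astro site,
// send the visitor to the same path on the Ghost subdomain so old links
// keep working. (Hypothetical helper for illustration.)
const GHOST_HOST = "https://blog.boleary.dev";

function ghostFallbackUrl(pathname) {
  // Preserve the requested path so deep links to old posts still resolve.
  return GHOST_HOST + pathname;
}

// In a 404 page this could drive a client-side redirect, e.g.:
// window.location.replace(ghostFallbackUrl(window.location.pathname));
```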
### Links - [Astro](https://astro.build/) - [Render](https://render.com/) - [Ghost](https://ghost.org/) - [My live site](https://boleary.dev/) - [The code behind the site](https://gitlab.com/brendan/boleary-render) --- Title: Litany of the Saints Description: A personal story about faith and family during a medical emergency Date: 4/27/2024 Source: boleary.dev Tags: Personal Okay, yes, this is typically a tech blog. And sometimes I ask forgiveness to stray into the [political](/blog/2022/03-why-is-this-war-different-from-all-other-wars/) or [Swiftie related](/blog/2023/12-taylor/). Today, I hope you'll indulge me and let me stray into the spiritual. If I'm being honest, this is a part of my life I don't share with my fellow chronically online folks because I'm a little scared of what everyone will think...but here goes nothing. --- **I had the most unreal, clear, undeniable intervention from the [Saints](https://en.wikipedia.org/wiki/Saint) and the Holy Spirit that I've ever experienced in my life last night. Here is the story:** My 8-year-old son, Daniel, broke his arm around 1 pm at school on the playground. It was bad enough that, worried about circulation, we decided to take an ambulance to Anne Arundel Medical Center. *A Marylander aside here: I think it's actually Luminis something or other now. But like a native Chicagoan who refuses to call that one building anything other than "Sears Tower", I will be calling it AAMC in this story.* Anyway, my wife rides with him to AAMC and I meet them there. We're seen relatively quickly and get X-rays ordered. As a paramedic friend would say of those images: "F***ing Ray Charles could've seen that break on the X-ray". And given the extent of the break, I knew before the doctors even looked at it that we were looking at surgery.
## Good News / Bad News One of the top 10 pediatric orthopedic hospitals in the country - Johns Hopkins - is in Baltimore, and its proximity to AAMC made it the clear choice for surgery. With concerns about circulation during transport, Hopkins is going to pick him up, so we're waiting on their call. They arrived just as I was grabbing a quick bite in the café. I rushed back to meet them, hopping into their ambulance with Daniel while my wife headed home to the other kids. You'd think that splurging on the most expensive Uber around would mean bypassing the ER wait, but there weren't any available beds when we arrived, so after a brief check-up, triage, and vitals, we ended up...in the waiting room. ## 'Happy' are All Who Wait By now, it's about 8:30 pm - which is 30 minutes past bedtime...which could be considered the second religion, after Catholicism, in our house. And D is feeling it: he's tired, and the last time he had any pain meds was three and a half hours ago. And instead of being in a hospital bed like he has been most of the day, he's in a wheelchair made for someone much much much larger than him, sitting in a waiting room. He's really starting to lose it, honestly for the first time all day. And for me - I'm now alone with him, in a big-city hospital late at night. And as he's in more and more pain, I texted my wife: > "I just feel like we went from the front of the line to the back of the line" It was the worst I had felt the whole day. I was anxious and frustrated and scared and unsure of what to do. ## Meanwhile: In Other News Most of the day I had been praying. Praying for Daniel, for the doctors taking care of him, and for healing. In addition, since it happened at his Catholic school, where all 4 kids go and my wife has taught for 20 years, word spread fast, and a whole school community was praying for him.
Oh, and my wife and I have been working with a local Catholic high school that our kids don't even attend, where one of our best friends directs the theater program. Tonight we had intended to be there for the 4th performance of Beauty & the Beast. And with us not there, we got this text from the director: > "We love you guys so much...we added St. Daniel, St. Joseph and St. Luke...to our intentions tonight" Add to that group: D's Godparents in Colorado and Virginia, his family here in the Maryland/DC area and in Indiana, Michigan, and Illinois all praying for him and sending their thoughts and prayers. Add it all up and, quite literally, hundreds of people are praying for him...many of whom are Catholic and asking directly for the intervention of the saints. ## The Litany Catholics have a lot of saints - a designation we believe belongs to all souls in heaven, but one that we 'acknowledge' in certain historic figures. And there's a prayer called the "Litany of the Saints," which invokes the intercession of various saints. And for Catholic families (some of which can also have a lot of members), many times kids are named after saints. And as with many parents who occasionally struggle to land on the right name and run through them all, my mother's parents (she's one of 5) have always, somewhat jokingly, called that stumbling through names "the litany of saints." For our family then that would be something like "Patrick, I mean Molly, I mean Daniel, I mean Caitlin!" And so I finally got up the courage to ask about getting Daniel some more pain meds. And that was when the direction came - Enter our real-life litany of the saints: - The triage nurse sent us at that point to a room with a medical assistant. **Name: Patrick** - Within minutes of being in the room, a patient advocate entered to see what, if anything, we needed. **Name: Molly** - After we asked about pain meds, Molly went to get the nurse, who came in pretty soon after that.
**Name: Patricia** And so, amidst the chaos of hospitals and waiting rooms, a profound sense of saintly intervention permeated the night. As Daniel endured pain and discomfort, a real-life litany of saints appeared—Patrick, Molly, Patricia—each offering their support and care, and representing his older siblings who wanted so badly to help him today. While some might dismiss it as mere coincidence, at that moment, I couldn't help but feel the comforting presence of Sts. Mary and Patrick, watching over Daniel just as his older siblings wished they could. That comfort sustained me for the rest of the night, through his successful surgery. And I couldn't help but try and write it down to remember this feeling - especially for the next time I need this kind of saintly help. --- *Update 2024-05-01 (the feast of St. Joseph the worker): The radiology technologist at Daniel's follow up appointment was named Caitlin* --- Title: 2023 in Books Description: A review of the 47 books I read in 2023 Date: 12/27/2023 Source: boleary.dev Tags: Learning Like many things in life, I have always wished I was more of a reader than I am in reality. Blame the speed of modern life, the abundance of screens, or just general laziness; I've never been great at sitting down and reading a book. And so it's only been recently that I've given myself a free pass to "read" books via audiobooks, and it has been a game changer. And now I do consider myself a reader — or listener — of books. ## Statistics I read a total of 47 books. I favor nonfiction and heavily favor audiobooks 😄. The total was 17 fiction and 30 nonfiction. I actually only "read" 8 books if you don't count audio - 2 digitally and 6 on actual real paper. Of interest, one stat that isn't tracked here but that I tallied myself is that I read more male authors than female authors this year. But as I finished this blog, I realized that every author I featured is female. Do with that what you will.
## Best and Worst ### Best Fiction: The Ballad of Songbirds and Snakes by Suzanne Collins As an enthusiast of the Hunger Games series, I can confidently say that this book stands out as the best yet. Particularly for those familiar with the storyline of the entire series, this prequel is not just a compelling narrative in its own right; it also skillfully interweaves elements that foreshadow future events. ### Best Nonfiction: Untamed by Glennon Doyle Admittedly, I might be [among the last](https://xkcd.com/1053/) to uncover the remarkable journey of Glennon Doyle and her extraordinary family. However, I gladly share this revelation if it guides even one more person to her inspiring book. This narrative is a masterclass in self-discovery, comprehending the nuances of love, and the art of nurturing family bonds. ### Worst: The Woman in Me by Britney Spears Britney Spears' "The Woman in Me" falls short in terms of its writing quality. As someone who is a strong supporter of Britney and shares the millennial nostalgia for her era, it's disappointing to acknowledge that this book doesn't quite meet expectations. ## Must Read for Parents: Good Inside by Dr. Becky Kennedy While I may be a recent arrival to the Dr. Becky Kennedy fandom, her impact is undeniably profound. "Good Inside" stands as the most practical, relatable, and actionable parenting book I have encountered. In a world full of platitudes and "must dos" in parenting advice, Dr. Becky actually delivers actionable, relatable, real-world practical advice. ## Must Read for Technologists: Engineering Management for the Rest of Us by Sarah Drasner Having been an admirer of Sarah Drasner's work for many years, I approached "Engineering Management for the Rest of Us" with considerable expectations. Remarkably, the book surpassed them, thanks to Sarah's profound expertise in both the human and technical facets of software engineering. 
## Must Read for Families: When Life Gives You Pears by Jeannie Gaffigan Jeannie Gaffigan, co-architect of the comedic brilliance evident in Jim Gaffigan's stand-up, navigates the dual roles of a working professional and mother to five children. Her life takes an unexpected turn when she confronts a daunting medical diagnosis: a pear-shaped brain tumor. In "When Life Gives You Pears," Jeannie showcases resilience, drawing strength from her family, faith, and an inexhaustible well of humor. ## Highly Recommended Fiction - [Apples Never Fall](https://amzn.to/41G7Fbq) by Liane Moriarty - [The Picture of Dorian Gray](https://amzn.to/3tnbndi) by Oscar Wilde - [Project Hail Mary](https://amzn.to/3TGV9q6) by Andy Weir - [The Guest List](https://amzn.to/3TAotOU) by Lucy Foley - [The Last Thing He Told Me](https://amzn.to/3TAotOU) by Laura Dave - [The Ballad of Songbirds and Snakes](https://amzn.to/48f1H3s) by Suzanne Collins ## Highly Recommended Nonfiction - [Devil in the Grove](https://amzn.to/3GX1J3V) by Gilbert King - [Twelve Years a Slave](https://amzn.to/3TF7FX9) by Solomon Northup - [Better Living Through Birding](https://amzn.to/3H4Xi7b) by Christian Cooper - [The Culture Map](https://amzn.to/3TFtjKz) by Erin Meyer - [Brotopia](https://amzn.to/3TDIbcs) by Emily Chang - [Mere Christianity](https://amzn.to/3tAZIrh) by C.S. Lewis - [The Woman Who Smashed Codes](https://amzn.to/3TGeOGy) by Jason Fagone ## What's Next? Books I have on my radar include: - [Oath and Honor](https://amzn.to/3THH9fO) by Liz Cheney - [High Growth Handbook](https://amzn.to/3NHwuhi) by Elad Gil - [Radical Candor](https://amzn.to/3vmqqnQ) by Kim Malone Scott - [Fourth Wing](https://amzn.to/48xqBLt) by Rebecca Yarros - [The Martian](https://amzn.to/3RGXE9e) by Andy Weir *A huge "thank you" to [The StoryGraph](https://brendan.fyi/storygraph) - an independent alternative to Goodreads that helps me track all of these great books and statistics.* --- Title: What's Taylor's Age Again? 
Description: Exploring Taylor Swift's lyrics about age and relationships Date: 12/20/2023 Source: boleary.dev Tags: Personal > Time, mystical time Taylor Swift has consistently demonstrated her unmatched prowess in the music industry. Her ability to sell out football stadiums week after week, effortlessly filling them with her charisma, has made her an indomitable force. She even made a Chiefs jersey more famous than a Chief can. Although I have yet to experience the Eras Tour in person, I've been an avid Swiftie for years. My children, who were born around the release of 1989, were dancing to Shake It Off when they still hadn't shaken off diapers. Recently, I had the opportunity to witness the tour recording in theaters, and a profound realization dawned on me as I heard these familiar songs seamlessly interwoven - a not-so-invisible string running through Taylor's oeuvre began to unveil itself to me. Taylor, who has been in the spotlight since the age of 14 and has steadily climbed the ladder of fame, appears to have a fascination with age and how it is perceived, particularly in the context of her romantic relationships. ## Lyrics about Age I spent a lot of time looking for lyrics about age in Taylor's songs. Here are some highlights: | Song | Album | Lyric | |------|-------|-------| | Love Story | Fearless | "We were both young when I first saw you" | | Dear John | Speak Now | "Don't you think I was too young to be messed with?"
| | All Too Well | Red | "If we had been closer in age maybe it would have been fine" | | All Too Well | Red | "I'll get older but your lovers stay my age" | | ...Ready for It? | reputation | "Younger than my exes but he act like such a man, so" | | cardigan | folklore | "When you are young they assume you know nothing" | | betty | folklore | "I'm only 17, I don't know anything" | | tolerate it | evermore | "You're so much older and wiser and I / I wait by the door like I'm just a kid" | | Would've, Could've, Should've | Midnights | "And I damn sure never would've danced with the devil / At nineteen" | | Anti-Hero | Midnights | "I have this thing where I get older but just never wiser" | ## What does it mean? I think these lyrics generally can be grouped into categories: ### Naivety (or perceived naivety) Taylor's lyrics frequently reveal a sense of self-criticism, perhaps too severe at times, as she evaluates her younger self through the lens of her now more mature perspective. The lyric "When you are young they assume you know nothing" encapsulates a common societal assumption about youth. The line "I'm only 17, I don't know anything" from "betty" poignantly illustrates this disparity - not just an acknowledgment of her own youthful inexperience, but also an implicit commentary on how this naivety placed her at a different vantage point compared to her partners. ### Relationship Age Gaps Age differences in relationships often bring a unique set of dynamics and challenges. The line "Don't you think I was too young to be messed with?" from "Dear John" touches on a core issue in age-disparate relationships: the imbalance of experience and power. This is further emphasized in lyrics like "You're so much older and wiser and I / I wait by the door like I'm just a kid." There's also a fascination and allure often associated with age differences.
In lyrics such as "Younger than my exes, but he act like such a man, so" there's an evident attraction towards maturity and experience. Being told "If we had been closer in age maybe it would have been fine" was clearly devastating enough to write a 10-minute song about it. ### Growing Up Taylor's journey in the public eye has been a captivating narrative of growth and transformation. Her album "1989," a nod to her birth year, is more than a title; it's a declaration of her journey, symbolizing the evolution of an artist who has matured in the full glare of the spotlight. Swift's trajectory challenges the typical narrative of early fame leading to a tragic downfall. Instead, her career is a testament to resilience and adaptability. ## Don't Call me Kid In the dynamic tapestry of Taylor Swift's career, each chapter has been a testament to her enduring spirit and remarkable ability to evolve. Her journey, marked by milestones and challenges, has been a unique blend of personal and artistic growth. Swift's story is a powerful reminder that maturity and wisdom are not mere functions of age but are shaped by experiences, resilience, and the courage to embrace change. Her music, ever-evolving, resonates with audiences across generations, affirming that in the world of Taylor Swift, the phrase "Don't call me kid" is not just a lyric—it's a declaration of her journey from a talented young artist to a seasoned, influential force in the music industry…and the world stage. --- Title: Productive non-productivity Description: How rest and breaks can enhance creativity and productivity Date: 10/18/2023 Source: boleary.dev Tags: Leadership I love the show Mad Men. I'm not sure any artistic work has so accurately captured the "feel" of an era - from the positive to the negative, from the real to the imagined, from the glamorous to the errors in judgment. And to top it off, I love a good Old Fashioned.
There are many recurring themes in the series, but a seemingly innocent occurrence that happens a handful of times in the run has always gnawed at me. Don Draper - the main character - goes to the movies in the middle of the workday, multiple times. He explains at one point that this is something he needs to do to reset his thinking - as Creative Director, his persuasiveness and creativity are the main value he brings to his firm. And that always has left me thinking: is this another one of Don's many character flaws, or one of the signs of his creative genius? In today's fast-paced work culture, busyness is often mistaken for productivity - but true productivity may come from periods of apparent non-productivity, and there is evidence to back this up. But two preconceived notions have always been a challenge for me, personally, in accepting this idea: - The belief that relentless effort guarantees superior outcomes: burnout is merely a phase to "persevere" through. - While others may justify moments of inactivity, I often hold myself to a stringent standard, valuing perpetual busyness over genuine downtime. Yet, despite those two thoughts occupying my immediate attention, the notion of how non-productive time influences an individual's overall productivity, particularly in the realms of creative or knowledge work, continuously simmered in the background. In an almost self-referential moment, I realized I had been crafting this article mentally while being engaged in other tasks. Through extensive research and introspection, I've delved into how taking breaks or allowing our minds to drift can indeed foster heightened creativity and improved performance. Let's take a look together. ## The Enigma of the Resting Creative Mind In a study by Quentin Raffaelli et al., "Creative Minds at Rest," there's a fascinating correlation between originality and one's state of mind during periods of rest.
Individuals with higher originality scores (divergent thinking, and thinking "outside the box") showcased specific characteristics during their idle or resting state. For instance: - **Associative Thought Patterns:** These individuals exhibited more freely moving thoughts and a higher percentage of associative transitions. - **Semantic Similarity:** A greater semantic similarity was observed between their thoughts, suggesting a richer tapestry of interconnected ideas. - **Engagement:** There was a heightened engagement with their resting state thoughts, evidenced by decreased levels of boredom and an increased total word count in expressing these thoughts. - **Curiosity:** Higher originality was also linked with greater curiosity, further underscoring the idea that a wandering mind is not an idle one. But what does this tell us? Simply put, when our brains are "at rest", they're far from inactive. Instead, they're potentially cultivating the next big idea or solution to a problem that our 'busy' selves haven't yet solved. ## Micro-Breaks: Small Pauses, Big Impact While the wandering mind holds secrets to our creative potential, there's also power in intentional rest. Enter micro-breaks. A recent meta-analysis titled "Give me a break!" delves into the role of these short, deliberate pauses taken between work tasks. The findings? Micro-breaks, though small, have a tangible impact. They've been shown to: - **Boost Vigor:** These breaks significantly enhance our sense of energy and enthusiasm - **Reduce Fatigue:** They also play a pivotal role in reducing feelings of tiredness and fatigue - **Performance:** While the overall effect on performance is not statistically significant, it's intriguing to note that the breaks had a more positive impact on tasks with lesser cognitive demands. Plus, the longer the break, the greater its impact on performance, suggesting there's an optimum break length for different tasks. 
At a foundational level, the cognitive load theory underscores the limits of our mental capacity in working memory. When we exert too much cognitive effort on a single task, it naturally impedes our learning capabilities. With finite cognitive resources at our disposal, allocating them towards one activity leaves less available for others. Through this lens, micro-breaks appear as an organic counter-response, allowing our minds to recalibrate from potential cognitive overload which can dampen performance. So, while we've known anecdotally that "taking a break" can help, it's now backed by science. These short pauses can rejuvenate us, preparing our minds for the next bout of intensive work. ## Not All Work is Created Equal: The Balance of Mindful and Mindless Work In our quest for productivity, there's a tendency to pile on the tasks, assuming that filling every hour with challenging work will yield the best outcomes. However, a study titled "Enhancing creativity through 'mindless' work: A framework of workday design" offers a different perspective. It contends that an overdose of mindful tasks might be the very thing inhibiting our creative juices. When professionals are subjected to an incessant flow of high workload pressures, their workday shifts from a balanced state of mindful work, which fosters creativity, to an overwhelming state of "relentlessly mindful work". Such an environment not only drains the individuals, but can lead to an actual decrease in creativity. So, how can we break this cycle? In the study, researchers found that rather than inundating ourselves with a ceaseless barrage of challenging tasks, we should consider designing workdays that strike a balance between mindful and mindless work. By deliberately incorporating bouts of "mindless work" – tasks that don't demand intense cognitive processing – into the daily routine, professionals might find moments of respite, where their minds can wander and perhaps stumble upon creative insights. 
This approach, as the study's framework suggests, is not about promoting lethargy but strategically alternating between different types of tasks to maximize both productivity and creativity. It's evident that the value of pauses, breaks, and even seemingly "mindless" tasks cannot be overlooked. Just as micro-breaks rejuvenate the mind and boost well-being, periods of lighter, less demanding tasks can serve as fertile ground for innovative thoughts. By appreciating and integrating these moments of "mindless" work, we might find that they hold the key to unlocking some of our most creative ideas. ## Marrying Creativity and Productivity As we pull these insights together, a coherent narrative emerges: The path to greater productivity and creativity might not always be through relentless work but rather through intentional periods of rest and unstructured thought. Our brains, when allowed to meander, can tap into reservoirs of creativity that structured environments might stifle. For organizations and individuals striving for innovation, recognizing the value of these periods of 'non-productivity' is crucial. Whether it's encouraging daydreaming, fostering environments that allow for non-linear work days and downtime between tasks, or even embracing the entirety of remote work culture, organizations would do well to not create systems that stifle this kind of creativity. The journey of unraveling the intricacies of the human mind never ceases to astonish. While our society often equates continuous work with success, the science suggests a nuanced view. The rhythm of our productivity is punctuated by moments of "productive non-productivity" — where our creative minds wander, and we find respite in brief pauses. By recognizing and honoring these moments, we nurture our well-being and pave the way for unexpected sparks of creativity and innovation. 
So, the next time you find yourself taking a step back or daydreaming, remember: it's not just a break; it's a bridge to a more vibrant, creative, and efficient self. Maybe take in a movie or, if it's the poison you'd pick, have an Old Fashioned. I wouldn't recommend doing much else that Don Draper did in his life. ### Citations - Quentin Raffaelli, Rudy Malusa, Nadia-Anais de Stefano, Eric Andrews, Matthew D. Grilli, Caitlin Mills, Darya L. Zabelina & Jessica R. Andrews-Hanna (2023) Creative Minds at Rest: Creative Individuals are More Associative and Engaged with Their Idle Thoughts, Creativity Research Journal, DOI: [10.1080/10400419.2023.2227477](https://doi.org/10.1080/10400419.2023.2227477) - Elsbach, Kimberly D., and Andrew B. Hargadon. "Enhancing creativity through 'mindless' work: A framework of workday design." Organization Science 17, no. 4 (2006): 470-483. - Amabile, Teresa M., Constance N. Hadley, and Steven J. Kramer. "Creativity under the gun." Harvard Business Review 80 (2002): 52-63. - Fox, Michael D., and Marcus E. Raichle. "Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging." Nature Reviews Neuroscience 8, no. 9 (2007): 700-711. - Hammond, C. (2015, November 6). "Why we should stop worrying about our wandering minds." BBC. [https://www.bbc.com/future/article/20151106-why-we-should-stop-worrying-about-our-wandering-minds](https://www.bbc.com/future/article/20151106-why-we-should-stop-worrying-about-our-wandering-minds) --- Title: What they don't tell you about being an executive Description: Lessons learned about executive leadership Date: 8/31/2023 Source: boleary.dev Tags: Leadership The corner office (ha! Remember offices?!). The coveted title. The authority. These might be what many envision when they think of executive roles. However, the road from being an individual contributor to a manager to an executive in charge of an entire function is one that is much more of a winding road than the traditional image of a "ladder."
Or maybe it is a ladder, but with countless rungs missing. I've had the privilege of spending time as an IC, a middle manager at a large organization, an executive at a few small companies, and many things in between all of those worlds. And I don't in any way consider myself an expert in this area - but I have learned a lot from my mistakes and by reading and listening to other leaders who I look up to. And so I wanted to add my small story to this corpus, and hopefully point folks in the right direction as they build their own careers. At the end, I've included a number of references from people much smarter than I am so that you can continue learning from some of the best executives in technology. ## Executive Team as Your Primary Team Perhaps the biggest shift from being a front-line manager or an IC is where your primary team lies. Traditionally, most folks think of the function they are working in as their number one responsibility. So it would make intuitive sense that when you are leading that function, it would clearly be your number one priority. But when you're on an executive team, it's actually more important to be, first and foremost, a member of that executive team. Patrick Lencioni in "The Advantage" provides a compelling argument for this. He states that members of a leadership team should give precedence to their roles within that team over the teams they individually lead. While it might seem natural to prioritize one's own department or function, such a mindset is fraught with risks. It inadvertently turns the executive group into a lobbying faction, each pushing for their department's interests, rather than collaboratively deciding what's best for the entirety of the organization. This shift in priority is not just a theoretical construct, but has tangible repercussions on the ground. Employees within departments want a cohesive and aligned leadership team, as the decisions and directions of this group directly impact their success.
Chaos or a lack of unity at the executive level frequently leads to inter-departmental conflicts, hindering progress. Understandably, some leaders might resist deprioritizing their immediate teams. After all, it is the function of their role to oversee that team. However, Lencioni argues that such a shift is vital, not for hierarchy but for building a truly cohesive, integrated, and healthy organization. ## Meetings as Actual Work Okay, let's address the number one elephant in the executive room...meetings. I almost led with this, as I think it is top of mind for many, many leaders. Especially in a world where many people have shifted to remote work without having specific intention around that shift, it can be a real struggle to understand when a meeting is valuable versus a "waste" of time. My father plays golf with a Director at a large engineering organization, and in the post-COVID remote world (without adapting their work practices) they talked about being on Zoom meetings from 7 am to 7 pm every day. This is perhaps the obvious extreme, where it's clear there are more meetings than are valuable - but where is the line? When you're an IC, many meetings can feel wasteful...but that line is different as you move into leadership. As Satya Nadella, CEO of Microsoft, said, "You have to create clarity where none exists." That clarity often originates in these very meetings, setting the direction for the entire organization. And since driving that clarity is the most important thing you can do as a leader, you have to start seeing meetings as vital parts of your job. That's not to say you shouldn't make sure they are effective - I've seen LOTS and LOTS of ineffective meetings in my day. In fact...note to self: write an article just on running effective meetings. 
Once you've got that muscle of effective meetings worked out (and like a muscle, always working at it and improving, so it doesn't atrophy), your team, both the executive team and your functional team, will benefit. As Sarah Drasner put it in *Engineering Management for the Rest of Us*: > I get it; engineering teams like to . . . do engineering work! No one likes a lot of meetings, but getting a group together is valuable time spent. Your team needs to see one another with a regular cadence. They need to talk through their work, discuss issues, and hang out a bit. Meetings are part of the job. Make them count. ## The Evolution from IC to Executive Shifting from an individual contributor to an executive isn't just about scaling tasks. It's a paradigm shift. Instead of diving deep into the 'how', executives often navigate the realms of 'why' and 'what'. There is a pretty blunt story about this in the folklore around Apple founder Steve Jobs. The story goes: when employees rose to the position of vice president at Apple, Steve Jobs would share an anecdote with them. He'd describe a scenario where, if the trash in his office wasn't taken out, he'd ask the janitor why. The janitor might reply, "The door lock was changed, and I didn't have the key." This excuse is acceptable because a janitor needs access to complete his task. In his position, having such reasons is permissible. Jobs said to his new VPs, "For the janitor, reasons are valid. But as you move from the janitor's role up to the CEO, those reasons become irrelevant." He would say that the line where reasons stop mattering falls somewhere between the janitor and the VPs at Apple; now that you were a VP, the reasons for something no longer mattered. Only the outcome did. Perhaps a more positive spin comes from Drasner in *Engineering Management for the Rest of Us*: > Try to work on aligning people to the outcomes you are looking for from them.
You don't have to code it all yourself, you need to articulate why the code is necessary, and what it will need to do in order to accomplish this. Trust in them to figure out the hows and consult when they need guidance. When you're an IC, the code you write and the things you produce matter. As you move to being an executive, your job is creating alignment around the outcomes as much as the outcomes themselves. ## Giving Agency and Prioritizing People With power comes responsibility, particularly in decision-making. GitLab introduces the concept of the Directly Responsible Individual (DRI), streamlining decisions by assigning a clear owner. However, this doesn't exclude team input. Everyone should have the chance to voice opinions, even disagreements. Once a decision is made, commitment is crucial, but internal discussions can continue, ensuring the path taken is truly the best one. Give individuals the autonomy to prioritize tasks they find most valuable. If a meeting doesn't appear essential and a person's involvement isn't crucial, allow them the choice to skip it. Empower every team member to act as a manager of one, eliminating the need for daily check-ins to meet their objectives. Grant them the autonomy to take charge of projects and initiatives, trusting them to drive successful outcomes. The wisdom encapsulated in Ben Horowitz's quote, "Take care of the people, the products, and the profits — in that order," serves as a guiding principle for any successful organization. It underscores the idea that the well-being of employees should be the foremost priority, as it is the workforce that drives innovation and productivity. When the team is motivated and aligned, superior products naturally follow. Profits, though important, come last in this sequence. They are often the result of an engaged workforce and exceptional products.
In prioritizing in this manner, leaders can build a sustainable, healthy, and profitable organizational ecosystem. ## So, now what? In the dynamic landscape of leadership, the role of an executive is often romanticized, if not grossly misunderstood. While some paint a picture of grandeur and ease, the reality is marked by challenging decisions, nurturing talent, and striking the delicate balance between individual departments and overarching organizational goals. Regular meetings are not mere calendar fillers; they are the compass by which a company navigates its future. Being an executive is a journey, not of solitary leadership, but of collective progress and alignment to values; it means changing your perspective. You move from doing things yourself to having a broader vision that empowers and impacts others. It's rare that an IC's decision will affect the future livelihood of other team members, but executives make those kinds of decisions every day. Great executives understand the big impact of every decision and realize that disagreements with those decisions are inevitable. However, unity in the execution of those ideas is key. Most of all, being at the top of an organization isn't just about driving growth. It's not even mostly about driving growth. It's about creating an environment where people are empowered to innovate and are encouraged to prioritize what's essential: family, friends, and their well-being. ## References - [Engineering Management for the Rest of Us](https://amzn.to/3OTe6Se) by Sarah Drasner - [The Hard Thing About Hard Things](https://amzn.to/3KWOrHg) by Ben Horowitz - [The Advantage: Why Organizational Health Trumps Everything Else In Business](https://amzn.to/3EelZNj) by Patrick M. 
Lencioni - [High Output Management](https://amzn.to/3EaBo0Z) by Andy Grove - [GitLab's Handbook on Leadership](https://about.gitlab.com/handbook/leadership/) --- Title: AI Instagram Description: Experimenting with AI image generation inspired by Bo Burnham Date: 6/8/2023 Source: boleary.dev Tags: AI, Personal Do you think the first all-AI Instagram account has been created yet - one that fools people into thinking it's a real person, but all the photos are completely AI generated? I was listening to Bo Burnham's [Inside](https://www.netflix.com/title/81289483) again, which I am convinced will be the key piece of art to come out of the pandemic that explains how it felt to those of us who lived through it in the United States. At least those of us who, like Bo, struggle with mental health and are extroverted but were forced inside for years at a time... Anyway, I digress. I decided to try and generate my own Instagram from scratch based on the lyrics to his song [White Woman's Instagram](https://genius.com/Bo-burnham-white-womans-instagram-lyrics). Here are the results. - An open window - A novel - A couple holding hands (I'm glad that AI is still bad at drawing hands, just like [humans](https://www.reddit.com/r/explainlikeimfive/comments/4zuvaa/eli5_why_is_it_so_hard_to_draw_or_animate_hands/)...) - An avocado - A poem written in the sand - Fresh fallen snow on the ground - A golden retriever in a flower crown That's it for the first verse of White Woman's Instagram. I have to say that with some more time spent on the prompts, many of these could pass for real Instagram pics - the only problem being with rendering humans. But then again, I was using a free version of the DALL-E engine and could probably get better results if I paid for them. Special thanks to craiyon.com on Hugging Face for the generator - feel free to give it a try for yourself: [https://huggingface.co/spaces/dalle-mini/dalle-mini](https://huggingface.co/spaces/dalle-mini/dalle-mini).
## Bonus Pics As a bonus, I put an entire verse into the generator: > Latte foam art, tiny pumpkins > Fuzzy, comfy socks > Coffee table made out of driftwood > A bobblehead of Ruth Bader Ginsburg > A needlepoint of a fox We got socks that look like tiny pumpkins and latte foam art! But then I didn't get the thing I wanted most out of it - which was a bobblehead of Ruth Bader Ginsburg...so I spent far too much time trying to make that happen. ## One more bonus image From "That Funny Feeling" I had to try and produce "Stunning 8k resolution meditation app" --- Title: New Talk: Shakespeare, Bacon and the NSA Description: A talk about Elizebeth Smith Friedman and the history of cryptography Date: 5/25/2023 Source: boleary.dev Tags: Talks, Security, History I'm so excited to share a different kind of post with you - a recording of a recent talk I gave in Baltimore at BSidesCharm. Enjoy! ## Abstract A code-breaking Quaker poet from Indiana who hunted Nazi spies? All right, that sounds like some sort of comic-book superhero. And what is this superhero's origin story? Oh, they just were plucked from a library in Chicago to the secretive lair of an eccentric billionaire to study a secret code in the writings of Shakespeare that talks of a hidden heir to the English crown? Now it *must* be the latest in a series of multiverse-based superhero movies, right? As always, truth is stranger than fiction, and this is the actual life of Elizebeth Smith Friedman, who had a hand in not only breaking codes during both World Wars but, along with her husband, is credited as a founder of modern cryptology. Elizebeth's extraordinary life can serve as a lesson to all of us about what it takes to change the world. Even a poet can end up founding a science that today backs the entirety of technology and inspire some of the most sophisticated government agencies ever conceived of by humanity. 
In this talk, we'll follow Elizebeth's journey, learn the history of cryptography, and apply those lessons to how we should view technology and technologists today. Check it out! [BSidesCharm 2023: Shakespeare, Bacon and the NSA](https://www.youtube.com/watch?v=example) --- Title: AI - What can it do for me today? Description: Exploring practical uses of AI and LLMs Date: 4/20/2023 Source: boleary.dev Tags: AI, Tech With the rapid advancements in artificial intelligence (AI) and large language models (LLMs), such as OpenAI's GPT-4, the potential applications of this technology in content creation and other industries are staggering. However, finding the ideal use case for LLMs has been a challenging process, and differentiating between artificial general intelligence (AGI) and LLMs can be confusing for some. In this blog, we'll explore what AI and LLMs are good at today, and how you can leverage their capabilities to optimize your workflow. ## Artificial General Intelligence (AGI) vs. Large Language Models (LLM) Artificial general intelligence (AGI) refers to the hypothetical AI that can perform any intellectual task a human can do, encompassing a wide range of cognitive abilities. On the other hand, LLMs are a type of AI model that has been trained on vast amounts of text data to generate human-like responses to prompts. While AGI remains a theoretical concept and is yet to be achieved, LLMs, in their current state, can assist users in generating ideas, summarizing large texts, and brainstorming, with the right guidance from the user. ## AI and LLMs: What They're Good at Today I've personally struggled with how to make LLMs work for me – keeping in mind that I haven't yet spent a lot of time with them – and with getting output that doesn't just sound like "Oh, this was written by ChatGPT."
I even complained about this on Twitter recently, generating a lot of feedback (including some automated bots powered by ChatGPT themselves): > Is it just me or is ChatGPT not actually that useful? I just can't seem to get output from it that is valuable and doesn't "sound" like ChatGPT. Am I wrong? — Brendan O'Leary 🐀 (@olearycrew) What I learned (at least from the humans responding) is that the problem could be partially mine. It turns out that even though I thought I had a good handle on the difference between AGI and LLMs, I don't think I fully appreciated what that meant for how I, as a user, interacted with LLMs. Additionally, while "3.5" to "4.0" may seem like a small jump, I have to say that GPT-4 is massively better at generating original-sounding text than 3.5 was...a fact I didn't really encounter until I ponied up the $20/month for ChatGPT Plus. Furthermore, I've come to understand the importance of focusing on using LLMs for what they are actually good at. [Ali Abdaal](https://www.youtube.com/c/aliabdaal) has a great video on this where he shares some ways he uses AI in his content creation workflow. To generalize, some key areas where AI is good today include: - **Idea Generation:** LLMs can help users come up with new ideas, storylines, or topics for their content. By providing a prompt, users can get a range of creative suggestions from the LLM, helping them overcome writer's block or explore new angles on a topic. - **Summarizing Text:** LLMs are particularly skilled at summarizing large volumes of text, enabling users to extract key insights and data from documents, reports, or research papers quickly and efficiently. - **Drafting and Brainstorming:** AI and LLMs can be used to kick-start the writing process by generating initial drafts or helping users brainstorm ideas, making it easier for them to develop and refine their content. Even given that context, though, there is one more thing that is pretty important.
You may have seen folks on the internet joking about being future "[prompt engineers](https://twitter.com/shauseth/status/1648887952469291008)" ... but that's not that far from the truth! The right prompt can be critical in getting an LLM like ChatGPT to do what you want it to do. ## Crafting the Perfect Prompt A critical factor in getting the most out of LLMs is writing an effective prompt. By providing the LLM with the right context, users can significantly improve the quality of the generated content. To write a good prompt: - **Be specific:** Clearly define the topic, genre, or format you want the LLM to generate. - **Provide context:** Offer background information, examples, or guiding questions to help the LLM understand the desired output. - **Set limits:** Specify constraints such as word count, tone, or style to ensure the content stays on track. There are several resources available online that offer sample prompts, such as OpenAI's prompt library, which provides examples of successful prompts that can help users get the most out of LLMs. Additionally, I've found these tools really useful: - [Trickle Prompts](https://www.trickle.so/prompts-db), a warehouse of prompts folks have had success with - [SecGPT](https://twitter.com/Jhaddix/status/1644096430867644417): a prompt designed by Jason Haddix to aid with security-specific tasks - [Awesome ChatGPT prompts](https://github.com/f/awesome-chatgpt-prompts): A GitHub repo with an ever-growing list of crowdsourced prompts - [TypingMind](https://www.typingmind.com/): A better GPT UI and great prompt resource from one of my favorite indie developers, [Tony Dinh](https://twitter.com/tdinh_me)
(there are dozens of startups building this type of tooling as we speak) - **Text-to-Speech Conversion:** LLMs can turn written text into natural-sounding speech, providing accessibility options for visually impaired users or creating engaging audio content for podcasts, audiobooks, or e-learning. (see [https://speechify.com/](https://speechify.com/)) - **Machine Translation:** AI and LLMs can facilitate real-time translation between languages, enabling seamless communication between people who speak different languages. (if you haven't tried the real-time Microsoft Translate app, I highly recommend you do!) - **Sentiment Analysis:** LLMs can be used to analyze customer feedback or social media content, allowing businesses to gauge customer sentiment and tailor their products, services, or messaging accordingly. In conclusion, AI and LLMs have the potential to revolutionize the way we create content, communicate, and process information. By understanding the strengths and limitations of AI-generated content and LLMs, as well as mastering the art of crafting effective prompts, users can unlock the full potential of these technologies to enhance their workflows and optimize various tasks across industries. As AI and LLMs continue to develop and improve, we can expect even more groundbreaking applications to emerge, transforming our world in ways we have yet to imagine. --- Title: Don't let good get in the way of better Description: The importance of iteration in product development Date: 2/27/2023 Source: boleary.dev Tags: Tech, Leadership Every so often we hear the phrase "Don't let perfect get in the way of good" – but I am not convinced that goes far enough. What we should say is, "Don't let good get in the way of better." We need to challenge the conventional wisdom that urges us to avoid the pursuit of perfection in favor of delivering a merely good product or service. 
While it's true that striving for perfection can frequently be a recipe for disappointment and delay, settling for mediocrity is not the answer. The key is to find the right balance between getting something out the door quickly and continuing to improve it over time through a process of iteration. ## Iteration, Iteration, Iteration Iteration is a critical way to improve things over time. It involves releasing a product or service that is good enough to use, but not perfect, and then refining it based on user feedback, market research, and other information. This approach allows you to create something that is better than what you started with, without getting bogged down in the pursuit of perfection or even of "good enough". This philosophy can be applied to many types of knowledge work, from writing to software development. For example, developers often fall into the trap of trying to make a product perfect before releasing it to the public. This can lead to long delays, missed deadlines, and frustrated users. Instead, developers should focus on getting something out the door quickly and then iterating on it based on user feedback. This approach allows developers to respond to user needs and enhance the product much faster than if they were trying to create a perfect product from the outset. ## Not like this It makes me think of a drawing by Henrik Kniberg, which has become popular and is used in presentations related to agile and lean development. The drawing is a metaphor for product development, using a car as an example, and shows us two ways of building a car. In the 'Not Like This' example, the product gets built entirely before delivery, which can be risky, and the final product is most likely to have design flaws based on incorrect assumptions. You might see this as iteration, as there are steps – but it's not iteration that makes something incrementally more valuable. In contrast, in the 'Like This!'
example, the team focuses on the customer's underlying need, delivers the smallest testable product to get feedback, and learns from it. The key question is to find the cheapest and fastest way to start learning. ## But we need it to be "right" The idea of getting things "right" is a tiny box invented by people who are afraid to be seen as less than. They believe that if they release something that is less than perfect, they will be perceived as incompetent. However, the reality is that perfection is an illusion. There will always be room for improvement, no matter how good something is. By embracing iteration, you can acknowledge this fact and focus on making continuous improvements over time. Creating a culture of iteration is key to making this approach work. This means encouraging a mindset where people are willing to release something that is good enough and then work together to improve it over time. It also means providing the tools and resources necessary to support this process, such as user feedback mechanisms, data analytics tools, and a willingness to experiment with new ideas. One of the benefits of this approach is that it allows you to deliver more value faster. By releasing something early and then iterating on it, you can get a product or service into the hands of users much faster than if you were trying to create something perfect from the outset. This can be particularly important in fast-moving industries where speed is of the essence, such as technology or finance. Another benefit of this approach is that it can help you to avoid costly mistakes. By releasing something early and then iterating on it based on user feedback, you can catch problems and fix them before they become major issues. This can save you time and money in the long run, as well as help you to build a reputation for quality and responsiveness. 
## Shipped is better than perfect "Don't let good get in the way of better" is a powerful reminder of the importance of iteration in improving products and services over time. By focusing on getting something out the door quickly and then iterating on it based on user feedback, you can create something that is better than what you started with, without getting bogged down in the pursuit of perfection. This approach can be applied to many types of knowledge work, and can help you to deliver *good* things even faster than you thought was possible. --- Title: Using PD tools to find my first subdomain takeover Description: How I used ProjectDiscovery tools for security research Date: 2/16/2023 Source: boleary.dev Tags: Security, Tech Since [joining PD in December](/blog/2022/12-building-in-the-open-again/), I've been learning a lot about our community and what it takes to be a modern security engineer. One of the biggest lessons I think I've learned is: > To be a great defender, you have to think like an attacker Gone are the days of strictly separate red and blue teams doing security; in many organizations, they have even blended into "purple" teams. And whether you like the terminology or not, security continues to "shift left" and move into "DevSecOps." While anyone trying to sell you something with the buzzwords might not know it, practitioners are seeing how the worlds continue to combine. In that world, it can be tough to understand how to scan for, communicate about, remediate, verify, and prevent future instances of vulnerabilities. That entire cycle is one many organizations struggle to get right and make efficient. Attackers (and their nicer cousins, bug bounty hunters), on the other hand, don't have the constraints of being a large bureaucratic organization. They can explore the attack surface and poke for vulnerabilities in a way that assumes nothing about how things "should" be and only focuses on how they "are."
## Recon To start figuring out how things really are, you have to do reconnaissance. While it's valuable to have an inventory of assets, subdomains, and other internet-facing properties – you shouldn't assume that represents the entire attack surface area for your organization. You want to look through public data about subdomains with a tool like [subfinder](https://github.com/projectdiscovery/subfinder) and ensure that you know what is out there. Once you have the list of possible targets, you want to know what's really going on with each. That's where tools like [httpx](https://github.com/projectdiscovery/httpx), [naabu](https://github.com/projectdiscovery/naabu), and [other tools](https://github.com/projectdiscovery) can be useful – figure out what the exposed surface area is for each of these assets. And once you know the entire breadth of the surface area, then you need to be scanning for not just things you think could be wrong, but also common and new attack vectors that may come up. What makes [nuclei](https://github.com/projectdiscovery/nuclei) so powerful here is twofold: - First, the community of open source contributors who update the baseline templates constantly to reflect the current state of the threat environment. - And second, the ability for you and your team to write your own templates in YAML – and easily communicate about vulnerabilities as code. Using the nuclei template as the main communication point allows even large enterprises where detection, remediation, and prevention of vulnerability exploits may span departments and geographies to communicate efficiently and effectively about what they need to look for. 
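To make that concrete, here is a minimal sketch of what such a template can look like, following nuclei's DNS template format (the id, name, and matcher below are illustrative examples I've made up for this post, not one of the official community templates):

```yaml
# Hypothetical nuclei template: flag subdomains whose CNAME points at
# Elastic Beanstalk, a common precondition for takeover. A sketch only --
# the official nuclei-templates repo has battle-tested takeover checks.
id: example-elasticbeanstalk-cname

info:
  name: CNAME points to Elastic Beanstalk (possible takeover)
  author: example
  severity: high

dns:
  - name: "{{FQDN}}"
    type: CNAME
    matchers:
      - type: word
        words:
          - "elasticbeanstalk.com"
```

Because the template is just YAML in version control, the detection, the remediation ticket, and the regression check can all reference the same artifact.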
The next hidden power of all these tools we mentioned is the ability to create a meaningful pipeline to do all of this in a repeatable, continuous way, like say:
```bash
subfinder -d boleary.dev | httpx | nuclei -tags mytemplates
```
Creating a pipeline like that and then using the tooling to communicate and check for remediation allows your team – big or small – to think like an attacker and prevent more bad actors from gaining a foothold in your infrastructure. And once I had that in place, I was able to identify a possible subdomain takeover target with nuclei. ## Subdomain takeover overview So, what is a "subdomain takeover attack" anyway? Why are they important? A subdomain takeover attack is a security vulnerability that occurs when a subdomain (e.g., subdomain.example.com) is pointing to a service (such as GitHub Pages, Heroku, etc.) that has been discontinued or deleted by its owner. An attacker can then claim this subdomain and set up their own content, effectively hijacking it. This can allow the attacker to carry out phishing attacks, host malicious content, and steal sensitive information, among other things. The attack is generally achieved by exploiting a misconfiguration in the DNS settings for the subdomain, where the DNS records for the subdomain still point to the discontinued service. The attacker then registers an account with the discontinued service using the same subdomain name, and the service will then treat this account as the new owner of the subdomain. ## Building the Proof of Concept (POC) To build a proof of concept (POC) of a subdomain takeover attack using [AWS Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk/), I followed the general steps below. Depending on the service that the target is using (AWS Elastic Beanstalk, GitHub pages, Heroku, etc.) the specific steps will differ, but the general principles will remain the same. 1.
Find a target subdomain that is using AWS Elastic Beanstalk and has been discontinued or deleted by its owner, as we discussed in the recon section 2. From that target subdomain, note where the CNAME points – if it is to a publicly accessible part of the AWS infrastructure, it will be something like foo.region.elasticbeanstalk.com 3. Register an AWS account or use an existing one 4. Verify that the subdomain is pointing to the AWS Elastic Beanstalk service and is available for you to use 5. Create a new Elastic Beanstalk application in your AWS account and deploy a simple website or application. You can even just start with one of the sample applications. 6. Verify that your new website or application is accessible from the subdomain By executing these steps, you will have successfully demonstrated a subdomain takeover attack using AWS Elastic Beanstalk. You can then report the finding to the organization through their responsible disclosure program. To protect against subdomain takeover attacks, companies should take the following steps when decommissioning applications: - Remove all DNS records for the subdomain - Ensure that the subdomain is no longer pointing to any external services such as AWS Elastic Beanstalk, Heroku, or other Platforms as a Service - Remove the account associated with the subdomain from any external services - Monitor the subdomain for any unauthorized activity By following these steps, companies can reduce the risk of subdomain takeover attacks and protect their sensitive information and intellectual property. Additionally, they should regularly review their DNS records and decommissioned applications to ensure that they are not vulnerable to subdomain takeover attacks. This should include regular scanning not just of known assets, but also built-in tooling to scan for unknown assets. In an ideal world, the deployment of applications to your domain would be made secure through CI/CD and deployment systems.
These systems should be designed to clean up after themselves when destroying an application. However, in reality, teams within large organizations often have different standards, making it crucial for blue, red, and purple teams to implement safeguards against subdomain takeover attacks. To mitigate the risk, regular scanning of the entire exposed infrastructure is a must. *Originally published on the ProjectDiscovery blog: [https://blog.projectdiscovery.io/using-pd-tools-to-find-my-first-subdomain-takeover/](https://blog.projectdiscovery.io/using-pd-tools-to-find-my-first-subdomain-takeover/)* --- Title: Using a katana to migrate my website Description: How I used Katana to crawl and migrate my website to Ghost Date: 1/30/2023 Source: boleary.dev Tags: Tech, Security For years, I coded my website as a statically generated amalgam. While I started from [VuePress](https://vuepress.vuejs.org/) as a building block – which is great, by the way – I also spent far too much time coding fun things by hand. While these things (like hidden keyboard shortcuts) are fun, I recently decided that I also wanted to be able to have folks subscribe to this blog…the one you're reading right now. And given the recent turmoil in various social networks with changes of ownership and rules left and right, I was very hesitant about entrusting this audience to "someone else." At the same time, I wanted to avoid building an email-blog-subscription-from-scratch into my existing website. That was a bridge too far. Enter [Ghost](https://ghost.org/) - an open source blogging platform. I could host that myself (I use [Digital Ocean](https://brendan.fyi/digitalocean) and [Cloudron](https://www.cloudron.io/) to host numerous services already). And while that in some ways was just trading the complexity of coding the site myself, it was at least not greatly increasing the complexity.
Ghost also has a hosted service option…but I wanted to rip the whole band-aid off at once and just host it myself. Feel free to put a reminder in your calendar for 5 years from now to ask me how that worked out…maybe I'll live to regret it.

## 📦 Moving the blog

So, it seemed simple enough – I spin up a Ghost container in my Cloudron dashboard, and I'm off to the races! And I can just specify the subdomain as blog.boleary.dev. That's the easy part. But two big questions came next:

- How do I migrate my existing blog content?
- Once it is migrated, how can I make sure that all the links out there in the world end up at the right place?

It turned out both processes were relatively simple – the Markdown format of my existing blog allowed me to easily transfer each blog post over to Ghost. And then, while hosting my blog on the separate blog. subdomain, I could simply redirect the old addresses there for all of my previous posts.

But once I had seen the simplicity of Ghost, I started seriously considering replacing the whole website with it. That presented a new challenge: how would I make sure that all the links out there in the world still worked if I migrated the entire site? I could try to be as careful as possible, but how could I be certain I didn't miss something?

## ⚔️ Enter Katana

[Katana](https://brendan.fyi/katana) is a modern web crawler built in Go from [ProjectDiscovery](https://brendan.fyi/pd), the folks behind such fantastic hacking tools as [subfinder](https://brendan.fyi/subfinder) and [nuclei](https://brendan.fyi/nuclei). Katana has numerous options that make crawling sites for all of their content simple. You can use `-u example.com` to crawl a single URL, or `-list domains.txt` to input a list of URLs to crawl. Katana, like all PD tools, also supports stdin/stdout, so it can easily be placed into a pipeline.
You can pipe the results of [httpx](https://brendan.fyi/httpx) into Katana, or pipe the URLs Katana finds right into [nuclei](https://brendan.fyi/nuclei), and any other combination you can dream up. But my use case for Katana was a little simpler – and less intent on finding and fixing vulnerabilities…except that I didn't want my site to be vulnerable (hehe) to dead links once I migrated it. To get started, I had Katana crawl all of boleary.dev just to see what it found with:

```bash
katana -u https://boleary.dev
```

That works great for a website that is "static" – that is, one that doesn't use any client-side JavaScript to create and display more content, like a [single-page app (SPA)](https://en.wikipedia.org/wiki/Single-page_application) or other dynamic technology such as Vue or React. But even if you're crawling a site with that kind of tech, you can use a headless browser to fully render the pages and process all the related JavaScript with:

```bash
katana -u https://boleary.dev -headless
```

And Katana has a LOT of other great options, like:

- `-d int` to set the crawling depth (the default is 2)
- `-automatic-form-fill` to fill forms automatically as part of the crawl
- `-show-browser` to show the browser during a crawl
- `-f` to extract basic fields like the URL or directory
- custom regular expressions to extract exactly what you're looking for in the responses

You can learn more about all the features on [GitHub](https://brendan.fyi/katana).

## 🧑‍💻 Putting it into action

Now I had a whole list of URLs linked from all around my website – one-off pages I might have built for a particular demo, and every blog post I ever wrote.
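Comparing two such URL lists is plain shell. Here's a sketch; the helper name and file names are mine, not from the original workflow:

```bash
#!/usr/bin/env bash
# compare_crawls: print URLs present in the first crawl output but not the second.
# Katana's -o files are one URL per line, so coreutils are enough.
compare_crawls() {
  comm -23 <(sort -u "$1") <(sort -u "$2")
}

# Usage with hypothetical crawl outputs:
#   katana -u https://boleary.dev -headless -o old.txt
#   katana -u https://blog.boleary.dev -headless -o new.txt
#   compare_crawls old.txt new.txt   # URLs still to migrate
```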
Then I could start the job of migrating pages to Ghost – which was still at blog.boleary.dev for the moment – and compare that list to the output of:

```bash
katana -u https://blog.boleary.dev -headless
```

Ignoring differences I knew would exist because of the technology each site was built with, I figured I could easily work out which actual "web pages" I was missing and needed to convert.

## 🕵️‍♂️ Comparing the output

As with other PD tools, Katana follows the Unix philosophy of [many sharp tools](https://en.wikipedia.org/wiki/Unix_philosophy). As such, manipulating the output and comparing it with tools like [diff](https://man7.org/linux/man-pages/man1/diff.1.html) – or even side by side in vim [as a buffer](https://til.hashrocket.com/posts/akcktdrn1t-pipe-text-into-vim-from-stdin) – was trivial. Now, I'm not a Linux expert (but I play one on TV), so I did like being able to see the output of both sites as a human. That let me sanity-check that my greps and diffs were working, and that I was comparing apples to apples. That is also simple with Katana, with a command like:

```bash
katana -u https://boleary.dev -headless -o bolearydev.txt
```

Then I could use that output file – which is plain and doesn't have any of the formatting of the "pretty" terminal output – to compare both sites and know that I hadn't missed anything.

## 🎁 Wrapping it all up

Once I was ready to transition, I simply changed my DNS records to point both boleary.dev AND blog.boleary.dev to the Ghost site. I actually kept the old site around at legacy.boleary.dev…because, again, I don't trust myself to have done it all right. In fact, if you're looking to get started with Katana, may I suggest that you try to find something that I "forgot"? With the two commands below, you can get the output of Katana from both sites.
Putting aside things like JS files that differ because the sites run different applications, is there anything I "missed" that you can only find on legacy.boleary.dev?

```bash
katana -u https://boleary.dev -headless -o bolearydev.txt
katana -u https://legacy.boleary.dev -headless -o legacy.txt
```

Reach out to me on Twitter [@olearycrew](https://twitter.com/olearycrew) or on the [ProjectDiscovery discord](https://brendan.fyi/discord) if you find something…there could even be some bounty swag in it for you if I unintentionally missed something 😉

---

Title: Building in the open, again
Description: Joining ProjectDiscovery as Head of Community
Date: 12/20/2022
Source: boleary.dev
Tags: Leadership, Security, Tech

As promised in a few of my previous posts, I wanted to write more not only about my career until now, but about where I'm going next – and why I chose this particular course over any other. I'm thrilled to announce that I'm joining [ProjectDiscovery](https://projectdiscovery.io/#/) as Head of Community. As the makers of projects like [nuclei](https://github.com/projectdiscovery/nuclei), [subfinder](https://github.com/projectdiscovery/subfinder), [naabu](https://github.com/projectdiscovery/naabu), and [many, many more](https://github.com/projectdiscovery), PD already has a remarkable community.

PD is on a mission to bring security to everyone—and their vision for doing that aligns directly with mine: allow everyone to contribute. For too long, security engineering and other critical cybersecurity practices have been dominated by large proprietary software vendors. We've grown complacent, content to spend a lot of money on security tools that might as well be a black box labeled "make me secure please" that we feed money into while hoping we are more protected. All of that limits the speed of innovation in a space that needs speed more than perhaps any other.
This is a pattern we've seen before – and one that I don't believe allows for long-term success in the modern world. In a world where every company is a software company, and all of those organizations rely on more and more software to bring value to their customers, we've already seen that the old-school model of security is failing us.

## Open always wins

While not entirely true in all instances, I have a strong belief that in broad enough problem sets there is no way to "win" with old-school proprietary systems. Systems built in that world are limited by the capital, cognition, and creativity of a very limited set of practitioners—those working inside a given organization. Open source, on the other hand, doesn't have these constraints. It allows entire industries to bring their collective experience and expertise to bear on a problem. That scales in a way you simply cannot reproduce by raising venture capital and trying to hire the "smartest" people you can, and it scales faster than any single company could alone. More eyes don't just make all bugs shallow—they uncover pathways to bugs that you just wouldn't otherwise be able to conceive of, faster than you'd ever get there on your own.

We've seen this in DevOps over the last 10 years or so. It isn't something that was successfully solved inside one company; it's something the industry has had to choose to work on together – in open source projects, DevOps Days, and communities of practice.

## Security isn't a zero-sum game

A chain is only as secure as its weakest link. We've seen this illustrated time and time again with the various supply chain attacks that have come to the forefront of our industry. And this means that you not only have to worry about the security of your products – or your own "perimeter" – but actually have a lot more to worry about outside your direct control.
At the same time, we also face ever-more-sophisticated attacks, including ones perpetrated by nation-state-level actors. Combine that with the explosion of internet-connected devices and the ever-growing demand for always-on access to data, and you have enough to give even the most seasoned CISO a headache.

> It's dangerous to go alone, take this

With this combination of factors, it's not possible to simply put a moat around your part of the world, pull up the drawbridge, and call yourself "secure." We have to work together as an industry to help everyone secure their systems better – because when one of us is insecure, we all are. Like it or not, we're in it together. As Kelsey Hightower would say: we're on the same team, just working for different companies.

## Security should be open source

Given these factors, and so much discussion these days about "open source security" (meaning the security of open source libraries rather than security as an open source concept), PD is working to flip that on its head. We want to bring to bear on this problem the way we've solved every other complex problem in software in the past decades: open source. By building open source tools, in the open, and encouraging and accepting input from the entire community, we have a chance to actually raise all boats with the tide.

And that's why I'm joining PD, and why I'm incredibly humbled to be working not only for PD but for our community. I see that community as my most important stakeholder, and I expect the community – and all of you – to hold me accountable because of that. As I've said, I don't see a path to success—be it for PD as a company or us as an entire industry—without the work of all of us. It's going to take a whole community of hackers, blue teamers, red teamers, CISOs, security engineers, and, yes, even developers to raise the bar for software security…and I'm so excited to be a part of that.
---

Title: What I learned at GitLab that I don't want to forget
Description: Key lessons from 5 years at GitLab about writing things down, empowerment, and transparency
Date: 12/12/2022
Source: boleary.dev
Tags: GitLab, Leadership, Tech

After a little over 5 years, I'm going to be leaving GitLab for my next adventure. It's no surprise to those of you who have been following me that I have absolutely loved my time there. I'm so proud of what we built—and I'm still proud and awed by the remarkable people there. [GitLab's values](https://about.gitlab.com/handbook/values/) are the gold standard for what it means to be a wonderful company—for the team members, for the community, and for the shareholders. I hope that I can take even a small portion of that to my next adventure—but I wanted to be more specific, to both myself and all of you, about the things I think are most important to remember. It's hard to pick a "favorite" value or sub-value, so I've done my best to summarize the life lessons I've learned in three categories:

- Write down everything
- Give and accept: ownership, agency, and responsibility
- Be transparent with a low level of shame

If you want to stay up to date with the rest of my journey, please subscribe to this blog, as it will contain the semi-irregular updates along the way 😁

## Write down everything

How many times, in companies or organizations you work with, do you hear (or say) "I wish we wrote more things down," or "we should make decisions and document them," or "we should do more documentation for X or Y"? When you work closely with other people, it is easy not to write things down - humans are made for person-to-person interactions...which is fantastic. As an extrovert, I love person-to-person interactions. But when it comes to processes and memory, people are very inconsistent...that's just human nature. That's [why we invented writing](https://www.ewanclayton.co.uk/books/p/the-golden-thread-the-story-of-writing) in the first place.
The superpower that is simply [writing things down](https://about.gitlab.com/handbook/values/#write-things-down) is often overlooked or regarded as a task for "later." But if you instead flip that on its head and start by writing things down - collaborative meeting notes in real time, the whys of a decision, the process you are following - you gain superhuman abilities. A [single source of truth](https://about.gitlab.com/handbook/values/#single-source-of-truth) for everything allows you to collaborate more easily not just with other people, but with your past and future selves. And doing so [transparently](https://about.gitlab.com/handbook/values/#transparency) enables people to fit their work into their lives instead of the other way around, and makes remote working actually better than in-person/in-office working. You can also reduce the number of meetings required to get things done.

In addition to these benefits, [favoring asynchronous communication over synchronous communication](https://about.gitlab.com/handbook/values/#bias-towards-asynchronous-communication) means that when you do meet in person - virtually or otherwise - those meetings are MUCH more effective. Instead of spending half the meeting getting folks up to speed, you're able to hit the ground running and make decisions much faster. And when you [say "why," not just "what"](https://about.gitlab.com/handbook/values/#say-why-not-just-what), you reduce the back-and-forth that comes from not understanding a decision.

## Give and accept: Ownership, Agency, and Responsibility

### Starting with an example

I am a strong believer that for a business (especially one in a field like software, which is 99% or more knowledge work) to be successful, you have to hire adults and then treat people like adults. What I mean is that many businesses - especially large ones - think the right way to think of people is as resources.
And resources, in that view, must be managed: you create policies that bend them to your will. This never works. It's an old style of management that has no place in the 21st century. The number one example I always shared from GitLab's massive handbook was the policy on [spending company money](https://about.gitlab.com/handbook/spending-company-money/#). That policy, of course, has the usual items about travel, expenses, and reimbursement. But the first line item of the policy is:

> 1. Spend company money like it is your own money. No, really.

Many companies will have massive policies that try to get you to conform to what they deem "correct" at a perfect point in time. But exceptions arise - when a customer needs you, or when nothing "in policy" is available. Instead of crafting the perfect policy to fit humans into, hire adults and expect them to act like it. If someone can't rein in expenses and stays at the Ritz-Carlton every night, the answer isn't a policy that prevents it; the answer is that person shouldn't work for your company. They aren't a good fit.

### DRIs and decision-making

While that is a small and relatively insignificant example, it is indicative of a larger attitude and expectation that enables you to move fast and make the right decisions. You should expect people to all be [managers of one](https://about.gitlab.com/handbook/values/#managers-of-one) - responsible for their time and actions in getting done the job that needs doing and that you hired them to do. Couple this with a [bias for action over inaction](https://about.gitlab.com/handbook/values/#bias-for-action) and [measuring results, not hours worked](https://about.gitlab.com/handbook/values/#measure-results-not-hours), and you have a recipe for success. When it comes to decision-making, you have to balance the need to decide with the ability to weigh opinions on that decision.
While on the surface this seems hard, two practices make it clear to everyone how to do it. First, every decision needs a clear DRI ([directly responsible individual](https://about.gitlab.com/handbook/people-group/directly-responsible-individuals/)) who is responsible for the final decision. If it is not clear who that person is, the first job should be to clarify that, clearing the way for the decision to be made at all. Decisions can't all be made by committee, so there must be a person who makes the judgment call. Second, enable a culture of [disagree, commit, and disagree](https://about.gitlab.com/handbook/values/#disagree-commit-and-disagree). Everyone should be able to contribute their own perspective and experience to a decision. That may lead to them disagreeing with the path you are taking, but it doesn't change the DRI's ability to make the decision. And once the decision is made, you should all commit to it fully, since you've already agreed on who the DRI is. However, you should still feel comfortable continuing to disagree internally and to discuss whether the decision that was made is the right one.

### Agency & Family and Friends First

When you combine the concept of writing everything down with DRIs for making decisions, you arrive at a powerful place where folks can work asynchronously yet always "in sync." You can have the best of all worlds: collaboration that rivals an office job and flexibility that makes remote work a dream come true. Once you have established these norms, giving [individuals and teams agency](https://about.gitlab.com/handbook/values/#give-agency) over working on the most important things is extremely beneficial. Gone are the days of needing a "command and control" management style; teams are freed to do their best work as they see fit.
This enables things like a non-linear workday, creative thinking, exploration and invention, and extreme focus when needed. Coupling all of those things with an attitude that [family and friends come first, and work second](https://about.gitlab.com/handbook/values/#family-and-friends-first-work-second) builds trust and loyalty in a way that can't be replicated by any other incentive - money, equity, time, benefits, titles, pizza parties. Simply caring about people as individuals, and [not as their work](https://about.gitlab.com/handbook/values/#people-are-not-their-work), allows them to actually do their best work.

## Be transparent with a low level of shame

So, I have to admit, I've saved the "best" for last – and by "best" I mean both the largest force-multiplying superpowers and the hardest ones to really embrace. If you thought that canceling meetings, writing things down, and putting family and friends first was hard…buckle up. One of the key concepts here is that [everything is in draft](https://about.gitlab.com/handbook/values/#everything-is-in-draft). As often as possible, you should ship something rather than wait. At GitLab, one way we talk about this is the concept of the "MVC," or [minimal viable change](https://about.gitlab.com/handbook/values/#move-fast-by-shipping-the-minimal-viable-change). Put out of your mind the minimum viable product, or even the minimum viable feature. Those things are too large and complex to reason about – that's why we end up with all the jokes about estimation and shipping late in software. You can, however, reason about a minimal change—and if it makes the product better (even if you know the real feature needs a lot more work), then SHIP IT.
Shipping [early](https://about.gitlab.com/handbook/values/#dont-wait), [frequently](https://about.gitlab.com/handbook/values/#reduce-cycle-time), and [on a cadence](https://about.gitlab.com/handbook/values/#set-a-due-date) shortens the time to feedback and accelerates getting you where you really need to be. But this concept—iterating on small changes frequently—extends well beyond software. When you're already writing everything down, you can iterate on everything the same way: presentations, policies, procedures, and the company as a whole. To do so, you have to ship all of these things early too, which requires a [low level of shame](https://about.gitlab.com/handbook/values/#low-level-of-shame). Instead of being scared to show your boss something that isn't done or perfect, sharing it early and often allows the final product to be even better, and typically to arrive faster as well. And even then, remove the term I just used—"final product"—from your vocabulary. Everything should be in draft, and thus always able to be improved as things change.

Doing all of this allows you not only to deliver better software faster; it allows you to manage your company in much the same way. This is especially critical in high-growth startups, but there is another concept here that is even more universal: focusing on [increasing decision velocity](https://about.gitlab.com/handbook/values/#why-have-values). Decision velocity is one of the main things we mean when we say "big companies can't operate like a startup." But you can reclaim some of that decision velocity by implementing the techniques we've discussed: having a DRI, treating everything as a draft, writing down decisions. Lastly, you should also recognize the difference between one-way and [two-way door decisions](https://about.gitlab.com/handbook/values/#make-two-way-door-decisions). One-way door decisions are those that aren't easily reversed.
Think of removing a tier of your product or changing your pricing model. While you can, of course, change those things again, you want to be fairly confident in them, as going back will be painful. But not all decisions are like that – many can be easily changed, especially if you're shipping on a consistent cadence. For those two-way door decisions, team members must feel empowered to make them quickly, without waiting on a committee or fearing reprisal if the decision is later shown to be "wrong." If your culture allows for adjustment and treats things as drafts, this will greatly decrease shame and worry and increase decision velocity—which is directly tied to product and company velocity.

## More Reading

If you've gotten to this point, congratulations! And if you're still looking for more, let me recommend a few more things to read on the subject. This also serves as my set of links that I don't want to forget after leaving GitLab - that seemed like the appropriately meta way to end this discussion.

- [GitLab's values page](https://about.gitlab.com/handbook/values/)
- [Spending company money](https://about.gitlab.com/handbook/spending-company-money/)
- [Company cadence](https://about.gitlab.com/company/cadence/)
- [Communication guidelines](https://about.gitlab.com/handbook/communication/)
- [GitLab's pricing model & strategy](https://about.gitlab.com/company/pricing/)
- [Our stewardship of GitLab](https://about.gitlab.com/company/stewardship/)
- [Old-school Product Handbook page](https://web.archive.org/web/20171212010426/https://about.gitlab.com/handbook/product/)

---

Title: Leaving GitLab
Description: Announcing my departure from GitLab after 5 years
Date: 12/7/2022
Source: boleary.dev
Tags: GitLab, Leadership

After a little over 5 years, I'm going to be leaving GitLab for my next adventure.
It's no surprise to those of you who have been following me that I have absolutely loved my time here. I'm so proud of what we built—and I'm still proud and awed by the remarkable people there. [GitLab's values](https://about.gitlab.com/handbook/values/) are the gold standard for what it means to be a wonderful company - for the team members, for the community, and for the shareholders. I hope to take even a small portion of that to my next adventure—which I'll be saying more about soon. But for now—thank you to GitLab team members past and present, and GitLabbers inside and outside of the company, for everything. I'm a GitLabber for life.

Stay tuned—I've got two pieces of content that I'm excited about. One will be about the things I learned at GitLab that I want to remember for the rest of my career and life. And then once I start the new gig – probably around the new year – I would like to share what I'm doing next and why I think it is so important for our industry. You'll be hearing from me soon! To stay up to date, please be sure to subscribe to my blog - trust me, I don't write often enough to flood your inbox.

---

Title: AI-Generated blog on security engineering
Description: An experiment with ChatGPT to write about DevSecOps
Date: 12/2/2022
Source: boleary.dev
Tags: AI, Security, DevOps

*This blog was automatically generated using chat.openai.com and the prompt "Write a blog about security engineering and DevSecOps and why it is the next big thing in software engineering"*

Security engineering and DevSecOps are rapidly becoming the next big thing in software engineering, and for good reason. In today's world, where cyber threats are constantly evolving and becoming more sophisticated, it's more important than ever to prioritize security in the development and deployment of software. Security engineering is the practice of designing, building, and maintaining secure software systems.
This involves identifying potential security threats and vulnerabilities and implementing measures to prevent and mitigate them. Security engineering is a critical component of software development, and it's essential for protecting both the software itself and the users who rely on it.

DevSecOps, on the other hand, is a software development methodology that integrates security into the entire software development life cycle. In traditional software development, security is often treated as an afterthought, something that is only considered once the software has been built. DevSecOps, on the other hand, prioritizes security from the very beginning, incorporating it into every stage of the development process.

The benefits of security engineering and DevSecOps are numerous. For one, they help protect against cyber attacks, which can have catastrophic consequences for both businesses and individuals. By incorporating security into the development process, organizations can reduce the risk of data breaches and other security incidents, protecting both their own assets and the sensitive data of their customers.

In addition, security engineering and DevSecOps can help organizations save time and money. By identifying and addressing security issues early on in the development process, organizations can avoid the costly and time-consuming process of fixing security issues after the fact. This can help organizations get their products to market faster, and can also reduce the likelihood of costly post-deployment security incidents.

Finally, security engineering and DevSecOps can help organizations build trust with their customers. In today's digital world, customers are increasingly concerned about the security of their personal data. By demonstrating a commitment to security, organizations can build trust with their customers and differentiate themselves from competitors.
In short, security engineering and DevSecOps are the next big thing in software engineering because they are essential for protecting against cyber threats, saving time and money, and building trust with customers. As the world becomes increasingly digital, the importance of security in software development will only continue to grow, making security engineering and DevSecOps indispensable tools for software engineers.

---

Title: How to Learn Anything
Description: Two key principles for learning new skills
Date: 11/23/2022
Source: boleary.dev
Tags: Learning, Personal

## How did you learn X? (technology or otherwise)

Plenty of times, I get asked: "How did you learn X?" X could be anything, typically technology related. How did you learn HTML? How did you learn JavaScript? How did you learn Excel? How did you learn PowerPoint? There are two main ways that I've learned every tool I've ever become halfway decent at using:

- Don't be afraid to break something
- Have something that you want to do with the tool

### Don't be afraid

The first is especially true with computers. When it comes to things like power tools, you should be terrified of breaking something or cutting off a critical part. But when it comes to computers and programming and the like, try to keep your fear of failure, or of "breaking" something, as low as possible. Computers are super resilient and don't need a lot of coddling - I'm not very nice to them. While they have interfaces that are increasingly supposed to seem more human, they are still very "dumb" machines (I'm looking at you, Alexa). They are, at their core, very binary (good computer joke, right?). It will work the way you want it to, or it won't. Either way, there's always a way to save your work, copy a new file so you don't break the first one, mash the "ctrl-z" or "undo" buttons, or use a myriad of other tricks (have you tried rebooting?) to resolve any issue that comes up.
And by "always," I mean: "So close to always that you should just act like it's always, because it's most of the time; when it's not, it does suck, but you'll be okay anyway." Much like a bug in your house or a snake in your yard: the computer is (or should be) more afraid of you than you are of it.

### Have a goal

The second point - having a clear goal in mind - is almost more important than the first. It's (almost) impossible to learn Photoshop or JavaScript from a book. That's not to say books aren't valuable. Books can teach you a lot about theory and solidify your understanding of a topic. But without the proper motivation and desire to actually achieve a goal, knowing how to use a tool can only go so far. With most things, especially those around technology, learning by doing works best.

I used to think it was just 'me' that learned this way - that it was just an aspect of my personality. And it probably does have something to do with that, because I have many friends who learn a lot through careful study of a book or the documentation. However, I wouldn't discount the value of learning through doing just because of that. I think there is a part of the knowledge of something practical (how to use a tool) that comes only from the practical application of that tool. And that kind of practical experience - that muscle memory, if you will - can only come from actually doing.

So - don't be afraid. Don't worry! Try things, and try to break them. But most importantly, have a goal in mind when you set out to learn something new. And make sure that goal is important to you - more important even than the goal of learning the tool. That goal will keep you motivated when the going gets tough.

---

Title: What makes a great tech talk?
Description: Five key ingredients for delivering excellent tech presentations Date: 11/18/2022 Source: boleary.dev Tags: Talks, Leadership As someone who spends a reasonable amount of time writing, rehearsing, and giving tech talks, I often find folks new to speaking about tech asking me: how do you do it? How do you know that you will be able to write and give an excellent tech talk? The simple answer is: I don't know. An excellent tech talk isn't definable and solvable like an engineering problem. It's part tech, part passion, part storytelling, and part luck. But in thinking about speakers and talks I've looked up to over the years, I believe I've found a few key ingredients common to every talk I'd consider "great." In reviewing these, I actually came to appreciate what I somewhat already knew: the "tech" part of the talk is probably the least important part of a great tech talk. Yet, as professionals, that is what we get wrapped up in. And it is what worries us when writing a speech. The best demo ever - that's what will save my talk! But in the end, it's not just the tech content that counts, so let's look at the five critical ingredients of a great tech talk: - Story & Narrative - Passion - Connection to the Audience - Balance - Call to Action For each ingredient, I've included a talk that best illustrates that principle and a link to the talk. As an aside, every moderately good talk I've ever written was inspired while listening to or after hearing a great talk from one of these amazing technologists. Remember: [good artists copy, great artists steal](https://www.youtube.com/watch?v=a6jeZ7m0ycw). ## Story & Narrative **[Keynote: Reflections](https://www.youtube.com/watch?v=jiaLsxjBeOQ)**: Kelsey Hightower, KubeCon CloudNativeCon North America 2019 Stories are how humans have always learned and taught each other.
From the earliest stories around campfires - teaching about the dangers of predators or the ways to find food - to the modern world where we are bombarded by stories that we now call "marketing," stories have always played a pivotal role in learning and teaching. And so, without a story, your audience is already lost. You can show some of the most incredible technology, a fantastic demo, and wow people with statistics...but if there is no connection to the real world - to their lives - then it will go in one ear and out the other. And the story doesn't have to be complicated - a story is, after all, just a beginning, a middle, and an end...maybe a conflict or two. But telling the story - showing how the technology or what you are presenting applies to real people in the real world - is critical to getting your point across. It's so critical that some of the best tech talks are only stories. In the "Reflections" keynote at KubeCon CloudNativeCon 2019, Kelsey Hightower - one of the most respected tech speakers, known for his impressive and fun demos - didn't even appear to bring a laptop on stage. Speaking from the heart, Kelsey tells the stories of the early days of Kubernetes and of practicing intentional inclusion. And with those few simple but powerful stories, the audience is captivated and learns more in 15 minutes about what it means to be an inclusive open source community than they would have with hours of slides of fancy graphics and data.
### More from Kelsey - [Kubernetes and the Path to Serverless](https://www.youtube.com/watch?v=oNa3xK2GFKY) - [Kelsey Hightower's Best Live Demo Yet](https://www.youtube.com/watch?v=U6SfRPwTKqo) - [TechExplorers: Kelsey Hightower](https://www.youtube.com/watch?v=9OHNejqXOoo) - [HashiConf 2017 Keynote](https://www.youtube.com/watch?v=v77FFbQwC6E) ## Passion **[Zebras All the Way Down](https://www.youtube.com/watch?v=fE2KDzZaxvE)**: Bryan Cantrill, Uptime 2017 Stories will help you make your talk more personal - both for you and the audience. But that won't carry much weight for long if you don't have passion for the stories and how they apply to the problem and solution you're trying to present. For the audience to stay engaged throughout the talk, they need to care about what you are talking about. And if it isn't clear from your speech, word choice, and energy that you are passionate about your topic, there is no way your audience will come along with you and care about what you have to say. No one will ever accuse Bryan Cantrill of not being passionate. And in his talk "Zebras All the Way Down," he brings that passion both to advocating for one's own healthcare and to understanding, at a deep level, how our systems are impacted by the various layers of software. That includes a lot of software we don't usually think about - everything below the operating system. Turning a personal story about his physician father and his sister who had a rare condition into a way to think about solving hard debugging problems, Bryan brings the audience along. He makes you care about what he has to say...even if what he's talking about is far removed from your daily work. ### More from Bryan - [Debugging Under Fire: Keep your Head when Systems have Lost their Mind](https://www.youtube.com/watch?v=30jNsCVLpAE) - [Corporate Open Source Anti-patterns](https://www.youtube.com/watch?v=Pm8P4oCIY3g) - [Fork Yeah!
The Rise and Development of illumos](https://www.youtube.com/watch?v=-zRN7XLCRhc) ## Connection to the Audience **[Why Open Source Firmware is Important](https://www.youtube.com/watch?v=mUTx61t443A)**: Jessie Frazelle, GOTO 2019 Once you've brought your whole self to the talk - your stories and your passion - you still need to ensure your audience will be engaged and want to hear about those things from you. To do that, you have to build a connection with your audience. The way to do this may seem simple on the surface, but it does actually take some effort. You need to understand at least two things about your audience: who they are and why they showed up to your talk. First - who is your audience? You have to understand who they are - what are their roles professionally? What is their experience like personally? What makes them passionate, and what are their stories? Understanding your audience will help you shape your talk to match their interests with your passions - a surefire method for success. Second - why did they show up to your talk? You've already won a little bit here - they came to the conference or meetup, they saw your abstract and maybe a little bit about you, and chose to come to hear what you had to say. That should give you confidence that the audience wants you to succeed just as much as you do. Think for yourself: have you ever shown up to a tech talk hoping the speaker would bomb? Probably not. So that's half of the battle won already, but you can't take it for granted. They showed up expecting to learn or get something out of your talk. You need to think about what they want to get out of it - and then deliver. A great example is Jessie Frazelle's talk at GOTO Chicago in 2019 on "Open Source Firmware." On the surface, it might not seem like a great example - Jessie even has a disclaimer at the beginning of the talk.
She's "forcing" an audience of software engineers to get a few rings lower than they are comfortable with - down into the UEFI kernel, management engine, and other low-level firmware pieces. But Jessie's passion for this part of the stack - and for showing the audience how it directly applies to the software we all build, many abstraction layers above the firmware - comes through throughout the talk. Jessie convinces the audience to care about the software turtles all the way down. Along the way, she teaches about the stack of code we don't know about...and the rings of trust below "0" and the kernel. ### More from Jessie - [Breaking Containers: Chaos Engineering and Kubernetes](https://www.youtube.com/watch?v=1hhVS4pdrrk) - [Benefits of isolation provided by containers](https://www.youtube.com/watch?v=fKDupfKu_Mw) - [Container Hacks and Fun Images](https://www.youtube.com/watch?v=cYsVvV1aVss) ## Balance **[The Art of Code](https://www.youtube.com/watch?v=6avJHaC3C2U)**: Dylan Beattie, NDC London 2020 Once you've got your audience bought in - and know what you're going to tell them and why - you've got to write the talk. Until now, we haven't focused much on that. There are a lot of methods out there for outlining, writing, and structuring your speech. Far too many, in fact, for me to get into here. And that's not my goal - there isn't a one-size-fits-all method for creating a great tech talk. Much like many technical problems, the answer to "how should I structure this thing" is "it depends." However, the best tech talks I've ever seen strike a balance - a balance of the tech and the stories, learning and entertainment, questions and answers. This balance boils down to balancing the "three S's of a great tech talk": - **Style** - **Substance** - **Stories** Even though one of these S's (stories) repeats one of our earlier items, I think that only serves to express how important it is to a great talk. You must tell a story.
But unless you're giving a keynote at a conference where you're the most respected person in the room (shoutout to Kelsey Hightower), the stories won't always be the whole package. In most tech talks, folks are coming to learn something about technology or how humans interact with technology - so bringing substance is essential. You have to prove you know what you're talking about and that it matters to your audience. But, as we've discussed already, that substance can't be just dry numbers on a chart or some other way to present cold unconnected data. While that is often the business of any serious engineering endeavor, a presentation on stage is more than that. You must also bring style - charisma, humor, fun visuals, and passion - all ways you can make sure style is balanced with the substance of your talk. Sure, some have a lot more style than others - but those with no style are the ones that are quickly forgotten. Perhaps one of the best speakers when it comes to this balance is Dylan Beattie. In "The Art of Code," Dylan takes us through various elements - from maths to retro computing to programming Fizz Buzz as an 80's hair ballad, complete with guitar playing and singing from Dylan. However, Dylan balances the exciting talk style with the stories he wants the audience to hear. And he sprinkles in the substance about how we as technologists have a responsibility to the world...and need to not take ourselves too seriously. 
### More from Dylan - [Fractals, Factories and Fast Food](https://www.youtube.com/watch?v=Vs1DWYrw2Ps) - [Architecture: The Stuff That's Hard to Change](https://www.youtube.com/watch?v=3LtQWxhqjqI) - [Ctrl-Alt-Del: Learning to Love Legacy Code](https://www.youtube.com/watch?v=wPjHuvulivM) ## Call to Action **[Why work doesn't happen at work](https://www.ted.com/talks/jason_fried_why_work_doesn_t_happen_at_work/transcript?language=en)**: Jason Fried, TEDx Midwest This last key - a meaningful call to action - is the one I struggled the most to name. As they say, there are only two hard things in software development: naming things, cache invalidation, and off-by-one errors. To many software engineers, "call to action" may at first sound too "sales and marketing." But that connotation does not diminish the importance of a call to action. Taken in the strictest sense, it is a vital part of your talk. As we discussed, your audience came to your talk hoping for you to succeed. They've now sat through you talking "at" them for 15, 25, 45 minutes or more. So the call to action is not for you - it's not self-serving like a sales pitch. Your call to action at the end of your talk should be your gift to the audience. It should be about them, not about you. It should give them concrete next steps they can take to positively impact what you were talking about in their own lives, organization, or world. This call to action is what you want the audience to remember, and the best tech talks are also the most memorable. And they don't even have to be about tech at all. In Jason Fried's TEDx talk "Why work doesn't happen at work," Jason presents how the ways we've become accustomed to working together are broken...and, more importantly, offers concrete ways to fix them.
And while those calls to action are simple, they also go to the heart of Jason's story and passion: making it less crazy at work by freeing up people to do their best work with time and space. ## What's Next So, given that you've come this far, I hope I've shared my passion for great tech talks with you. Hopefully, that passion is shared, and you've found a new way of thinking about your own talks. So what is my gift to you for having come this far? Well, it's going to sound like an oversimplified call to action, but I'm telling you it's exactly what you should do: **Go give a talk** Your unique stories are valuable. You are passionate about things that others should care more about. There are audiences out there - in meetups, small events, or large conferences - that want to hear what you have to say and will be rooting for you when you get up in front of them. All that's left is to strike a balance between those stories and substance with some of your own personal style to make it exciting and engaging. So go forth, and write that talk. I'm confident you can do it. --- Title: The Era of Platforms Description: The shift toward DevOps platforms and consolidated tooling Date: 10/3/2022 Source: boleary.dev Tags: DevOps, Tech In the 15 or so years we've been on a DevOps journey as an industry, we've seen several waves of tooling and processes. When teams were wholly separated, each brought their tools to bear on the problem. Then, as organizations brought teams together, they began to standardize onto a set of best-in-class tools. And more recently, groups have spent time on "DIY DevOps" - stitching together tools with custom integrations to attempt to create the original synergies promised by the concept of DevOps. But recently, we've seen yet another shift from two related forces. As software has eaten the world, businesses need ideas to move at the speed of software. 
Gone are the days of massive, long waterfall-style projects that gather requirements and then spend effort passing them from one team to another. Companies in every industry are now judged on their ability to iterate and ship software and value to customers quickly. Any time wasted on undifferentiated "DevOps duct tape" has to be eliminated to free up teams to work on value for stakeholders. At the same time, the world of DevOps tooling has seen a shift toward consolidation. While previously best-in-class tools reigned supreme, teams are now looking to DevOps platforms to get more done with less integration, fewer wasted cycles, fewer opaque processes, and less distraction. All of this has led to a new era of platforms - and not just in DevOps. The tools we use to plan, build, deploy, operate, maintain, and measure the software we produce must be more than tightly integrated - they must be one platform to allow for the seamless flow of information and ideas. And this principle also applies to the methods we use to deploy the software. The same shift to platforms has occurred on the infrastructure side - from on-premise, purpose-built hardware to on-demand cloud-based infrastructure, and beyond to homogeneous platforms like Kubernetes that abstract away the hardware entirely. But each of these remains a tool - and as technology leaders, we have to take a step back and ask: what is our actual goal? Why do we consider any of these tools at all? That goal - the ability to ship better, more secure software to our stakeholders with less time between iterations - has to be at the forefront of the decision-making process when evaluating the tools and techniques we utilize to get the job done. We should measure what matters to know that we've made the right decisions. For over a decade, the DevOps Research and Assessment group (DORA) has been looking at correlation and causation of software delivery and operational performance.
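To make "measure what matters" concrete: the four metrics DORA tracks can be derived from surprisingly little data. Here's a minimal Python sketch, assuming a hypothetical list of deployment records - the field names and shapes are my own illustration, not any particular platform's schema:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records (field names are assumptions for illustration).
deploys = [
    {"committed": datetime(2022, 10, 1, 9), "deployed": datetime(2022, 10, 1, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2022, 10, 2, 10), "deployed": datetime(2022, 10, 3, 10),
     "failed": True, "restored": datetime(2022, 10, 3, 12)},
]

# Deployment frequency: deploys per day over the observed window.
window_days = (max(d["deployed"] for d in deploys)
               - min(d["deployed"] for d in deploys)).days or 1
deploy_frequency = len(deploys) / window_days

# Lead time for changes: mean commit-to-deploy duration.
lead_time = sum((d["deployed"] - d["committed"] for d in deploys),
                timedelta()) / len(deploys)

# Change failure rate: share of deploys that degraded the service.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# Mean time to restore: average outage length across failed deploys.
failures = [d for d in deploys if d["failed"]]
mean_time_to_restore = sum((d["restored"] - d["deployed"] for d in failures),
                           timedelta()) / len(failures)
```

The computation itself is trivial - the hard part is that the commit, deploy, and incident data usually live in different tools, which is precisely the argument for a platform that holds all of it in one place.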
They have come to understand that four key metrics - lead time for changes, deployment frequency, change failure rate, and mean time to restore service - are not only correlated with software and operational excellence but are themselves indicators of that excellence. Organizations that perform well in these categories not only outpace the software performance of others but also see positive impacts on organizational goals - such as revenue and employee job satisfaction - that are directly related to these measures. To measure these metrics effectively and efficiently, a new era is needed - the era of platforms. Putting down the "screwdriver in a server" and letting your teams instead focus on software delivery and value creation is how organizations will compete and win in the coming decade. --- Title: Why is this war different from all other wars? Description: Understanding the unique aspects of Russia's invasion of Ukraine Date: 3/1/2022 Source: boleary.dev Tags: Politics *NOTE: Sorry for the interruption in your regularly scheduled tech content. I swear I'll write more about tech when I can.* Okay, I don't pretend to know everything, but I have seen a lot of ["whataboutism"](https://www.merriam-webster.com/words-at-play/whataboutism-origin-meaning) around the Russian invasion of Ukraine and "why are we treating this differently than X?" Now - all war is terrible...full stop. But I do think several things make this unique: 1. **Sovereignty.** There is no question in the international community over the sovereignty of Ukraine and its borders. No unsettled debate; Russia directly invaded a sovereign nation. 2. **Unprovoked.** It was utterly unprovoked; only later did Russia try to come up with reasons for the aggression. 3. **Lack of Public Support.** The reasons Russia does list do not hold water with anyone in the international community. And they don't match with the public sentiment in Ukraine.
I challenge someone to find me FIVE people in all of Kyiv (pop. 2.8 million) who support the aggression. 4. **No existing conflict.** To that point, there was no ongoing conflict in Ukraine. There was no civil war, no breakaway factions in the regions that Russia is bombing. In fact, the Ukrainian people recently went through this, choosing democracy, choosing the EU, choosing the West through public protest and support. 5. **Democracy.** Ukraine has a fully functioning democracy. They elected the government that is in place now. There was no dictator at the helm threatening the general public. Look no further than the massive public support for Zelensky - including from other parties - to prove that. 6. **Premeditated.** This invasion wasn't the result of some massive change in the situation on the ground. Putin has been planning this for a long time - probably since the Ukrainian revolution. He's doing it to suit his needs, not the needs or desires of any of the people he's invading. 7. **Superfluous.** Because Putin miscalculated the world's response, he has endangered Russia more than ever. Even the Soviet Union had more international support than the Russian government has today. Thus, any argument about Russian safety from NATO is null and void - it was about Putin and his power. 8. **War in Europe.** Many are tempted to say it's only covered because it's in Europe. Well, let's say that's true - it is because that is where World Wars have started. Of all the wars we've seen since 1900, the only two to go global were ones on the [European Plain](https://en.wikipedia.org/wiki/European_Plain) because of its strategic importance. So it is a big deal. 9. **Rejected diplomacy.** In many cases before, where a larger nation thought it had cause to invade a smaller one, it first offered diplomatic solutions. And when the smaller country refused, only then did it invade. This conflict flips that on its head.
The smaller nation begged for diplomacy, and Russia rejected it. 10. **Nuclear Option.** In other wars of aggression from large, nuclear-capable nations, we did not doubt that the nation would restrain itself from the unthinkable. Mere days into this war, Putin raised the specter of nuclear weapons against anyone who would dare question him. Even if you disagree with me on points 1-9, number 10 presents a clear and present danger unlike any we've seen since WWII and the Cold War. If for no other reason, that is why Putin must be treated differently. If for no other reason, that is why the entire world has a stake in this war, unlike other current and past conflicts. *For a list of vetted resources you can donate to help Ukrainians suffering from this war, [click here](https://brendan.fyi/helpukraine).* --- Title: Support and defend the constitution against all enemies, foreign and domestic Description: Reflections on January 6th and American democracy Date: 2/3/2022 Source: boleary.dev Tags: Politics *NOTE: Sorry for the interruption in your regularly scheduled tech content. But I had to say something about a statement released in my home country of the United States today* There are only two ways that history will judge the current Republican Party: - As an anti-democratic party that lost and couldn't destroy the American experiment - Or as an anti-democratic party that won and started the end of America. And with it, a slide away from democracy worldwide. There is no other way to look at it. Anyone who looks at the events of January 6th and sees "legitimate political discourse" isn't for American democracy. It's not more complicated than that. January 6th saw a breach of the very halls of the Capitol building while Congress was serving its Constitutional duty. The Constitution is very clear on this: > The President of the Senate shall, in the Presence of the Senate and House of Representatives, open all the Certificates, and the Votes shall then be counted.
The Person having the greatest Number of Votes shall be the President. Even if one thought there was somehow a legitimate political discussion to be had over who the President should be, having it with violence - violence that left five people dead, changed a historic building forever, and saw Congresspeople wearing gas masks - is not a "legitimate" way to conduct that discourse. Thus, anyone who would call that "legitimate political discourse" has given up on participating in the American political system as we know it. Given that we haven't faced this before - a major party that refuses to participate in the democracy the Constitution defines - I don't know how to save it. But I would say that I have a pretty vested interest in protecting it for myself and my children. The best way I can think of right now is to support the people doing the right thing. People who speak the truth. People who agree with the Constitution. People who don't support the violence of that day or try to reframe it as "legitimate political discourse." We all saw with our own eyes what happened on January 6th. No amount of propaganda from the GOP can erase that if we're very clear about what happened. And that involves stating clearly that violence has no place in our political discourse. And in getting to the bottom of what happened that day. That's why I'm thankful and supportive of people like [Liz Cheney](https://twitter.com/RepLizCheney), [Adam Kinzinger](https://twitter.com/AdamKinzinger), [Michael Steele](https://twitter.com/MichaelSteele), and [Larry Hogan](https://twitter.com/LarryHogan). And you should be, too, if you're pro-American democracy. Regardless of what policy differences you may have, I think the only way out of this mess is to support those with whom we disagree but who are clearly on the side of the truth, the Constitution, and America.
--- Title: New Year, New Language Description: Learning Go in public in 2022 Date: 1/9/2022 Source: boleary.dev Tags: Tech, Learning Well, it's been a little while since I've written in this space. Who knows why that could be - say maybe it was the holiday time? Or a busy time at work? Or perhaps it's just the ongoing pandemic that is great at taking away the motivation to be productive in any way in any aspect of life. Whatever the reason, I'd love to change the fact that it's been months since I've written anything. To do that, I've combined a few things that I've wanted to do for a while into one "mega" project. And I'll be documenting the progress of that megaproject here - so if you're new here, then welcome, and this is what we do - take wide left turns now and then. ## My 2022 Goals What I've been thinking about doing this year are things like: - Learning a new programming language (like Go or Rust) - Teaching what I know more - Contributing to GitLab and the GitLab Runner - Helping others do the same - Promoting the idea that "everyone can contribute" to help folks who might want to learn about programming for the first time. When thinking about these goals at first, they seem to be very different - but I think I've found a way to combine them in a way that should be fun and educational. I'm planning on learning Go in public - probably through a combination of Twitch, produced videos, and written content. I'll do as much as I can to make the whole process open to the world - while also hopefully summarizing my learning in a valuable way to folks who come after me. ## The Project I don't have the exact structure or timing down yet. Still, generally, I think it breaks down something like: first, learn the basics of Go (concepts, terminology, etc.). Then build a basic project with Go. And finally, apply what I understand to contribute to a more mature project like the GitLab Runner or GitLab Container Registry. 
That will allow me to illustrate both the concepts required and their application to real-world problems...which has always been the way I've been able to learn best. And so, I held a [Twitter poll](https://twitter.com/olearycrew/status/1479504338494136327) to figure out what those early projects would be. It was a close race, but a Twitter bot was the winner. However, my impatience during the poll led me to buy a domain for another answer - the Slack emoji package manager - so I think I'll build both 🙂. ## How to follow along The exact form of the output of these projects (other than the code) is still up in the air. But I'll for sure be updating you in this space as we go. And I've also created two public GitLab issues so you can follow along with your favorite project - or comment with your ideas! - [Slash Bruno Twitter bot](https://gitlab.com/brendan/brendan/-/issues/39) - [Emoji Package Manager for Slack](https://gitlab.com/brendan/brendan/-/issues/35) I'd love to hear what you think - follow this space or [Follow me on Twitter](https://twitter.com/olearycrew) to see how this all works out! --- Title: Todd Beamer Description: Reflections on heroism and January 6th Date: 1/8/2022 Source: boleary.dev Tags: Politics, Personal ![Todd Beamer](https://blog.boleary.dev/content/images/2022/01/images.jpg) This is Todd Beamer. He and 2,976 other innocent people died on September 11th, 2001. I didn't know any of them, but when I went and saw their names etched in metal at the site of the Twin Towers, I sought out Todd's name. When I touched it, that was the one time, in the entire visit to ground zero, that I cried. It was because of what he represented to me. He was on Flight 93 - the last of the four hijacked flights that day. The passengers had learned from talking to loved ones that this wasn't a typical hostage situation. Planes had flown into the towers and the Pentagon. Their plane was headed for the Capitol Building.
Todd was on the phone with a United Airlines supervisor and told her the passengers would storm the cockpit. The last words she heard him say to the other passengers were, "Let's roll." Shortly after that came an altercation in the cockpit, and Flight 93 crashed into a field in Pennsylvania. Forty people onboard gave their lives to protect Congress and the Capitol Building. 19 years, 3 months, and 26 days later, the President, who had been lying to his supporters for two months about an election, incited them to take back the Capitol building by force. He spoke at a rally that later turned into a riot that threatened the lives of the Vice President and members of Congress. It trashed and desecrated the building. It left 5 people dead. An insurrection against the US government aided and abetted by the President. We have a President who chooses himself over America at every turn - juxtaposed with 40 people who were just on a regular flight and decided in a few moments to choose America over their own lives. It's always been America or Trump. If you don't see that after this week, I pray you will stop, think, and listen to this: the choice is America or Trump. --- Title: Michael Collins vs. the Billionaires Description: Contrasting Apollo astronauts with modern space billionaires Date: 7/20/2021 Source: boleary.dev Tags: Personal I've been trying to read a lot more [books](/books/) ever since the pandemic started. Massive shout out to the Libby app and the Anne Arundel County Public Library because I've actually listened to more books than I've read...but I'll count them anyway. The most recent book that I've been reading - well, listening to - is Michael Collins's book [Carrying the Fire](https://amzn.to/3x6eMZP). Michael Collins is the oft-forgotten third astronaut on the historic Apollo 11 flight to the moon. Collins kept the command module Columbia flying while Neil Armstrong and Buzz Aldrin took the Eagle lunar module down to the moon, becoming the first to set foot on it.
The book is fantastic - written in the 1970s, when the flights were still fresh in Collins's mind. I'd had it on my "to read" list for some time, but hadn't gotten around to putting it at the top of the list until Collins passed away earlier this year (in April 2021). That reignited (get it?) my interest in Carrying the Fire. It just so happened that I was finishing it up during the past month, which saw a renewed interest in space flight. With Jeff Bezos and Sir Richard Branson going to space (or the edge of space, depending on who you ask), the world was once again looking up to space. Once again, a curiosity that has waxed and waned over the years since the "space race" of the 1960s was sparked. Space has always fascinated me - [Apollo 13](https://amzn.to/2UA7WOP) was one of my favorite movies as a kid. In 2019, I built a [whole talk](https://www.youtube.com/watch?v=cRGjw04ZA4M) around the Apollo Guidance Computer and the accomplishments of the great Margaret Hamilton. I'm by no means the first person to draw both comparison and contrast with the days of NASA and the current private space race. However, I do think that reading Collins' book during this time has given me a particular perspective. ## First, the original "Space Race" The 1960s - a time that is oft relived in popular culture (I'm looking at you, [Mad Men](https://en.wikipedia.org/wiki/Mad_Men)). The Cold War between the two superpowers was at the front of many Russian and U.S. minds, and many in the rest of the world were impacted by their clashes. One of the ways of romanticizing this time is talking about the "Space Race." A prelude and parallel track to the ever-increasing military build-up of both superpowers in this time, the space race saw both nations moving quickly to try to dominate space. And by any measure, Russia was far ahead, putting both the first man-made satellite and the first human into orbit. In the U.S., President Kennedy - and then his legacy - demanded action.
A public effort, at a scale not seen since, saw the USA accelerate from having barely entered space with rockets in 1959 to landing on the moon just ten years later. There is no denying the nationalism and political posturing benefits derived from this race. But hearing about it directly from the astronauts who flew the missions (such as Collins) is different. They describe human space exploration as one of the most world-uniting activities one can imagine. All of the astro- and cosmonauts of this era talk about seeing the Earth from above. Up there, no political boundaries are visible. One can better appreciate how small and fragile our planet is in the vastness of our solar system, galaxy, and the universe. They all describe it as a life-changing experience. In the book, Collins talks about actually meeting the "competing" Russian cosmonauts at one point during a press junket. The concerns of both teams turned out to be shared experiences, even as their countries were engaged in divisive political conflicts. In addition, Collins, the other Apollo 11 astronauts, and NASA were careful to be clear about how the mission was not only about the United States but about all humankind. Just a few examples of this include: - Neil Armstrong's first words on the moon: "That's one small step for [a] man, one giant leap for mankind." - The plaque left on the moon reads: "Here men from the planet Earth / First set foot upon the moon / July 1969, A.D. We came in peace for all mankind." - The map on the plaque includes the entire Earth - a circle with the Western Hemisphere and a circle with the Eastern Hemisphere - In addition, a tiny silicon disk (about 1 1/2 inches in diameter) was left, which contained miniaturized messages from 73 different heads of state. NASA at one point also considered leaving a flag from each U.N. member nation...however, this is where the international flavor lost out.
It was only the American flag that would fly at the landing site. Despite that, the greater meaning of the journey - greater than one person or even one country - was clearly apparent and recognized by those closest to the mission. And it was one watched around the world, with an estimated 1/3 of the entire world's population watching the landing live. Debates about how international or national the effort was aside, it was - at the very least - clearly an effort bigger than the astronauts themselves. On the 8th day of their mission, after the moon landing and just one day before they would return to Earth, the three astronauts broadcast messages from the command module. Buzz Aldrin went second and laid it out like this:

> As we've been discussing the events that have taken place in the past two or three days here onboard our spacecraft, we've come to the conclusion that this has been far more than three men on a voyage to the moon; more, still, than the efforts of a government and industry team; more, even, than the efforts of one nation. We feel that this stands as a symbol of the insatiable curiosity of all mankind to explore the unknown.

## Second, the spaceship measuring contest

One can't help but contrast those gracious words - words that will indeed be studied and read by generations in history books - with the words and actions of some of the newest astronauts: billionaires. First, let me state that I agree with the conclusion many have come to - that there isn't enough public interest in space travel to sustain public funding of a meaningful space program at Apollo or even space shuttle levels forever. And I don't want some political struggle - like another Cold War - to change that. We need to make space travel more financially viable, and one of the ways to do that is through public-private partnerships like we've seen with NASA and SpaceX.
But the last few weeks have shown us a very different view of space and the programs to get there. The average psyche - American or otherwise - has shifted from "us" to "us vs. them" or just "me" over the 50 years since the Apollo 11 landing. Through the monetizing and flattening of our social interactions, we've come to view the world and our place in it much differently now. Gone is the global view - the one without borders. If anything, it is quite the opposite. Now we seek to insulate ourselves not only from those outside our borders. We even divide ourselves from those in the same country or city or town as us if they dare have different opinions than our, obviously correct, ones.

It's easy to see this in the actions of Branson and Bezos. Debates over the definition of the "Kármán line," what records they can set themselves, and how to put their own stamp on space using their personal wealth have all become more important than the perspective gained there. It isn't about "us" - it's about them. One needs to look no further than the discussions they've had after returning to see this. Bezos went so far as to say, answering the question, "What was it like, was it everything you imagined":

> The most fun vehicle...I also want to thank every Amazon employee and customer because you guys paid for all of this.

## Third, a decision for all of us

The easy route to take here is to criticize Bezos - and that criticism is not without merit. But I think in some ways, this second space race and the comments around it say more about us as humans than they do about Jeffrey Bezos. I would call that more of a symptom than the root cause. We have a choice to make - do we continue to divide ourselves into smaller and smaller groups? Groups that are absolutely sure of their rightness - mainly because of the "obvious" wrongness of every other group.
Or can we rekindle the group efforts of the Apollo missions - giving glory not just to Neil, Buzz, and Michael but to all who helped accomplish their task? Or better yet, to all of those who share in the common good of the Union they live in and work to make it more perfect. Or can we do one better? Can we all ask for and receive the kind of perspective on the world available, until now, to a very select few? A view of the world that places political divides and borders in their rightful place - small and invisible at the scale of our one small blue dot in a sea of seemingly endless space. I think that perspective could help many of us feel better about our place on Earth and the place our fellow humans have on it as well.

> "When it is dark enough, men see the stars" — Ralph Waldo Emerson

---

Title: Running meetings in read-write mode instead of read-only
Description: How to make virtual meetings more effective with live docs
Date: 3/18/2021
Source: boleary.dev
Tags: Leadership

Early on in the pandemic, there was a mad rush by many to figure out how to do their jobs remotely. Collaboration, communication, and coordination - all typically done together in person - were suddenly impossible or impractical. And when you're used to being able to meet in person, "grab a conference room," or be at a whiteboard with others, this can be a very jarring experience. Many of us were not prepared, having always worked in an office and never experienced remote work before.

*Meetings suck. Learn to make them better*

While this was true across all industries and businesses, one place I got to see it up close was in education. While I've been working at home for years, for a company with [no offices at all](https://about.gitlab.com/company/culture/all-remote/), that is not the standard model for many. Not the least of whom are educators.
My mother, my wife, and my two sisters are all educators - and I saw the struggle it can be to take what is traditionally a very hands-on, in-person endeavor and try to recreate it virtually. While I could - and may, don't tempt me - write an entire blog series on what I learned worked well and what didn't in remote education, I think there is a universal thing that came up relatively soon into the pandemic that has broader implications for educators and everyone else still trying to figure out how to work remotely with their colleagues. That subject? Well, it's everyone's favorite activity: meetings.

## Why meetings suck

Meetings are often the WORST. You don't have to look far on the internet for the universal "this meeting could have been an email" memes - they are everywhere. And for a good reason. When we're together in the same physical location at the same time, holding a meeting can be the lazy way of getting things "done." Want input from other people? Schedule a meeting. Want to reach a consensus on a decision? Schedule a meeting. Want to plan out the next fiscal year? You're probably going to need a few sessions just to schedule the meetings you'll need. That's what has traditionally sucked about meetings: they are low friction in an in-person world, especially for the person calling the meeting - or for the person with the [HiPPO](https://www.inc.com/scott-mautz/the-highest-paid-person-in-meeting-is-most-dangerous-voice-according-to-whartons-adam-grant.html) (highest paid person's opinion), who gets to feel better about that opinion by voicing it in a public forum.

## Making bad meetings worse

Add into that problem the experience of being Suddenly Remote, and the difficulties become compounded. When you're not used to running meetings remotely, those meetings can become worse and less valuable. For instance, it can be even more challenging for folks to participate, have their opinions heard, or get their questions answered.
Worse still, virtual meetings are more tiring than in-person ones. So if your organization just took your typical meeting schedule and put it online - you're in for a world of pain. Not to mention the inability to find a free minute to go to the bathroom. While meetings at the school where my mom works were not back to back, she talked to me early on about how they weren't working. There seemed to be only two modes: either the principal would be presenting something and "talk at" the staff for an hour, OR there would be chaos as people tried to collaborate, talked over one another, and scrambled to find the mute button. She knew that there had to be a better way - and that she had a son (me) who had been working remotely long before the pandemic. She called and asked if I had any ideas...and boy did I!

## Make virtual meetings not suck

At GitLab, we use something called [Live Doc meetings](https://about.gitlab.com/company/culture/all-remote/live-doc-meetings/). Every meeting has a Google Doc agenda attached to the meeting invite. Every. Single. Meeting. I've been known to show up to informal "[coffee chats](https://about.gitlab.com/company/culture/all-remote/informal-communication/#coffee-chats)" with a Google Doc prepared just so that we can run it like an ordinary GitLab meeting.

Having a live, multi-player document that serves as the agenda for a meeting - especially one conducted over Zoom - is incredibly freeing. It allows everyone in the meeting to contribute: you can combine a formal agenda covering the points the leader of the meeting wants to make with participants' ability to contribute, ask questions, and add their own points. Simple bulleted or numbered lists are built into any document program and allow the notes and agendas to flow together. Having it be a multi-player document like Google Docs lets folks add to any point in the agenda. Having it in the calendar invite means you can easily find it before the meeting - or at meeting time.
When the meeting starts, we can all take notes, add details, and link to more context inline with the agenda. Want to ask a question? Just put it in the doc on the following line or after the rest of the agenda, and as the team goes down the list, eventually they'll get to it - be sure to preface it with your name, so we know who to call on. Establishing these kinds of norms when meeting with colleagues will help your whole team feel more productive. Instead of meetings being one-way communication that could have been an email, you'll find they are transformed into the fantastic collaboration sessions you hoped they could be. You'll also foster more inclusion - when everyone contributes to the agenda, it's not only the loudest or highest-paid people who get heard. I'd love for you to give this a try - and you can find a lot more details about live doc meetings [in the GitLab handbook](https://about.gitlab.com/company/culture/all-remote/live-doc-meetings/). Do these tips help? Still have meeting fatigue? Let me know how it works by finding me on [Twitter](https://twitter.com/olearycrew).

---

Title: The Purse Caper: Debugging can be hard
Description: A story about creative debugging in production
Date: 1/27/2021
Source: boleary.dev
Tags: Tech

If there is one thing I've learned about actually doing software work in production environments over my career, it is: debugging is hard. And once you accept that it is hard, you have no option, in my mind, except to fall back on two essential but different tools to address problems. Firstly, the scientific method and the process of elimination. And secondly: as much creativity as you can muster.

*Who would win: a purse or a 'sophisticated' software system?*

## The Story

At a healthcare software company where I served many roles (it was a small operation), I encountered a significant problem with no obvious solution.
Our CRM was homegrown - the company had been around much longer than Salesforce, so years prior, our customer records, billing, and recurring invoices had all been coded by hand. We ended up integrating this monolith into a lot of systems, but it remained the source of truth where customer records started and were stored. It was a relatively simple Windows Forms application with a simple database backend.

### Something's rotten...

Then something bizarre started happening: customers started getting deactivated. Almost every day, we'd come in to find some number of well-known customers deactivated. There didn't seem to be a pattern to which ones. They didn't share obvious attributes like alphabetical order, database ID order, or length of time in the system.

As a homegrown system, there wasn't a significant focus on role-based access control. And while there were a few roles, many of us in leadership and the administrative staff "had" to have administrative access to perform other system tasks. With that access, it could have been any of several users at fault - or the fault of some automated process. Looking at the automated processes we had in place, we couldn't find anywhere that would change or set the value of the "active" flag in the database. In fact, in the end, we realized the only place to change that flag was one checkbox in the UI; nowhere else did any code we could find even touch that flag for a write.

### Culprit identified?

Having little else to go on, we added more discrete auditing to the system to figure out who could be checking this box and deactivating the users. Once that logging was in place, we found an executive assistant whose account was causing the deactivations. She was mortified - she had no idea how this could be happening and wasn't - as far as she knew - intentionally deactivating customers.
There was another odd pattern in the data: the deactivations were all happening between about 4:55 pm and 5:05 pm. We knew something was happening at the end of the workday, but we didn't know what - so I decided a good old-fashioned "stakeout" was in order. Okay, it wasn't actually a stakeout; she knew I was watching...but I asked her to go through her usual end-of-day routine while I observed what was going on.

She started to pack up. She got her lunch bag from the break room and put it on her desk. Then she picked up her purse, put it on the desk, and tidied up the rest of the desk. But the purse wasn't on the desk - it was on the keyboard. Where she had put her lunch bag relative to her purse made the edge of the purse rest on the Enter key on the right-hand side of her keyboard. While she diligently went around her desk, cleaning and sorting for the end of the day, the CRM system was on the screen. One Enter opened a customer record. The next caused the focus to move from the Customer Name field to the Address field. Enter was still depressed, so the form cycled focus through all the tabs. Eventually, it unchecked the "active" checkbox and cycled through to the "Save" button.

### Mystery Solved

Mystery solved! Was the computer making a bunch of noise from the stuck key while this was happening? Yes. Did she notice it? Not really. Should we have sanitized the inputs and locked the system down more to begin with? Sure. Would it have been cheaper to buy a CRM in the long run? Quite possibly.

## Lesson Learned

The real lesson here isn't anything technical. The lesson is that as humans interact with systems - or as systems become complex enough to take actions on their own - they will make mistakes. And while you can't possibly anticipate every one of those mistakes from the onset, when you encounter one, you can work on making sure you have observability at every level so you can see it when it happens.
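The auditing we added boiled down to one simple idea: every change to the "active" flag should record who made it and when. A toy Python sketch of that kind of audit trail (purely illustrative - the real system was a Windows Forms app, and all names here are made up):

```python
from datetime import datetime

audit_log = []  # in the real system, this was a database table

def set_active(customer: dict, active: bool, user: str) -> None:
    """Change the 'active' flag, leaving an audit record behind."""
    audit_log.append({
        "customer": customer["name"],
        "old": customer["active"],
        "new": active,
        "user": user,
        "at": datetime.now(),
    })
    customer["active"] = active

# With this in place, one glance at the log shows whose account
# unchecked the box - and at what time of day.
cust = {"name": "Acme Health", "active": True}
set_active(cust, False, user="assistant@example.com")
```

It's not sophisticated, but it's exactly the kind of observability that turned an unsolvable mystery into a pattern we could stake out.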
And you can apply creative problem solving - what I like to call "alternative methods for success" - to issues that seem particularly perplexing.

---

Title: My epiphany about the protests
Description: Understanding the January 6th protests and Constitutional process
Date: 1/7/2021
Source: boleary.dev
Tags: Politics

I want to be precise. This isn't from a place of hyperbole - it is in line with the interpretation that Republicans and Democrats alike expressed in Congress. Anyone who protested on Wednesday, peaceful or not, and desired Congress to reject the votes from a state where their candidate didn't win wasn't protesting inside our accepted system of government. They were protesting instead for Congress to overthrow the rule of law, the people's will, and most critically, the Constitution itself.

We can have disagreements about President Trump. We can have debates about policy matters. But most of the time, those are within the bounds of our system of government. However, any attempt or protest desiring for Congress to object to and reject the ballots from any state is outside our system of government. In our representative democracy, this is how voting for President works:

1. The people vote for President and Vice President in their state
2. Actually, they are voting for "electors" pledged to those candidates, who cast the state's votes in the Electoral College
3. The electors from each state meet and send those votes to Congress
4. Congress opens the votes, confirms they are authentic, and verifies the winner

In the laws governing elections, Congress can challenge and object to electors from the various states. This has its roots in the 1860s, when some states were in full revolt against the country, and many sent TWO sets of electors to Congress. In that case, there needed to be a procedure to determine which electors were the right and lawful ones.
This objection process is covered by [3 U.S. Code § 15](https://www.law.cornell.edu/uscode/text/3/15). It is dense, but it lays out the whole process of reading the electoral votes. In particular, though, let's read this line:

> no electoral vote or votes from any State which shall have been regularly given by electors whose appointment has been lawfully certified to according to section 6 of this title from which but one return has been received shall be rejected

Congress is not deciding if they agree with the election. They are not deciding if they are okay with the way the state conducted the election. They are not deciding if they like mail-in ballots or not. Heck - the state didn't even have to hold an election! If a state decided to pick its electors by random draw on the nightly news, that would be acceptable, so long as it was "in such Manner as the Legislature" of that state directed (per Article II, Section 1 of the Constitution):

> Each State shall appoint, in such Manner as the Legislature thereof may direct, a Number of Electors, equal to the whole Number of Senators and Representatives to which the State may be entitled in the Congress

If, as the objectors in Congress attempted and the protestors I must assume supported, Congress were to reject the votes of any state - knowing that that state sent those electors - then it would be the end of American self-rule. That, again, is not hyperbole. There would be no need to have an election for President and Vice President. All that would matter is control of Congress. Because if Congress has the unilateral ability to reject votes it doesn't like, then whoever is in power in Congress at the time of the election could pick and choose which states to count and which not to count, and pick their candidate as the winner. This is what is so shocking to conservatives who don't support this effort.
It is the conservative position, the Republican position (both big and little "r"), that the states have the sole power to elect the President and Vice President. This is central to how our system of government works. It's central to how we've decided to give everyone a voice in this country. It's central to what the founders saw for our nation. The place to dispute any issue you have with an election is not in Congress - it's in the states. The proper forum for elected officials to raise concerns about fair elections is through the states and through the courts; that work was already done, in more than 60 cases. But honestly, that doesn't matter here. I don't care in this case whether you agree or disagree with how the states conducted their elections. It may matter for who you supported in this election, but if we decide that one branch of the federal government gets to decide who is President...then your vote won't ever count again. Arguing that Congress can reject Electoral College votes it doesn't like isn't arguing for the sanctity of elections - it's arguing for the abolition of elections.

---

Title: Understanding what we don't understand in AI
Description: Why we need researchers like Timnit Gebru in AI ethics
Date: 12/4/2020
Source: boleary.dev
Tags: AI, Tech

## Why we need more people like Timnit Gebru

"I know enough to be dangerous" is a phrase I often apply to myself across many technical concepts. Not being a [classically trained](/talks/2020-05-16-not-technical-enough/) computer scientist, but a technologist and developer nonetheless, I know that (a) I know more than the 'average' person about technology but (b) there are a lot of people who know more than me. Probably nowhere is this more true than with artificial intelligence and machine learning. There are a LOT of thoughts on the current and future state of this branch of technology and science, but I know that I know just enough to be dangerous.
One way I can be "dangerous" is that I know we're not looking at a Skynet-type scenario anytime soon ([Skynet](https://en.wikipedia.org/wiki/Skynet_(Terminator)) is the AI technology that becomes sentient and seeks to destroy humanity in the [Terminator](https://en.wikipedia.org/wiki/Terminator_(franchise)) movies). I don't need to look any further than my kids asking Alexa what they think are "simple" questions that I know she has no hope of answering. At the same time, I know that the technology that exists today, in the wrong hands, is pretty dangerous. Maybe not in the same big-screen way we see in Terminator, but in some ways worse and more insidious. That's one reason I recommend that everyone I know watch the Netflix documentary [The Social Dilemma](https://www.netflix.com/title/81254224) to understand the impact learning algorithms can have on our social lives on the internet.

But I also know that there is much more on this topic that I don't know, which is why I was so disheartened to hear of [Google's firing of Dr. Timnit Gebru](https://www.wired.com/story/prominent-ai-ethics-researcher-says-google-fired-her/). Timnit has devoted their professional career to researching and understanding how AI can benefit and detract from society. I'm incredibly sorry that Timnit has to deal with this personally. I'm sure they will continue to have to deal with lawyers and Google and fallout from this incident in their personal and professional life. At the same time, I hope that everyone - inside and outside of technology - can learn from this. I'm glad Google has shown us all quite clearly where they stand on one of the most critical issues of our time: how we are going to use AI/ML ethically and responsibly.

I love technology; it's fantastic. It can positively impact humans. Technological advances in medicine, agriculture, and even the industrial revolution have solved major problems that have faced humankind.
I believe technology has objectively made life more comfortable, safer, and better. But the wild application of technology without either (a) an understanding of, or (b) a desire to act on, the ethical consequences has also shown us the other side of this coin time and time again. Climate change, global surveillance, you name it. Much like everything in life, there is potential for good and evil in new technology. And if you're not careful, if you're not intentional, you can cause a lot of harm while also changing things for the better. If we haven't learned that lesson that history has been trying to teach us, we'll keep repeating the same mistakes.

AI/ML is at a critical point in its life - we can choose to let it grow unbounded, and eventually, we KNOW that will have significant negative consequences. And what's even worse? We KNOW those consequences will disproportionately impact folks who have been traditionally marginalized or are in the minority. Those of us who understand even a little about AI/ML comprehend that it has a unique ability to amplify exactly that kind of harm. But that is where my comprehension ends. That is why it would be dangerous for me to go further. And that is EXACTLY why folks like Timnit, who understand the technology way better than I do, and thus WAY better than the average person does, are so critical. If large tech companies - who can put their finger on one side or the other of this scale we've established - choose not to allow dissenting opinions, ethical questioning, and solid scientific research in the room...we've already lost. That's when things get truly dangerous.

---

Title: Three keys to making the right multi-cloud decisions
Description: Visibility, efficiency, and governance for multi-cloud success
Date: 7/13/2020
Source: boleary.dev
Tags: DevOps, Tech

*This blog was inspired by my talk of the same name.*

Recently, there has been a lot of discussion about the possibility of multi- and hybrid-cloud environments.
Many business and technology leaders have been concerned about vendor "lock-in" or an inability to leverage the best features of multiple hyperscale clouds. In regulated industries, there can still be a hesitancy to move "everything" to the cloud, and many want to keep some workloads within their physical data centers.

## Today's Reality

The reality in the enterprise is that multi- and hybrid-cloud is already here. A 2019 State of Cloud report showed 84% of organizations already using multiple clouds - more than four clouds on average among respondents. At the same time, we know that software excellence is the new operational excellence. "Software has eaten the world," and our competitiveness depends on our ability to deliver better products faster. Given those realities, the question isn't whether you will be a multi- or hybrid-cloud company. The question is: are you ready to be better at it than your competition?

If we accept that a multi-cloud strategy is required, we need to systemize our thinking. There are three key enablers to consider: workload portability, the ability to negotiate with suppliers, and the ability to select the best tool for a given job. The cloud promises to remove undifferentiated work from our teams. To realize that potential, we must have a measured approach.

## Workload Portability

The most critical enabler is workload portability. No matter what environment your team is deploying to, we must demand the same level of compliance, testing, and ease of use. Creating a complete DevOps platform that is cloud-agnostic allows developers to create value without overthinking where the code deploys. In considering both the platform your developers will use and how to make the "right" multi-cloud decisions, there are three keys: visibility, efficiency, and governance.
### Visibility

Visibility means having information where it matters most, a trusted single source of truth, and the ability to measure and improve. Whenever considering a multi-tool approach - whether it is a platform for internal use or the external deployment of your applications - visibility is crucial. For a DevOps platform, you want real-time visibility across the entire DevOps lifecycle. For your user-facing products, observability and the ability to correlate production events across providers will be critical for understanding the system.

### Efficiency

Considering efficiency may seem simple at first, but there are multiple facets to weigh. We must always be sure we are efficient for the right group. If a tools team is selecting tools, the bias may be to optimize for that team's own efficiency. But if a selection saves a tools team of 10 people an hour a week each (roughly 43 hours a month) while costing 1,000 developers even two extra minutes a week (roughly 144 hours a month), the net impact on efficiency is negative. Our platform of choice must allow development, QA, Security, and Operations to be part of a single conversation throughout the lifecycle.

### Governance

And lastly, governance of the process is essential regardless of industry. However, it has been shown that working this governance into the day-to-day processes teams use allows them to move quicker than a legacy "end of cycle" process. Embedded automated security, code quality, vulnerability management, and policy enforcement enable our teams to ship code with confidence. Regardless of where the deployment happens, tightly control how code is deployed and eliminate guesswork. Incrementally roll out changes to reduce impact, and ensure user authentication and authorization are enforceable and consistent.

## End Goal

These capabilities will help you operate with confidence across the multi-cloud and hybrid-cloud landscape.

---

Title: What's in a name?
Description: Why software should handle names like O'Leary correctly
Date: 7/8/2020
Source: boleary.dev
Tags: Tech

## My Last Name is valid

My last name is O'Leary. Spelling it over the phone is fun..."That's O, apostrophe, capital L, e a r y." Despite what many developers seem to think, this is an entirely valid last name. I've dealt my whole life with folks too lazy to handle a `'` in a last name, which is one thing. I get it, SQL can be hard. But then many "solve" this problem through form validation that says "please enter a valid last name." Talk about invalidating someone's experience in life.

It's not the biggest deal, but I share it because there are a LOT of people less privileged than me who deal with lots of ways that developers don't include them. Names, for instance, [have a lot of variation](https://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/), but it doesn't end there. [Accessibility](https://brailleworks.com/accessibility-matters/) is critical. Considering how your [terminology](/blog/2020/06-i-was-wrong/) may impact other people is essential. Without diversity of thought on your development teams, you risk repeatedly causing this same problem for people with more pressing concerns. So...don't be lazy.

## I swear it's not about me

*A good friend expresses his empathy for my situation.*

When I shared one of these examples, my friend William expressed what I'd expect from such an awesome person: empathy. And honestly, I appreciate that empathy. But I don't share these cases just for me. I don't even share them just for all my Irish brothers and sisters with O'. And I don't just share them for everyone with special characters in their last name, or no last name, or a name that doesn't fit into the confines of "first name / middle name / last name."
It's mostly just annoying for me...I have the perspective that there are way more challenging things that folks deal with when apps are written without considering people like them. But I bring awareness to it for EXACTLY that reason. If we as a software industry can't get these little, easy things right, how will we ever bring genuine empathy to what we build?! There are a lot of people that don't look, think, or behave as we do. If we want to be successful, we're hoping that a LOT of those people sign up.

**Everyone Can Contribute**

So, not only is it the right thing to do for humanity, but it's also the right thing to do, selfishly, for our own success: try our best to include everyone in our design, everyone in our use cases.

## Examples

If you follow me on [Twitter](https://twitter.com/olearycrew), you'll sometimes see me post when I find a new place my last name doesn't work. I hope to encourage people to consider those who are different from themselves who will one day (hopefully, if you're successful) be users of their app.

## Moving forward

This is an important enough topic for me that I don't want it to be just one blog post. As such, I've also published this post at [boleary.dev/thoughts/name](https://boleary.dev/thoughts/name) and will keep THAT page up to date with more things I find similar to this name issue, to raise awareness among developers about how we can do things better for everyone.

## The Twitter feed

To see those examples, you can search for [#oapostrophe from:@olearycrew](https://twitter.com/search?q=%23oapostrophe+from%3A%40olearycrew&src=typed_query) on Twitter.
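To make the point concrete, here's a small Python sketch (my own illustration, not any specific site's code) of how an overly strict "valid name" pattern rejects real names - and how little validation a name field actually needs:

```python
import re

# A common - and wrong - "valid last name" pattern (hypothetical example):
TOO_STRICT = re.compile(r"^[A-Za-z]+$")

def is_plausible_name(name: str) -> bool:
    """About the only safe check: did the user type anything at all?"""
    return bool(name.strip())

print(bool(TOO_STRICT.match("O'Leary")))   # prints: False - the apostrophe is rejected
print(is_plausible_name("O'Leary"))        # prints: True
```

And the SQL side isn't actually hard: parameterized queries handle the apostrophe safely with no escaping gymnastics at all.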
--- Title: Changing your default branch - all the places Description: A living document on updating default branches everywhere Date: 6/13/2020 Source: boleary.dev Tags: DevOps, Learning You can also read my blog post on [changing the default branch upstream in Git](/blog/2020/06-i-was-wrong/) or [renaming your own default branch in GitLab](/blog/2020/06-git-rename-master-main/). This is a living post where I document all the different places one may need to update the default branch when changing its name. ## Contents - [GitHub](#github) - [GitLab](#gitlab) - [Netlify](#netlify) - [GitLab CI/CD](#gitlab-cicd) ### GitHub [Scott Hanselman](https://twitter.com/shanselman) wrote a great post about [renaming your branch and updating GitHub](https://www.hanselman.com/blog/EasilyRenameYourGitDefaultBranchFromMasterToMain.aspx). ### GitLab See my [previous post](/blog/2020/06-git-rename-master-main/) on updating GitLab. ### Netlify If you deploy to Netlify using their standard git connection, you'll need to tell Netlify you've updated the default branch (what they call the production branch). 1. Log in and find the site you just updated 2. Go to Deploys 3. Click Deploy Settings 4. Under Deploy contexts click Edit settings 5. Change the name of the production branch and click Save **Note:** There may still be a "branch deploy" for master after you click save. You can remove this by just editing the settings one more time and removing master from the "Let me add individual branches" section. ### GitLab CI/CD If you use GitLab CI/CD, it's possible that a simple find/replace of `master` in your `.gitlab-ci.yml` will be sufficient. However, you could make your builds future-proof by using the [environment variables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html) that come built-in. 
For example, you may have code that says: #### Old Code

```yaml
only:
  refs:
    - master
```

Here, we could simply update `master` to the new name, but if we also look at the updated [rules:](https://docs.gitlab.com/ee/ci/yaml/#rules-clauses) functionality, we could refactor this to something like: #### New Code

```yaml
rules:
  - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```

This allows us to use the variables GitLab provides to explicitly say, "run this job only on the default branch." Then, our CI configuration will respect any future changes to the default branch. --- Title: Rename your Git default branch from master to main (with GitLab screenshots) Description: Step-by-step guide to renaming your Git default branch Date: 6/11/2020 Source: boleary.dev Tags: DevOps, GitLab, Learning *This article was inspired by Scott Hanselman's article about the same topic.* You can also read my blog post on [changing the default branch upstream in Git](/blog/2020/06-i-was-wrong/) or [changing your default branch everywhere else](/blog/2020/06-changing-your-default-branch-all-the-places/). Also while you're here, go [follow me on Twitter](https://twitter.com/olearycrew). So let's say you're bought in: master is not a great name for the default branch in your repository. Someone somewhere told you it meant "master" as in "master recording" or "master craftsman," and you just went with it like [I did](/blog/2020/06-i-was-wrong/) for a long time. But now you know, and it would make way more sense for it to be named something like "main" or "default" or "develop" or "release"...but how do you make that change? Making the change in your repository is relatively simple - master isn't really that different from any other branch, it just happens to be the convention we've used for years. 
## 1) Change it locally To change it, you can use the move command in git to rename the entire master branch (and its history) to a new name like so:

```bash
git branch -m master main
```

You can then push it to your remote repository with:

```bash
git push -u origin main
```

Once you do that, you'll see the option to start a merge request, which we're not going to do because we want main to become the default branch, not master. ## 2) Change it on GitLab Now that the main branch exists on our remote - GitLab - we can change the settings there as well. On your project, go to **Settings ➡️ Repository**. The top section there is called "Default Branch" - expand it to see the dropdown where you can select main instead of master. Then click **Save changes** to save it. Now, the next time someone clones your repository, they will automatically be on the main branch. When they make changes, the link to create a merge request will automatically be pointed at the main branch. ## 3) Other settings There are other settings that may or may not apply to your repository. When changing the default branch, you should also check: - Settings ➡️ Repository ➡️ Protected Branches - Your `.gitlab-ci.yml` file (or other CI configuration) for any hardcoded references to master - Any other third-party integrations that may rely on the name of the branch master ## 4) Remove the master branch Once you're sure you've gotten any dependencies updated, you can remove the master branch completely. This will help avoid any confusion around what branch is the "default" branch for developers. To remove the branch you can go to `https://gitlab.com/username/repository/-/branches` or follow these steps: 1. Go to your project and go to **Repository ➡️ Branches** 2. Under Active Branches find master 3. Click the trash can to the right to delete the branch 4. Say "Ok" to the warning about not being able to undo the delete --- Title: I was wrong Description: On the origins of 'master' in Git and why we should change it Date: 6/10/2020 Source: boleary.dev Tags: DevOps, Politics I've been using Git for years and always took for granted that the default branch name was master. After all, it's been that way since [May 2005](https://github.com/git/git/commit/cad88fdf8d1ebafb5d4d1b92eb243ff86bae740b#diff-8117edf99fe3ee201b23c8c157a64c95R41), long before I started using Git. At the same time, I've been very supportive of efforts to [eliminate master/slave metaphors in tech](https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1). I hated using those terms for databases or tools I used in my day-to-day work, but I gave Git a pass. I was privileged enough to hold this seemingly incongruent dichotomy in my head as "okay." When pressed, I was happy to accept that Git didn't have slave branches. Thus, the use of the term master must reference a ["master copy"](https://simplicable.com/new/master-copy), like similar terms in art and audio engineering of a master or [mastering](https://en.wikipedia.org/wiki/Mastering_(audio)). I know that I've offered this explanation to folks before, while still being happy to use master myself and let other people [choose a different default branch](https://coderwall.com/p/wmizjg/set-the-default-branch-in-a-git-repository) if they wanted to. Whenever it came up in the Git community at large, this was the accepted answer (as far as I know). And in general, many tools - GitHub and GitLab, for example - allowed users to change their default branch. It seemed that it was settled. But still, the question always itched at my brain: Is it inclusive to use a term like master even if it's not attached to slavery? Well, even if you think the answer to that question is yes, here's the problem: We're wrong about the intended meaning of master in a Git context. 
My colleague [James Ramsey](https://twitter.com/_jramsay), who may know more about Git than any human on the planet, found [this thread](https://mail.gnome.org/archives/desktop-devel-list/2019-May/msg00066.html), which explains that it is just as likely that master is referencing BitKeeper's master/slave mechanism. BitKeeper uses [master/slave terminology](https://github.com/bitkeeper-scm/bitkeeper/blob/master/doc/HOWTO.ask#L223) when referencing repositories. Since Git was built as an open-source alternative to BitKeeper, I think it is safe to say that the origin of Git's master is rooted in master/slave terminology. Any attempt to now say master refers to a master recording or a master craftsman is a [false etymology](https://en.wikipedia.org/wiki/False_etymology). It's now clear to me that we have to move off of master as the term for the default branch in Git. There is a lot of effort around this downstream of git itself (e.g. setting [defaults in GitLab](https://gitlab.com/gitlab-org/gitlab/-/issues/220906), changing [git-for-windows](https://github.com/git-for-windows/git/issues/2674)), but I'm happy to see and support the fact that the upstream Git maintainers are also going to [seriously consider changing master to something else](https://lore.kernel.org/git/20200609223624.GO6569@camp.crustytoothpaste.net/T/#mf6bab91e35ad94fe6ce272615219f887b9b8b440) in Git 3.0. Also, sooner than a major release, the Git maintainers could add the ability for git init to prompt the user for a default branch name. We need to make this change, and I was wrong to not push for it sooner. I need to do better. I need to make sure that ally is a verb and not a noun in my life. > "If you are neutral in situations of injustice, you have chosen the side of the oppressor." ― Desmond Tutu Visit [blacklivesmatter.carrd.co](https://blacklivesmatter.carrd.co/) to find out what you can do to make ally a verb for yourself. 
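If you want to choose a different default branch for your own new repositories, Git 2.28 and newer make that a one-line setting - a quick sketch (`my-new-repo` is a placeholder name):

```shell
# Set the default branch name for every future `git init`
# (requires Git 2.28 or later)
git config --global init.defaultBranch main

# Or pick the name for a single new repository
git init -b main my-new-repo
```

Either way, new repositories start on main without any renaming afterward.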
### Update 2020-06-15 Additional evidence that the original reference was to master/slave terminology comes from [a 2005 post](https://marc.info/?l=git&m=111968031816936&w=2) from the creator of Git:

> What does that mean? It means that in a mirroring scenario, you can, for each git tree, do:
> (a) On the slave: `cat .git/refs// | sort | uniq > slave-ref-list`
> (b) On the master: `cat .git/refs// | sort | uniq > master-ref-list`
> (c) On the master: ...

### Update 2020-06-18 You can also find out how to change your own default branch name in [GitLab](/blog/2020/06-git-rename-master-main/) and [everywhere else](/blog/2020/06-changing-your-default-branch-all-the-places/) in other blogs I've written. It's a tiny step, but it is one you can control. There is so much more we can do - as I mentioned above in the original post, visit [blacklivesmatter.carrd.co](https://blacklivesmatter.carrd.co/) for other more critical actions you can take to help. --- Title: How I Slack Description: Tips and tricks for managing Slack effectively Date: 5/28/2020 Source: boleary.dev Tags: Leadership I was an early adopter of [Slack](https://slack.com/), and I've been using it for almost half of my career at this point. I've always been a fan of chat as a way to collaborate with friends and colleagues. Before Slack, it was [HipChat](https://en.wikipedia.org/wiki/HipChat) - which I brought into my first role out of college. And in college, even, chat was a big part of my life. It probably dates me, but a little thing called [AOL Instant Messenger](https://en.wikipedia.org/wiki/AIM_(software)) (AIM) was a big part of college life when I was there. Yes, Facebook was still ".edu addresses" only, but I graduated years before Messenger was released. *In the AIM days, you were defined by how witty your away messages were.* Having used Slack for some time now, I've developed several practices that have helped me deal with a sizeable Slack environment. 
I've promised my team at [GitLab](https://about.gitlab.com/) for some time that I'd write about how I manage my life in Slack. I wouldn't dare call these practices "best," but they are what has worked for me, and most of this will be just my opinion...but here goes nothing. If you know your way around Slack somewhat already, you might just want the [tl;dr](#tldr-my-slack-setup) version below. ## Basics When starting with Slack, the interface can be overwhelming. The sheer number of options and mental models needed to get up to speed can be tough to reason about. However, at its core, Slack is nothing if not a chat app. And those chats have different contexts in which they can happen. Just like in an office - where you could have a meeting where everyone contributes at the same time, a smaller meeting with only a few folks, or a 1:1 with your boss or a direct report - Slack has many similar modes of communication. Two basic constructs here are direct messages (DMs) and channels. ### 📺 Channels Channels allow many people to contribute to a conversation or a decision. They are written as # followed by a channel name. Some channels can be public. For instance, many Slack instances have a #general where everyone can see common messages. But channels can also be private, only allowing invited team members to view and participate. In channels, you can "@ mention" other team members to get their attention or even add them to a channel they might not already be in. ### 🗣️ Direct Messages Direct messages are more...well...direct. They might be a 1:1 message with a colleague where you can discuss items in private. Here, "@ mentioning" other team members or channel names won't alert them; it just provides natural links for your conversation partner to follow. Direct messages can also have multiple members - kind of like a group text message. But these have less functionality than private channels do and thus should be avoided. 
## 🆕 A new 'office' While the office analogy can help in getting started, the differences from a typical office environment make Slack shine. Unlike meetings that happen once and then are lost to poorly taken notes, Slack harnesses the power of synchronous AND asynchronous communication at the right times. There may be a flurry of activity where folks can chime in real-time, but that activity is preserved for later. If the people in the discussion - or people who weren't able to be there at the same time - want to remember or learn how the conversation went and how a final decision was made, the entire history is there. This blend of asynchronous and synchronous communication is what makes Slack special. But as with any software tool, the tool alone won't solve all your problems. ## 🤔 Slack Mentality For many people who are new to Slack, the closest analog they may have had before is e-mail (shudders). This mental model can be a substantial limiting factor for one of the most important things for getting real value out of Slack - having the right mentality. E-mail is strictly asynchronous communication. *Actually, my dad used to use e-mail as a kind of chat in meetings that were boring. He would send subject-line-only chat messages to colleagues in the room who could reply with the same. Sorry for putting you on blast, dad.* Also, frequently (at least it feels that way) e-mail requires a response to the sender...or at the least requires that the recipient read the entire message to see if they are CC-ed to give an opinion or just as an "FYI." Slack solves both of these issues. An instant chat with history enables synchronous and asynchronous communication, allowing those not "there" at the time to read the entire conversation. Also, most channels you're in should be seen as "FYI only" - tagging team members allows the sender to make it very clear when expecting a response from the receiver(s). In a sizeable Slack environment, though, this can get overwhelming. 
Instead of feeling freed from "Inbox zero" in your e-mail client, it can feel like you have dozens of inboxes (channels) - many of which aren't yours. It is then that I often tell people to take [some advice](https://slack.com/help/articles/218551977-Reduce-noise-in-Slack) from Marie Kondo. If the channel doesn't spark joy, leave it. People shouldn't see that as rude - it's just you controlling your own efficiency. However, there will always be more information to consume in a large enough Slack instance than you'll ever have time for. So let's look at some features that help me manage that information overload...most of the time. ## 🧑‍🤝‍🧑 Some Slack features you should make friends with Many Slack features help you account for and mitigate the problem of channel sprawl or not knowing where to put your attention. While there is no "silver bullet" to solve the problem of information overload, hopefully playing with these features can help. Remember that it should always be an iterative process: none of these settings are set in stone - so adjust your approach when needed. ### 🙈 Show only unread conversations By default, Slack will show all DMs you've opened and all of your channels. One of the strongest moves is to turn on the ability to show only conversations with unread messages. With unread only enabled, you'll only see channels and conversations you haven't read yet. That can immensely cut down on the amount of scrolling you need to do within the sidebar. The downside is that if you want to send a message, you have to find the channel through the channel picker or a keyboard shortcut. However, there are additional options you can combine with this feature to start to provide a ranking of how critical a conversation is. ### ⭐ Starring conversations One of my favorite features is the ability to [star a conversation](https://slack.com/help/articles/201331016-Star-channels-and-direct-messages) - be they DMs or channels. 
When you first launch Slack, and especially before channel groups, everything is one wide, even mess of channels, DMs, group DMs, etc. Starring allows me to focus on those places I know I'm going to be needed most, or where I want to be most responsive. I also star/unstar viciously. Going to an event next week? That event's channel is starred until the minute I leave, and then it's "see ya later." For DMs, I can have even more precision. Someone who I need to both be responsive to AND want to start conversations with often? Star and in my sidebar. Maybe I don't start many discussions, but if one does start, I want to be all over it (say your boss's boss, for instance). I star that conversation but close it, meaning if they ever DM me, they will be right at the top of my list. ### 🤫 Muting/Unmuting channels Another simple but powerful option is to ["mute" a channel](https://slack.com/help/articles/204411433-Mute-channels-and-direct-messages). When you mute a channel, you're still a member, but it is less prominent in your sidebar than other channels. I use this for channels I want to stay up to date on, but only when I have time. They aren't time-sensitive or directly related to my role, so I can follow up when I have a spare minute. ### ⏰ Slackbot reminders Another Slack feature I couldn't live without is the ability to use [Slackbot to remind me about messages](https://slack.com/help/articles/208423427-Set-a-reminder). Sometimes a message requires action from me, but I can't immediately deal with it. When that happens, I use the context menu on the message to set a reminder for an hour. Once the hour (or day) is up, Slackbot sends me a DM reminding me about the message (and it even has a snooze button 😉). *Have Slackbot remember things for you 🐘* ### 🧵 Threads While the other features work just for you, many features work well...IF your whole team buys into using them. 
One of the most powerful of these is [threads](https://slack.com/help/articles/115000769927-Use-threads-to-organize-discussions-), which allow conversations in channels to be threaded rather than a massive stream of consciousness. Threads help when trying to come back and understand what's going on asynchronously if you were away at the time of the conversation. Rather than having to scroll back in time through dozens of messages, you can read only the handful of thread-starting topics that turned into discussions. You can then separate signal from noise when it comes to which threads you want to dig into and which you can leave for "later" (read: never). In GitLab's Slack, we even have an emoji reaction that is a spool of thread called `:threadplease:` to remind friends to please use threads. ### ALL the keyboard shortcuts In any application I'm in, I always love keyboard shortcuts. The day the industry decided to standardize on `/` or `?` as the shortcut for "show me the OTHER keyboard shortcuts" was my favorite day ever. As a software developer, I judge other software developers on their keyboard shortcuts. Slack wins big here. There are shortcuts for making a new message, finding a conversation by name, finding a message in a discussion based on phrase AND metadata, and much more. I use lots of keyboard shortcuts to navigate through Slack quickly (such as going through messages or unread conversations with alt / option and the arrow keys). To learn more about my specific setup, see the end of the article. To get a feel for keyboard shortcuts yourself, hit `cmd` (or `windows`) + `/` in Slack to see a whole list. Unlike [some situations](https://xkcd.com/1205/), the time you spend now to master a keyboard shortcut will save you countless hours later. ## New features? Yas, please! Those features I described above are tried-and-true in the sense that I've been using them for years. 
But Slack has recently added some features that can - in theory - help even more with information organization. The biggest one here is [channel sections](https://slack.com/help/articles/360043207674-Organize-conversations-with-custom-sections). Of all the new features Slack has added in the last 3 to 5 years, this one changed how I use it the most. Now I can have more than just the "starred" or "unstarred" groups I mention above. The level of fine-grained control I can now have over what grabs my attention is what lets me find the time to do other, more important stuff - like writing a blog post about how to use Slack, for instance. ## TL;DR my slack setup This section is a quick rundown of my setup / how I use Slack. I will also be updating this section as I add ideas. - **Only show unread conversations** everywhere except starred conversations - **Star conversations I need**, including channels and DMs: close DMs that aren't active, but leave them starred. - **Multiple levels of "in" a channel** ends up with prioritized groups in order: - Starred Higher on the list (will typically be "above the fold") - Not muted Higher on the list - Muted Other - Not muted Other - Muted - **Channel Groups & sidebar layout:** - Starred - My Team - Friends (people in other teams I'm friends with) - Rest of Marketing (marketing channels for departments other than mine) - Company & Industry (company-wide and industry-specific channels) - Social (purely social channels, interest groups) - Help me help you (git help, MR help channels where I can help and channels I go to for help) - Customers (customer/account specific channels) - The rest of the channels - DMs - Apps - Group DMs somewhere? Can I delete them forever? 
- **Keyboard shortcuts I love:** - Use `option` (or `alt`) + `shift` + `up/down` arrow constantly to jump to the next unread conversation - `cmd` (or `windows`) + `t` to jump to a channel or DM - `cmd` + `f` to search - `cmd` + `1` or `2` to switch between workspaces - **Search keywords** like `in:#channelname` and `from:@Brendan` when I remember pieces of a message besides content that will help. --- Title: A guide to Git for beginners Description: Git basics explained by a senior developer evangelist Date: 4/13/2020 Source: boleary.dev Tags: DevOps, Learning If you're just learning about software development, or are brand new to open source, it won't be long before you encounter Git, a source code management tool and arguably one of the most successful open source projects ever. We asked senior developer evangelist [Brendan O'Leary](https://twitter.com/olearycrew) to fill in the background on Git's history and successes in honor of its 15th anniversary. ## What is source code management? Before you start in software engineering, it's important to understand the concept of source code management. In its simplest form, software is a bunch of text files, and if I'm using those by myself, it's not a big deal. But when multiple people use multiple files, it gets out of hand, and you need some way to manage it all. Humans can't necessarily manage all of that easily: If you're working with files A and C, and I'm working with C and D, you need a way to bring all the changes we've made together without overwriting anything or causing any conflicts. A computer can more easily figure that out, and in a nutshell, that's what source code management is. ## Why the term Git? There are several different urban legends about this. Linus Torvalds, who wrote it, is a pretty gruff person [with some acknowledged sharp edges](https://www.newyorker.com/science/elements/after-years-of-abusive-e-mails-the-creator-of-linux-steps-aside). 
And so, the story suggests he actually named it after himself, as in the British slang word, ["git"](https://www.merriam-webster.com/dictionary/git). That may be apocryphal. Also, it's a three-letter combo, meaning it's short and didn't conflict with any existing Unix commands. Now, people say it stands for "Global Information Tracker" or "GD Idiot Truckload of..." if you're mad at it. ## But wait. Who is Linus Torvalds? Linus Torvalds is a Finnish-American software engineer who developed the [Linux kernel](https://www.howtogeek.com/howto/31632/what-is-the-linux-kernel-and-what-does-it-do/) and then invented Git 15 years ago. Torvalds has been quoted as saying he's more "famous" for Linux but that over time, Git will [end up being more important](https://www.techrepublic.com/article/linus-torvalds-git-proved-i-could-be-more-than-a-one-hit-wonder/). Torvalds is also widely seen as the godfather of the open source movement. ## Can you explain the rationale behind the cult following of open source? Torvalds himself has a cult following, and open source has been around for a very long time, long before Git was invented. But open source wasn't widely accepted and, in some cases, companies were actively hostile to the concept. Torvalds wanted to create a project everyone could contribute to, and Git was born (literally developed by Torvalds over a weekend 15 years ago). Git solved a problem that was common across all types of software development, and it not only welcomed contributions from users, but it also needed contributions to grow. The idea of a practical solution everyone could contribute to created a kind of zeitgeist, and today open source is widely embraced as a result. ## How could I explain Git to my neighbor? We tend to discuss Git as a tree, but I really don't know if that is the best analogy for it. It's a tree in the sense that it makes branches, but then those branches come back together, and that doesn't happen in a tree. 
Instead, I'd say Git is like a time machine. The whole history of everything that happened on any branch in alternate timelines is brought back together magically. Nothing is lost or changed, and you can look backward and move forward. It's magic. ## What's the most important thing I should know about Git? That's easy: You can't break it! Because it's a magical time machine, you really can't do anything to it that can't be fixed. So, I always tell beginners to relax and play around with their copy. No matter how many mistakes you make, you can't break it in a way that's not fixable. Have no fear. *This was originally published on the [GitLab Blog](https://about.gitlab.com/blog/2020/04/13/beginner-git-guide/)* --- Title: GitLab CI/CD for Google Firebase Description: How to deploy Firebase projects with GitLab CI/CD Date: 3/16/2020 Source: boleary.dev Tags: GitLab, DevOps ## What is Firebase? Building mobile apps can be painful - especially when it comes to finding a way to provide all the tooling needed to make the application feasible without becoming an expert in many different disciplines. [Firebase from Google](https://firebase.google.com/) aims to take away that burden by providing an app deployment platform and a BaaS - Backend-as-a-Service. While the offerings can vary greatly, most BaaS providers include a database, object storage, push notifications, and some sort of hosting package. Firebase goes beyond this, also providing built-in user authentication as well as serverless functions, telemetry, and Google tools for growth. Those tools are very appealing to mobile and even web-app developers, and Firebase has been successful in that market, seeing usage from The New York Times, Lyft, and Duolingo, just to name a few. But even with all of the fantastic BaaS tools Firebase brings to bear on a project, it is critical to have source code management and CI/CD tools to match. 
As Firebase configuration for important settings such as database security, serverless functions, and hosting can all be stored "as-code" inside your application's repository, GitLab paired with Firebase can make for a powerful duo. ## Our app Our application will be a relatively simple link shortener for use with the domain [labwork.dev](https://labwork.dev/). To build a link shortener, we'll need the ability to log users in, a database for storing the links, and a way to redirect folks coming to the short links to the longer website. Firebase comes with these items packaged together - which should make it relatively painless to get stood up (famous last words, right?). I intend to cover the application in more detail in the future, or if you want to jump to the end, you can find the [completed project here](https://gitlab.com/brendan-demo/labwork/homepage/). For now, I wanted to at least introduce the architecture plan. I'll use [Vue.js](https://vuejs.org/) for the frontend. That will be a web application that lets users log in using Firebase Authentication. Once logged in, users will have access to a form that allows them to create new short URLs. That form will call a Firebase Function that checks to see if the requested shortcode already exists (or creates a random hash if one isn't specified). If the shortcode is unique, the function adds the shortcode and longer URL to the urls collection in Firestore and returns okay. Once the shortcode is in the database, I'll use another cloud function to retrieve the long URL associated with it. Firebase Hosting has a great feature that allows you to redirect traffic based on a pattern to a specified function, and I'll use this so that anything that comes to `/go/{shortcode}` gets magically redirected to the correct long URL. ## Add Firebase to the project Once we have this architecture finalized, and have built the skeleton of the project and are ready to start deploying and testing, it's time to add Firebase to our project. 
Firebase provides a [very helpful CLI tool](https://github.com/firebase/firebase-tools), and we'll use it to get started. The first command, `firebase init`, starts the project initialization process. From there, you can select which services you want to use with this project. You'll also be able to decide whether to create a new Firebase project or use one you previously created in the [Firebase console](https://console.firebase.google.com/). You can also select where to store the configuration files. I'll add a folder called `firebase-config` to store all of these files. Now you can source control all changes to your Firebase architecture - from indexes to security rules - all in the same repository as your project. You can see all the changes required to add Firebase to the project [in this merge request](https://gitlab.com/brendan-demo/labwork/homepage/-/merge_requests/1). ## Deploy project to Firebase Now that Firebase is installed in our project folder and configured, we're ready to deploy for the first time. In order to deploy the Vue.js portion of the project, we first need to build it to production HTML, CSS, and JavaScript. So before deployment, run the `yarn build` command. This will output the build to the `dist` folder by default, and I've configured Firebase to recognize that directory as the hosting directory in `firebase.json`. Once the project is built, running a simple `firebase deploy` will deploy ALL the features of the project to Firebase: the security rules and indexes for Firestore, the Firebase Functions, and the Vue.js project to Firebase Hosting. If desired, we can also choose to deploy just a particular part of the project with the `--only` flag. For example, to only deploy a new version of the functions, we can say:

```bash
firebase deploy --only functions
```

This is a feature that we'll combine with GitLab CI/CD in the next step to make our deployments as efficient as possible. 
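Tying the hosting pieces together, the relevant slice of `firebase.json` looks something like this (a sketch: `dist` matches the Vue build output above, and the function name `go` is a placeholder for whatever the redirect function is actually called):

```json
{
  "hosting": {
    "public": "dist",
    "rewrites": [
      { "source": "/go/**", "function": "go" }
    ]
  }
}
```

The `rewrites` entry is what routes `/go/{shortcode}` requests to the cloud function instead of serving a static file.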
## Automate deployments with GitLab CI/CD

Now that we have the project deploying, we can automate the deploy process so that we don't have to be at our computer, authenticated to Firebase, in order to deploy new changes. The steps to automate the deploy are relatively painless: (1) acquire a Firebase token to use during deployment, (2) set up the `.gitlab-ci.yml` file to install the Firebase CLI before running any other steps, and (3) issue the deployment commands for each part of the infrastructure depending on what changed in a particular commit to the main branch.

First, we need a token so that GitLab CI/CD can authenticate to Firebase and perform the deploy. To get it, we can run `firebase login:ci` from the same place we were deploying the application previously. Open the URL the command provides in your browser; that will open a Google authentication page. Log in with your Google account and click Allow, then return to the terminal and you'll see the authentication token.

Once you've successfully authenticated and obtained the token, go to your project on GitLab and go to Settings -> CI/CD -> Variables. Here's where we'll add the token as an environment variable to be used in our deployment jobs. The key is `FIREBASE_TOKEN` and the value is the token that was printed to your terminal. I've made mine both a [protected](https://docs.gitlab.com/ee/ci/variables/#protected-environment-variables) and [masked](https://docs.gitlab.com/ee/ci/variables/#masked-variables) variable. That means the variable will only be exposed to protected branches, and if it's accidentally echoed in job output, GitLab will keep it from leaking there. Now we can start on the configuration for our `.gitlab-ci.yml`.
At the top of the file, I'm going to set the default image to the current Node Alpine image from Docker Hub:

```yaml
image: node:12.13.0-alpine
```

Next, I'll create a `before_script` that installs the Firebase CLI before running any jobs in the file. In the future, I could bundle that CLI into my own custom Docker image to avoid doing this every time, but for now I'll go with the boring solution.

```yaml
before_script:
  - npm i -g firebase-tools
```

For the deploy jobs, I want to create a separate job for each part of the infrastructure: Firestore, Functions, and the Vue app on Firebase Hosting. To do this, I'm going to use the [only: changes](https://docs.gitlab.com/ee/ci/yaml/#onlyexcept-advanced) feature so that each part of the infrastructure is deployed only when changes impacting it have been merged to master. For example, we'll only deploy the Firebase Functions when something changes in the `/functions` directory on the master branch:

```yaml
deploy-functions:
  stage: deploy
  script:
    - cd functions
    - npm install
    - cd ..
    - firebase deploy --only functions --token $FIREBASE_TOKEN
  only:
    refs:
      - master
    changes:
      - functions/**/*
```

We'll repeat this same pattern for both Firestore and the Hosting project, adding the `yarn build` step before deploying hosting. Once that's completed, every time a merge request is accepted, GitLab CI/CD will automatically deploy the changes into our live production application. You can view the [completed .gitlab-ci.yml here](https://gitlab.com/brendan-demo/labwork/homepage/-/blob/master/.gitlab-ci.yml), or check out the link shortener for yourself (and try to [Rick Roll](https://labwork.dev/go/30201a) your friends at [labwork.dev](https://labwork.dev/)).

---
Title: My next move at GitLab
Description: Becoming a Developer Evangelist at GitLab
Date: 3/11/2020
Source: boleary.dev
Tags: GitLab, Leadership

I'm coming up on my third year at GitLab.
Or as some people like to put it - I've been at GitLab for [1,049+ hires](https://about.gitlab.com/company/team/). GitLab has been growing rapidly, and I've been lucky enough to be there for [three](https://about.gitlab.com/blog/2017/10/09/gitlab-raises-20-million-to-complete-devops/) [separate](https://about.gitlab.com/blog/2018/09/19/announcing-100m-series-d-funding/) [funding rounds](https://about.gitlab.com/blog/2019/09/17/gitlab-series-e-funding/). All of that growth has been incredible to watch and be a part of. Even before I worked at GitLab, I was a strong proponent of GitLab and the concept of bringing CI and CD right next to the code. Having worked as both an Engineering Manager and a Product Manager in the past, I've lived the pain of a team having to spend more time fixing the tools than actually coding solutions to customer problems.

## Always an Evangelist

Given that background, and previous lives spent wrestling with various DevOps tool chains, I've always, always been an evangelist for GitLab and GitLab CI/CD. Ever since I wrote my first pipeline for my side project - [listMe, a Slack app for managing lists](https://listme.chat/) - I knew GitLab CI/CD was something special. From there, I went on to evangelize GitLab in my role at the time as the director of DevOps at a small federal government contractor. Their environments were vastly different - production was bare metal, the customer's test environment was one virtualization platform, and our DEV environment yet another. Bringing GitLab CI/CD to that problem allowed me to control the variables we could, and eliminate mistakes by reducing complexity. In late 2017, I was provided the opportunity to get "on the bus" at GitLab. After a friend was recruited to GitLab, they recommended I take a look. "Oh...it's a company, not just a tool," I thought.
And later that month, on a rainy beach vacation, I had enough time to read and fall in love with the [GitLab handbook](https://about.gitlab.com/handbook) and [values](https://about.gitlab.com/handbook/values/). From that day on, I was an evangelist for GitLab, the company, too. Much to the chagrin of my wife and family that week.

At GitLab, I was able to help start our professional services group, run product management for a time for the CI product I loved, and contribute in plenty of other ways. True to our Everyone Can Contribute mission, I was able to pitch in on the product itself and across various functions, and to help onboard lots of new team members. That whole time, I have been fortunate enough to be able to talk, write, and tweet about GitLab, the company, and the product to anyone who would listen. So I'm excited that now, as we scale, I've been given the opportunity to become a Developer Evangelist full time. My new boss, [Priyanka Sharma](https://twitter.com/pritianka), is building a fantastic team. We [plan to focus](https://about.gitlab.com/handbook/marketing/technical-evangelism/) on doubling down on our open source roots, evangelizing DevOps and simplified tool chains, and ensuring that we can make GitLab as amazing for everyone as it was for me in those early days. I can't wait to see where we'll go next, and I know that I will be a [GitLabber](https://about.gitlab.com/handbook/communication/#top-misused-terms) for the rest of my life. For now, I'm excited that sharing my passion for DevOps tooling, #AllRemote, and the [GitLab values](https://about.gitlab.com/handbook/values/) is now officially my full-time job!

## Follow Along

If you'd like to follow what my team and I are doing - we're, of course, [public by default](https://about.gitlab.com/handbook/values/#public-by-default).
I would love it if folks took a look at my [2020 Technical Evangelism](https://gitlab.com/brendan/2020-dev-evangelism) plan, which outlines at a high level what I plan to focus on this year. You can also check out our [issue board here](https://gitlab.com/groups/gitlab-com/-/boards/1565342?scope=all&utf8=%E2%9C%93&state=opened&label_name%5B%5D=tech-evangelism) to see all the tactical things we have in the pipeline. And of course, [follow me on Twitter](https://twitter.com/olearycrew) to see the day-to-day work it takes for me to realize this dream and evangelize GitLab.

---
Title: Getting [meta] with GitLab CI/CD: building build images
Description: How to build Docker images as part of your CI/CD pipeline
Date: 8/19/2019
Source: boleary.dev
Tags: GitLab, DevOps

An alternative title for this post could have been: I heard you liked Docker, so I put [dind](https://hub.docker.com/_/docker/).

## Getting Started

It should be clear by now that I love building stuff with GitLab CI/CD. From [DNS](/blog/2019/01-ci-cd-all-the-things-pi-hole/) to [breakfast](/blog/2018/06-auto-breakfast/) is a pretty wide range. However, past those "fun" use cases, I also like to share some ~~best~~ practices I have acquired through my years of using GitLab CI/CD, for software and non-software projects alike. I crossed out "best" above because I don't really like the term "best practices." It implies that there is only one right answer to a given question - which is the opposite of the point of computer science. Sure, there are better and worse ways to do something - but much like many things in life, you have to find what works for you. "[The best camera is the one you have with you](https://www.amazon.com/Best-Camera-One-Thats-You/dp/0321684788)" comes to mind when building CI/CD for projects. Something that works is better than something that's pretty.
But enough of my digression; let's get to the practice I wanted to share in this post: building build images as part of the build process. Yes - it is precisely as meta as it sounds.

## Why?

Often when building a particular project, you may have several unique build dependencies. In many languages, package managers solve the majority, if not all, of these dependencies - at least for build time (think [npm](https://www.npmjs.com/), [RubyGems](https://rubygems.org/), [Maven](https://maven.apache.org/what-is-maven.html)). However, when we are building and deploying (CI/CD, let's remember) from a machine that is not our own, that may not be enough. There may be a few dependencies we need from elsewhere. The language runtimes themselves are one such dependency - to build Java, I'm going to need the JDK or JRE. To build Node, I'll need...well, Node, etc. In a Docker-based environment, those languages and dependencies typically have an official image on Docker Hub ([JRE from Oracle](https://hub.docker.com/_/oracle-serverjre-8) or [Node from Node.js](https://hub.docker.com/_/node), for instance).

Assume, however, that I may need a few other things not included in either those official Docker images or the package manager I'm using. For instance, maybe I need a CLI tool for deployment ([AWS](https://aws.amazon.com/cli/), [Heroku](https://devcenter.heroku.com/articles/heroku-cli), [Firebase](https://firebase.google.com/docs/cli), etc.). We also might need a testing framework or tool like [Selenium](https://www.seleniumhq.org/) or [headless Chrome](https://developers.google.com/web/updates/2017/04/headless-chrome). Or there may be other tools for packaging, testing, or deployment that I need. Sometimes there is a Docker Hub image for these combinations - or some of them - but not always a maintained version. One easy solution could be to just run the installation of the tools before every job that needs them.
This can even be "automated" using something like the [before_script](https://docs.gitlab.com/ee/ci/yaml/#before_script-and-after_script) syntax. However, this adds time to our pipeline and seems inefficient - is there a better way?

## Enter the GitLab Docker registry

Since GitLab is a single application for the entire DevOps lifecycle, it ships out of the box with a built-in [Docker registry](https://docs.gitlab.com/ee/user/project/container_registry.html). This can be a useful tool when deploying code in a containerized environment: we can build our application into a container and send it off into Kubernetes or some other Docker orchestrator. However, I also see this registry as an opportunity to save time in my pipeline (and save round trips to Docker Hub and back every time!). For builds that require some of these extra dependencies, I like to build a "build" Docker image. That way, I have an image with all of those dependencies baked right in. Then, as part of my pipeline, I can build the image at the start (either every time or only when changes are made), and the rest of the pipeline can consume that image as its base image.

## Putting it in practice

For example, let's see what it looks like to build a simple Docker image to use with deploying to [Google Firebase](https://firebase.google.com/). Firebase is a "backend as a service" tool that provides a database, authentication, and other services across platforms (web, iOS, and Android). It also includes web hosting and several other items that can be deployed through [a CLI](https://firebase.google.com/docs/cli). This makes getting started straightforward: you can deploy the whole stack with `firebase deploy`, or deploy just a part (like serverless functions) with a command like `firebase deploy --only functions`. Making this work in a CI/CD world requires a few extra steps, though. We'll need a Node Docker image that has the Firebase CLI in it, so let's make a simple Dockerfile to do that.
Putting this Dockerfile in `.meta/Dockerfile`:

```dockerfile
FROM node:10

RUN npm install -g firebase-tools
```

After that, I'll add a job to the front of my pipeline, at the top of my `.gitlab-ci.yml`:

```yaml
meta-build-image:
  image: docker:stable
  services:
    - docker:dind
  stage: prepare
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd .meta
    - docker build -t $CI_REGISTRY/group/project/buildimage:latest .
    - docker push $CI_REGISTRY/group/project/buildimage:latest
  only:
    refs:
      - master
    changes:
      - .meta/Dockerfile
```

Let's break down that job:

- We use the `docker:stable` image and a service of `docker:dind`.
- The stage is my first stage, called `prepare`.
- In the script, we log in to the GitLab registry with the built-in variables, then build and push the image. For more details, see the [GitLab documentation for building Docker images](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html).
- We only run this on master, and only when `.meta/Dockerfile` changes. This makes sure we are specific about when we change the build image. We could also tag with the commit hash or use other methods here to make each image traceable to a specific change.

Now, in further jobs down the pipeline, I can use the latest build of the Docker image like this:

```yaml
firestore:
  image: registry.gitlab.com/group/project/buildimage
  stage: deploy
  script:
    - firebase deploy --only firestore
  only:
    changes:
      - .firebase-config/firestore.rules
      - .firebase-config/firestore.indexes.json
```

In this job, we only run if something about the [Firestore](https://firebase.google.com/docs/firestore) (the database from Firebase) configuration changes. And when it does, we run the Firestore deploy command in CI. I also added a token for deploy as a [GitLab CI/CD variable](https://docs.gitlab.com/ee/ci/variables/), based on the Firebase documentation for [using Firebase with CI](https://firebase.google.com/docs/cli#admin-commands).
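As noted above, tagging the build image with the commit hash instead of `latest` makes each build traceable to the change that produced it. A sketch of that variant, using GitLab's predefined `CI_COMMIT_SHORT_SHA` variable (the image path is illustrative, as above):

```yaml
meta-build-image:
  image: docker:stable
  services:
    - docker:dind
  stage: prepare
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - cd .meta
    # Tag with the short commit SHA so each image is traceable to a commit
    - docker build -t $CI_REGISTRY/group/project/buildimage:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY/group/project/buildimage:$CI_COMMIT_SHORT_SHA
```

In practice you would also push a `latest` (or otherwise record the tag) so that pipelines which don't rebuild the image still know which tag to pull.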
## Summary

In the end, this approach helps speed up pipelines by ensuring that you have a custom-built build image that you control. You don't have to rely on unstable or unmaintained Docker Hub images, or even have a Docker Hub account yourself, to get started. To learn more about GitLab CI/CD, you can [read the GitLab website](https://about.gitlab.com/product/continuous-integration/) or the [CI/CD docs](https://docs.gitlab.com/ee/ci/introduction/). Also, there's a lot more to learn about the [GitLab docker registry](https://docs.gitlab.com/ee/user/project/container_registry.html).

---
Title: CI/CD all the things: Pi-hole
Description: Using GitLab CI/CD to manage Pi-hole configuration
Date: 1/14/2019
Source: boleary.dev
Tags: GitLab, DevOps

I've spent some time exploring how to apply the concepts of CI/CD to everyday life. In [my first post](/blog/2018/06-auto-breakfast/), I went all in - making breakfast with GitLab CI/CD as the orchestrator. In this post, I'll tell you about a home project I undertook over the holidays: getting a [Pi-hole](https://pi-hole.net/) up and running at my house. (mmmm 🥧)

Pi-hole is a great little [open source](https://github.com/pi-hole) tool that lets you run your own DNS server on your network and send advertisement and tracking requests to a "black hole" of 0.0.0.0, providing network-level ad blocking…rather than installing ad blockers on every device, browser, and IoT item in the house. Pi-hole even ships with a great dashboard and has a number of extension points to customize it. I added an hourly speed test - just to keep my ISP honest 😉.

## Installing Pi-hole

Installing Pi-hole is relatively simple, with [straightforward instructions](https://github.com/pi-hole/pi-hole/#method-1-clone-our-repository-and-run); the only thing I had to watch out for was DHCP. Since my router provides Circle from Disney for parental controls, it had to remain the DNS/DHCP server on the network, but I could still point its upstream DNS at Pi-hole.
This means I lost some granularity in the logs, as most requests appear to come from my router rather than from individual devices, but once I had it all set up, Pi-hole worked as designed.

## Source control for Pi-hole

There are a lot of great resources out there for Pi-hole, including things like [commonly whitelisted domains](https://discourse.pi-hole.net/t/commonly-whitelisted-domains/212) to make sure some general services aren't negatively impacted by it. There are also many [additional blacklists](https://firebog.net/) available on the internet, depending on what you're concerned about. And, since FTL (Pi-hole's DNS engine) is extensible, it is even possible to use it as a lightweight local DNS server.

Because all of those elements are presented in a fashion that says, "Here's how you can run a command in the terminal," or "Here's how you add it in the UI," my internal "must source control all things" tic was activated…and so I created [a repository on GitLab](https://gitlab.com/olearycrew/pihole/). I added a couple of scripts to automate the whitelisting of common and custom domains, as well as a file to contain the hosts on my network. But this still meant I had to manually check out the repository on my Raspberry Pi and manually run the scripts. That's where [GitLab CI/CD](https://about.gitlab.com/product/continuous-integration/) comes to the rescue! Luckily, it's available on the free tier, because I was using my [personal account](https://gitlab.com/oleary) for all of this while on holiday.

## Enabling CI/CD to Pi-hole

Now that I had a repository ready to go, I was just two steps away from all the CI/CD goodness my little heart would ever desire:

- Make my Raspberry Pi a [GitLab Runner](https://docs.gitlab.com/runner/) so that GitLab.com changes could get into my home network with no holes poked in my firewall.
- Set up a [.gitlab-ci.yml](https://docs.gitlab.com/ee/ci/yaml/) to run updates when changes are made.
### Installing GitLab Runner on my Raspberry Pi

Installing GitLab Runner is easy on any platform that supports Golang, including [Linux](https://docs.gitlab.com/runner/install/linux-manually.html), [OSX](https://docs.gitlab.com/runner/install/osx.html), [Windows](https://docs.gitlab.com/runner/install/windows.html), [FreeBSD](https://docs.gitlab.com/runner/install/freebsd.html), [Kubernetes](https://docs.gitlab.com/runner/install/kubernetes.html), and (soon) even [z/OS](https://gitlab.com/gitlab-org/gitlab-runner/issues/3263). The Raspberry Pi has an ARM chip, so I used the ARM binary to install it with:

```bash
sudo wget -O /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-arm
```

Then I made the gitlab-runner binary executable with:

```bash
sudo chmod +x /usr/local/bin/gitlab-runner
```

After that, I installed it as a service with these commands:

```bash
sudo gitlab-runner install --user=pi
sudo gitlab-runner start
```

Once it was installed, [registering the runner](https://docs.gitlab.com/runner/register/) as a private runner on my project was easy. I also added a [tag](https://docs.gitlab.com/ee/ci/runners/#using-tags) of `prod` to make sure it only ran code I tagged to run on "production", aka actually on my home network. I'm hoping this protects me from someone I like to call "future Brendan", who may or may not have the best memory when it comes to breaking things. I chose a [shell](https://docs.gitlab.com/runner/executors/shell.html) [executor](https://docs.gitlab.com/runner/executors/README.html) so that it was simple to understand: the runner would just execute the commands I put in the `script:` section of my YAML, just as if I had logged in and run them myself.

### Setting up the .gitlab-ci.yml

I had a few thoughts in mind when it came to setting up my pipeline for deployments.
Some of the scripts required restarts of the DNS services on the Pi-hole, so I didn't want them running all of the time. To prevent that, I used a number of GitLab CI/CD features:

- `only:refs` - Ensures we only run this on master - so if I'm on a branch trying something out (shakes fist at future Brendan), I don't break anything.
- `only:changes` - This allows me to limit the scope of the run to only the changes needed. For instance, a change to the whitelist won't trigger an update to the local DNS list, and vice versa.
- `tags` - I mentioned this tag before, but this also means that any job I add must have the `prod` tag before it will run in production (aka the live Pi-hole).

As an example, let's look at automating the whitelist. I already created a script ([whitelistme.sh](https://gitlab.com/olearycrew/pihole/blob/master/whitelistme.sh)) that automates it when run. So getting that to run in GitLab CI/CD is as simple as this job:

```yaml
whitelist:
  stage: deploy-prod
  script:
    - echo "Run whitelist.sh"
    - ./whitelistme.sh
  only:
    refs:
      - master
    changes:
      - whitelistme.sh
  tags:
    - prod
```

Let's break down each section of that job:

- For the **stage**, I created my own custom stage called `deploy-prod`.
- The **script** is very simple: it echoes what it is about to do and runs the script at the root of the checkout with `./whitelistme.sh`.
- The **only** section implements the two controls I was talking about earlier. `refs: - master` means this will only run on the master branch of the repository. `changes: - whitelistme.sh` means this will only run on a change to the whitelistme.sh script, and not on every change to the repository. This ensures the script only runs when it needs to.
- **Tags** ensures that it runs on a tagged runner - in this case, the Pi-hole at home.

Success! Running my jobs on my Pi-hole in my home network - orchestrated all from GitLab.
😺

## DNS as Code

Since FTL is mostly just dnsmasq with some customizations for Pi-hole, it is relatively easy to customize. In fact, by default, it includes an additional local file (with hostnames for the Pi-hole itself) at `/etc/pihole/local.list`, like this:

```
10.0.0.xx pihole
10.0.0.xx pi.hole
```

Again, I wanted to make sure this was source controlled - and the dream of source controlled DNS is now a reality for me. The way I implemented it was to create a `localDNS` file containing all of the local DNS entries I wanted:

```
10.0.0.xx pihole
10.0.0.xx pi.hole
10.0.0.1 orbi.myhouse
10.0.0.xx pirack0.myhouse
```

Then I used GitLab CI/CD to automate replacing the `/etc/pihole/local.list` file with this one anytime it changed:

```yaml
local-DNS:
  stage: deploy-prod
  script:
    - echo "Copy localDNS to /etc/pihole/local.list"
    - sudo cp ./localDNS /etc/pihole/local.list
    - echo "Restart Pi-hole DNS"
    - pihole restartdns
  only:
    refs:
      - master
    changes:
      - localDNS
  tags:
    - prod
```

And voilà! Source controlled and automated DNS-as-code (DaaS™).

## Frequently Asked Questions

Anticipating the questions you'll have, I've prepared a short primer below:

- Yes, I did bring down the internet in the whole house for about 20 minutes while I was tinkering away. Yes, this was while all my in-laws were here on their phones. Yes, I got a lot of grief for it.
- Yes, this will restart DNS while it is running, which is why I restricted when the jobs run, as described above.
- What is this business about pirack0 in the DNS entry?! That will just have to wait for another post 😉

---
Title: CI/CD All the things: Introducing Auto Breakfast from GitLab (sort of)
Description: Making breakfast with GitLab CI/CD and sous vide
Date: 6/28/2018
Source: boleary.dev
Tags: GitLab, DevOps, Personal

A big part of [GitLab's culture](https://about.gitlab.com/company/culture/) is [saying thanks](https://about.gitlab.com/handbook/communication/#say-thanks) to one another for doing a great job.
That can be anything from helping with a tough technical problem to simply sharing a nice [coffee chat](https://about.gitlab.com/company/culture/all-remote/#coffee-chats) to break up the work day. One day a Sales team member thanked someone from Customer Success for a great demo of GitLab CI/CD. The customer commented afterwards, "Okay, what doesn't GitLab do?" Plenty of heart-themed emoji reactions followed.

We've seen users do some pretty amazing things with GitLab CI/CD, from [ramping up to weekly mobile releases](https://about.gitlab.com/2017/06/07/continous-integration-ticketmaster/) to [automating boring Git operations](https://about.gitlab.com/2017/11/02/automating-boring-git-operations-gitlab-ci/), to [saving 90 percent on EC2 costs](https://about.gitlab.com/2017/11/23/autoscale-ci-runners/). However, there was one thing we hadn't seen. So in addition to this love, the question also garnered a semi-sarcastic answer:

> It won't make breakfast for you, unfortunately.

Never one to let a Slack conversation go unnoticed, I replied with one of my favorite phrases:

> Challenge accepted.

I have to admit that the fact that my status was :coffee_parrot: could have been related to my enthusiastic reply...

## The challenge

At the time, I had only a vague idea of how I would accomplish this. Many suggestions about Internet of Things devices followed my comment. And while a toaster with a version of Linux that will never be patched was intriguing, I wanted to do something bigger. A few years ago, some friends got together and bought me an [Anova Sous Vide](https://anovaculinary.com/), knowing that I loved to cook. What they failed to calculate was that having four kids in eight years was counterproductive to learning the time-tested [French cooking method of sous-vide](https://en.wikipedia.org/wiki/Sous-vide). As such, the tool has not had a whole lot of use in its time.
However, at this point I thought of two things:

- I love the new sous-vide egg bite offering from a well-known coffee shop
- The Anova Sous Vide uses [Bluetooth Low Energy (BLE)](https://en.wikipedia.org/wiki/Bluetooth_Low_Energy) to allow you to control it through an app

## The recipe (culinary)

While I did like the egg bites from a coffee shop that shall remain nameless, I don't have them all the time. I would give them a 5-star rating, but they cost a few more bucks than I'd like to spend 😉 So I found a [sous-vide egg bite recipe](https://recipes.anovaculinary.com/recipe/sous-vide-egg-bites-bacon-gruyere) on Anova's website.

## The recipe (technology)

Once I had the recipe, all I needed was to reverse engineer the BLE connection, figure out how to get that to work from the command line, set up a project, and get it integrated with GitLab CI/CD… no big deal. Luckily, I found a fantastic project called PyCirculate that had already worked out a lot of the BLE connection issues with the Anova. It made me wonder if someone else had automated breakfast before… but I've yet to find them! Now that I had both recipes and all the ingredients, it was time to git crackin'… (I can't tell you how happy I was when I thought of that joke. Did I mention I'm a dad?)

## Setting up the breakfast pipeline

Once I had that project installed and working on my laptop, I uploaded the code to GitLab in the public repository in the [auto-breakfast group](https://gitlab.com/auto-breakfast/eggs/). Next, I installed [GitLab Runner](https://docs.gitlab.com/runner/) on a [Raspberry Pi](https://www.raspberrypi.org/) and registered the Pi as a [specific runner](https://docs.gitlab.com/runner/register/) for my project. I used a runner [tag](https://docs.gitlab.com/ee/ci/runners/README.html#using-tags) so that I could ensure the cooking job only ran on a device with a Bluetooth connection.
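The repository's actual pipeline isn't reproduced here, but a job along these lines illustrates the setup - the `bluetooth` tag and the `cook.py` wrapper script around PyCirculate are hypothetical names for illustration:

```yaml
cook-eggs:
  stage: cook
  script:
    # Hypothetical wrapper script that uses PyCirculate to connect to the
    # Anova over BLE, set the water temperature, and start the cook
    - python cook.py
  when: manual      # breakfast starts on a button click, not on every push
  tags:
    - bluetooth     # only run on the Raspberry Pi that has a BLE radio
```

The `when: manual` keyword is what gives the "click a button in GitLab to start breakfast" behavior, and the tag keeps the job off runners that can't reach the Anova.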
When I run a pipeline on auto-breakfast/eggs, it uses the Raspberry Pi to execute, and thus can create the BLE connection to the Anova. With the click of a button in GitLab, my breakfast pipeline was running. All I had to do was sit back, relax, and let GitLab CI/CD do all the work.

## The results

The egg bites were great! I even modified the recipe with some great Kerrygold Irish whiskey cheddar cheese. I would say that it did take a little more effort to get things set up. However, now that it's done, I have a repeatable, single-button way to cook the recipe again (minus the egg cracking and food processing). Just like CI/CD with a .gitlab-ci.yml can help make software build and deployment more reliable and repeatable, it can also make a fantastic breakfast 😎

*Not pictured: A very messy kitchen and a very perplexed wife.*

---
Title: Using GitLab to project manage home renovation priorities
Description: How I use GitLab Issue Boards for home projects
Date: 2/18/2018
Source: boleary.dev
Tags: GitLab, Personal

Last summer, my wife and I bought a new house for our ever-growing family. Before we moved in, we had a couple of improvements made — wood floors to replace the aging carpet in the master bedroom, some required structural fixes. However, when we bought the house, we knew there would be a lot more we wanted to do over the years. When it came to organizing those ideas into things that need to happen sooner rather than later and those that could wait, however, we found ourselves struggling to keep all of the plans in order.

## Trying to get organized

I've been able to complete a few other projects since we moved in — but most were small in scale: a built-in shelf wall for my wife's office, painting and staining the new deck, and of course a DIY standing desk to use in my new office kitchen (which is also the house's kitchen...[working from home for the win](https://about.gitlab.com/company/culture/all-remote/)).
These projects were great, but we needed a way to organize and prioritize larger renovation projects.

*Clockwise, from left: built-in shelf wall, painted and stained deck, DIY standing desk.*

I was a GitLab user for years before I even became a GitLabber. I've always hosted my side-project code on GitLab.com, since GitLab offers [unlimited private repositories](https://about.gitlab.com/pricing/#gitlab-com) for free. For project management in my "day job" I've used dozens of other tools outside of GitLab, so when I joined, it was the first time I saw the full breadth of what GitLab offers in issue management. Thinking about the other tools I've used in the past, none seemed to meet the full bar of what I was looking for to solve our problem. As a mother of four young children, my wife is always on the go… but I'm on a computer all day long, so we needed something that worked seamlessly across platforms. We also needed to be able to easily re-arrange and re-prioritize items. And, since I fancy myself a bit of a DIY-er, I wanted to be able to label some items as projects I might be able to complete myself. All of these requirements had me wondering what tool would be best for my wife and me to collaborate on.

## Enter GitLab Issue Boards

With these requirements, and my newfound GitLab knowledge, I came up with a novel solution to the problem we were having: why not use a [GitLab Issue Board](https://about.gitlab.com/product/issueboard/) to manage our ever-changing home renovation priorities? With Issue Boards, we would have a fantastic solution for mobile and desktop (shout out to the [GitLab UX team](https://design.gitlab.com/)). With [labels](https://docs.gitlab.com/ee/user/project/labels.html), I could organize and group issues however we wanted. And the customizable columns would allow us to prioritize, track, and manage the various issues and ideas.
## How the board works To start, I [created a new group on GitLab.com](https://gitlab.com/groups/new) to house (pun intended) everything for our family. I made a project in that group called priorities to be the central place to collect all the renovation ideas we had. In the future, I may add projects for a specific renovation, for managing purchases and contractors, etc. As with every GitLab project, issues and issue boards were baked right in. I started adding issues right away — beginning with those that were top of mind, like the water heater at the end of its usable life, repairs to our front entryway, and window replacement. My wife didn't have a GitLab.com account yet, but it was easy to add her to the project as a member just by entering her email address on the Members page, allowing her to sign up and get access to the project in one step. To get organized, I created a few labels: P1 for top-priority items, DIY Possibility for those I might be able to tackle on my own, and Furniture for those that involved furnishing various rooms. The labels help filter issues so that if I find a free weekend, I can search for DIY Possibility issues to get started on. Or if we go to a furniture store, we can filter to the Furniture issues to get an idea of cost while we are there. For the board columns, I decided to use P1 as the first column after Backlog to highlight those issues. From there, it's a matter of agreeing on an order of priority 😃 ## Where to go next Now it's time to execute! One thing we didn't account for in the first iteration was the scope of issues. Some things are relatively minor in terms of time and investment. Others (like replacing all 27 windows!) are larger projects we need to budget for. For this, we will be using issue weight to understand how different projects align with the budget and time investment needed to pull them off.
It's been an exciting experience using GitLab Issue Boards for something outside of the development space. We'd love to hear from you about "non-standard" uses for GitLab's features. Feel free to comment on this post or tweet us [@GitLab](https://twitter.com/gitlab). --- Title: The Space Between AI Hype and AI Denial Description: Don't fear AI, be curious about it. Date: 1/1/2025 Publication: Kilo Blog Original URL: https://blog.kilo.ai/p/between-ai-hype-and-ai-denial Tags: ai There has been a [lot](https://x.com/dhh/status/2007503687745490976) [of](https://x.com/rakyll/status/2007889761328046432) [discourse](https://x.com/jonas/status/2007752678676537696) over the extended winter break about where AI is taking software engineering. [I weighed in](https://blog.kilo.ai/p/senior-engineers-use-ai-now), along with other people I respect a lot, like [Lee Robinson](https://x.com/leerob/status/2007203275461009508) from Cursor, [Jaana Dogan](https://x.com/rakyll/status/2007239758158975130) from Google, and [DHH](https://x.com/dhh/status/2007512760733831575) from Rails. And I’ve seen some strong reactions to all of it: people saying this is just fear mongering, or “Oh great, another post telling us we HAVE to use AI or we’re DONE.” Or worse, associating us with the _real_ AI maximalists who _are_ fear mongering, like recruiters using it to scare candidates. (“Learn AI or lose your job!”) But that’s not what I said, and it’s not what the smartest people in our industry like Lee, Jaana, and DHH are actually saying.
So let me be clear: **I don’t want you to fear AI, I want you to be curious about it.** The frustrating part is that the people who have made it the hardest to have an honest conversation about AI coding tools are, ironically, the AI companies themselves. For the past two years, we’ve been bombarded with predictions that software engineers would be obsolete “within six months” or “by 2025.” AI executives echoed those predictions. The doomers ran with it. And what happened? Software engineers are still here. Still employed. Still solving hard problems that LLMs can’t figure out on their own. In fact, the major AI labs _**themselves**_ are mostly [hiring](https://www.anthropic.com/jobs) [more](https://x.ai/careers/open-roles?dept=4024733007,4062428007,4046295007,4052172007,4046294007) [engineers](https://openai.com/careers/search/?c=e1e973fe-6f0a-475f-9361-a9b6c095d869%2Cf002fe09-4cec-46b0-8add-8bf9ff438a62%2Cab2b9da4-24a4-47df-8bed-1ed5a39c7036%2Cee8390dd-108f-4cc6-a851-f7d590798ca2%2C29457f80-62c5-4420-b64b-53037e8dc25e%2C7cba3ac0-2b6e-4d52-ad38-e39a5f61c73f%2C2808218a-d9fc-426e-9c4c-9e575b3842c8%2C91e04f97-2d0d-4cda-8890-0dcae25e5efd%2Ca639b09c-145d-4edb-bef0-3f3ad8a770d9%2C4d63f1fc-1f6f-40ee-80c3-16b97261c943%2Cec712b2d-1b07-4d50-a27f-7d1153e0a5df%2C0df0672c-86c0-46ee-b3dd-3cf63adb5b08%2C8cb35b37-f31f-4167-84ca-ba789cf36142%2Cf32f653e-df5a-407a-ab39-901459f5f6c1%2C68998f96-ac93-45a9-aa44-dda4adf7a47b%2C795ae415-f19a-41c9-8acd-b1b8c08c4896). The predictions were wrong, and now there’s a trust deficit.
When you’ve been told the sky is falling for two years and it hasn’t, it’s natural to tune out. The boy (bros?) cried wolf. But dismissing the tools entirely because some people overhyped them is just as much of a mistake as believing the hype. Since I wrote that last piece, more data has come in that’s worth paying attention to. Greptile’s [State of AI Coding 2025 report](https://www.greptile.com/state-of-ai-coding-2025) shows developer output up 76% year-over-year. Lines of code per developer grew from 4,450 to 7,839 as AI tools acted as a force multiplier. At the same time, median PR size increased by 33%, from 57 to 76 lines changed per PR. What does that tell us? Developers are shipping more, and they’re shipping bigger chunks of work at a time. Meanwhile, there’s this, [from Sam Rose (@samwhoo)](https://x.com/samwhoo/status/2007590903448514921): “StackOverflow graph of questions asked per month. Holy shit.” The attached graph is almost a perfect curve: starting near zero in 2009, peaking around 200,000 questions per month in 2017, then dropping back to near zero today. Now, you can interpret this a few ways. Maybe developers are just searching less. Maybe the documentation got better. Or maybe—and I think this is the obvious answer—developers are asking their questions to AI instead of posting them on Stack Overflow and waiting for someone to tell them it’s a duplicate.
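As a quick gut check on those Greptile numbers, the percentages and the raw figures quoted above are internally consistent; a minimal back-of-the-envelope sketch (using only the numbers already cited, nothing else from the report):

```python
# Sanity-check the Greptile figures quoted above:
# LOC per developer went from 4,450 to 7,839; median PR size from 57 to 76 lines.
loc_growth = (7839 - 4450) / 4450  # year-over-year growth in LOC per developer
pr_growth = (76 - 57) / 57         # growth in median PR size

print(f"LOC per developer: +{loc_growth:.0%}")  # +76%
print(f"Median PR size: +{pr_growth:.0%}")      # +33%
```

Both rates round to exactly the percentages the report claims, so the headline numbers at least agree with each other.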
Are we really going to be nostalgic for the days of getting dunked on by Stack Overflow mods? At least Claude always tells me I’m absolutely right! **The behavioral shift is real. Developers are changing how they work.** Jaana Dogan [nailed this recently](https://x.com/rakyll/status/2007917889672942017): “The developer community has always been divided into a few camps: people who are open to new modern tools, people who are building tools with new modern tools, and people who think you should lose your job for using any modern tools.” That last group has always existed. They were there when [IDEs replaced text editors](https://news.ycombinator.com/item?id=34010065). They were there when [high-level languages replaced assembly](https://en.wikipedia.org/wiki/Grace_Hopper#UNIVAC). They were there when version control became standard. And they’re here now, insisting that “real developers” don’t need AI. Those people have always been wrong. Not because the old ways were bad, and not because the new ways were the only way to do things, but because refusing to learn new tools is a choice to be less effective than you could be.
DHH—who built Rails, who has opinions about everything, who is definitely not a hype merchant—[put it this way](https://x.com/dhh/status/2007503687745490976): “You can't let the slop and cringe deny you the wonder of AI. This is the most exciting thing we've made computers do since we connected them to the internet. If you spent 2025 being pessimistic or skeptical on AI, why not give the start of 2026 a try with optimism and curiosity?” That’s the framing I want people to take from this: not fear but curiosity. Let’s zoom out for a second. Every few years, the industry shifts. New languages emerge. New frameworks become dominant, and new paradigms take hold. And every single time, some engineers embrace the change while others dig in and resist. I remember when people insisted you didn’t need to learn Git because SVN was “fine.” I also remember when DevOps was just “ops with a fancy name” and containers were a fad that would never work in production. To be fair, those technologies were _also_ overhyped. DevOps consultants promised the moon. Docker evangelists acted like containers would solve world hunger. Git zealots were insufferable. The hype cycle was real, and plenty of it was annoying or flat-out wrong.
**But the hype being overblown didn’t mean the underlying tools were useless.** Git actually _was_ better than SVN for most workflows. Containers actually did transform how we deploy software, and DevOps principles really did improve how teams ship code. The hype merchants were wrong about the timeline and the magnitude, but they weren’t wrong that something real was happening. The engineers who stayed curious—who learned the new tools even when the marketing around them was obnoxious—consistently came out ahead. They didn’t have to abandon everything they knew; they just added new capabilities to their existing skillset. AI coding tools are the next entry in that lineage, with one important distinction. **AI is non-deterministic.** The same prompt doesn’t always give you the same output. The model might nail it on the first try or hallucinate nonsense. It might work perfectly on your codebase and fail on someone else’s. So this isn’t like learning a new framework, where you read the docs, understand the API, and get predictable results. Working effectively with AI is less like learning a new programming language and more like learning how to work with a very fast, very well-read, somewhat unreliable junior developer. The [mental models](https://blog.kilo.ai/i/175061191/the-mental-model-that-works), feedback loops, and failure modes are different. The question isn’t “Will AI replace me?” It’s “How do I use AI as a multiplier for my engineering work?” And answering that question well requires treating this as a new skill to develop, not just a plugin to install. AI coding tools have crossed a threshold. The senior engineers I respect most are using them daily. The data shows measurable productivity gains. The behavioral shifts are visible everywhere. If you tried these tools in 2024 and wrote them off, your experience is outdated. If you’ve never tried them at all, you might be missing something useful.
If you’re using them but just for autocomplete, you’re probably not getting the full picture. None of that is fear-mongering; it’s just observation based on the facts as I see them, and I’m not alone. Engineers have always learned new tools. This is another new tool. The only unusual thing is how quickly it’s changing. Stay curious. That’s all. — _This is something I’ve been thinking about a lot, and it’s why I’m building out [Learn Agentic Engineering](https://www.youtube.com/watch?v=SYeVcOSAweY&list=PLT--VxJTR64Nu5UcZjiKAu_9jEXfjW5Ig)—free content on YouTube now, with a [paid certification](https://learn.kilo.ai/) coming soon. It’s my first attempt at articulating what it actually means to be effective with these tools._