In part 1, I wrote about how the macro-economic environment is driving the timing of this conversation, and in particular the impact that a decade of cheap money had on how we build software, and what the end of that decade means. In part 2, I explored how building software is counter-intuitively harder today than ever, because the complexity of building software has exploded. This post will build on part 1 and part 2 to talk about how the economics of employing a team of software engineers has changed.
Layoffs suck. I’ve written about them a bit in Building Layoffs on a Healthy Foundation. If you were recently laid off, especially if you’re vulnerable to our capricious immigration system, you have my sympathy, and please don’t hesitate to ask on the off chance there is something I can do to help.
It is awkward to reduce working in tech to just its economic mechanics at the exact moment when many of the most successful and profitable companies in our industry are doing layoffs. My hope for this series is to help people understand this system and how it’s changing, so they have better tools for shaping the outcomes. Especially the estimated 50% of the industry who joined in the last 10 years and have known only boom conditions.
As the boss in the recent McSweeney’s article said, “I wish I wanted to pay you, but I don’t.” As a sometimes boss myself, I might say it with more nuance. Let’s assume we all agree that everyone deserves food, shelter, education, health, and meaningful dignified work (though that is very much not the case in our society). As a boss I can either pay you out of the sustainable profits our company generates, or I can compensate you with the money the market has lent me in the form of our company’s valuation. In tech companies, in general, we’re compensated in line with our company’s valuation, which is often many multiples larger than our profits (assuming we have any). Those multiples are driven by Silicon Valley’s (and the SV-adjacent ecosystem’s) special relationship with software.
The Silicon Valley model, at its most basic, is that we spend a relatively small fixed cost upfront to develop software, and then each new customer has near zero marginal cost – they’re just another row in the database after all. In this model once you’ve acquired enough customers to pay back your initial investment all the subsequent customers are pure profit (and that profit can be reinvested into further improving your product and protecting your base). At scale this becomes a machine for printing money. Contrast this to selling a physical product. In that model you have to pay for materials and labor for each and every widget you manufacture.
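The arithmetic behind this model is simple enough to sketch. A minimal illustration (all numbers below are made up for the example, not real figures for any business):

```python
def profit(customers, price, fixed_cost, marginal_cost):
    """Profit with a one-time fixed cost and a per-customer marginal cost."""
    return customers * (price - marginal_cost) - fixed_cost

# Hypothetical software business: $2M upfront to build, ~$1/customer to serve.
software = profit(customers=100_000, price=50, fixed_cost=2_000_000, marginal_cost=1)

# Hypothetical widget business: cheaper to start, but $30/unit in materials and labor.
widgets = profit(customers=100_000, price=50, fixed_cost=500_000, marginal_cost=30)

print(software)  # 2900000: past break-even, each new customer adds ~$49
print(widgets)   # 1500000: each new customer adds only $20, forever
```

The interesting part isn’t the totals but the slope: once the software business covers its fixed cost, nearly the entire price of each additional customer drops to the bottom line, which is the “machine for printing money” at scale.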
The equation of low fixed cost and scaling with near zero marginal cost is why “scale” is such a power word in our industry, and justifies ongoing investments in tech that otherwise boggle the mind.
However, as the decade of cheap money recedes, we see that the equation changed while no one was looking.
As anyone who has spent any time hiring engineers in the last decade knows: competition, always intense, is now fierce. If you haven’t spent any time hiring engineers, I’ll try to set the scene. The perks at a Google or a Facebook are legend, but perks are only a small fraction of the resources tech companies dedicate to hiring. Speaking with leaders at software companies, many will tell you their number one job is hiring. In fact hiring is one of the most important jobs a tech company has, with companies often spending upwards of 20% of their employees’ working hours on nothing but hiring. This time spent includes:
An entire industry has emerged to help candidates prepare for interviews, and another to evaluate them. Employees can make a serious side income, sometimes hundreds of thousands of dollars, referring candidates to their firms. A $10k bonus for successfully referring a single candidate isn’t uncommon: mind-blowing when you consider that the average tenure for a tech employee in San Francisco before the pandemic was 18 months.
The competition justified by the potential for asymptotic returns, and the competition driven by structural changes like increased team sizes (as we touched on in part 1, “what were we spending the cheap money on”), and the simple inflationary pressures of too much money chasing too few potential employees have all, unsurprisingly, driven up salaries.
From 2010 to 2020 software salaries went up roughly 35%, prior to the pandemic and the current wave of inflation. Stock-based compensation practices also became more generous: annual refreshes, accelerated vesting, and widespread use of RSUs all became more common in the last decade. As an engineering leader, a key new challenge in the last few years has been ensuring long-tenured employees aren’t unfairly under-compensated compared to new hires, a challenge that has driven change and innovation in equity approaches. The most dramatic percentage raises have gone to early career folks and new grads, with interns regularly making 200% of the US median income. We’ve also started to see management and IC tracks diverge in compensation, heresy among the startups of the previous decade.

These costs aren’t just rising in traditional high-cost locations like San Francisco and New York, but globally. We’ve also seen rapidly expanding domestic technology ecosystems competing for talent with companies looking to offshore out of high-cost locations, driving demand in markets where employers historically had the leverage. Average compensation in traditionally high talent density, low- and medium-cost locations like India, Poland, and Israel has more than doubled, and during the pandemic, when salaries shot up again, salaries for some specialties were in line with San Francisco. The number of companies shifting to remote-first or hybrid, which had been accelerating before the pandemic, exploded during the pandemic, driving increases in what had been, from a cost perspective, second- and third-tier talent markets. Some companies have gone as far as embracing one pay band globally.
Larger teams with rapidly rising median salaries and narrower compensation windows have put significant pressure on the “relatively low fixed cost” side of the equation.
For the decade from 2000 to 2010, the most important trend for understanding what was happening in tech was broadband adoption doubling annually. This was followed rapidly by an even more extreme curve for smartphone adoption. These trends created the opportunity to build thousands of new businesses largely without competition, and with a guaranteed pipeline of new customers. Building a successful technology business in this period was mostly about getting the damn code to run and being there, waiting at the end of the conveyor belt, as new customers were continuously minted. Those new customers made the theoretical returns to scale a reality, and made them seem like an inevitability.
Twenty years later, things have changed. Broadband and smartphone adoption have largely saturated. There are no new customers moving from snail mail to email, the video store to streaming, classifieds to web advertising, or the filing cabinet to online banking waiting to be snapped up. People have largely already adopted computers and the Internet to assist in their personal and work lives. A company starting today isn’t competing against an incumbent from an earlier technology regime, but a savvy technology-native competitor. And the current generation of tech giant monopolists have consistently proven themselves extremely effective at avoiding being disrupted by upstarts. (The effectiveness of the current tech giants has also reinforced their technical and cultural practices without the blunting derision of being seen as “dinosaurs”, a key contributor to the aesthetic of complexity we talked about in part 2, and a general cargo-culting across the industry.)
That changing landscape has led the cheap money to search further afield for opportunities that can be tackled with software. Companies have pushed into industries that share very little with the return-to-scale model at the heart of the Silicon Valley tech salary math. We now see businesses with significant physical costs and high costs per customer, like ride sharing, delivery, or hardware businesses. We see businesses with high per-customer licensing costs, like the streaming music companies. We see pushes into logistically complex businesses, e.g. health care, where regulatory oversight raises the cost to scale significantly.
Our industry’s constant search for greener fields, jonesing for the same broadband/smartphone adoption high, also drives the goldfish-like optimism characteristic of our industry. The procession of “next big thing, this will change everything” includes: AI, Web3, NFTs, blockchain, chatbots, gig economy, AR, appstores, big data, smart assistants, peer-to-peer, 3D printers, IoT, SoLoMo, feed driven virality, RIAs, virtual worlds, apps, applets, and portals. VR is probably the canonical example as it’s been the next big thing at least 4 times in my career, and I think I missed the first wave, having first shown up for VRML.
Success is more difficult and elusive than ever.
Much smarter people than I am have pointed out that all this tends to be cyclical. “Technological Revolutions and Financial Capital” by Carlota Perez is the canonical text on the topic if you wanted to dive into the deep end.
The 15 years following the rise of the web was a magical era for many looking to build new businesses. (Also magical for a whole bunch of other reasons that aren’t relevant to this article.)
You could lay out a relatively small amount of capital, hire a software engineering team, and build a wildly successful business that challenged existing industry titans in relatively short order. That expectation still forms the foundation for much of our industry. The decade of low interest rates and cheap money allowed us to put off critically re-examining those narratives.
Instead of re-examining those narratives, we spent. As complexity made software development harder, we hired larger teams, adding more complexity. As blockbuster businesses got harder to build, we spent even more on the talent needed to give us a shot at these increasingly elusive prizes. As cost and complexity went up, so did the precarity of the house of cards, and the difficulty of seeing a return.
In our current era it would be hard to point to a company more widely respected than Stripe. Now in its 14th year, it still isn’t profitable, still hasn’t gone public, and still primarily operates in a commodity market with well-established competitors. Compare that to Amazon, which IPOed after 3 years, Apple after 4, Yahoo after 2, Google after 6, Netflix after 5, and Facebook after 8, with Microsoft the outlier at 11. An IPO isn’t the definition of success, but the contrast points to the new pressures the industry is experiencing, pressures that, among other things, have led to changing expectations in the workplace, and increased conflict between leadership and the workforce, even during a time of prosperity. Which we’ll talk about in part 4.
see: Software and its Discontents, January 2023, Part 1 for more context and background.
In my conversations I found 4 interdependent trends that have substantially increased the difficulty of building software.
Talking primarily to engineering leaders, but also CEOs, VCs, ICs, and other practitioners, the most common response to the question “has something substantially changed?” is that software, counterintuitively, has gotten harder to build. This is counterintuitive because the tools are orders of magnitude better, the amount of work you can cheaply outsource is nearly miraculous, computers are so damn fast and cheap these days, the quality of resources, much of it free, is off the charts, and the talent pool has exploded and shows every sign of being smarter and better educated than ever. But software has gotten harder to build in one very particular and important way: it’s gotten more complex.
In both systems thinking and software, the term “complex” is a technical one. It refers to the number of distinct parts in a system, and the connections between them. Complex systems are characterized by nonlinearity, randomness, emergence, and surprise. Complexity is why communication and coordination dominate all other costs when it comes to building software. And complexity has exploded. (Thank you to John Allspaw for first introducing me to the concept of complexity as opposed to the merely complicated.)
Complexity has not only exploded, it’s exploded in multiple distinct ways that have distinct root causes but interact. I’ve tried to break up the explosion in complexity into the following categories:
Some of this complexity is directly attributable to the decade of cheap money, some is just the natural result of our industry maturing. Some of this complexity will be addressable with better practices, better leadership, and a better understanding of the sources of complexity. Some of the complexity is here to stay, and we’ll need to recalibrate our expectations about how difficult it is to build software.
We expect more of software than we used to. Some of this is customer preference, some is regulation, and some is professional aesthetics.
Regulatory requirements, e.g. around data privacy and financial controls, are significantly more complex than they used to be. GDPR, AADC, DMA, DSA, HADOPI, FOSTA-SESTA, BITV, etc. But also FedRAMP, HIPAA, SOX, not to mention SOC2 and HITRUST, have become critical much earlier in a company’s life cycle, either to access critical customers, critical resources, or both. The regional and geographic variations can be particularly challenging and undermine a key productivity win that early online businesses enjoyed. Amazon, for example, didn’t even bother collecting sales tax in their early days, a price win for customers, but also a massive reduction in complexity vs a multi-geography brick and mortar business. In the early days, we on the Web were all playing on regulatory easy mode. That window has largely closed, especially as startups, searching for new problem spaces to deploy their capital and technology, have moved into highly regulated domains like health, finance, and civic infrastructure.
The web, at its inception, was a triumph of simplicity. Its rapid rise to dominance was driven in large part by how it reduced the complexity of delivering software to customers. It was a single unified platform. It was open and non-proprietary. It was simple by design, built around a stateless protocol and a simple declarative UI paradigm. It was available over the internet and didn’t require anything to be bundled or shipped. These radical simplifications allowed effective asymmetric competition with established players developing desktop software and delivering it via physical media.

Over the intervening decades we’ve largely compromised all of these simplifying properties. Even when all we’re doing is delivering software via the internet (and not, say, scooters out of the back of a fleet of vans) we’re now targeting many different platforms: desktop web, mobile web, and the two dominant, semi-incompatible mobile walled garden ecosystems. Meanwhile state management has become so complex that wrangling it is the primary job we adopt heavy frontend frameworks, like React, to address. This complexity has driven the need for a specialized frontend engineering discipline: someone who can wrangle a TypeScript type system of modular components populated via React Query talking to Apollo GraphQL backed by a gRPC Envoy proxy to an SOA stack. Similarly machine learning, mobile, infra, and backend have all specialized, with their own unique complexities. With multiple specializations, we now have more distinct “resources”, each with their own work in progress queues, biases, hiring loops, onboarding, culture, sick days, and needs to coordinate. Explosions of complexity.
Rising standards have benefits as well as costs. Regulatory complexity is often driven by regulators’ concern for customers. More directly, however, raised expectations of what success looks like mean that customers who were ignored in the early days of tech can no longer be ignored by a team wishing to be successful. Accessibility and internationalization have both become critical for success. In the early days, when broadband and then mobile adoption were rapidly doubling, you could count not just on new customers being regularly minted, but on the vast majority of those new customers matching the demographics of early adopters: young and wealthy, with many of them living in US cities. Even those early web adopters aren’t that young anymore, and a company that is only able to get adoption among some idealized fantasy model of young, perfectly healthy US consumers isn’t viable in 2023. But both accessibility and internationalization require coordinating software development across previously unexplored dimensions, with adaptive designs and translation. And, perforce, at least some of this work is work your software team is unlikely to be able to evaluate itself, complicating your acceptance criteria. Complicated processes are a classic source of complexity.
Similarly, even without regulatory pressure, you need to be designing for safety, security, and anti-abuse from day one. Successfully defending against the global legions of the poor, the bored, or both is a high bar, and it’s now required at launch.
In many ways we’re living through a golden age of software development: more tools than ever, more affordable than ever. I’m old enough to remember when IDEs cost hundreds if not thousands of dollars, and there was a real ecosystem of people selling third-party libraries and widgets (advertised in the back of Dr. Dobb’s). Today we have more: more tools, more languages, more frameworks, more databases, and more services. Most of these tools represent real progress in terms of increased capabilities and outsourcing non-core parts of your business. However, the range of choice has real impacts on complexity.
Anyone joining a company today is looking at a stack that is at least as bespoke as the worst Not-Invented-Here stacks of the previous era. Rails was Rails, LAMP was LAMP, and while Vercel is better than anything we built for ourselves during that earlier era, it comes with a full manual and its own quirks. So does Google Pub/Sub versus some shitty solution we built on top of MySQL, and LaunchDarkly can do so much more than anything we might have expressed with a shitty YAML config file. Those home-rolled systems of an earlier era lacked both features and documentation, but our current systems are just as unique in their composition. Given the huge number of choices and the configurability of each of these professionally developed and documented components, the odds that you’ve seen this exact combination of technologies, tools, and services configured this particular way before are extremely low. We’re a long way from the era when everyone configured their LAMP app the same way, and a community of practice grew up around it.
Not only is each stack novel to each new team member; this cross product of complexity also means we have fewer mavens and experts. At Etsy, when we needed to scale PHP, we could hire Rasmus. Very few teams these days can find that kind of expert, and fewer of those experts will have seen the relevant scale on that exact stack.
In the conversations I’ve been having with engineering leaders a huge source of anxiety has been the impact that the explosion of technical choices has had on the quality of technical decision making.
As an engineering leader raising the quality of technical decision making is arguably your most important job after building the team itself. Eight years after I left Etsy I’m still getting new notes from people telling me that, no matter how frustrated they were with me at the time, in subsequent jobs they’ve come to appreciate and desperately miss how well defined the “Etsy Way” of building software was.
Today any team that has been around for more than a minute has not only chosen a unique combination of technologies, they’ve changed their mind about it a couple of times, often in logically inconsistent ways. With so many great technologies out there, and so many of them backed by well funded marketing teams (see: cheap money and marketing), it’s never been harder to keep your stack simple and logically consistent. Many teams have given up entirely and are leaning into developer empowerment and polyglot infrastructures. Rising standards mean we’ve collectively taken on the complexity of targeting multiple stacks, with their idiosyncrasies, their training needs, and their upgrade cycles, while simultaneously splitting the resources we have for managing that complexity across those polyglot stacks. Not to mention the unique interactions of these technologies with our previous technology choices, which are still lingering in the stack. The real horror stories these days in infrastructure aren’t the load spikes of days of yore (“getting Slashdotted!”) but those complex interactions: how PHP’s gRPC library interacts with Envoy, how Scala’s JSON library tickles Varnish caching issues, how MySQL’s weird implementation of utf8mb4 is incompatible with storing your data literally anywhere else. There is a reason tech debt has become the favorite bugbear of teams everywhere.
Without standardization in your company, without a small number of well known tools in which you’re developing expertise as a team, the hope that you can grow your team logarithmically but see exponential results is a fantasy. That discipline is harder than ever to enforce.
There is so much to say on the topic of large teams and aging code bases, and so much of it has been covered well elsewhere. I want to focus on just the important changes we’ve seen related to the other trends we’re discussing in this post.
Cheap money and founder-friendly funding in the last decade have led to more founder control and deeper pockets. That control means we’re more likely to see attempts at continuity in companies. Two decades into the Internet era of tech startups, and a decade into cheap money, we’re seeing significantly older codebases. Older codebases compound the explosion of technical choices, and the sometimes poor technical decision making. Older codebases, with a longer history, mean more choices. More choices, and a lack of clarity around which of those choices are load bearing, mean significantly increased complexity for anyone onboarding to the codebase.
Teams are also getting larger, as we discussed in part 1. As teams get larger, complexity goes up for several reasons. First, as we slice responsibility for developing our software into thinner slices, there are fewer people who have touched the whole system and have a coherent view of the whole architecture. Coherence is one of the key characteristics we look for in simple architectures, and its absence drives complexity. Additionally, large teams spend more time dealing with coordination and are more likely to reach for architecture and abstractions that they hope will reduce coordination costs, aka “if I architect this well enough I don’t have to speak to my colleagues.” Microservices, event buses, and schema-free databases are all examples of attempts to architect our way around coordination. A decade in, we’ve learned that these patterns raise the cost of reasoning about a system: during onboarding, during design, and during incidents and outages. Finally, as teams have grown, and individuals’ scope of responsibility has narrowed, resume- and promotion-driven design has found increasingly fertile ground. How do you stand out as the 500th person maintaining a system you didn’t build? Build something new! And all of the complexity inherent in it. Google, as with so many of the best and most problematic patterns in this era, is well known as the epicenter of this phenomenon, but you see it broadly as teams grow.
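The coordination cost that larger teams incur grows much faster than headcount, an observation usually credited to Fred Brooks in The Mythical Man-Month: a team of n people has n(n-1)/2 possible pairwise communication channels. A toy calculation, with hypothetical team sizes:

```python
def channels(n):
    """Pairwise communication channels in a team of n people: n choose 2."""
    return n * (n - 1) // 2

for n in (5, 15, 50, 500):
    print(f"{n:>4} people -> {channels(n):>7} channels")
# 5 people share 10 channels; 500 people share 124,750.
```

Doubling the team roughly quadruples the potential lines of communication, which is part of why architectures that promise to cut coordination are so seductive to growing teams.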
As an industry we’ve always been enamored with new technology and shiny objects. For years it was almost definitional, otherwise why did you go into this industry? Interestingly, even as the job has mainstreamed, the infatuation with complexity has remained, and even grown.
First, complexity lies at the heart of our industry’s mythologies. New people joining the industry are taught our myths about Google, Facebook, and Amazon, and a sense that these companies’ approaches are what software is “supposed to” look like. And fewer and fewer people are in a position to have a wide enough scope of responsibility to learn pragmatic counter-lessons the hard way.
Second, during the era of abundance, when OpEx was easier to deploy than CapEx, cloud and SaaS exploded. These services come backed by significant marketing budgets whose job is to convince you that you need the complexity. Why deploy a database when you could deploy a non-relational data cluster? Why deploy a server when you could deploy a Kubernetes cluster? Why build simple web pages when you could use React? Hacker News in particular has an interesting role in this cycle, being both a community driven by industry mythology, and also the marketing arm of a major source of funding for new developer-oriented SaaS offerings. Now your community is reinforcing the message that good software is complex software, and that last year’s technical choices are out of date, and probably why your productivity is suffering.
And it was easier to raise capital if what you were doing sounded high tech and complicated. Really it was a flywheel: people raised money by sounding complicated and smart, then spent that money on people who made them feel like they could help solve a hard problem in a complicated and smart way, with everyone getting paid and emotionally validated along the way. We’ve developed an aesthetics of complexity: the sense that a good system is a complex one, that you should prefer a SPA over a web page, a distributed system over a simple one, a service over a config file; the idea that if you aren’t on the latest technology you’re wasting your time, and potentially damaging your career.
The race between improved productivity from better tools and the drag of increased complexity, inherent, accidental, and aspirational, isn’t particularly new for our industry. If you talk to people who worked at Sun, SGI, or Oracle at the end of the 90s they’ll quickly point out to you that much of this is cyclical. The era of cheap money certainly juiced some of these trends, but without other conflicts in the workplace around outcomes and expectations we wouldn’t be at this inflection point.
This is not a topic that lends itself to a definitive answer, boundless and changing as the conditions are, but in talking with other engineering leaders, executives, CEOs, VCs, and a wide variety of practitioners, I found some trends that felt informative to me, and hopefully to you. I found not a single cause, but several interdependent causes. This isn’t a simple conversation, e.g. about remote vs hybrid, but a decade-long set of trends that explain why software engineering has gotten less successful, why strains on labor relationships have become more pronounced, why managers are so adamant that their job has gotten harder, and why we’re having this discussion at this exact moment.
In this blog post, part 1 in the series, I’m going to try to set the stage for the next few parts by laying out the discontent I’m seeing, and what are some of the causes and trends. In particular I suggest that over the last decade we’ve seen:
Further I believe that we’re having this conversation at this exact moment because we’re at the tail end of a decade of cheap money. The relative ease of raising capital has both contributed to the trends that have brought us to this point of discontent, and allowed us to put off dealing with the challenges it created. Until now.
In future installments I’ll deep dive into the causes and trends, and share some ideas about how we can evolve our practices. I’m hoping this whole series will be useful for people thinking about the current state of the software industry, managers looking to ease their practice, individuals trying to understand the system that they’re operating in, and for anyone who joined our industry in the last decade and is looking for some perspective.
(And a sincere thank you to everyone who read the draft form of this when it was all one long rambling brain dump blog post. I can’t promise it isn’t still a rambling brain dump, but at least now it’s a rambling brain dump broken into sections and installments!)
The earliest signal that caught my attention, suggesting a phenomenon that wasn’t just local to my own experiences, was a sharp rise in discontent among three of the groups I speak with regularly: CEOs (and other executives), managers, and senior ICs (staff, principal, etc.). “No one is impressed with their tech team” was how one senior eng leader I spoke with put it.
CEOs, both in private and some in public, have been increasingly vocal about their skepticism regarding their engineering teams’ effectiveness. In public we’ve seen Sundar and Zuckerberg sharing these opinions, with a number of lesser luminaries following along (this is setting aside the toxic clown show that is Musk’s Twitter, and the sycophants he is inspiring). Layoffs have been one of the largest stories in tech this year. Companies have been quick to explain this trend as due to over-hiring during the pandemic. More quietly, some have pointed to a shifting focus on profits over growth. But also privately, and sometimes publicly, the sentiment is that engineering teams just aren’t as productive as executives expect them to be, that the over-hiring represents bloat, not just miscalculated ambition. It’s hard to overstate what a dramatic shift this is from how executives spoke about their engineering teams a decade ago, which piqued my interest.
Senior engineers meanwhile are feeling both frustrated and stuck. There is skepticism about whether early career folks are coming into the industry as well prepared as they used to (or into roles where they can be successful whether or not they’re prepared), but “kids these days” has a long history in our industry not to mention in every other human endeavor. Some of the increased pitch of frustration though is coming from the senior engineers’ own struggle to be effective. They feel “stuck”, with “entire chunks of [the] organization working on problems that feel self-inflicted and deploying skilled generalist engineers to seemingly low-value hyper-focused projects.”
Managers meanwhile experience their jobs as having gotten radically harder, caught in the middle of rising expectation and frustration on all sides.
One obvious caveat to call out upfront is we all just lived through a multi-year global pandemic that was filled with many tragedies, private and public, and forced all of us to make radical changes to our lives and work. Our industry was both touched relatively lightly given our comparative ease of working from home, and radically transformed as we stayed at home, and learned to do this work in our bedrooms and living rooms, and over Zoom, something very few of us signed up for. I don’t think that it can be overstated what a toll this has taken on all of us, even if the toll was different for each of us. I do, however, believe the trends I’m seeing are distinct from the pandemic, even if they interact heavily, but it’s reasonable to be skeptical of that conclusion. Even if all the changes we’re experiencing are attributable to living through a pandemic, I’m not sure what we do with that insight, so I’ve kept working on exploring these other avenues of understanding.
The dominant macro trend is fairly straightforward: interest rates are up. Interest rates being up creates better investment opportunities than tech, therefore tech stocks are down, and venture capital is harder to raise. This is not inherently interesting if we’re trying to learn about engineering and management practices. Stock prices have only ever loosely correlated with how well a company is executing and so the fluctuations only give us loose information about how companies may or may not need to improve execution.
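To make the discounting intuition concrete, here’s a minimal sketch (the rates are hypothetical, chosen only to illustrate the direction of the effect): a dollar of profit ten years out is worth much less to an investor when rates rise, which compresses the valuation multiples tech depends on.

```python
# A minimal sketch of why higher rates compress tech valuations:
# the present value of $100 of profit ten years out, discounted
# at hypothetical "cheap money" vs. tighter rates.
def present_value(cash_flow, rate, years):
    """Discount a future cash flow back to today at a given annual rate."""
    return cash_flow / (1 + rate) ** years

cheap = present_value(100, 0.01, 10)  # ~90.5 at 1%
tight = present_value(100, 0.05, 10)  # ~61.4 at 5%
```

The absolute numbers here don’t matter; the point is that companies valued on profits far in the future lose a third or more of their present value from a few points of rate movement, while companies valued on current profits barely move.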
What is interesting is what was hidden by a decade of cheap money that we find exposed as the tide goes out. As that tide has gone out we’re faced with a number of unique challenges, new conditions, broken practices, and dysfunctions which we’ve been able to avoid talking about by throwing money at the problems. And it is those challenges that can make it a uniquely difficult time for software development teams.
One way to understand what has changed is by looking at what we were spending the cheap money on.
Team sizes are up, creating a tight talent market, driving up salaries, with salaries for folks right out of college and early in their careers rising fastest, compressing compensation ranges. We also saw practices like annual stock refreshes become much more ubiquitous, and the emergence of other employee-friendly equity practices, like no vesting cliffs. Altogether the cost of paying an engineering team, especially a large team, is up significantly.
Marketing was another major sink for cheap money. This had several knock-on effects on the costs of operating a tech company, and in particular on engineering practices and costs. Beyond just the marketing budget, heavy investment in marketing has significantly raised companies’ expectations for the quality, scope, and speed of their analytical infrastructure. This has driven significant investment in data engineering talent and infrastructure, supercharged the MarTech SaaS sector (absorbing more of the limited data engineering talent), and required engineering and marketing to work in close partnership, two teams that have not traditionally been close partners.
Cloud and SaaS were major beneficiaries of cheap money with OpEx being much simpler to deploy quickly than CapEx. This is a key driver in the explosion of services that any single company integrates. Increased marketing budgets were also put behind these SaaS offerings, making a virtue of their adoption.
Cheap money has been used to finance startups that are outside of Silicon Valley’s traditional sweet spot: businesses with near-zero marginal costs from growth. Instead we see a crop of businesses with significant marginal costs due to interacting with the physical world, logistical complexity, licensing liabilities, or all of the above and more.
Finally, failing is a good way to keep complexity from growing year over year. With cheap money we’ve seen many more companies persisting and pivoting instead of simply folding. This has strained the capabilities of leaders, managers, and software architectures.
Thanks for reading part 1 of “Software and its Discontents, January 2023”. If you didn’t already believe we were struggling as an industry, it’s unlikely I’ve convinced you. If you were thinking about it, I hope I provided some systemic perspective on how, why and why now.
In part 2, I’ll be talking about the explosion of complexity we’ve seen in software development over the last decade, and in particular:
Apropos of having one of those conversations about how silly the term “full stack” is.
At my first job in industry you were expected to be full stack, though the term wouldn’t be coined for over a decade.
At the time full stack meant you could:
A small smattering of the things it did not require you to do included:
I’ve said it in different ways with each of the posts since I started trying to blog more regularly, but I thought it was worth writing a note dedicated entirely to making the point that this site is explicitly about trying to boil water with the lid off. The project is to publish early, and update, and clean it up as I go, and most importantly as people give me feedback.
As I said in the Obsidian tasks post, “The best way to ask a question is to share what you know, and have people tell you what you got wrong.”
And I’d encourage you to lower the bar for yourself as well. We’d love to see what you’re thinking about, however unformed. One trick that worked for me to lower the bar was to fork the writing between this site and the very slightly higher bar I hold for myself at Notes on engineering leadership. I’m sure I’ll come up with others. (Maybe bring back some sort of explicitly shorter form category? Or a link blog?)
I’ve had this idea kicking around the ideas.txt in various forms since OAuth got rolling, had it crossed out as “done” during the period when Keybase was a happening thing, and had to mark it as “undone” when Keybase kind of went to shit with the crypto spam/scams. Took another stab at building it over a weekend last April when the Elon craziness was just getting started, but didn’t get very far. So blogging instead. Who knows, maybe I’ll get back to it. But it would be cool if you built it.
A small service that you can authenticate to using various OAuth providers to prove that this me on Twitter is that me on Google, is that me on Wordpress, is that me on Mastodon. Some folks have built this just for Mastodon to Twitter, but I think it would be more interesting as a service that lets people attest to their identity across multiple platforms.
Advogato also seems like similar prior art in its way, as much as warning on preventing gaming the system
— Daniel Onren Latorre, @danlatorre@mstdn.social (@danlatorre) November 5, 2022
https://indieauth.com/ uses what you set as XFN - like `a href="#" rel="me"` - to do something quite close to that
— Alister (@alister_b) November 5, 2022
Reminds me of https://microformats.org/wiki/RelMeAuth
— Brett Slatkin (@haxor) November 5, 2022
I got asked what I meant by “well known locations”. Technically, well-known locations refers to RFC 8615, but I mean it more loosely, in the sense of demonstrating you control a website by changing it in a way a service specifies, e.g. uploading a specific file at a specific URL. Google Analytics’ list of ways to demonstrate you own a site is probably the best set of practices to crib from.
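As a sketch of that looser sense, a verification check could look something like the following. This is hypothetical code, not any real service’s API: the path and the function name are mine, and a real implementation would want HTTPS enforcement, redirect limits, and rate limiting.

```python
# Hypothetical sketch: prove you control a site by publishing a token
# the service gave you at a path the service specifies, then have the
# service fetch it and check. Names here are illustrative, not a spec.
import urllib.request


def controls_site(site_url, token, path="/.well-known/identity-proof"):
    """Fetch site_url + path and check the page contains the expected token."""
    try:
        with urllib.request.urlopen(site_url.rstrip("/") + path, timeout=10) as resp:
            return token in resp.read().decode("utf-8", errors="replace")
    except OSError:
        # Network errors, 404s, DNS failures all mean "not verified".
        return False
```

The rel="me" approaches folks mention above are the more social variant of the same move: instead of a token at a fixed path, the proof is a link on a page you control pointing back at the identity you’re claiming.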
It’s @claimid all over again!
— Terrell RuSSL ˙ ͜ʟ˙ (@terrellrussell) November 6, 2022
cc @fstutzman https://flickr.com/photos/fstutzman/395522169
We published a couple papers and eventually life needed to keep moving on…
— Terrell RuSSL ˙ ͜ʟ˙ (@terrellrussell) November 6, 2022
Open verification needs a business model. https://weblog.terrellrussell.com/2013/12/goodnight-claimid/
Oh, and more than a year before FriendFeed. So much vision. So little cash.
— Terrell RuSSL ˙ ͜ʟ˙ (@terrellrussell) November 6, 2022
I was mostly on vacation last week, though I certainly indulged myself in keeping up on the drama of Musk happening to Twitter. I scribbled down the note “Twitter’s messiness was a feature”, mostly as a reminder to myself. As we think about what comes next, I’m expanding the thought while I still remember it.
What makes Twitter so ungovernable was also key to what made it successful. (aka “become ungovernable”)
Twitter has always been messy. Dan’s Weird Twitter post touches on how the white space left by Twitter’s internal challenges was key to much of the innovation that drove the product (and social media at large). Perfection takes no risks and invites no one to play; Twitter, at least early Twitter, never suffered from that problem.
The other messiness that was inherent to Twitter is it wasn’t a forum with a topic, it didn’t cater to one crowd, it wasn’t Google Plus, or any of the multitude of social software over the years that has encouraged people to fracture their personality by interest1.
Put another way, Google+ was famously designed to allow Sergey (or was it Larry?) to talk to solar experts. But solar experts want to interact with reporters. And the reporter wants to interact with the trend setters, who want to interact with celebrities, who want to interact with Beyonce. Everyone has an aspiration, but their participation has the side benefit of allowing us to listen in. I learn a ton from the snarky jokes that climate experts exchange with their colleagues on Twitter, including in particular who they trust and value, as distinct from official power structures.
But mess, at any meaningful scale, has inherent tension, and creates conflict. Boring and scaled is easy; messy and small is a familiar problem we’ve dealt with as humans for millennia. Messy and big may be inherently unstable.
And I’m including this just because I’ve always loved this “become ungovernable” video of Daisy (also in the photo above):
certainly there was some fracturing of identity, most notably in the time honored tradition of having a main and an alt, an organic feature that resisted being dragged into the mainstream official features for so long until Instagram cracked it with the “Close Friends” stories. ↩
As Ward Cunningham says, “The best way to ask a question is to share what you know, and have people tell you what you got wrong.”
Per Ishan Puranik, Dataview queries are a much better solution. I was planning to hook up something clever with Watchman (as suggested by Channing) to re-run the shell script, but one thing I hadn’t figured out was how to keep my solution up to date when I made edits on mobile. Dataview solves all of that with a native solution. So now my tasks page is just:
```dataview
TASK FROM "/"
```
Also, if you’re playing with Obsidian, join the Discord.
I decided to ditch Craft in part because it didn’t have a way to show me all my open todos. Obsidian doesn’t have a default solution (as far as I can tell) to show me all the open checklist items. But Obsidian is an open system and you can make your own solution.
The community has a bunch of offerings for seeing your Tasks, but so far all the plugins have felt heavier than I wanted (and didn’t match my aesthetics in ways I can’t quite articulate).
I sat down this afternoon to see if I could solve this problem for myself. I poked around the documentation (which is sparse), trying to figure out if things I saw the community talking about like “queries” were core features or community plugins.
I didn’t find what I was looking for. So with 5 minutes of the 1 hour of yak shaving I allotted myself here is my current solution:
```shell
cd $your_vault
grep '\[ \]' * | sed 's/\.md:- \[ \] /SEP/' | awk -F'SEP' '{printf "[[%s]] %s\n", $1, $2}' > Tasks.md
```
This creates a new document named “Tasks” with each of your unchecked todos with a link to their parent document.
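If you’d rather avoid the shell quoting, here’s a rough Python equivalent. This is a sketch under the same assumptions as the pipeline above (a flat vault of .md files, tasks written as `- [ ]` items); `collect_tasks` is my name for it, not anything from Obsidian. One small improvement: it skips Tasks.md itself, so re-running doesn’t pick up its own output.

```python
# Sketch: collect unchecked "- [ ]" items from every .md file in a
# flat vault into Tasks.md, linking each task back to its source note.
from pathlib import Path


def collect_tasks(vault, out_name="Tasks.md"):
    lines = []
    for path in sorted(Path(vault).glob("*.md")):
        if path.name == out_name:
            continue  # don't re-ingest our own output on the next run
        for line in path.read_text(encoding="utf-8").splitlines():
            if line.strip().startswith("- [ ]"):
                task = line.strip()[len("- [ ]"):].strip()
                lines.append(f"[[{path.stem}]] {task}")
    (Path(vault) / out_name).write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Swapping `glob` for `rglob` would handle nested folders, which the grep version doesn’t.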
Real change takes resources. Organizations that know that they are supported by their community, and will be supported in an ongoing fashion, are able to undertake meaningful change. Organizations that have to go back to funders periodically are going to be circumscribed in what they tackle. We’re very aware of this in tech with the dichotomy between self-funded and venture-funded companies. Non-profits have a discourse around “the revolution will not be funded”. Unions, many churches, many community organizers, and more instead prefer tithing: an ongoing commitment to donate a percentage of your income, providing them with an ongoing revenue stream not dependent on a few key funders.
In tech we are surrounded by people just like us who have made unimaginable wealth. We all have at least one and often many stories about the near miss, the one that got away. It messes with our ideas about our own wealth. We lose sight of the fact that many software engineers straight out of college are being paid at the 90th percentile of US incomes.
Tithing: another way you can contribute that you are uniquely qualified to do because you work in tech.
First set of thoughts.
That tech valuations are unlocked by preferring bits (software) to atoms (physical things), and the subsequent near-zero marginal costs of scaling, is a truism deeply ingrained in many of us in tech. Obviously some people and organizations have unlearned this lesson: Amazon notably, but also a number of the companies building health offices or scooter rentals. Still, many companies that seem like they would be atoms companies, e.g. Uber or some of those scooter companies, are actually outsourcing the ownership of the atoms in favor of sticking with the bits. The work of decarbonizing our economy (and the work funded by the IRA and CHIPS) is a profoundly physical undertaking. I think that’s going to be hard for many of us coming from tech.
Obviously, work out in the physical world is going to benefit from the really ridiculous rate of improvement in things like AI/computer vision, with robot drones doing things like monitoring the state of the transmission system and deploying repairs before things become catastrophic, or ditto for pipelines and methane leaks to implement the methane charges from the IRA.
The standard theory about how you drive down costs, especially for atoms, is economies of scale. That remains true, but the work to decarbonize is often going to be more like the R&D we do in tech than the streamlined manufacturing of established industrial practice. Here we should be thinking about how learning curves drive down cost. The climate industry knows this, as no one can possibly have missed the precipitous decline in the cost of photovoltaics and a number of other green energy technologies. That said, tech also has a high concentration of people who think about socio-technical systems for innovation and have spent considerable hours contemplating things like how to apply the Toyota Production System to our domain. Feels like we may have something to offer as a source of talent and writing on the topics of continuous learning and improvement, the role human factors play, etc.
The data from industry suggests there are technologies that benefit from learning curves, and technologies that don’t. Technologies that benefit tend to be those being built in a controlled environment, like a factory. Technologies that don’t tend to be those built in the field (will AI/ML/robots change this?). Given that decarbonization is going to need to happen both in the factory and in the field, I wonder what insights we can share about modular design. (including what to do with the insight that almost all software teams that try to do modular design usually fail for a handful of predictable reasons, mostly around complexity)
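The learning-curve dynamic above is often modeled as Wright’s law: cost falls by a roughly fixed fraction each time cumulative production doubles. A sketch, with an assumed 20% decline per doubling (the exact rate varies by technology; 20% is in the range often cited for photovoltaics, but treat it as an illustration, not data):

```python
# Wright's law sketch: unit cost falls by a fixed fraction with each
# doubling of cumulative production. The 20% learning rate is an
# assumption for illustration, not a measured figure.
import math


def wright_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Unit cost after producing cumulative_units, starting from unit 1."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

# After 1024 units (10 doublings from unit 1), a $100 unit cost
# falls to roughly $10.70 at a 20% learning rate.
```

The factory-vs-field distinction matters here because controlled environments let you accumulate those doublings of standardized units; one-off field construction never gets far up the curve.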
A lot of the conversation around knowledge work happens in and around tech for a few different reasons. The system we use to fund technology startups favors self aggrandizement, but also our roots in tech have always included a high degree of collaboration and information sharing. Either way we talk in public about a lot of things that other industries keep behind closed doors. Some of those topics:
Much like dealing with complexity (which is an overlapping topic) tech’s foundational understanding of being okay with failure as the way to unlock innovation is learned from our Defense industry origins. That said we’ve added a significant body of practice and tooling around hypothesis, experimentation, risk, data collection and iteration in information rich environments.
More than anything, what I worry about most with the climate change legislation is that the $400B all goes to large, slow-moving organizations that specialize in slow-moving, CYA-style planning, attempting to boil the ocean with big centralized plans whose actual function is to make sure that no one is accountable for the inevitable failures. If tech has anything to offer, it will be how to get comfortable getting started without knowing all the answers, and how small wins create the momentum that is the foundation of big wins.
How to deploy these insights into a world of physical infrastructure and government funding is an interesting and open question, though organizations like Code for America and e.g. their work on CalFresh are interesting data points.