During my master’s program, I was introduced to the concept of Chesterton’s Fence, first described by the writer and philosopher G.K. Chesterton in his 1929 book The Thing. The idea, paraphrased, is simple yet profound:
“Imagine you come across a fence built across a road. You don’t see any immediate purpose for it, so you propose to remove it. However, before doing so, you are challenged to first understand why it was put there in the first place. Only after you’ve understood its purpose can you make an informed decision about whether or not to remove it.”
Having worked in the tech industry for several decades, I have seen firsthand how often this concept is overlooked—to the detriment of organizations. In our rush to innovate and improve, we frequently fail to consider the reasons behind existing systems and processes. This can lead to unnecessary complications, wasted resources, and even outright failures.
This is not to say that other industries are immune, but the tech industry is particularly prone to this trap. We are constantly bombarded with new technologies and methodologies, making it easy to get caught up in the latest trends without taking the time to understand the underlying principles. The further removed someone is from the actual work, the more likely they are to misunderstand the systems in place. This is especially true for those in leadership positions, who may be more focused on the big picture than on the details of how things actually work.
The Law of Unintended Consequences is another concept closely related to Chesterton’s Fence: any intervention, especially one intended to bring about positive change, can produce outcomes that were never intended, some of them negative or even disastrous. Given the speed at which the tech industry moves, it is easy to see how this happens. A new technology or process may seem like a great idea on the surface, but without an understanding of the existing systems, it can lead to unforeseen complications and failures.
In the late 1990s, Netscape faced increasing competition from Internet Explorer. To keep up, leadership decided to rewrite their entire codebase from scratch rather than iterating on their existing browser.
This decision proved disastrous. The rewrite took three long years, during which Microsoft’s Internet Explorer steadily gained market share. By the time Netscape finally shipped the rewritten browser, the company had become irrelevant.
Looking back, this was a classic case of Chesterton’s Fence. The existing codebase had evolved over years of development, quietly accumulating bug fixes, edge-case handling, and optimizations that were easy to overlook. By discarding it without fully understanding what that code did, Netscape lost irreplaceable time and resources, ultimately sealing its downfall.
In 2023, Reddit made headlines when it laid off a significant portion of its workforce and scaled back hiring plans. Leadership believed that fewer employees could maintain the platform just as effectively, allowing them to reduce expenses ahead of an IPO.
Shortly after, Reddit began experiencing frequent outages and performance issues, with fewer engineers available to fix them. More critically, some of the dismissed engineers had deep knowledge of legacy systems that weren’t well-documented. When things broke, there was no one left who truly understood how to fix them. (Ironically, Reddit was down off and on as I wrote this post.)
At play here was the loss of human infrastructure—the accumulated knowledge and experience of employees over time. In structural engineering, we distinguish between load-bearing and non-load-bearing components. The same applies to human infrastructure: remove the wrong people, and the entire structure can collapse. In this case, the layoffs were made without fully understanding the complexity of the existing system and the people who maintained it.
Before Elon Musk’s acquisition of Twitter in 2022, the platform’s verification system helped users identify legitimate accounts. This system placed a blue checkmark next to verified accounts, typically belonging to public figures, journalists, and organizations. Verification required documentation to prove identity.
After the takeover, verification became a commodity anyone could buy for $8 a month. It’s unclear whether leadership fully understood the potential ramifications of this change or simply dismissed them as unimportant.
The result was immediate chaos. A flood of fake accounts impersonated celebrities, politicians, and corporations, spreading misinformation. In one widely publicized case, a fake Eli Lilly account tweeted that insulin was free, causing the company’s stock to plummet.
This was a textbook example of the Law of Unintended Consequences. The old verification system existed to prevent exactly this kind of impersonation. Within days, Twitter was forced to revise the policy, but the damage to trust, user experience, and advertiser confidence was already done.
In 2005, Yahoo acquired Flickr, a rapidly growing photo-sharing platform with a strong community. Yahoo saw it as an asset to strengthen its media presence and treated Flickr purely as a product rather than as a community.
One of the first missteps was forcing users to integrate with Yahoo accounts, disrupting the identities and communities they had built on Flickr. Other changes followed, including intrusive ads, the deprecation of key features, and a failure to prioritize mobile development.
The result? Users left in droves for emerging competitors like Instagram. Flickr was eventually sold to SmugMug in 2018, but by then the damage was irreversible. Leadership had dismantled the very elements that made Flickr successful, failing to recognize why the platform thrived in the first place.
Each of these examples shows Chesterton’s Fence, and its companion the Law of Unintended Consequences, at work: a change that seemed logical on the surface proved catastrophic because leaders never understood why the original system existed.
To be fair, there are success stories where breaking with tradition has led to positive outcomes. But these are exceptions rather than the rule. In an industry that wears “move fast and break things” as a badge of honor, we often forget that some things, when broken, cannot be fixed.
While there’s no universal solution to this problem, we can take steps to mitigate the risk of unintended consequences:

- Before removing or replacing a system, investigate why it was built and what problems it quietly solves.
- Talk to the people who built and maintain it; their institutional knowledge is load-bearing infrastructure, too.
- Document legacy systems so that knowledge survives layoffs and turnover.
- Prefer incremental changes to wholesale rewrites, so consequences surface early and course corrections remain possible.
The next time you’re faced with an opportunity to “fix” a system, ask yourself: Do I truly understand why this exists? Because if you don’t, removing it may cause more harm than good.