
Why I will most likely replace your systems and why it will cost you less of my time and your money

The Premise: We need our systems

A company’s performance can only be as good as its infrastructure allows. From logistics to office processes, timeliness, accuracy and efficiency rely on a solid foundation and a good operating framework. But that’s only half the equation: you must also offer good products and services to your market. A company’s technical systems, be they soft-, hard- or firmware, are no exception. Whether these systems are part of the framework or are themselves the deliverable, success depends on them doing their job well.

Once a company invests in creating a system, the system becomes an asset. Like any other service, product or tool, there is now a relationship of ownership and reliance on the creation. Developers and engineers maintain the system, colleagues utilize it or provide it to consumers, time passes. Regardless of what purpose these systems serve, they will age, and as they age they will become less adequate at meeting the changing demands of their purpose. Eventually, every system will require updating.

So do we rewrite or refactor?

The Crux: There must be value in every endeavour

Most people will attempt to maximize any resource, getting as much as possible out of it for as long as possible.

Intellectual property is important to any company and attachments can be hard to sever. A programmer will become used to a code base, a CEO will be proud of her brainchild, a president will want to see an increase in profit.

The problem is that a lot of systems are unmaintainable. Whether a company subscribes to agile, big design up front or waterfall, it is susceptible to this problem. The truth is, for various reasons, you may have ended up with property not worth owning.

There are two common scenarios for this. The first is that your company simply didn’t have the time the project team asked for, due to time-to-market pressure or an extreme need to fill an operational gap. The second is similar, but distinctly different: you had a lead or a team who felt that writing ad hoc code was the best bet, whether because they thought you wouldn’t accept pushback or because of significant early input from “experts”.

Now, am I saying this means the entire thing was botched by your operational needs, or that your software team was horrid? Not at all. Your needs were being met, and there is probably some very valuable code in there. Rather, I am positing that if you need a significant update, enough time has passed that your original platform stack is in need of replacement; coupled with significant code wear, time will have to be spent bringing everything back up to speed.

The Fallacy: The mantra of refactoring

The mantra goes something like this: rewriting code from the silicon, or near the silicon, up is time consuming and rarely beneficial to the owner of the system. Particularly for small, simple-sounding changes, revising the relevant code and adding the changes is always more efficient.

For a larger modification, you should refactor the system using an iterative approach and bring it up to date slowly. For new requirements, extending existing code by directly inserting new functionality, or even creating an adapter to bridge it into the old software, is always preferable. To improve existing functionality, patches and even monkey-patches are the favoured go-to. Behind all of this is the common idea that refactoring is the only way to avoid losing the “value” of an old system in a business sense.
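To make the adapter idea concrete, here is a minimal sketch in Python. Every name in it (`LegacyReporter`, `ReportAdapter`, the calling conventions) is hypothetical, invented purely for illustration; the point is only the shape of the technique: the old code stays untouched, and a thin layer translates between it and the interface new code expects.

```python
class LegacyReporter:
    """Stand-in for old code we don't want to modify (hypothetical)."""

    def make_report(self, rows, sep):
        # The legacy calling convention: lists of values, explicit separator.
        return sep.join(",".join(str(v) for v in row) for row in rows)


class ReportAdapter:
    """Bridges the legacy API to the interface new code expects."""

    def __init__(self, legacy):
        self._legacy = legacy

    def render(self, records):
        # New code passes dicts; the legacy code wants lists of values.
        rows = [list(r.values()) for r in records]
        return self._legacy.make_report(rows, sep="\n")


adapter = ReportAdapter(LegacyReporter())
print(adapter.render([{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]))
```

The trade-off is that the adapter inherits every assumption of the legacy code underneath it, which is exactly the constraint discussed later: the old decisions still cap what the new interface can do.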

The idea that replacing a system is a critical business mistake gained traction a few years back and has become almost ubiquitous among some people and organizations. The main idea was that a rewrite would take too long and that important, valuable bits of code are embedded somewhere inside. For example, the ancient protocol your client software speaks to your servers sits underneath a layer of extensions, patches and modifications. There is also the accumulated knowledge of how the code works embedded in the source itself, built up through the various checks and balances introduced to solve known issues and bypass emergent problems.

But how valuable is this labyrinth of patched-upon logic, really? How important to the business are code artifacts versus the overall system? In reality, it depends on the underlying architecture of the system and the goals behind the update.

The Origin: Cult of the Dead Code

Modern Wrong Thinking: This was a valid view, once

While there were numerous people who constantly reinvented the wheel, usually in an inferior way, there were also numerous people who kept trying to round square wheels on a moving wagon.

While the approach of writing ad hoc new solutions to common problems in past decades has given us its share of grief in the present, an equal amount is caused by people who extended those code bases long past the point the code could hold up its own middle. Some of this code could never hold up its own middle to begin with, yet still received updates. Simply put: when code hasn’t got any architecture to speak of, you shouldn’t deploy it; and if you do, refactor and revise it as part of your maintenance, before you build upon or expand it.

While tools were arguably less sophisticated in the past (though it could be argued in some cases that sophistication is not verbosity), programming was commensurately less sophisticated as well. Some solutions that seem trivial to rewrite today were much more complicated to replace in the past. And while a system could not grow quite as large and involved as it can today, it also couldn’t sprawl and rot the way it can today. The cost of memory, storage and systems was prohibitive to the wrappers, APIs, comprehensive compilation and uncovered code we employ today on the scale they are employed. In the past, the biggest obstacle to rewriting a solution was lack of specialized knowledge or outright laziness.

Rewriting questionable ASM, C, Pascal, FORTRAN, COBOL or C++ was just as sensible and efficient then as it is today to rewrite, port or upgrade questionable Python, C#, C++, Ruby, JS, et al., and for the same reasons.

Has anything changed in the last couple of decades?
Two truths have existed ever since the ancient days of ASM:

  1. Well architected and written code is easy to maintain and refactor
  2. Code that isn’t tends to not be worth the effort

With all of the advances in the field and the abundance of open-source frameworks and libraries, nobody has come back and questioned how long rewriting actually takes. The assumption that it is particularly time consuming remains, and it often goes unquestioned.

Rewriting lets you leverage all of the progress that has been made with frameworks/libraries/open source/etc.

It is faster to rewrite now than it was before. But beyond speed, has anything else become more favorable for making a rewrite?

Leveraging new technologies and frameworks: how does this impact project quality? Sometimes the advances made in the broader ecosystem since the software was first written are especially compelling: security improvements, performance enhancements, better support, and so on. Being able to leverage this value can be a significant win for a project, especially in reducing the long-term maintenance cost of the code base going forward.

Has refactoring got faster too?

Rewriting has become faster in recent times; has refactoring kept pace?

IDEs and other tools, such as linters, have made the task of refactoring faster. But how do the advances in refactoring compare to the advances in rewriting?

Refactoring remains an intrinsically difficult task, and the main constraints are generally not tooling. You are greatly constrained by the existing codebase and the decisions made within it; these constraints put a cap on how easy a refactor can be, because you are dealing with the decisions made in the previous code. This is nasty because a refactor usually means the decisions taken when the code was written are no longer optimal for the current moment. After all, if they were still optimal, you probably wouldn’t be refactoring in the first place.


This seems like an important topic to touch on, as the refactor-vs-rewrite question has a lot to do with architectural decisions. Refactoring involves improving the architecture of existing code, which means you are in essence rewriting a substantial and very important piece of the overall engineering of the product from scratch anyway. If a refactor doesn’t improve the architecture, you are very likely wasting money. If you find that this is a common situation, you might want to reconsider how much technical debt you take on in the early phases of your projects.

Getting more skilled at architecture will help you enormously when making judgment calls about refactoring vs rewriting.

Example of a time where I wouldn’t replace the system

Replacing or refactoring an entire system is expensive, so we really don’t want to do it unless we need to. Essentially, a rewrite is often a very bad idea if the following are true:

  1. We know that system works as expected.
  2. The API is well known and documented.
  3. Most of the code follows a sensible architecture.
  4. The volume of patches and hacks is reasonably low compared to the overall codebase.

It’s important that these hold because they are critical factors in the value of the old code. Old code that works has value.

This is a case where, if we had to use the old code from newer systems, we might create a wrapper layer.

If the main pain point is interacting with the code, then creating wrappers to reduce that pain is a very valuable technique. For example, we have made Python and .Net wrappers for such systems on many occasions, and we have sometimes made REST APIs in front of other services. The best choice will depend on the project.
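As a minimal sketch of what such a wrapper might look like, here is a hypothetical Python example; `legacy_lookup` and its error convention are invented for illustration, not taken from any real system. The wrapper’s only job is to hide an awkward old convention (a sentinel return value) behind an interface modern callers expect (an exception), without touching the legacy code itself.

```python
def legacy_lookup(key):
    """Stand-in for old code (hypothetical): returns -1 on failure
    instead of raising, a convention that is easy to forget to check."""
    table = {"alpha": 10, "beta": 20}
    return table.get(key, -1)


def lookup(key):
    """Thin wrapper: translates the sentinel value into an exception,
    so new callers can't silently miss a failed lookup."""
    result = legacy_lookup(key)
    if result == -1:
        raise KeyError(f"no entry for {key!r}")
    return result


print(lookup("alpha"))  # 10
```

The same idea scales up: a REST API in front of a legacy service is just this wrapper written at the network boundary instead of the function boundary.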

How does Jagged\Verge approach legacy systems?

How do we get the best ROI on the time spent consulting/re-engineering systems?

Determining what the best action is with legacy systems comes down to assessing value, specifically the value to the people actually using the systems. We always try to determine how much value is in the old system and what the costs are for using that system. This gives us a point to compare against when we look at the costs of creating a new system and the value that is generated by making something new.

Which techniques help us be more productive?

A couple of years ago I was asked by a junior developer if I had any suggestions for techniques they could learn to improve their productivity. It was a great question, one I’m really glad I was asked: it showed interest, and it got me seriously thinking about how you can improve the productivity of a development team.

One of the striking things is that, despite being a very broad topic, the best things to learn are highly dependent on the skills of the individual in question and the projects they are working on. In many ways, the best approaches to improving productivity in the IT industry are related to the best approaches to learning, more so than in any other industry I can think of, because knowledge is often the biggest barrier to better productivity. In such a fast-changing industry, learning new skills effectively is a huge asset; being able to learn efficiently improves productivity enormously.

Often the trickiest question is figuring out which techniques to learn next. Hopefully after reading this you have an idea of how to go about answering that question.