Some blog about some stuff (mostly programming)

Continuous refactoring, wildfire and possible remedies

Let me preface this by saying that the content of this post is just an inference from my experiences as a software developer. I’m not trying to preach, insult or convince anyone of anything here. Take it as just a point of view. Nothing else.

You’ve probably heard the terms “continuous refactoring” or “opportunistic refactoring” mixed in together with agile and scrum. With good reason, actually: continuously refactoring the codebase goes hand in hand with the agile mantra that the development cycle should be constantly in progress, iterative and improving.

However, as with fire, it’s a pretty dangerous double-edged investment. Used cautiously, it’s extremely useful and brings comfort. Used carelessly, it ends with deaths and psychological scars that never heal. Of course, it would be easy to just end with “Be careful what you’re doing”, but that’s ridiculous. That’s kind of an obvious statement to make, isn’t it?

So now that we know that continuous refactoring is dangerous and that we need to be careful, is there anything we can do to make it more fire resistant? In my experience it’s usually a good idea to approach it in one of two ways:

Continuously


This one’s the riskier of the two. It requires independent structuring within a team and refactoring as soon as you get the chance. Basically, devs need to be at an intermediate to senior level. Since in almost every project there is no time to waste, reliance on their judgement and common sense is a must. You need people you can trust to go this way.
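To make the idea concrete, here is a hypothetical example of the kind of small, opportunistic refactor a developer might fold into an unrelated change. All names and the discount rule are invented for illustration:

```python
# Before: the same discount rule copy-pasted across call sites.
def checkout_total(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9  # 10% bulk discount
    return total

def invoice_total(prices):
    total = sum(prices)
    if total > 100:
        total = total * 0.9  # same rule, duplicated
    return total

# After: while touching this file for another task, the duplicated
# rule is extracted into one helper -- a refactor small enough to
# ship alongside the feature, without a separate ticket.
def apply_bulk_discount(total, threshold=100, rate=0.10):
    """Apply a percentage discount once the total passes a threshold."""
    return total * (1 - rate) if total > threshold else total

def checkout_total_refactored(prices):
    return apply_bulk_discount(sum(prices))

def invoice_total_refactored(prices):
    return apply_bulk_discount(sum(prices))
```

The point isn’t the discount logic; it’s the scale. An opportunistic refactor stays this small, so it can ride along with regular feature work instead of blocking it.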

But this brings one major problem to the table, and that is code consistency. If you’re going the constant-refactoring route, you can quickly end up in a hectic state where you no longer know what’s going on in the code, even with the experienced devs I mentioned above. Because of that, you will need strict rules in the project, enforced with as much external tooling help as you can get.
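Rules like these only hold if a machine enforces them. As a hypothetical illustration (the 40-line limit and the check itself are invented, not taken from any particular linter), even a team-specific rule like a maximum function length can be scripted in a few lines and wired into CI:

```python
import ast

MAX_FUNC_LINES = 40  # an arbitrary team rule, purely for illustration

def long_functions(source: str):
    """Return names of functions exceeding the line budget."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                offenders.append(node.name)
    return offenders
```

In practice you would reach for an off-the-shelf linter or formatter first; the sketch just shows that even house rules with no off-the-shelf check are cheap to automate.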

In waves

Definitely the safer, but slower, route to take. Here you have a strict feature-first orientation. It plays better with agile, but it requires a space, either between sprints or after a couple of them, to focus on refactoring and improvement. It usually goes something like this:

  1. Place tasks in a backlog
  2. Revise and groom them
  3. Start working on them in a sprint
  4. While in the sprint, whenever you notice a possible improvement or refactoring in the code, create a task, mark it as technical debt and place it in the backlog
  5. At the end of the sprint, choose which technical-debt tasks to work on and refactor the code in the space between sprints
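The steps above can be sketched as a minimal backlog model. The class and method names here are my own invention, not any particular tracker’s API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    title: str
    tech_debt: bool = False  # step 4: flagged when a refactoring chance is spotted

@dataclass
class Board:
    backlog: List[Task] = field(default_factory=list)
    sprint: List[Task] = field(default_factory=list)

    def add(self, title, tech_debt=False):
        # Steps 1 and 4: everything lands in the backlog first.
        self.backlog.append(Task(title, tech_debt))

    def start_sprint(self):
        # Step 3: pull only feature work into the sprint;
        # debt tasks stay parked in the backlog.
        self.sprint = [t for t in self.backlog if not t.tech_debt]
        self.backlog = [t for t in self.backlog if t.tech_debt]

    def refactoring_window(self, budget):
        # Step 5: between sprints, pick as many debt tasks as the budget allows.
        chosen = self.backlog[:budget]
        self.backlog = self.backlog[budget:]
        return chosen

board = Board()
board.add("Build login page")                      # step 1
board.start_sprint()                               # step 3
board.add("Extract auth helper", tech_debt=True)   # step 4, spotted mid-sprint
debt = board.refactoring_window(budget=1)          # step 5
```

The key design choice is that debt tasks never enter a sprint directly; they accumulate and are drained in batches, which is what keeps the feature-first orientation strict.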

The time needed for refactoring between sprints, in my experience, usually ranged from a couple of days to the length of a whole sprint, but never more. A day or two of that refactoring time is always reserved for extensive testing and stabilizing the product.


And here we touch the wildfire part. The fact remains that continuous refactoring is dangerous: it can lead to all kinds of undefined behaviour as it spreads, destroying the code and your hopes and dreams along the way.

Not to mention the scale of refactoring. If you’re planning to change the structure and/or patterns of the whole project, or large chunks of it, you may as well ditch everything and start over at that point.

Summa summarum

In the end, I’d say continuous refactoring pays off. You keep the codebase up to date with the times, and you never sweep problems under the rug, since you’re constantly refactoring and improving. But it does demand stricter handling and closer care of the project.
