A few months ago, at the end of a customer presentation about “The Code Quality Paradigm Change”, I was approached by an attendee who said, “I have been following SonarQube & SonarSource for the last 4-5 years and I am wondering how I could have missed the stuff you just presented. Where do you publish this kind of information?”. I told him that it was all on our blog and wiki and that I would send him the links. Well...
When I checked a few days later, I realized that actually there wasn't much available, only bits and pieces such as the 2011 announcement of SonarQube 2.5, the 2013 discussion of how to use the differential dashboard, the 2013 whitepaper on Continuous Inspection, and last year's announcement of SonarQube 4.3. Well (again)... for a concept that is at the center of the SonarQube 4.x series, that we have presented to every customer and at every conference in the last 3 years, and that we use on a daily basis to support our development at SonarSource, those few mentions aren't much.
Let me elaborate on this and explain how you can sustainably manage your technical debt, with no pain, no added complexity, no endless battles, and pretty much no cost. Does it sound appealing? Let's go!
First, why do we need a new paradigm? We need a new paradigm to manage code quality/technical debt because the traditional approach is too painful and has generally failed for many years now. What I call a traditional approach is one in which code quality is periodically reviewed by a QA team or similar, typically just before release, producing findings that developers are expected to act on before releasing. This approach might work in the short term, especially with strong management backing, but it consistently fails in the mid to long run, because:
- The code review comes too late in the process, and no stakeholder is keen to get the problems fixed; everyone wants the new version to ship
- Developers typically push back because an external team, which does not know the context of the project, is making recommendations on their code. And by the time those recommendations arrive, the code is often already obsolete
- There is a clear lack of ownership for code quality with this approach. Who owns quality? No one!
- What gets reviewed is the entire application just before it goes to production, and it is obviously not possible to apply the same criteria to every application. A negotiation happens for each project, which drains all credibility from the process
All of this makes it pretty much impossible to enforce a Quality Gate, i.e. a list of criteria for a go/no-go decision to ship an application to production.
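To make the term concrete, here is a minimal sketch of what a Quality Gate boils down to: a fixed list of conditions on measurable criteria, evaluated to a single go/no-go answer. The metric names and thresholds below are illustrative only, not an actual SonarQube configuration.

```python
from dataclasses import dataclass

@dataclass
class Condition:
    metric: str       # name of the measured criterion
    threshold: float  # limit the measure must respect
    op: str           # "max" = stay at or below, "min" = stay at or above

    def passes(self, measures: dict) -> bool:
        value = measures[self.metric]
        return value <= self.threshold if self.op == "max" else value >= self.threshold

# Illustrative gate: metric names and thresholds are made up for the example.
QUALITY_GATE = [
    Condition("blocker_issues", 0, "max"),
    Condition("critical_issues", 0, "max"),
    Condition("test_coverage_percent", 80, "min"),
    Condition("duplicated_lines_percent", 3, "max"),
]

def go_no_go(measures: dict) -> bool:
    """Return True (go) only if every condition of the gate is met."""
    return all(c.passes(measures) for c in QUALITY_GATE)

if __name__ == "__main__":
    release_candidate = {
        "blocker_issues": 0,
        "critical_issues": 1,
        "test_coverage_percent": 84.5,
        "duplicated_lines_percent": 2.1,
    }
    print("GO" if go_no_go(release_candidate) else "NO-GO")  # NO-GO: one critical issue
```

The point of the sketch is that the decision is binary and mechanical; the hard part, as the rest of this post argues, is choosing what the conditions apply to.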
For someone trying to improve quality with the traditional approach, it translates into something like: the total amount of our technical debt is depressing, can we have a budget to fix it? After asking why the code was not done right in the first place, the business might say yes. But then there's another problem: how do you fix technical debt without injecting functional regressions? This is really no fun…
At SonarSource, we think several parameters in this equation must be changed:
- First and most importantly, the developers should own quality and be ultimately responsible for it
- The feedback loop should be much shorter and developers should be notified of quality defects as soon as they are injected
- The Quality Gate should be unified for all applications
- The cost of implementing such an approach should be insignificant, and should not require the validation of someone outside the team
Even with those parameters changed, code review is still required, but I believe it can and should be more fun! How do we achieve this?
When you have a water leak at home, what do you do first: plug the leak, or mop the floor? The answer is simple and intuitive: you plug the leak. Why? Because you know that any other action will be useless and that it is only a matter of time before the same amount of water is back on the floor.
So why do we tend to behave differently with code quality? When we analyze an application with SonarQube and find that it carries a lot of technical debt, the first thing we generally want to do is start mopping, i.e. remediating, or at least put together a remediation plan. Why don't we apply the simple logic we use at home to the way we manage our code quality? I don't know why, but I do know that the remediation-first approach is terribly wrong and leads to all the challenges enumerated above.
Fixing the leak means putting the focus on the “new” code, i.e. the code that was added or changed since the last release. Things then get much easier:
- The Quality Gate can be run every day, and passing it is achievable (see the sketch after this list)
- There is no surprise at release time
- It is pretty difficult for a developer to push back on problems he introduced the previous day. And by the way, I think he will generally be very happy for the chance to fix the problems while the code is still fresh
- There is a clear ownership of code quality
- The criteria for go/no-go are consistent across applications and shared among teams. Indeed, new code is new code, regardless of which application it is written in
- The cost is insignificant because it is part of the development process
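To give a feel for how this plugs into day-to-day development, here is a small sketch of a script a team could run in its daily build to fail the pipeline whenever the gate is red. It assumes a SonarQube server exposing the api/qualitygates/project_status Web API endpoint (available in recent versions; check your server's documentation), and the server URL and project key are placeholders.

```python
import sys
import requests  # third-party HTTP client

SONAR_URL = "https://sonarqube.example.com"   # placeholder server URL
PROJECT_KEY = "my-project"                    # placeholder project key

def quality_gate_status(server: str, project_key: str) -> dict:
    """Fetch the Quality Gate status of a project from the SonarQube Web API."""
    response = requests.get(
        f"{server}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["projectStatus"]

if __name__ == "__main__":
    status = quality_gate_status(SONAR_URL, PROJECT_KEY)
    print(f"Quality Gate: {status['status']}")
    for condition in status.get("conditions", []):
        print(f"  {condition.get('metricKey')}: {condition.get('status')}")
    # Fail the build (no-go) when the gate is not green.
    sys.exit(0 if status["status"] == "OK" else 1)
```

Because the gate conditions apply to new code, a check like this stays green as long as the code added or changed since the last release is clean, whichever application it runs against.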
As a bonus, the code that gets changed the most has the highest maintainability, and the code that does not get changed has the lowest, which makes a lot of sense.
I am sure you are wondering: and then what? Then nothing! Because of the nature of software and the fact that we keep changing it (SonarSource customers generally report that about 20% of their code base gets changed each year), the debt will naturally be reduced. And where it isn't reduced is where it does not need to be.