Most organizations realize Application Code Review (ACR) is essential to maintain the quality of the applications they develop. The real challenge is finding the time to conduct those reviews in an era where organizations need to accelerate the rate at which applications are being built, deployed and updated.
There are clearly some very good reasons for conducting a rigorous review of application code. The first is to discover mistakes as early as possible, minimizing their impact both on the application itself and on any external code that depends on the faulty code discovered during the review.
To facilitate the ACR process, a reviewer clearly needs to first familiarize themselves with the feature they are about to review. Creating a ‘uniform’ set of coding rules for shared naming conventions and coding structures makes the code easier to understand not only today but, just as importantly, six months later, when developers or testers may need to revisit it.
Ultimately, ACR serves two higher-level purposes. First, it fosters a sense of collective ownership of the code being developed. Second, it creates an invaluable retrospective opportunity to teach developers and testers alike how to consistently deliver the highest-quality application experience.
The code review challenge
In organizations where the rate at which applications are developed and updated is prized, maintaining application quality can be difficult. Ensuring application quality takes significant time and effort, and the return on that investment can often only be measured over an extended period. Code reviews can also leave less time for other tasks, such as creating additional user stories to test.
Other significant ACR challenges that need to be overcome often include:
- It’s difficult to be consistent over time, especially when reviews are performed by multiple people.
- It’s challenging to maintain a positive culture, especially if reviewers find themselves repeatedly discovering the same issues.
- It’s hard to get developers or testers to take themselves out of a current programming flow to devote time to review an existing application.
- It’s easy to forget that people are not machines, so reviews often need to be repeated to make sure no additional adjustments are required.
Clearly, there’s an opportunity to rely on machines to both improve and accelerate the ACR process. If machines automate most routine processes, it becomes possible for reviewers to focus more of their time and effort on more complex logic and code.
Automating code reviews
Many parts of code reviewing can be easily automated. Checking naming conventions, project settings, code size, complexity, and adherence to a large number of coding rules can all be automated. As the saying goes: if it can be automated, it should be automated!
Best of all, developers can save time for the entire team by running most automated tests themselves. That frees reviewers to spend more of their time and effort on the more complex logic and code.
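As a minimal sketch of what such automation looks like, the TypeScript fragment below checks naming conventions and a size limit for model elements. The prefixes, regular expressions, thresholds, and element shape are hypothetical illustrations, not the actual ACR rule set:

```typescript
// Simplified stand-in for a reviewed model element (hypothetical shape).
interface ModelElement {
  kind: "microflow" | "entity";
  name: string;
  actionCount?: number; // only meaningful for microflows
}

interface Violation {
  element: string;
  rule: string;
  message: string;
}

// Assumed conventions: microflow names carry a purpose prefix,
// entity names are PascalCase, microflows stay under a size limit.
const MICROFLOW_PATTERN = /^(ACT|SUB|DS|VAL)_[A-Za-z0-9_]+$/;
const ENTITY_PATTERN = /^[A-Z][A-Za-z0-9]*$/;
const MAX_MICROFLOW_ACTIONS = 25; // assumed threshold

function reviewElement(el: ModelElement): Violation[] {
  const violations: Violation[] = [];
  if (el.kind === "microflow") {
    if (!MICROFLOW_PATTERN.test(el.name)) {
      violations.push({
        element: el.name,
        rule: "naming",
        message: "Microflow name should start with ACT_, SUB_, DS_ or VAL_",
      });
    }
    if ((el.actionCount ?? 0) > MAX_MICROFLOW_ACTIONS) {
      violations.push({
        element: el.name,
        rule: "size",
        message: `Microflow has more than ${MAX_MICROFLOW_ACTIONS} actions`,
      });
    }
  } else if (!ENTITY_PATTERN.test(el.name)) {
    violations.push({
      element: el.name,
      rule: "naming",
      message: "Entity name should be PascalCase",
    });
  }
  return violations;
}
```

Because checks like these are pure functions over the model, they are trivial to run on every commit, which is exactly what makes them such good candidates for automation.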
The code review ROI
Opting to skip the ACR process essentially means you’re building and deploying code that is not only hard to maintain but degenerates into spaghetti code very quickly. All it takes is a few sprints for a development team to find itself paying the price for a lack of regression tests, bad design decisions, and buggy code.
Every user story needs time allotted for the quality checks that should be part of its definition of done. A completed user story is the shared responsibility of both the reviewer and the developer of the original code. If the original developer is absent, the reviewer should be able to take over and continue, enhance, or change the code.
The most senior ‘developer’ on the team should assume the role of architect, code reviewer, coach, and trainer. This person needs to be involved in all code to bring the team to a higher level, and naturally needs to stay in touch with the code as it is written. By not assigning user stories to this person, or by assigning only specific responsibilities to minimize overhead, the team can increase both the quality of the code being developed and its overall effectiveness.
Mansystems’ Application Code Review (ACR) for Mendix
The Mansystems Application Code Review (ACR) tool for Mendix automates code reviews of rules and naming conventions. The tool inspects the Mendix model via the Mendix Model SDK to review that code. Tests can be run manually by the developer or automatically whenever a commit is made. ACR processes that can be automated include:
- Performance
- Security
- Reliability
- Maintainability
- Architecture
- Project hygiene
In each of these categories, Mansystems has created rules such as:
- An example of a performance rule is not using time-consuming actions inside a loop. Commits to the database can best be done outside of a loop.
- An example of a security rule is checking that all entities have XPath rules for any role that requires multi-tenant security.
- Examples of reliability rules are checks for timeouts on REST/Web Services calls and custom error handlers. There’s also a rule that requires a custom error handler to produce a log line.
- Maintainability rules include naming convention checks and whether entities or microflows of a certain size have documentation.
- Architecture rules include checks for cross-module associations and calls to support clean design.
- Project hygiene rules check project security against the settings in a production environment, including requiring users with administrative rights to be renamed.
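To make the performance rule above concrete, here is a toy TypeScript sketch that counts commit actions occurring inside loops, using a simplified, hypothetical microflow representation. The real ACR tool inspects the actual Mendix model via the Model SDK; this is only an illustration of the idea:

```typescript
// Hypothetical, flattened view of microflow actions; only loops nest.
interface Action {
  type: "commit" | "retrieve" | "change" | "loop";
  body?: Action[]; // present only for loop actions
}

// Recursively count commit actions that occur inside any loop.
// A commit after the loop has finished is fine; a commit per
// iteration is the time-consuming pattern the rule flags.
function commitsInsideLoops(actions: Action[], inLoop = false): number {
  let count = 0;
  for (const action of actions) {
    if (action.type === "commit" && inLoop) count++;
    if (action.type === "loop") {
      count += commitsInsideLoops(action.body ?? [], true);
    }
  }
  return count;
}
```

For example, a microflow that retrieves a list, changes and commits each item inside a loop, and then commits once afterwards would be flagged once, for the commit inside the loop; moving that commit after the loop clears the violation.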
Coding rules are always going to be a subject of debate, so at Mansystems we decided to make rules configurable. You can turn rules on/off per project. You can set limits. You can change the severity of a rule.
Rules should help organizations, not force them to write bad code to prevent an automated rule violation. If, for example, a microflow violates a rule but everyone agrees that ignoring that rule is the best thing to do, the microflow can be added to a whitelist.
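A per-project rule configuration with severities, limits, and a whitelist could be modeled roughly as follows. The rule names, fields, and shape are illustrative assumptions, not the ACR tool's actual configuration format:

```typescript
// Hypothetical per-project configuration for review rules.
type Severity = "off" | "warning" | "error";

interface RuleConfig {
  enabled: boolean;
  severity: Severity;
  limit?: number;      // e.g. a size threshold for size-based rules
  whitelist: string[]; // element names exempted by team agreement
}

const projectConfig: Record<string, RuleConfig> = {
  "microflow-size": {
    enabled: true,
    severity: "warning",
    limit: 25,
    whitelist: ["ACT_Order_ImportLegacy"], // agreed exception
  },
  "naming-convention": { enabled: true, severity: "error", whitelist: [] },
};

// Returns the severity to report, or null when the rule is off,
// unknown, or the element is whitelisted.
function shouldReport(ruleId: string, elementName: string): Severity | null {
  const cfg = projectConfig[ruleId];
  if (!cfg || !cfg.enabled || cfg.severity === "off") return null;
  if (cfg.whitelist.includes(elementName)) return null;
  return cfg.severity;
}
```

The key design point is that exceptions are recorded explicitly per element, so a whitelisted microflow remains visible in the configuration rather than silently escaping review.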
The ACR tool is also integrated with the continuous integration/continuous delivery (CI/CD) platform that Mansystems includes as part of the SMART Digital Factory. The automated code review can run on every commit or on a daily basis. That not only makes automated testing directly available to the developer, it also makes the feedback loop even tighter.
A developer can even run the automated code review and fix violations before asking another developer to peer review the code. That approach not only saves time, it also turns the peer review into a real opportunity to learn rather than just another check-and-review step.
Application Quality Monitor
The automated ACR capabilities provided via the SMART Digital Factory are designed to reinforce good coding habits by providing immediate feedback. Without that level of automation, most developers won’t even remember how and when they wrote a specific piece of code; they will have moved on to their next project.
The complementary Application Quality Monitor (AQM) tool from Mendix provides a holistic view of the application that, for example, grades properties such as the maintainability of complex code. The AQM tool monitors quality based on the ISO 25010 standard.
ACR is essentially geared toward the developer, while AQM is geared toward management. Fixing ACR violations should lead to better AQM scores. Together, ACR and AQM provide value that goes well beyond the sum of their parts.
Automating ACR should be approached with common sense. Code reviews are meant to improve the application experience, not just prevent bad habits from setting off an alert. I’ve seen developers change code just to get a better score, in ways that in my opinion made the code worse.
It may not be a good idea to randomly break up a large microflow just because a code review tool says it is large. Automated ACR is a tool to write better code. If the tool gets in the way of achieving that goal, feel free to ignore the tool.