
A Method for Code Review

03 Feb 2024 - João Porto

An empirical (and relatively organized) method for code review.

Motivation

A coworker once asked me how I do my code reviews. The question made me stop and think about it.

Even though I had a process, I had never articulated it clearly, since it was built iteratively over many code reviews.

Now that its steps are clear to me, I have written them down in this post.

Step 1 - Create a text document

I have a text document where I put all the information related to the code review (CR) that I’m doing.

Why not just put this information in the Pull/Merge Request (PR)’s feedback? Because some of the information produced during the CR is not relevant to the PR’s author, but is useful to me.

At the beginning of a CR, if the purpose of the task that originated the PR is not clear to me (common in big projects), I write a brief summary of the task. This helps me absorb it and makes it easier to judge whether the PR satisfies the task.

I also write down in this document my approach and definition of done (DoD) for the CR, based on the properties of the code and task under review. Once the CR’s DoD is written, it is easier to decide when to stop.
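
For illustration, a minimal sketch of what such a document might look like (the task and the notes are hypothetical):

Task brief: customers must be able to cancel an order before it ships.
Approach: debug the cancellation tests end to end; pay attention to the refund logic.
DoD: logic fully understood, guidelines checked, feedback written.
State: stopped at the refund branch; resume from there.
Doubts: should already-shipped orders be cancellable at all?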

Two benefits emerge from this document:

  1. The document helps me keep track of the details and state of the CR. If I have to pause the review, I can pick it up again from the exact point where I stopped, and
  2. The PR’s feedback is a subset of the document. Once the CR is finished, all I need to do is cherry-pick the information relevant to the PR’s author.

Step 2 - Run tests and debug it

Once the text document is created, my starting point is to run the automated tests.

If there are no automated tests, that is worth mentioning in the PR’s feedback, because automated tests are generally important and their absence should be justified.

If the tests fail, I reach out to the PR’s author and wait until the problem is solved (providing support if necessary, of course).

When the tests pass, I select the test that covers the task’s main scenario and debug it.
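
For illustration, in a Python project tested with pytest (an assumption; the file and test names below are hypothetical), running the suite and then stepping through the main scenario’s test might look like:

# Run the whole suite first
pytest

# Then step through only the main scenario's test under the debugger
pytest tests/test_orders.py::test_cancel_order --trace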

While stepping through the code line by line, the logic of the implementation becomes clearer to me, and I write down its key points so I don’t forget them.

If some point is not clear enough, or I think it is incorrect, I note it in the document.

In general, executing the main scenario is enough to understand the whole logic; when it is not, I keep debugging the other tests until everything is clear.

Step 3 - Analyze the code

By this step, I’m already comfortable with the task’s code, and its logic is clear to me.

Then I take some time for a final look at the code. I check whether:

  1. There are opportunities for optimization, in terms of time or memory,
  2. There is code that could be refactored to be simpler or more elegant (see the sketch after this list), and
  3. There is code that does not comply with the team’s guidelines.
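
As a hypothetical illustration of item 2, a Python snippet like the one below could be flagged as refactorable; the suggested version is simpler and behaves the same:

# Before: a loop that builds a filtered list
active_users = []
for user in users:
    if user.is_active:
        active_users.append(user)

# Suggested refactor: an equivalent list comprehension
active_users = [user for user in users if user.is_active]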

Step 4 - Contribute with simple edits

If, and only if, I have enough time for this, I edit the snippets of code that don’t follow the guidelines myself, helping to get the PR ready for merge as soon as possible.

But I only make edits that anyone could make, that is, edits that produce the same resulting code whether I make them or the PR’s author does. For example, suppose there is a guideline stating that variable names should be written in snake_case and the PR’s code has the following snippet:

variableName = 10
print(variableName)

Whether I edit the code to follow the guideline or the PR’s author does, it makes no difference, because both of us would rewrite it as:

variable_name = 10
print(variable_name)

I never make edits related to the logic/core of the task, because I assume the PR’s author understands (or should understand) the logic better than I do.

Points related to the logic/core stay in the document (as suggested earlier) and become part of the PR’s feedback. And this brings us to the final step.

Step 5 - Submit feedback

Finally, at the end of the process, I provide the feedback. As I said earlier, everything I need to put in the PR’s feedback is already in the document I created.

But let’s dive a little into the structure of the feedback. It is basically:

  1. Whether the PR is approved or not.
    • If the PR is not approved, point out clearly why.
    • Doubts about parts of the implementation that I think do not adhere to the task’s purpose go here.
  2. Any other information the PR’s author should know, such as:
    • Suggestions that could improve the quality of the code (optimizations, refactorings, etc.).
    • Test scenarios that were not implemented but arguably should be, and so on.

These are the main points, so they come first. The remaining points can follow them (a complete example comes after this list):

  1. Additional information that might interest the PR’s author.
  2. References to commits with the contributions I made (if any).
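
Putting it all together, a feedback comment following this structure might read like this (the contents are hypothetical):

Not approved yet: the task asks for cancellation before shipment, but in this implementation shipped orders can also be cancelled. Is that intended?

Other points:
    • The status checks repeated across the handlers could be extracted into a helper.
    • There is no test for the “already shipped” scenario; it might be worth adding one.
    • I pushed a small commit fixing two snake_case naming violations.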

Conclusion

As you may have noticed, this method can be bureaucratic for simple PRs, so extract only the points relevant to your context: for example, do only steps 3 and 5, change the order of the steps, etc.

My context is back-end web development, working on tasks that feed rolling releases of a SaaS product, with no dedicated QAs or testers on the team and no CI/CD setup. In this context, the method has been useful in a good number of PRs’ code reviews, though not in all of them.

In general, if you want to understand a PR’s implementation in detail, I think it is a thorough method.