Highlights from the December 2021 Community Call on Constructive Code Review
Published: Dec 14, 2021
Last week, US-RSE held its monthly community call. The topic, suggested on GitHub, was “Conducting Constructive Code Reviews”. Judging from the number of new faces who registered for the call, the US-RSE community is very interested in this subject.
We’d like to share a few highlights from the call. If these highlights whet your appetite, the full video is already up on YouTube, and we’ve also provided a link to notes from the breakout-room discussions. If you don’t watch anything else, you can at least get a quick news update from Ian Cosden and check out responses to Chris Hill’s Mentimeter poll, which asked attendees what else they’d like to see the newly updated US-RSE logo on once the US-RSE 2021 Winter T-Shirt campaign comes to an end. (Spoiler alert: “Mugs” was a popular choice.)
After the news and the Mentimeter poll, the call moved on to the discussion topic. David Nicholson got things started by noting the recent growth of interest in code review among RSEs. He mentioned the Research Code Review Community and the Software Sustainability Institute’s Collaborations Workshop 2022, where code review in research software will be one of the themes. David then introduced two lightning talks intended to guide the community discussion in the breakout rooms around questions such as:
- What are some of the goals of doing code review?
- How should I approach the review if I’m the one submitting code, or if I’m reviewing someone else’s code?
Interestingly, both Jeff and Pat made points about how reading code, reviewing others’ code, and getting feedback on how we write code can fill some of the gaps left by traditional computer science education, which often encourages students to learn by writing code rather than reading it. David pointed out that Greg Wilson had shared, in the US-RSE Slack, the book “The Programmer’s Brain”, which makes similar observations about how coding is taught and describes methods for improving code-reading comprehension.
Breakout room discussion
The attendees then split into breakout rooms to discuss. Each group took notes and reported out in the main room at the end of the call. One common idea across groups was to automate as much as possible (with linters and continuous integration, for example) so that high-bandwidth, face-to-face code review can focus on big-picture planning. At the same time, many RSEs work in situations where a one-size-fits-all approach does not apply: for example, a group with two main developers, where undergraduate CS students make limited contributions that call for very specific, tailored recommendations such as “Add a comment here”.
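To make the automation idea concrete, here is a minimal, hypothetical sketch of the kind of mechanical check (a toy line-length lint) that real tools such as linters run automatically in CI, freeing human reviewers to spend their time on design questions. Nothing in this snippet was presented on the call; the function and names are illustrative only.

```python
# Toy "mechanical" check: flag overlong lines, the sort of task that
# attendees suggested delegating to linters and CI rather than to
# human reviewers. Hypothetical example, not from the call itself.

def lint_line_length(source: str, max_len: int = 79) -> list[str]:
    """Return one warning per line of `source` longer than `max_len`."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            warnings.append(
                f"line {lineno}: {len(line)} characters (limit {max_len})"
            )
    return warnings

snippet = "short = 1\n" + "long_name = " + "0 + " * 30 + "0\n"
print(lint_line_length(snippet))
```

Real projects would of course use established tools rather than hand-rolled scripts; the point is that checks like this are easy to automate and tedious to do by hand, which is exactly why groups on the call wanted them out of the human review loop.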
Future community calls
The range of responses from the breakout rooms suggests that code review will continue to be a topic of interest for US-RSE and the global RSE community. One aspect of code review not addressed on this call is whether good practices, as currently understood, can realistically be put into practice by all RSEs. A “lone wolf” developer, or a single scientist writing analysis code in a small lab, may not have access to a local community of software engineers to turn to for review. Could US-RSE help address this by providing some sort of virtual community? These accessibility issues could be a topic for a future call. Please feel free to suggest other topics of interest on the repository we’ve set up: https://github.com/USRSE/monthly-community-calls.