Hold on! Is my feedback useful? Evaluating the usefulness of code review comments

Sharif Ahmed, Nasir U. Eisty

Research output: Contribution to journal › Article › peer-review

Abstract

Context: In collaborative software development, the peer code review process is beneficial only when reviewers provide useful comments. Objective: This paper investigates the usefulness of code review comments (CR comments) through textual feature-based and featureless approaches. Method: We select three available datasets drawn from both open-source and commercial projects. Additionally, we introduce new features from software and non-software domains. Moreover, we experiment with the presence of jargon, voice, and code in CR comments, and we classify the usefulness of CR comments through featurization, bag-of-words, and transfer-learning techniques. Results: Our models outperform the baseline, achieving state-of-the-art performance. Furthermore, the results demonstrate that the large commercial LLM GPT-4o and the simple, non-commercial featureless approach, bag-of-words with TF-IDF, are the most effective at predicting the usefulness of CR comments. Conclusion: The significant improvement in predicting usefulness solely from the text of CR comments encourages further research on this task. Our analyses highlight the similarities and differences across domains, projects, datasets, models, and features for predicting the usefulness of CR comments.
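To make the featureless bag-of-words approach mentioned above concrete, the sketch below trains a TF-IDF classifier directly on raw comment text. This is a minimal, hypothetical illustration assuming scikit-learn is available; the tiny in-line dataset, the binary useful/not-useful labels, and the logistic-regression model are illustrative stand-ins, not the authors' actual pipeline, data, or results.

```python
# Minimal sketch: bag-of-words with TF-IDF for classifying the usefulness
# of code review comments. All data and model choices here are assumptions
# for illustration, not the paper's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical labeled CR comments: 1 = useful, 0 = not useful.
comments = [
    "Consider extracting this block into a helper to avoid duplication.",
    "This null check is missing; the call can fail on empty input.",
    "LGTM",
    "Nice!",
    "Rename `tmp` to something descriptive like `retry_count`.",
    "Why?",
]
labels = [1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.33, stratify=labels, random_state=42
)

# TF-IDF over word unigrams and bigrams turns each comment into a sparse
# vector; no hand-crafted features are used, hence "featureless".
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# A simple linear classifier over the TF-IDF vectors.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)

print(classification_report(y_test, clf.predict(X_test_vec)))
```

In practice such a pipeline would be trained on one of the labeled CR-comment datasets the paper uses and evaluated with the same metrics as the feature-based and transfer-learning models; the appeal of the approach is that it needs no feature engineering at all.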

Original language: English
Article number: 70
Journal: Empirical Software Engineering
Volume: 30
Issue number: 3
DOIs
State: Published - Jun 2025

Keywords

  • Modern code review
  • Natural language processing
  • Software engineering
  • Software quality
  • Transfer learning
