CODECHECK - reviewing code in publications

Categories: coding research software

CODECHECK is a fascinating service that creates a workflow for academics to provide feedback on research code. The project is led by Stephen Eglen and Daniel Nüst.

They describe what they do succinctly as:

CODECHECK is a process for independent reproduction of computations and awarding of time-stamped certificates for successful reproductions of scholarly articles.

Checking software in research is a significant challenge, and I really like CODECHECK because it is an initiative that has emerged from researchers themselves.

I met Stephen at an event earlier this year (when we still did those) and he recently emailed me with an update on the service. Below are some of the recent highlights.

Of particular note are their contributions to checking the COVID models.

  1. Our first “proper” CODECHECK certificate was for a Gigascience paper that came out in April. Scott Edmunds helped with a blog piece about it:

  2. Our biggest impact has been codechecking the (infamous) COVID model from Neil Ferguson’s group at Imperial College; Nature News recently ran a piece on this. In contrast to all the alarm in the media, I found the results reproducible.

  3. I’m now finishing up a couple of other COVID models from Imperial (having done some others for LSHTM). For one of the LSHTM papers, because we checked the preprint while it was (unknown to me) going through peer review, the authors acknowledged and cited the certificate (ref 18) in their Lancet Public Health paper.

  4. All the certificates completed so far are browsable via:

  5. The certificates are evolving as we learn more about the process, so I think it will be a while before we settle on a stable idea of what a certificate should look like. We’ve probably also done a poor job of collecting the relevant metadata and getting certificates deposited in the correct manner.