Recognising Academic Software Developers
My colleague Michael Crowther recently tweeted the following:
A #Stata package of mine was used in an applied paper, and the paper read ‘We used Stata’ - No, you used a user-written package, developed over 2.5 years of a PhD, which had an associated Stata Journal paper ready and waiting to be cited! #imnotbitter
— Michael Crowther (@Crowther_MJ) February 19, 2019
He’s right (as usual); academics who develop software deserve more recognition from users than they currently receive (at least in my field). I was pleased to see this issue addressed last week in an editorial in Nature Methods. The editorial describes the problem along with some workable solutions. I must admit that I am not as diligent as I should be when reviewing manuscripts, and I rarely react to the “we used Stata” statements, even when I know the authors have used one of my commands.
Those of us who develop software also need to improve. My SAS code and Stata commands for estimating relative survival have been used for 20 years, but it wasn’t until 2015 that I wrote them up for the Stata Journal. The availability of a peer-reviewed article doesn’t guarantee users will cite your software, but it certainly makes it easier. The main area where we need to improve is in providing a suggested citation for our software. Such citations are lacking from the software sections on my web sites and from the help files. I have seen “suggested citation” in some help files for Stata user-written commands, but these are in the minority. Some users don’t appreciate the distinction between the primary software and the user-written add-on, whereas others appreciate the distinction but aren’t sure how to acknowledge it in their paper. We need to help them. As the editors wrote:
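For Stata commands, one way to do this is to add a citation section to the command’s SMCL help file. A minimal sketch might look like the following (the command name and reference details here are placeholders, not a real citation):

```smcl
{title:Suggested citation}

{pstd}
If you use {cmd:mycommand} in published work, please cite the
accompanying Stata Journal article rather than Stata itself:{p_end}

{pstd}
Author A, Author B. Title of the article describing the command.
{it:The Stata Journal} YYYY;vv(n):pp-pp.{p_end}
```

Users who type `help mycommand` then see exactly what to paste into their reference list, which removes the main practical barrier to citing the add-on.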
It becomes easy to see, then, why fair and proper citation can be challenging, even though the vast majority of researchers have the best intentions. Still, it is crucial to address this gap, as proper credit encourages collaboration between people developing tools at different levels to ensure that robust, user-friendly tools ultimately come to light.
Software development also needs to be given more weight when evaluating academics for, e.g., research grants or promotion (at least in my field). Lance Waller presents an excellent overview of this, and related topics, in Documenting and Evaluating Data Science Contributions in Academic Promotion in Departments of Statistics and Biostatistics (The American Statistician, 2018). He cites Hadley Wickham saying:
“Academics tend to like warm feet, but they don’t appreciate who makes their socks”
My university has a 21-page document describing how to document qualifications when applying for an academic position. Software development is never mentioned; the closest we come is:
10.3 Product development: Document any inventions or products, for example, instruments, diagnostic devices, analysis methods, etc., or services of your own that have been commercialised and brought to market.
Unfortunately, ‘analysis methods’ refers to laboratory analysis rather than data analysis, but maybe software could be considered under ‘services of your own’. Nevertheless, this is only a merit if it is commercialised. I realise this doesn’t preclude reviewers from evaluating software development as a merit, but it would be preferable if it were listed as a merit.