Making the statistics literature reproducible
In 2016, the Journal of the American Statistical Association (JASA) launched a bold initiative to embed reproducibility into the review and publication process for manuscripts in the journal's Applications & Case Studies section. This was a significant commitment, and it was unclear how the statistics community would react: while some journals at the time had taken initial steps toward encouraging reproducibility, many statistical methodology papers provided no data and indicated only that code was "available upon request from the author." As one of the Associate Editors for Reproducibility tasked with bringing the JASA initiative to life, over the past five years I have seen both the daunting challenges and the many benefits of ensuring the reproducibility of published papers. In this talk, I'll share an insider's view of the issues we had to think through when creating the reproducibility review process, describe a few unexpected surprises and detours along the way, and highlight some potential future directions for the reproducibility initiative.
Dr. Julian Wolfson develops novel techniques for identifying important predictors of clinical outcomes from large and complex data. His techniques merge traditional statistical methods with machine learning approaches to make the most efficient use of the data while accounting for challenges such as missing data, measurement error, and selection bias. He has applied his methods to problems such as finding surrogate endpoints in clinical trials, identifying relevant explanatory variables in the presence of correlation and measurement error, predicting the risk of heart attacks using electronic health record data, and understanding human behavior patterns using smartphone sensor data. Dr. Wolfson also serves as an Associate Editor for Reproducibility at the Journal of the American Statistical Association, where he co-developed a comprehensive reproducibility review process for submitted papers.