Building a continuous benchmarking ecosystem in bioinformatics

Authors: Izaskun Mallona, Almut Luetge, Charlotte Soneson, Ben Carrillo, Reto Gerber, Daniel Incicau, Anthony Sonrel, Mark D. Robinson

arXiv: 2409.15472v1 (q-bio.OT)
21 pages, 2 figures, 1 table
License: CC BY-SA 4.0

Abstract: Benchmarking, which involves collecting reference datasets and demonstrating method performance, is a requirement for the development of new computational tools, but it has also become a domain of its own aimed at neutral comparisons of methods. Although much has been written about how to design and conduct benchmark studies, this Perspective sheds light on a wish list for a computational platform to orchestrate benchmark studies. We discuss various ideas for organizing reproducible software environments, formally defining benchmarks, and orchestrating standardized workflows, as well as how these interface with computing infrastructure.

Submitted to arXiv on 23 Sep. 2024
