Olivier Dalle's Corner: TSimTools / Reproducibility Policy

More coming soon…

At submission time

TSimTools will enforce a strict reproducibility policy: all software used for the publication, including the scripts used to produce the graphics and the data sets, will have to be submitted along with the paper or made available online on a public open-access site (e.g. SourceForge, Google Code, GitHub, …). In addition, permission to copy, archive, and redistribute (without profit) the submitted material must be granted to the journal.

To be accepted for publication, a simulation software tool must meet the following requirements:

  1. Be available for download, fully open source, and at no charge for use in the production of published scientific results (1)
  2. Be redistributable (without profit) by the journal with no limitation
  3. Include a fully automated, self-contained installation procedure (no network connection required; a sketch of such an installer follows the footnotes below)
  4. Be available and ready for use on common execution platforms (2)

(1) Restrictions may still apply for the production of confidential results.
(2) Software requiring a particular platform in terms of hardware or operating system will be reviewed using a specific protocol yet to be defined.
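
As an illustration only, here is a minimal sketch in Python of what such a self-contained installer might look like, assuming the dependencies are vendored as wheel files inside the submission archive. The package name, paths, and layout below are hypothetical, not prescribed by this policy:

    #!/usr/bin/env python3
    """Hypothetical self-contained installer: everything it needs ships
    inside the submission archive, so no network access is required."""

    import subprocess
    import sys
    from pathlib import Path

    VENDOR_DIR = Path(__file__).parent / "vendor"  # bundled dependency wheels (assumed layout)
    PREFIX = Path.home() / "tsimtools-paper"       # hypothetical install location

    def install():
        PREFIX.mkdir(parents=True, exist_ok=True)
        # --no-index forbids pip from contacting PyPI; --find-links points
        # at the vendored copies shipped alongside the paper, so the
        # installation behaves identically on a machine with no network.
        subprocess.run(
            [sys.executable, "-m", "pip", "install",
             "--no-index", "--find-links", str(VENDOR_DIR),
             "--target", str(PREFIX),
             "simtool"],  # 'simtool' is a placeholder package name
            check=True,
        )
        print(f"Installed into {PREFIX}")

    if __name__ == "__main__":
        install()

The point of the sketch is the --no-index flag: the installation either succeeds or fails in exactly the same way on a disconnected machine, which is what requirement 3 asks for.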

Long-term reproducibility

Ensuring long-term reproducibility is a technical challenge for which the journal has no revolutionary solution. However, we do intend to support long-term reproducibility as far as possible, as follows:

  • Yearly check: the published material will be rechecked regularly, at least once a year (a sketch of such an automated re-check follows this list).
  • Yearly snapshot: authors willing to do so will be allowed to provide a new version of their software every year, including bug fixes, additions, and improvements. However, all previous versions will be kept available along with the new version and rechecked regularly for as long as possible.
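
To make the yearly check concrete, here is a rough Python sketch of what an automated re-check could do, assuming each snapshot ships a checksum manifest and a self-contained test entry point. The archive layout and file names are purely hypothetical:

    #!/usr/bin/env python3
    """Sketch of a yearly re-check: verify that an archived snapshot still
    matches its recorded checksums, then rerun its test entry point."""

    import hashlib
    import subprocess
    from pathlib import Path

    ARCHIVE = Path("/archive/tsimtools/some-paper/v1")  # hypothetical snapshot location

    def sha256(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def recheck():
        # 1. Integrity: compare every archived file against MANIFEST.sha256,
        #    a checksum list assumed to be produced at submission time.
        for line in (ARCHIVE / "MANIFEST.sha256").read_text().splitlines():
            digest, name = line.split(maxsplit=1)
            if sha256(ARCHIVE / name) != digest:
                raise RuntimeError(f"bit rot detected in {name}")
        # 2. Reproducibility: rerun the snapshot's self-contained test suite.
        subprocess.run([str(ARCHIVE / "run_tests.sh")], check=True)
        print("snapshot still reproducible")

    if __name__ == "__main__":
        recheck()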

Corner cases and allowance

Even when using fully open software, it may be extremely difficult or even impossible to reach a 100% reproducibility level. So far we have identified two main sources of difficulty:

  1. The software uses volatile resources, such as a web service or a real-time input.
  2. The software submitted by the authors partially fails the reproducibility tests for some unexplained reason.

For such corner cases we must have some flexibility in order to find a reasonable trade-off. Here is our suggested allowance policy:

Volatile resources

Given that volatile resources may never be reproducible (e.g. simulation with hardware-in-the-loop) or may offer no long-term guarantee of reproducibility (e.g. a web service may go down or evolve over time), the submitted software must come with a reasonable substitute. A reasonable substitute must be fully reproducible, without restriction in time, and must allow a reasonably degraded operation of the software. For example, a reasonable substitute for a web service could be a pre-defined set of request-response pairs replayed by the software in a special replay mode, a limited emulation of the server, or even a bypass in case the server is not absolutely necessary for the purpose of the demonstration.
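
As a concrete illustration, the following Python sketch shows the replay idea, assuming the request-response pairs were recorded to a local file before submission. The class, file names, and recording format are illustrative, not prescribed by this policy:

    """Sketch of a replay substitute for a volatile web service: responses
    recorded before submission are replayed from a local file, so the
    experiment no longer depends on the live server."""

    import json
    from pathlib import Path
    from urllib.request import urlopen

    RECORDINGS = Path("recorded_responses.json")  # hypothetical request -> response map

    class ServiceClient:
        def __init__(self, replay: bool = True):
            self.replay = replay
            self.cache = json.loads(RECORDINGS.read_text()) if replay else {}

        def query(self, request_url: str) -> str:
            if self.replay:
                # Replay mode: serve the canned response; fail loudly if the
                # experiment issues a request that was never recorded.
                if request_url not in self.cache:
                    raise RuntimeError(f"no recorded response for {request_url}")
                return self.cache[request_url]
            # Live mode, as used when the paper's results were first produced.
            with urlopen(request_url) as response:
                return response.read().decode()

In replay mode the experiment never contacts the live server, so it remains runnable after the service disappears, and an unrecorded request fails loudly instead of silently diverging from the published results.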

Partial failure

Finding a meaningful metric for measuring reproducibility between the two extreme bounds of 0% pass and 100% pass is difficult. While requiring more than 0% is not debatable, requiring 100% may be questioned. Our policy is to take a pragmatic stand: what is missing must be marginal and not critical for the assessment and general reproducibility of the results. Let's take a concrete example: a paper discusses the results of 100 experiments, and its conclusion is that in 10 of those experiments some interesting phenomenon, always the same, was observed. If one of these 100 experiments fails to pass, then whatever the outcome of that missing experiment could have been, the conclusions of the paper cannot really be questioned when the 99 remaining experiments pass the reproducibility test. Of course, the editors may require the authors to fix the problem as part of a minor revision, but such a problem is not grounds for rejection if the paper gets an overall positive evaluation of its scientific content.
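
To make the arithmetic of this example explicit, here is a small, purely illustrative Python sketch of how a review harness might summarise such a partial failure. The experiment names and the notion of a 'critical' set of experiments are assumptions made for the illustration:

    """Summarise reproducibility results: a failure is acceptable only if it
    does not touch the experiments that support the paper's conclusion."""

    def summarise(results: dict, critical: set) -> str:
        passed = sum(results.values())
        rate = 100 * passed / len(results)
        failed_critical = [name for name in critical if not results[name]]
        if failed_critical:
            return f"{rate:.0f}% pass, critical experiments failed: {failed_critical}"
        return f"{rate:.0f}% pass; remaining failures look marginal (minor revision)"

    # 99 of 100 experiments reproduce; the single failure (exp042) is not
    # among the 10 experiments that exhibit the phenomenon, so the paper's
    # conclusion stands.
    results = {f"exp{i:03d}": i != 42 for i in range(100)}
    print(summarise(results, critical={f"exp{i:03d}" for i in range(10)}))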