This directory contains code and queries used for comparing against Barliman. This comparison uses the optimized evalo interpreter (evalo-optimized.scm) from the artifact accompanying the paper "A Unified Approach to Solving Seven Programming Problems" (Oxford, ICFP 2017).
Many files included here are directly taken from that artifact.
To generate the Barliman comparison table in the paper accompanying this artifact:
- The numbers for Multi-stage miniKanren come from the general benchmarking script.
- The numbers for Barliman come from the script described below.
- The comparison and speedup numbers were then calculated manually, and the table was then generated manually.
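The speedup calculation itself is straightforward. As a sketch, the hypothetical Python snippet below shows one way to derive a speedup figure from a pair of timings; the timing values and the query name are placeholders, not numbers from the artifact.

```python
# Hypothetical timings in seconds; the real numbers come from the two
# benchmarking scripts described above.
timings = {
    "example-query": {"multistage": 0.12, "barliman": 3.48},
}

for query, t in timings.items():
    # Speedup of multi-stage miniKanren relative to Barliman on this query.
    speedup = t["barliman"] / t["multistage"]
    print(f"{query}: {speedup:.1f}x")
```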
To load and run the comparison script:

```
$ chez
Chez Scheme Version 10.1.0
Copyright 1984-2024 Cisco Systems, Inc.

> (load "barliman-comparison.scm")
```

The following files are unchanged and maintain the same relative paths as in the original Oxford artifact. Their descriptions can be found in the artifact itself:
- challenge-7.scm
- evalo-optimized.scm
- evalo-standard.scm
- mk/mk.scm
- mk/test-check.scm
The following files were sourced directly from Barliman:
- test-fib-aps-synth.scm
- test-proofo.scm
The following files were adapted slightly to integrate with the Oxford artifact environment:
- chez-load-interp.scm: Based on Barliman's chez-load-interp.scm, but modified to use the Oxford ICFP17 versions of test-check, arithmetic.scm, and the optimized interpreter (evalo-optimized.scm).
- mk/arithmetic.scm: Originally from the Oxford artifact, modified to adjust the load path of miniKanren to reference mk/mk.scm.
- barliman-comparison.scm: Main script for running all comparison queries.