====== EffScript: Practical Effects for Scala ======

{{bib>toroTanter-oopsla2015|Customizable Gradual Polymorphic Effects for Scala}} accepted at [[http://2015.splashcon.org/track/oopsla2015|OOPSLA 2015]] [[http://2015.splashcon.org/track/splash2015-artifacts|{{research:aec-oopsla.png?100}}]]

EffScript is a small domain-specific language for writing tailored effect disciplines for Scala. In addition to being customizable, the underlying effect system supports both effect polymorphism (as developed by Lukas Rytz in his PhD thesis) and gradual effect checking (following the theory of Bañados, Garcia and Tanter).
An excerpt of an effect discipline written in EffScript, declaring a ''bottom'' effect and per-method effect specifications (''effspecs''):

<code>
    bottom: @simpleNoIO

effspecs:
        def views.html.dummy.apply() prod @simpleNoIO
        def views.html.foo.apply[T]() prod @simpleNoIO
</code>
 sbt "run 2" sbt "run 2"
 </code> </code>

===== Plotting the results =====

To plot the benchmark results, we provide a zip file with the required files: [[http://pleiad.cl/_media/research/software/effscript/plot.zip|plot.zip]].
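One way to fetch and unpack it, assuming ''wget'' and ''unzip'' are available on your system:

<code bash>
# Download and extract the plotting scripts
# (run from the benchmark folder so the files end up in place)
wget http://pleiad.cl/_media/research/software/effscript/plot.zip
unzip plot.zip
</code>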

You will need the following Python libraries (we recommend installing them with the ''easy_install'' command from setuptools, https://pypi.python.org/pypi/setuptools; see the commands after this list):
- numpy
- matplotlib
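For instance, with setuptools available, the following commands should install both libraries (a sketch; ''pip install numpy matplotlib'' works as well):

<code bash>
# Install the plotting dependencies with setuptools' easy_install
easy_install numpy
easy_install matplotlib
</code>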

Place the content of the zip file inside the benchmark folder. Then edit the ''runbenchmark'' file to set the number of iterations via the variable ''n'':

<code bash>
#!/bin/bash
n=1
...
</code>
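If you prefer a non-interactive edit, a one-liner along these lines can set ''n'' (a sketch, assuming GNU ''sed'' and that the variable still reads ''n=1''; the value 10 is just an example):

<code bash>
# Set the number of iterations to 10 in runbenchmark
sed -i 's/^n=1$/n=10/' runbenchmark
</code>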

<HTML>
<p style="border:1px solid red;padding:5px;">
Before running the benchmarks, we recommend repackaging the CollsSimple project. At the time of the artifact submission, CollsSimple was compiled and packaged using the bit-vector version of the compiler plugins; we later updated the effect compiler plugins but did not repackage the project.</p>
</HTML>
<code bash>
cd CollsSimple
sbt package
</code>

To run the benchmarks, execute the following from the root of the benchmarks folder:
<code bash>
./runbenchmark > outputbenchmark
</code>

The execution writes the results to the ''outputbenchmark'' file. We have provided an example ''outputbenchmark'' file.

To plot the results, run:
<code bash>
python buildGraph.py
</code>
This generates a ''benchmark.pdf'' file with the plot. We have provided an example ''benchmark.pdf'' file.