Project 5 — Interpreter (Reason)

CS-364: Programming Languages

Fall 2024

Warning! This assignment has not been updated for the Fall 2024 semester and is in draft form.

Programming assignments 3 through 5 will direct you to design and build an interpreter for snail. Each assignment will cover one component of the interpreter: lexical analysis, parsing, and operational semantics. Each assignment will ultimately result in a working interpreter phase which can interface with the other phases.

For this assignment you will write an interpreter: the code that performs the execution and interpretation of valid programs. Among other things, this means implementing the operational semantics specification of snail. You will track enough information to generate legitimate run-time errors (e.g., dispatch on void, invalid operand types for arithmetic, etc.). You will also write additional code to deserialize the AST generated by the parser.

You will be implementing this stage of the interpreter in the Reason programming language. You may work in a team of two people for this assignment.

Warning! This assignment uses at least version 1.2.0 of the snail reference interpreter.

Requirements

You must create a minimum of four artifacts:

  1. A Reason program that takes a single command-line argument (e.g., file.sl-ast). That argument will be an SL-AST-formatted snail abstract syntax tree (as described in the snail specification). Your program must execute (i.e., interpret) the snail program described by this input. (A sketch of reading this argument appears after this list.)
    • The SL-AST data will always be well-formed (i.e., there will be no syntax errors in the SL-AST file itself). However, the SL-AST file may describe a snail program that has runtime errors.
    • Your main program should be a module named interpreter, which compiles to interpreter.exe. Thus, the following two commands should produce the same output:
                            
                              esy x interpreter.exe file.sl-ast
                              snail file.sl
                            
                          
    • Your program will consist of a number of Reason files.
  2. A plain UTF-8 text file called readme.txt describing your design decisions and choice of test cases. See the grading rubric. A few paragraphs should suffice.
  3. A plain UTF-8 text file called references.txt providing a citation for each resource you used (excluding class notes and assigned readings) to complete the assignment. For example, if you found a Stack Overflow answer helpful, provide a link to it. Additionally, provide a brief description of how the resource helped you.
  4. A suite of test cases and inputs (test-1.sl and test-1.sl-input through test-N.sl and test-N.sl-input). The test cases should exercise interpreter corner cases and run-time errors.
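
As mentioned in item 1, here is a minimal sketch of reading the command-line argument in Reason (the usage message and placeholder body are illustrative, not a required structure):

                  let () = {
                    // Sys.argv[0] is the executable name; the SL-AST file is Sys.argv[1]
                    if (Array.length(Sys.argv) < 2) {
                      prerr_endline("usage: interpreter.exe file.sl-ast");
                      exit(1);
                    };
                    let filename = Sys.argv[1];
                    print_endline(filename); // placeholder: deserialize and interpret here
                  };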

Specification

This project is broken down into two parts: P5A and P5B. Each part is described in turn.

P5A: Testing and Hello World

P5A is a checkpoint for the larger interpreter that includes both test suite development and construction of a subset of the interpreter.

Your interpreter should conform to the syntax and SL-AST specification provided by the snail documentation.

Test Suite Development

In this project, we continue to incorporate test-driven development and mutation testing into our software development process, and we require you to construct a high-quality test suite.

The goal is to leave you with a high-quality test suite of snail programs that you can use to evaluate your own P5 Interpreter. Writing an interpreter requires you to consider many corner cases when reading the formal operational semantics rules in the snail specification. While you can check for correct "positive" behavior by comparing your interpreter's output to the reference interpreter's output on the usual "good" snail programs, it is comparatively harder to check for "corner case" behavior.

If you fail to construct a rich test suite of semantically-valid tricky programs, you will face a frustrating series of "you fail held-out negative test x" reports for P5 proper, which can turn into unproductive guessing games. Because students often report that this is frustrating (even though it is, shall we say, infinitely more realistic than making all of the post-deployment tests visible in advance), this checkpoint provides a structured means to help you get started with the construction of a rich test suite.

SLUGS contains 24 variants of the reference interpreter, each with a secret, intentionally-introduced defect related to interpretation. A high-quality test suite is one that reveals each introduced defect by showing a difference between the behavior of the true reference interpreter and the corresponding buggy version. You desire a high-quality test suite to help you gain confidence in your own P5 submission.

For P5A, a test consists of a pair of text files: one syntactically valid snail program and one input file containing the input for the program as it would be typed on the command line. For each bug, if one of your tests causes the reference and the buggy version to produce different output (that is, different stdout/stderr), you win: that test has revealed that bug. For full credit your tests must reveal at least 18 of the 24 unknown defects.

The secret defects injected into the reference interpreter correspond to common mistakes made by students implementing interpreters. Thus, if you make a rich test suite for P5A that reveals many defects, you can use it on your own P5 submission to reveal and fix your own bugs!

SLUGS will tell you the correct output for test cases you submit that reveal bugs in your own implementation of P5. This is the same information you can determine by comparing your output with that of the reference interpreter.

Tests should include a snail program named test-n-XXX.sl with a corresponding input file named test-n-XXX.sl-input (XXX can be anything, but must be the same for both files), where 1 ≤ n ≤ 99. For clarity, each snail program will only be run with the input file that has the exact same number n. If a particular test does not require user input, provide a blank input file.

Your test files may contain no more than 2048 characters in any one file (including comments and whitespace). You may submit up to 20 tests (though it is possible to get full credit with fewer). Note that the tests the autograder runs on your solution are NOT limited to 2048 characters in a file, so your solution should not impose any size limits (as long as sufficient system memory is available).

Interpreter Checkpoint (Hello World)

P5A is also a checkpoint for P5. The Interpreter is a large and complicated assignment; we do not want you to fall behind. It is recommended that you complete P5A well before the final due date.

For the P5A checkpoint you will only be tested on something akin to hello-world.sl. If you can interpret that, you pass the checkpoint. While it is possible to take shortcuts on this checkpoint, doing so will ultimately put you at a disadvantage. The goal of the checkpoint is not to do the minimal amount of work possible for this program, but instead to do as much as possible now so that you have plenty of time for the remaining features later.

P5B: Full Interpreter Implementation

P5B tests all functionality of your interpreter. Your final submission for P5B should be capable of interpreting all valid SL-AST files. This includes providing appropriate error messages for dynamic runtime check violations.

Error Reporting

To report an error, write the string

ERROR: line_number:column_number: Exception: message

to standard output and terminate the program with exit code 0. You may write whatever you want in the message, but it should clearly indicate what went wrong.
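
For example, a minimal helper along these lines (the name runtime_error is ours, not required) satisfies this format:

                  // Report a run-time error in the required format, then stop.
                  let runtime_error = (line, col, message) => {
                    Printf.printf("ERROR: %d:%d: Exception: %s\n", line, col, message);
                    exit(0);
                  };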

Example erroneous input:

                  class Main : IO {
                    let my_void_io;
                    main() {
                      my_void_io.print_string("Hello, world.\n");
                    };
                  };
                
Example error report output:
ERROR: 4:5: Exception: dispatch on void

Commentary

Note that this time, whitespace and newlines matter for normal output. This is because you are specifically being asked to implement IO and substring functions.

You will need to implement a check for cycles in the inheritance chains of classes in your program.
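
One way to approach this (a sketch only; the parent_map representation is an assumption, not a requirement) is to walk each class's chain of parents while tracking the names already visited:

                  module StringMap = Map.Make(String);

                  // parent_map maps each class name to its parent's name, if any.
                  let rec has_cycle = (parent_map, seen, name) =>
                    switch (StringMap.find_opt(name, parent_map)) {
                    | None => false
                    | Some(parent) =>
                      List.mem(parent, seen)
                      || has_cycle(parent_map, [parent, ...seen], parent)
                    };

                  // A class c participates in a cycle if has_cycle(parent_map, [c], c).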

You should implement all of the operational semantics rules in the specification. You will also have to implement all of the built-in functions on the built-in classes.
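
Concretely, your evaluator will likely be one large recursive function over AST nodes that threads the store through sub-evaluations, along these lines (the expr and value types here are invented for illustration; derive yours from the SL-AST):

                  // Invented types for illustration; derive yours from the SL-AST.
                  type expr =
                    | Integer(int64)
                    | Plus(expr, expr);

                  type value =
                    | IntVal(int64);

                  // eval threads the store through sub-evaluations, as the rules require.
                  let rec eval = (env, store, e) =>
                    switch (e) {
                    | Integer(i) => (IntVal(i), store)
                    | Plus(e1, e2) =>
                      let (IntVal(i1), store1) = eval(env, store, e1);
                      let (IntVal(i2), store2) = eval(env, store1, e2);
                      (IntVal(Int64.add(i1, i2)), store2)
                    };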

You can do basic testing as follows:

              
                snail --parse file.sl 
                snail file.sl > reference-output.txt
                interpreter.exe file.sl-ast > my-output.txt 
                diff reference-output.txt my-output.txt
              
            

Note that diff is a command line tool for comparing the contents of two files. You may also find Diffchecker as well as VSCode's built-in comparison to be helpful.

Project Size

This assignment is complicated (hence the checkpoint and long duration). The full interpreter took your instructor approximately nine hours to implement and debug. The resulting code was around 1000 lines. You should expect to spend some multiple of this on your own implementation (most students report that it takes 2–10 times longer). Therefore, you should try to work a little bit every day on this assignment.

Data Structures

The following modules in Reason are particularly helpful for this project:

  • Array
  • Hashtbl (though you might also consider "association lists" supported through functions in the List module)
  • List
  • Map
  • Printf
  • String
  • Str
  • Sys

You can find the documentation for these modules here. Further documentation for Reason (such as variant and record types) can be found here.

Reason's standard library Map (think dictionary) is particularly useful for representing the environment and store. Note that you can't really use Map directly; instead, you will use a functor to generate a module with a particular type for the key.

Let's say that you'd like to have a map from integer locations to stored values (sounds like a "store", no?). You can create a LocationMap module using the Map.Make functor:

Creating a LocationMap module:

                  type location = int;

                  module OrderedLocation = {
                      type t = location;
                      let compare = compare;
                  };

                  module LocationMap = Map.Make(OrderedLocation);
                

Making a map that uses strings as keys is even more direct:

Creating a StringMap module:

                  module StringMap = Map.Make(String);
                

Once you create a Map module, you can use any of these functions. Note that these maps are immutable; each update function returns a new map.
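
For instance (string values purely for illustration):

                  let store = LocationMap.empty;
                  // add returns a new map; store itself is unchanged
                  let store2 = LocationMap.add(1, "hello", store);
                  print_endline(LocationMap.find(1, store2)); // prints "hello"
                  print_int(LocationMap.cardinal(store)); // prints 0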

You will likely find it necessary to use some find/replace operations based on regular expressions in your implementation. Reason's Str module is useful for this, but it is not included by default when compiling.
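
For example, something along these lines (the unescape name and this particular replacement are illustrative, not prescribed):

                  // Turn the literal two-character sequence \n into a real newline.
                  let unescape = (raw) =>
                    Str.global_replace(Str.regexp_string("\\n"), "\n", raw);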

Dune file for esy

The following dune file should allow you to compile this project:

              
                (executable
                    (name interpreter)
                    (public_name interpreter.exe)
                    (libraries str yojson)
                )
              
            

Reading JSON

Your Reason workspace is already configured to use the Yojson library to parse JSON files. Because SL-AST files can contain 64-bit integers, we will use the Yojson.Safe module, which provides an Intlit variant that can read these values as a string. (Hint: you will need to convert this string to a 64-bit int).

The type of the JSON tree provided in Yojson.Safe is:

              
                type t = 
                  | `Null
                  | `Bool(bool)
                  | `Int(int)
                  | `Intlit(string)
                  | `Float(float)
                  | `String(string)
                  | `Assoc(list((string, t)))
                  | `List(list(t))
                  | `Tuple(list(t))
                  | `Variant((string, option(t)))
              
            

Note that there is a constructor for each type of JSON value. Assoc represents a JSON object and List represents an array.
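
Following the hint above, a conversion along these lines (the json_to_int64 name is ours) turns an integer JSON node into a 64-bit int:

                  // Convert an integer JSON node to a 64-bit int.
                  let json_to_int64 = (j: Yojson.Safe.t) =>
                    switch (j) {
                    | `Int(i) => Int64.of_int(i)
                    | `Intlit(s) => Int64.of_string(s)
                    | _ => failwith("expected an integer")
                    };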

You can read a JSON file with:

              
                // json_ast will have type Yojson.Safe.t
                let json_ast = Yojson.Safe.from_file("filename");
              
            

You can then interact with this data using several of the functions provided in Yojson.Safe.Util.

The following functions are helpful for taking a Yojson value and converting it to a Reason type:

              
                open Yojson.Safe.Util;

                to_bool(t);
                to_int(t); // Note: this cannot handle 64-bit ints, but can be useful for line/col
                to_list(t);
                to_string(t);
              
            

It's also possible to access key-value pairs from a JSON object using member("key", object). As an example, the following code loads an SL-AST file and prints out the names of the classes contained within.

              
                open Yojson.Safe.Util;
                print_endline("Enter file name:");
                let fn = read_line();

                // load file
                let json_ast = Yojson.Safe.from_file(fn);

                // an AST is a list of classes, so convert to list and iterate
                List.iter( (cls) => { 
                  // cls will be an individual class

                  // member accesses a json value from an object 
                  let class_name_json = member("class_name", cls);

                  // to_string converts to Reason type
                  let class_name = to_string(class_name_json);

                  // print this out
                  print_endline(class_name);
                }, to_list(json_ast));
              
            

Video Guides

Video guides are provided to help you get started with various aspects of this project. Note that there may be errors and mistakes in these videos. You should not blindly follow the steps taken in these videos!

  • Command Line Arguments in Reason
  • Reading in SL-AST Files
  • Data Representation
  • Implementing newloc
  • Getting Started with Evaluation
  • Collecting Member Variables
  • New Opsem
  • Finding a Method in the AST
  • How to Deal with Built-In Methods

What to Submit

You must turn in a tar.gz file containing these files:

  • readme.txt: your README file
  • references.txt: your file of citations
  • team.txt: a file listing only the SLU email IDs of both team members (see below). If you are working alone, you should turn in a file with just your email ID.
  • source_files:
    • interpreter.re (or any Reason files) containing the source of your interpreter
    • These source files must include a comment with the project identifier: 139ba267302516bcc9e9e23141d94ac04cd56a91
    • dune
  • test_files:
    • Up to 20 tests, named test-N-XXX.sl and test-N-XXX.sl-input, where N is a number in the range 1 through 99 and XXX is any descriptive text without spaces
    • Each testcase you submit must have a corresponding input file. For example, if you submit test-1-foo.sl, you must also submit test-1-foo.sl-input, and that test will be run with something akin to:
                            
                              snail --parse test-1-foo.sl 
                              snail test-1-foo.sl-ast < test-1-foo.sl-input > output.txt
                            
                          
    • Each testcase you submit must be at most 2048 characters (i.e., wc -c yourtest.sl says 2048 or less). You want each of your testcases to be meaningful so that it helps you narrow down a bug later.
    • Each testcase you submit may run to completion or it may trigger some run-time error. None of the seeded defects you are trying to uncover are intentionally related to infinite loops. Do not spam SLUGS. Similarly, none of the errors are related to large memory allocations or exhausting the heap at run-time (e.g., by making arbitrarily large strings, etc.).

The following directory layout is recommended for your tar.gz file. This is the default layout generated by make fullsubmit (see below) should you follow the same project structure from class.

              
                tar tzf fullsubmit.tar.gz
                (out)Makefile
                (out)dune
                (out)interpreter.re
                (out)readme.txt
                (out)references.txt
                (out)team.txt
                (out)test-1-xxx.sl
                (out)test-1-xxx.sl-input
                (out)test-2-something.sl
                (out)test-2-something.sl-input
              
            

Using the Makefile

Note, you can use the CS-364 Makefile to generate a submission archive:

              
              make fullsubmit
              
            

The Makefile is available here. Be sure to update the IDENTIFIER and EXECUTABLE variables appropriately. Note that you do not need to run make to compile anything for this project; we are just using a part of this script to generate the tarball that you can submit.

Working in Pairs

You may complete this assignment in a team of two. Teamwork imposes burdens of communication and coordination, but has the benefits of more thoughtful designs and cleaner programs. Team programming is also the norm in the professional world.

Students on a team are expected to participate equally in the effort and to be thoroughly familiar with all aspects of the joint work. Both members bear full responsibility for the completion of assignments. Partners turn in one solution for each programming assignment; each member receives the same grade for the assignment. If a partnership is not going well, the instructor will help to negotiate new partnerships. Teams may not be dissolved in the middle of an assignment.

If you are working in a team, both team members should submit to SLUGS. All submissions should include the file team.txt, a two-line, two-word plain UTF-8 text file that contains the email IDs of both teammates. Don't include the @stlawu.edu bit. Example: If sjtime10 and kaangs10 are working together, both kaangs10 and sjtime10 should submit fullsubmit.tar.gz with a team.txt file that contains:

                
                kaangs10
                sjtime10
                
              

Then, sjtime10 and kaangs10 will both receive the same grade for that submission.

Grading Rubric

P5 Grading (out of 100 points):

  • 40 points: Hello World works
  • 22 points: autograder tests
  • 18 points: test files (1 point per bug revealed)
  • 10 points: clear description in your README and References
    • 10 — thorough discussion of design decisions and choice of test cases; a few paragraphs of coherent English sentences should be fine. Citations provided are properly formatted.
    • 5 — vague or hard to understand; omits important details. Citations provided are properly formatted.
    • 0 — little to no effort. Citations do not provide correct information.
  • 10 points: code cleanliness
    • 10 — code is mostly clean and well-commented
    • 5 — code is sloppy and/or poorly commented in places
    • 0 — little to no effort to organize and document code
  • 5 points extra credit: Early/Complete Test Suite
    • 0.5 — for every 5 defects revealed by 2024-04-24 at 11:59 pm (maximum 2 points)
    • 0.5 — for every defect beyond 18 that you reveal (maximum 3 points)