Sunday, 21 October 2018

egg Syntax 2

As mentioned last time, I've been working on a poster for the complete egg programming language syntax as a railroad diagram. I finally managed to squeeze it onto a sheet of A3:

Of course, viewing it online as an SVG is a more pleasurable experience.

Over the next few weeks I aim to use this poster as the basis for an introduction to the egg programming language via its syntax.

Tuesday, 2 October 2018

egg Syntax 1

I've been working intensively on the syntax of the egg programming language. In particular, I've been looking at methods for teaching the syntax to learners not familiar with programming languages. But first, as ever, some background...

Backus-Naur Form

Backus-Naur Form (BNF) is a formal specification typically used to describe the syntax of computer languages. In its simplest form, it is a series of rules where each rule offers a choice or sequence of two or more further rules. Ironically, the syntax of BNF varies greatly, but I'll use the following:

<integer> ::= <zero> | <positive>
<positive> ::= <one-to-nine> <opt-digits>
<opt-digits> ::= <digits> | ε
<digits> ::= <digit> <opt-digits>
<digit> ::= <zero> | <one-to-nine>
<zero> ::= "0"
<one-to-nine> ::= "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"

The epsilon "ε" represents a non-existent element.

The rules above define the syntax for (non-negative) integers. Informally,
  • An integer is either zero or a positive integer.
  • A positive integer is a digit "1" to "9" followed by zero or more digits "0" to "9".
These rules explicitly disallow sequences such as "007" being interpreted as "integers".
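In fact, the whole grammar collapses to a single regular expression. Here's a quick JavaScript sketch (the function name is my own) that accepts and rejects strings according to the rules above:

```javascript
// Mirror of the BNF rules:
//   <integer>  ::= <zero> | <positive>
//   <positive> ::= <one-to-nine> followed by zero or more digits
const INTEGER = /^(0|[1-9][0-9]*)$/;

function isInteger(text) {
  return INTEGER.test(text);
}

console.log(isInteger("0"));    // true
console.log(isInteger("42"));   // true
console.log(isInteger("007"));  // false: leading zeros are disallowed
```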

Formal BNF is great for computers to parse, but verbose and opaque for humans to read. The usual compromise is Extended Backus-Naur Form.

Extended Backus-Naur Form

Extended Backus-Naur Form (EBNF) adds a few more constructs to make rules more concise and (allegedly) easier to read:
  • Suffix operator "?" means "zero or one" of the preceding element or group;
  • Suffix operator "*" means "zero or more" of the preceding element or group;
  • Suffix operator "+" means "one or more" of the preceding element or group;
  • The need for the epsilon "ε" symbol can be removed; and
  • Parentheses are used to group elements.
Our example syntax above could be re-written in EBNF as:

<integer> ::= <zero> | <positive>
<positive> ::= <one-to-nine> <digit>*
<digit> ::= <zero> | <one-to-nine>
<zero> ::= "0"
<one-to-nine> ::= "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"

EBNF syntax rules are used a great deal in computer science, but, as can be seen in the "official" EBNF of EBNF (Section 8.1), it's still quite impenetrable for non-trivial cases.

Railroad Diagrams

Railroad diagrams are graphic representations of syntax rules. I first came across them when I learned Pascal and I believe they are one of the factors in making JSON so successful. As with their textual counterparts, railroad diagrams come in a number of flavours. One of my favourites is Gunther Rademacher's Railroad Diagram Generator. Paste the following into its "Edit Grammar" tab for an example:

integer ::= zero | positive
positive ::= one-to-nine digit*
digit ::= zero | one-to-nine
zero ::= "0"
one-to-nine ::= "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"

However, with existing railroad syntax diagrams, there's generally a one-to-one correspondence between rules and images. I wondered if there was a way to break this link.

Egg BNF Diagrams

I wrote a simple railroad diagram generator in JavaScript with the following features:


Rules are enclosed in pale blue boxes:
Terminal tokens are in purple rectangles. References to rules are in brown ovals. Tracks are green lines terminated by green circles.


Choices are stacked vertically:
Optional elements branch below the main line:


Loops appear above the main line. There are three main forms: zero or more, one or more and lists with separators:


Rule definitions may be embedded as one of the occurrences within another rule:
Using these features, you can express our example syntax using individual Egg BNF Diagrams: 
Or you can embed the rules into a single diagram:
Personally, I find the last diagram gives me a fair indication of the overall structure of the syntax when compared to the stack of diagrams for individual rules.

This gave me the idea of a single poster diagram for the entire egg programming language syntax...

Wednesday, 19 September 2018

egg Virtual Machine 1

It's been nearly two months since my last post on egg. I did have a couple of breaks but the main delay was a substantial re-write due to a blind alley I stumbled down. It was my own fault...

I started the egg project with an informal language specification in my head and a desire to minimise the number of dependencies on external libraries. Consequently, I developed the language syntax and the parser hand-in-hand. This was the first bad idea.

The parser creates an abstract syntax tree (AST). The ability to create an AST does not guarantee that the program is "well-formed", "correct" or anything else; you need to be able to run the program to verify that you've got what you expected. So I implemented an interpreter that "ran" the AST and developed it alongside the syntax specification and parser. This was my second bad idea.

The problems didn't rear their heads until I started implementing generators. If you remember, these are co-routines that generate sequences of values. At present, there is no standard way of writing portable co-routines in C++. However, there is an experimental proposal for C++20. One alternative is to write a special form of the egg interpreter to execute generator functions (and only generator functions) in a "continuable" manner. Another alternative is to automatically re-write the generator functions as finite state machines. The issue with the first alternative is that it greatly increases the testing burden because you now have two interpreters that must perform consistently. The issue with the second alternative is that it's just plain hard!

But, anyway, I started down the road of the automatic generator function re-writer and quickly came up against two major obstacles:
  1. What was I converting from?
  2. What was I converting to?
Using ASTs to represent chunks of executable code is just daft; the level of abstraction is all wrong. What I needed was a virtual machine "intermediate representation". Sigh.

This was when I realised my first bad idea was to design the egg language syntax alongside the parser. The developer is just too tempted to use existing portions of the parser to implement new language features instead of taking a step back and saying "What would make most sense for a learner (or practitioner) of this language?"

My second bad idea was to interpret the AST directly and this turned out to be far more serious. The implementation of the execution subsystem pollutes that of the parser and vice versa. I intend to re-write the parser in egg (dogfooding) as soon as possible, and extricating the execution-only portion in C++ would have been a nightmare.

The egg virtual machine is called "ovum" and is extremely high-level. This is a deliberate design decision enabling us to cross-compile egg modules to other computer languages and still be readable by humans. In fact, you can "compile" an egg module, throw away the source, "de-compile" it to egg and get back pretty much what you fed in, but nicely formatted and without the comments. Ovum doesn't even dictate whether the underlying "machine" is stack-based, register-based, or whatever.

The external representation is a naturally compact byte code. The internal representation is a tree, not of syntax, but of opcodes. The opcodes describe the semantics of the program; there are far more of them than keywords in the egg language because context is everything. Therefore, a task such as scanning the program for variable declarations is much simpler than via the AST.
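As an illustration, here's a sketch in JavaScript of scanning such a tree; the node shape and "DECLARE" opcode are hypothetical stand-ins, not ovum's actual instruction set:

```javascript
// Hypothetical opcode tree: each node is { opcode, children, ... }.
// Because the opcodes carry semantics, finding every variable
// declaration is an unconditional tree walk; no syntactic special
// cases are needed.
function findDeclarations(node, found = []) {
  if (node.opcode === "DECLARE") {
    found.push(node.name);
  }
  for (const child of node.children || []) {
    findDeclarations(child, found);
  }
  return found;
}

const program = {
  opcode: "BLOCK",
  children: [
    { opcode: "DECLARE", name: "i", children: [] },
    { opcode: "LOOP", children: [
      { opcode: "DECLARE", name: "j", children: [] }
    ] }
  ]
};
console.log(findDeclarations(program)); // [ 'i', 'j' ]
```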

I took the execution re-write as an opportunity to refactor some troubling aspects of the previous implementation caused by my two bad ideas. However, this means that the parser and the compiler now use a different type system to the run-time! But, as I mentioned, I'm hoping to throw away the parser and compiler in the near future and have them running on the ovum VM.

The extensive test suites I've been maintaining, including dozens of example scripts, mean I'm fairly confident that everything is still working as expected. I'm back to the stage where I need to implement generator functions, and at least now it's a VM I'm working with and not an AST! But I did lose a few weeks.

Here's an old meme to cheer us all up:

Oh, the hue manatee!

Saturday, 28 July 2018

egg Sequences 3

Sequences (as introduced earlier) are a good way to abstract lists of homogeneous data. We are only allowed to see one piece of data at a time and we don't know (until we get there) where the end of the data is. Indeed, sequences can be infinite.

This promotes the use of streaming algorithms which allow us to process very large data sets (i.e. ones that cannot all fit into memory at one time).

However, there is a hidden danger here:
Sequences may promote sequential processing of data
The hint's in the name!

Sequential processing on today's multi-core computers is rightly frowned upon. We should be writing code that is either explicitly parallel or gives the compiler and/or run-time the opportunity to parallelize. But sequential list processing, where you primarily only process the head of the list before moving on to the tail elements, goes all the way back to Lisp; over sixty years ago! It's no wonder these concepts are so deeply ingrained.

Sequences are often reduced via accumulation. As Guy Steele hints at in "Four Solutions to a Trivial Problem", accumulation can be difficult to automatically parallelize. The abandoned Fortress project was an attempt to mitigate this.

The egg language (not in the same league!) is designed to be easy to learn, but I would like some aspects of concurrency and parallelization to be accessible, even if only as a taste of more complex paradigms. Microsoft's PLINQ demonstrates that it is possible to build parallel execution on top of sequences, but I need to do a lot more research in this area. In particular, do I need to worry about this as part of the language specification as opposed to a library, like PLINQ?

Friday, 27 July 2018

egg Sequences 2

Last time we saw how various curly-brace programming languages deal with sequences, if at all. Now we'll concentrate on sequence generation.


In these examples, I'll use JavaScript because the syntax is more concise, but similar effects can be achieved with C#.

Consider this code:

  // JavaScript
  function* countdown() {
    yield 10;
    yield 9;
    yield 8;
    yield 7;
    yield 6;
    yield 5;
    yield 4;
    yield 3;
    yield 2;
    yield 1;
  }
  for (var i of countdown()) {
    console.log(i);
  }

This obviously counts down from 10 to 1. (I say "obviously" but that wasn't strictly true for me. I used "in" instead of "of" in the for loop and got nothing out. Good luck, learners!)

The first improvement we'd probably want to make is to use a loop:

  // JavaScript
  function* countdown() {
    for (var n = 10; n > 0; --n) {
      yield n;
    }
  }
  for (var i of countdown()) {
    console.log(i);
  }

Next, we could parameterize the countdown:

  // JavaScript
  function* countdown(x) {
    for (var n = x; n > 0; --n) {
      yield n;
    }
  }
  for (var i of countdown(10)) {
    console.log(i);
  }

Nice. But what exactly is "countdown" and what does it return? If you bring up a Chrome console and type in the last snippet, you can get useful insights:

So, "countdown" is a function; no surprise there! But a call to "countdown(10)" produces a suspended "Generator" object with a couple of scopes. This gives an insight into what is happening under the hood.

A JavaScript generator function returns an object that implements the iterator protocol. This consists of the "next()" member function, which, in turn, allows the use of "for ... of" loops and spread syntax:

  > console.log(...countdown(10))
  10 9 8 7 6 5 4 3 2 1


The JavaScript yield operator differs from the return statement in a couple of fundamental ways.

Firstly, "yield" is resumable. If a function terminates with a "return" statement (including an implicit return at the end), that's it: game over; the function has finished. With a yield, the function may resume at a later date. This implies that the state of the function (e.g. local variables, exception try blocks, etc.) must be preserved when a "yield" is executed so that the generator can continue exactly where it left off. This is a coroutine.

Secondly, a JavaScript "yield" is an operator that produces a value. It is an expression. This allows the caller to pass information back into the iterator function. A "return" is a statement.
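To make this concrete, here's a tiny sketch: the argument passed to "next()" becomes the value of the suspended "yield" expression inside the generator:

```javascript
function* echoer() {
  for (;;) {
    var received = yield "ready"; // resumption value lands here
    console.log("got: " + received);
  }
}

var gen = echoer();
console.log(gen.next().value);  // "ready": runs to the first yield
gen.next("hello");              // resumes: logs "got: hello"
```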


If generator functions produce coroutines, what sort of coroutines do they produce?

The various flavours of coroutines are discussed at length in papers such as de Moura and Ierusalimschy's "Revisiting Coroutines" and James and Sabry's "Yield: Mainstream Delimited Continuations".

Symmetric versus Asymmetric

Symmetric coroutines can yield to any arbitrary coroutine; asymmetric coroutines can only yield to their "caller". It has been shown that you can build symmetric coroutines from asymmetric ones, and vice versa. Asymmetric coroutines are generally considered to be easier for humans to understand; for one thing, they maintain the notion of caller and callee. Both JavaScript and C# implement asymmetric coroutines.

Stackful versus Non-stackful

Stackful coroutines can yield anywhere, including from functions called by the coroutine. Non-stackful coroutines can only suspend their execution when the control stack is at the same level that it was at creation time. Non-stackful coroutines have the advantage that yields are unambiguous in cases where one coroutine has explicitly called another. Both JavaScript and C# implement non-stackful coroutines.
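JavaScript's non-stackful restriction is easy to demonstrate: an ordinary helper function called from a generator cannot yield on the generator's behalf, so delegation must be made explicit with "yield*":

```javascript
function* inner() {
  yield 1;
  yield 2;
}

function* outer() {
  yield 0;
  // A plain (non-generator) helper called here could NOT yield for
  // us; because the coroutine is non-stackful, delegation to another
  // generator must be explicit.
  yield* inner();
  yield 3;
}

console.log(...outer()); // 0 1 2 3
```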

Internal-only versus Externalised

This only applies to coroutines that are generators. Internal-only yields can only be used inside foreach-like statements; for example, CLU iterators are internal-only. Externalised yields provide an interface for calling the coroutine at arbitrary call-sites. Both JavaScript and C# implement externalised yields (via the "iterator protocol" and "IEnumerator interface" respectively).
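Externalised yields mean the sequence can be driven by hand at any call-site, not just inside a loop; in JavaScript this is simply the "next()" member:

```javascript
function* countdown(x) {
  for (var n = x; n > 0; --n) {
    yield n;
  }
}

// Drive the generator manually, outside any for-of loop.
var it = countdown(3);
console.log(it.next()); // { value: 3, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: undefined, done: true }
```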

Unidirectional versus Bidirectional

This is my own nomenclature. Unidirectional coroutines use yield statements; i.e. no value can be passed from the caller back to the callee upon resumption. Bidirectional coroutines use yield expressions. In reality, bidirectional coroutines can be approximated using unidirectional ones:

  // JavaScript: bidirectional
  function* generator() {
    // blah blah
    var y = yield x; // caller returns 'y'
    // blah blah
  }

  // JavaScript: unidirectional
  function* generator() {
    // blah blah
    var xy = { x: x };
    yield xy; // caller adds 'xy.y'
    var y = xy.y;
    // blah blah
  }

JavaScript implements bidirectional coroutines; C# implements unidirectional coroutines.


Under the hood, calls to JavaScript generator functions create objects that encapsulate the logic of the body of the function and maintain the state. We could do this by hand:

  // JavaScript
  function countdown(x) {
    var n = x;
    return {
      next: function() {
        return (n > 0) ? {value: n--, done: false}
                       : {done: true};
      }
    };
  }

But this is tedious and error-prone, not to mention ugly. Another option is to convert the generator body to resumable code automatically. This usually entails producing a finite state machine that replicates the functionality. There are at least two tools that do this:
  1. Google's Traceur, and
  2. Facebook's Regenerator.
Both produce genuinely scary code.
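To see why, here's a small hand-rolled state machine equivalent to the parameterized countdown generator from the earlier snippets; real tool output must also handle try blocks, nested loops and yield expressions, hence the scariness:

```javascript
// Hand-written state machine equivalent to:
//   function* countdown(x) { for (var n = x; n > 0; --n) yield n; }
function countdown(x) {
  var state = 0;
  var n;
  return {
    next: function () {
      switch (state) {
        case 0:                // initial entry
          n = x;
          state = 1;
          // fall through
        case 1:                // loop test / resume point
          if (n > 0) {
            return { value: n--, done: false };
          }
          state = 2;
          // fall through
        default:               // finished
          return { value: undefined, done: true };
      }
    }
  };
}

var it = countdown(3);
for (var r = it.next(); !r.done; r = it.next()) {
  console.log(r.value); // 3, then 2, then 1
}
```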

Perhaps the best solution is to support resumable function bodies natively. For code designed to run on a virtual machine, this is not so difficult. However, for efficiency, generators will probably be run via a different (stackless) code path from standard function calls. This is what the current egg implementation does, and I suspect this is true for Chrome's generators too. C#, on the other hand, rewrites the function.

Tuesday, 24 July 2018

egg Sequences 1

Sequences of items are a recurring theme in programming, but different computer languages have differing levels of support for them. For the purposes of these posts, I'm going to define a sequence as:
  1. Zero or more homogeneous elements
  2. Elements may be repeated
  3. Elements have a position (or positions, in the case of repeated elements) within the sequence
  4. The only operation allowed for a consumer is to fetch the next element; this may fail if we have reached the end of the sequence
  5. Sequences may be endless
This definition is known by many names in different languages (lists, enumerations, streams, iterations, etc.) but I'll stick with "sequence" to avoid ambiguity.

Let's break down the definition. Point 1 sounds quite restrictive: what if we want a sequence of integers and strings? This isn't really a problem in a language with composable types; we just define the type of the elements in our sequence to be the union of all the expected, individual types. In egg, we could use the hold-all "any" (or "any?" if we want nulls too).

Point 4 hints that, from a programming point of view, the consumer of a sequence doesn't see very much. But what does it need to see?

Support in Existing Languages

Rust has a very good interface for what it calls iterators. It has a single member function that returns the next element or nothing:

    // Rust
    fn next(&mut self) -> Option<Self::Item>;

JavaScript has a similar protocol. The single member returns an object with two fields ("value", which may be undefined, and "done", which is a Boolean):

    // JavaScript
    function next();

Other languages are, in my opinion, a bit inferior in this respect. C# defines an enumerator interface with three members:

    // C#
    public T Current { get; }
    public bool MoveNext();
    public void Reset();

My two concerns with C#'s "IEnumerator<T>" are:
  • Getting the next element is not an atomic operation: you need to move to the next element and then read it.
  • The "Reset" method suggests that sequences are always "replayable". This is rarely the case (see the comment in the source).
In Java 8, sequences are named "iterators". The interface has four members:

    // Java
    default void forEachRemaining(Consumer<? super T> action);
    boolean hasNext();
    T next();
    default void remove();

The first is a helper that can be overridden to optimise sequence operations. The next two exhibit the same atomicity problem that C#'s interface has. The final "remove" method is just plain strange. Needless to say, the default implementation throws a "not supported" exception.

C++ iterators are heavily templated and do not, in general, work via interfaces (though see Boost for some work towards this). Iterators in C++ are often bi-directional.


As I mentioned above, C# and Java have potential atomicity problems. In the vast majority of cases, these never appear or can be avoided by using additional locks. However, sequences can be a useful building block in concurrent systems (e.g. CSP), so native atomicity is a desirable feature.

For example, imagine a sequence of tasks being used as a queue of work for two or more consumers. If the consumers cannot atomically fetch the next work item from the sequence, there is the potential for work items to be accidentally skipped or even performed twice.
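JavaScript's iterator protocol happens to give us this atomicity for free (a single "next()" call fetches and advances in one step), which makes the point easy to demonstrate: two interleaved consumers sharing one iterator each receive every work item exactly once:

```javascript
// Two consumers share ONE iterator. Because next() atomically
// fetches-and-advances, no work item is skipped or handled twice.
function* workItems() {
  yield "a"; yield "b"; yield "c"; yield "d";
}

async function consume(name, queue, log) {
  for (let r = queue.next(); !r.done; r = queue.next()) {
    log.push(name + ":" + r.value);
    await Promise.resolve(); // let the other consumer interleave
  }
}

async function main() {
  const queue = workItems();
  const log = [];
  await Promise.all([consume("c1", queue, log), consume("c2", queue, log)]);
  console.log(log); // every item appears exactly once
  return log;
}
main();
```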

For that reason, if written in C++17, one interface for sequences could be:

    // C++
    template<typename T> class Sequence {
    public:
      virtual ~Sequence() {}
      virtual std::optional<T> next() = 0;
    };

This is effectively the Rust interface above.

Sequence Functions in egg

In egg, we can simplify this interface (for integers) to a single function signature:

    // egg
    void|int next()

We could have used the following:

    // egg
    int? next()

But that would mean that sequences could not contain null values.

The type of a sequence function for integers is therefore "(void|int)()", which is somewhat ugly. I originally thought that abbreviating this to "int..." would be quite intuitive, but quickly ran into syntax problems (see later). At present, I am toying with the idea of using "int!" as a shorthand, based on some fuzzy notion that other languages use "!" when dealing with channels.

Thus, if a function took two integer sequences and merged them in some way, the signature could be:

    // egg
    int! merge(int! left, int! right)

This implies that sequences are first-class entities: what R. P. James and A. Sabry call "externalised yields" that can be used outside of the "for-each" control statement.
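As a sketch of what such a "merge" might do, here's a JavaScript equivalent that interleaves two sequences (just one of many possible merging strategies):

```javascript
// Interleave two sequences until both are exhausted.
function* merge(left, right) {
  for (;;) {
    const l = left.next();
    const r = right.next();
    if (l.done && r.done) return;
    if (!l.done) yield l.value;
    if (!r.done) yield r.value;
  }
}

function* range(a, b) {
  for (let n = a; n < b; ++n) yield n;
}

console.log(...merge(range(0, 3), range(10, 12))); // 0 10 1 11 2
```

Note that "merge" consumes and produces sequences without ever needing to know their length, so it works just as well on infinite inputs.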

However, this only deals with consumers of sequences. How does one produce a sequence in the first place?

Sequence Generators

One way to create a sequence is via a container that permits iteration. Most languages that support the "for-each" control statement allow you to iterate around elements in an array, for example.

    // JavaScript
    var a = [10, 20, 30];
    for (var v of a) {
      console.log(v);
    }

However, one of the strengths of sequences is that the elements can be generated on-demand or lazily. Indeed, some sequences are infinite, so populating a container with all the elements of a sequence may be impractical or impossible.

Both C# and JavaScript have the "yield" keyword. C, C++ and Java have non-standard mechanisms for achieving similar effects, but they are not part of the language.

Here's a generator function to produce the infinite, zero-based, Fibonacci sequence in JavaScript:

    function* fibonacci() {
      var a = 0;
      var b = 1;
      for (;;) {
        yield a;
        var c = a + b;
        a = b;
        b = c;
      }
    }

And in C#:

    public IEnumerable<int> fibonacci() {
      int a = 0;
      int b = 1;
      for (;;) {
        yield return a;
        int c = a + b;
        a = b;
        b = c;
      }
    }

The syntax is slightly different, but the fundamentals appear similar. Surely that means that generator functions are trivial to implement? Alas, no.

Next time, I'll discuss why generators are so complicated under the surface. In the meantime, might I again suggest this paper as some bed-time reading?

Friday, 6 July 2018

egg Garbage Collection 2

In the first part of this thread, I introduced a less intrusive tracing garbage collector that I'm working on for the egg programming language. One of the questions I left hanging was determining which nodes in the basket are "roots". That is, which nodes are pointed to directly from outside the basket?

Roots and Concurrency

At some stage in the future, I'd like to switch the garbage collection to be concurrent. This poses additional problems. Imagine you are inserting a parent node and a child node into the basket and the parent is a root. How do you do this whilst the garbage collector is concurrently running? You have to be very careful of the order of operation so that the collector doesn't accidentally evict your nodes before you've had the chance to finalise all the links.

One solution is to ensure all the new nodes are roots and then downgrade most of them after linking them together. This implies that downgrading nodes, from root to non-root, needs to be an efficient operation.

Another potential requirement is "locking" nodes when a subsystem other than the garbage collector is using them; you don't want the collector to pull the rug from under their feet.

Hard and Soft References

To try to solve these issues, I've been experimenting with "soft" and "hard" references. These shouldn't be confused with "weak" references; that's an orthogonal concern. A soft reference is a link between two nodes in the same basket; these are the links that the tracing garbage collector follows. A hard reference is a link to a node that uses traditional reference counting. Unlike soft references, hard references do not need to know which node they are pointing from, only where they are pointing to. Indeed, hard references do not even need to be to or from nodes within a basket. In these respects, hard references are similar to "std::shared_ptr" in C++.

In egg, the "Collectable" base class can be the target of both soft and hard references. To be the target of a soft reference, the "Collectable" node must be a member of exactly one basket. When the node was added to it's basket, the basket obtained a single hard reference to it. This single hard reference is maintained no matter how many soft references there are to it from within the basket; it is only relinquished when the node is evicted from the basket.

Here's an overview:
  • When a node is added to a basket, the basket takes a hard reference to it.
  • Nodes can have zero or more additional hard references to them.
  • The garbage collector only considers nodes for eviction if they have a hard reference count equal to one, i.e. the only hard reference to them is from the basket.
  • Nodes that have hard references in addition to the single reference from the basket are considered "roots" and are effectively locked.
  • Nodes can only be added to a basket by supplying a hard reference. This overcomes the race condition causing premature eviction of partially constructed networks.
  • Nodes can be made a "root" simply by creating an external hard reference to it. When the last external hard reference is lost, the node is no longer a "root" and is a candidate for eviction from the basket if it is not accessible from some other root.
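Here's a toy JavaScript model of that overview (my own illustration; the real implementation is in C++): the basket counts its own hard reference, treats nodes with any extra hard references as roots, and evicts whatever a trace of soft references from the roots cannot reach:

```javascript
// Toy model of the scheme: a node is { hardRefs, soft: [...] }.
// The basket's own hard reference is included in hardRefs, so a
// node is a "root" exactly when hardRefs > 1.
class Basket {
  constructor() {
    this.nodes = new Set();
  }
  add(node) {
    node.hardRefs = (node.hardRefs || 0) + 1; // basket takes a hard ref
    this.nodes.add(node);
  }
  collect() {
    const reachable = new Set();
    const visit = (node) => {
      if (reachable.has(node)) return;
      reachable.add(node);
      node.soft.forEach(visit); // trace soft references
    };
    for (const node of this.nodes) {
      if (node.hardRefs > 1) visit(node); // extra hard refs => root
    }
    for (const node of this.nodes) {
      if (!reachable.has(node)) {
        this.nodes.delete(node); // evict: drop the basket's hard ref
        node.hardRefs -= 1;
      }
    }
  }
}

// A root 'a' keeps 'b' alive via a soft link; orphan 'c' is evicted.
const basket = new Basket();
const a = { hardRefs: 1, soft: [] }; // one external hard ref: a root
const b = { soft: [] };
const c = { soft: [] };
a.soft.push(b);
basket.add(a);
basket.add(b);
basket.add(c);
basket.collect();
console.log(basket.nodes.has(b), basket.nodes.has(c)); // true false
```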

Practical Considerations

When I started implementing soft references, I found they are incredibly difficult to construct properly. The reason is that soft references need to know both the source and target of the reference. If you are constructing an instance, "source", of a class that contains a soft reference to another node, "target", you can easily end up creating a reference between "source" and the partially constructed "target". If an exception is thrown within the constructor (or a function called from it) it can be very difficult to untangle the links safely.

The solution I'm using at the moment creates all the links as hard references and then demotes them to soft references later on. This has the added advantage of not adding nodes to the basket at all in the event of an exception being thrown part-way through the initial creation of the network.

Another facility I added to make constructing soft references less error-prone is "basket inference". Usually, the sequence of events is:
  1. Add the target to the basket, if it's not already added
  2. Add the source to the basket, if it's not already added
  3. Create a soft reference from the source to the target
Instead, the basket (which must be the same for both source and target) is inferred where possible; the source or target are added to the basket as necessary. This usually works because it's highly unlikely that neither the source nor the target has been added to a basket already.
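A toy illustration of this inference (my own names, not the real egg API): whichever endpoint already has a basket donates it, the other endpoint is added to that basket, and only then is the soft link made:

```javascript
// Toy basket: adding a node records its owning basket on the node.
function makeBasket() {
  const basket = { nodes: new Set() };
  basket.add = (node) => {
    basket.nodes.add(node);
    node.basket = basket;
  };
  return basket;
}

// Infer the common basket from whichever endpoint already has one,
// add the other endpoint to it, then create the soft reference.
function linkSoft(source, target) {
  const basket = source.basket || target.basket;
  if (!basket) {
    throw new Error("cannot infer basket: neither endpoint is in one");
  }
  if (!source.basket) basket.add(source);
  if (!target.basket) basket.add(target);
  source.soft.push(target);
}

const basket = makeBasket();
const source = { soft: [] };
const target = { soft: [] };
basket.add(source);
linkSoft(source, target); // target's basket is inferred from source
console.log(target.basket === basket); // true
```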