translating programs


this article is a response to 'understanding what it's like to program in forth'

every so often when forth is talked about, this article comes up, typically meant as a 'diss' on the language. the article talks about the impracticality of forth, demeaning programming in it to little more than a puzzle like sudoku. it does this by demonstrating how someone could go about implementing a very simple piece of code - adding two three-element vectors. however, it goes about the implementation in a flawed way, and thus arrives at a poor conclusion.

methodology

the article first brings up a code example, from C:

void vadd(int *v1, int *v2, int *v3) {
    v3[0] = v1[0] + v2[0];
    v3[1] = v1[1] + v2[1];
    v3[2] = v1[2] + v2[2];
}

this C example is great! it compiles down to about 3 instructions per line, which sounds about right - each line decomposes into two dereferences, an addition, and a store, and a compiler can fold some of those together. the author decides to do the same, and translates this model to roughly the following:

over n cells + @  over n cells + @  + r@ n cells + !

the snippet n cells + @ is equivalent to [n] - offset, then dereference. the overs and r@ correspond to the name of the vector, and the ! to the assignment operator. overall, this is a very faithful translation of the C code. so what's the issue?

the issue

funnily enough, it's precisely because it is a very faithful translation of the C code that we run into the issue - the code sucks. first of all, there is far too much stack twiddling. it isn't obvious what the overs and the r@s actually achieve. the author's original code is poorly factored, failing to recognise the n cells + @ pattern and generalise it, instead spelling out a specific case for each access. they go through the motions of getting the nth element three times a line. this approach took three attempts to arrive at, which led the author to their conclusion. the author has fallen into one of the biggest traps in porting code to a new language - translating existing code.

translating existing code is wonderfully tempting, but doing so often ignores the reasons one uses the new language in the first place. to demonstrate this, let's try a language similar to forth only in how different it is to C: haskell.

the translation

we have three operations we do in C: adding, getting and setting. the first is simple enough - haskell also has the + operator. fetching, too, is simple, with the !! operator, which functions near identically to []. however, when we try to set individual elements of an array (actually, a linked list in this case), we run into an issue. haskell has no mutability. we have to construct a new array with the nth element replaced with our desired element. we can write a function that looks something like this:

replaceAt n x xs = take n xs ++ [x] ++ drop (n + 1) xs

which first produces an array containing the first n elements of xs, appends an array containing only x, then appends everything after the first n + 1 elements of xs. the array gets split into two parts, with the nth element missing, then put back together with a new nth. since we have no mutability, we can't simply write v3 = replaceAt 0 (v1!!0 + v2!!0) v3. we can't set the third vector at all. thus, our haskell translation won't return void, but instead the new list. our v3 argument here is utterly useless.

vadd :: [Int] -> [Int] -> [Int] -> [Int]
vadd v1 v2 v3 = (replaceAt 0 (v1!!0 + v2!!0) . replaceAt 1 (v1!!1 + v2!!1) . replaceAt 2 (v1!!2 + v2!!2)) v3

so, what does the actual haskell solution look like?

actually solving the problem

v3 = zipWith (+) v1 v2

zipWith is a list constructor - it produces a new list by applying a function to the corresponding elements of two other lists, pairwise. how helpful, and how perfect for our problem! by not setting out to translate the C code, and instead solving the same problem as the C code, we arrive at a far more readable, obvious and idiomatic solution.

of course, this seems obvious. haskell is a functional language; trying to solve problems in it in the style of an imperative language will obviously lead to terrible code! this is widely known among programmers. however, forth is an imperative language, just like C, which means we can solve problems in a similar way, right? not quite. you see, forth isn't quite an imperative language - while it has mutable state, idiomatic forth resembles neither a typical imperative language nor a functional one. solving problems in forth is thus very different to solving problems in most other languages.

so, let's solve this problem by first approaching it differently. the biggest problem with the previous forth solution was the poor factoring. too much of the definition was repeated, and could've been condensed down. it's tempting to factor the n cells + @ into a single word, but that still leaves us with a problem - the stack shuffling. the stack shuffling can't really be solved with factoring. a new approach is needed.

the solution

historically, to deal with arrays in a concise manner, some CPUs provided an addressing mode where, whenever an address was used, it would be automatically incremented afterwards. notably, this feature was present on the PDP-11, which early versions of both Forth and C ran on. we can emulate this functionality in our own code, to provide a rather convenient abstraction for anything that involves arrays. such an emulation is incredibly simple:

variable a
variable b
: !a ( addr -- ) a ! ;
: !b ( addr -- ) b ! ;
: a@+ ( -- x ) a @ @  cell a +! ;
: b@+ ( -- x ) b @ @  cell b +! ;

the naming of these words is inherited from Machine Forth, a forth dialect for the F21 forth CPU, which featured them as hardware instructions. the @+ operations fetch the cell the 'register' points at, then increment the address stored in the 'register' so it points at the next element of the array. the first two words simply store an address into the register. with this newfound power, we can write vadd like so:

: vadd ( v1 v2 v3 -- v3 )
  >r !a !b r@ \ setup
  dup a@+ b@+ + ! cell+
  dup a@+ b@+ + ! cell+
  dup a@+ b@+ + ! cell+
  drop r> ;

each line has considerably more symbols, but contains only one stack manipulating word, and has been factored as far as it reasonably can be without entering 'wasteful' factoring - creating words that would be used at most once. at the beginning and end of each line, the stack holds only a memory address, which has the result of the addition stored into it, then is incremented to point at the next cell. the first line places the address of v3 on the return stack so that it can be returned later - purely a convenience for the programmer. the >r, r@ and r> can be removed to get the same behaviour as the C version.

if you have a keen eye, you may have noticed something about the previous haskell implementation and this one: the haskell version works on arrays of any length, and this one can be made to. the body of the vadd routine is just an unrolled loop - we can simply roll it back up, generalising the routine. here's how, using the do loop:

: nvadd ( i v1 v2 v3 -- v3 )
  >r !a !b r@ swap ( v3 i )
  0 do dup a@+ b@+ + ! cell+ loop
  drop r> ;

with very minimal changes, we've made our very limited, 3-length vector add into a generalised one! the code is a bit smaller if we make it not return v3:

: nvadd ( i v1 v2 v3 -- )
  >r !a !b r> swap
  0 do dup a@+ b@+ + ! cell+ loop
  drop ;

our implementation of the vector add operation produced a useful abstraction for future array operations, was easily converted into a generalised operation, and took about the same mental effort as the equivalent C code. problem solving in different paradigms is obviously different. don't be surprised when the code you translated from another language into forth is bad; you're not writing forth, you're writing that language. try actually writing some forth.
