Despite hitting a nasty (but obvious) bug involving Ruby’s GC, the feature now seems stable.
For a while now I have been using a script that computes catalogs for 30 nodes, takes exported resources into account, and runs some tests on the results. This script used to run in around 50 seconds. For comparison, on my Puppet master the combined catalog generation time for those same hosts is around 10 minutes¹.
Language-puppet was about ten times faster than the original implementation, but it wasted a significant amount of time on each template evaluation: spawning a Ruby process, rendering gobs of data (the list of all known variables and their values), and feeding them to said process. On the Ruby side, the data was interpreted (with eval), the templates were loaded and interpolated, and the response was spat back to the Haskell executable. For this reason I wrote a minimalist template parser that can interpolate the simplest templates while staying in Haskell land.
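The old round-trip can be sketched in a few lines of Ruby. This is an illustrative reconstruction, not the actual language-puppet code: `render_old_style` and its arguments are hypothetical names standing in for the variable dump that was shipped from Haskell, eval'ed, and used to render the template.

```ruby
require 'erb'

# Hypothetical sketch of the old per-template round-trip: the Haskell side
# serialized every known variable as Ruby source, which was then interpreted
# with eval before the template was rendered.
def render_old_style(template_source, serialized_vars)
  b = binding
  # The variable dump received from the Haskell side was eval'ed wholesale,
  # whether the template used those variables or not:
  eval(serialized_vars, b)
  ERB.new(template_source).result(b)
end

vars = "@hostname = 'web01'; @port = 8080"
puts render_old_style("Listen <%= @hostname %>:<%= @port %>", vars)
# prints "Listen web01:8080"
```

The cost here is not the eval itself but everything around it: every known variable had to be serialized and transferred for every single template, and a fresh Ruby process paid startup costs each time.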
Now the Ruby process is embedded, and variable resolution happens only when needed, through a Haskell callback exposed to the Ruby runtime.
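The on-demand resolution scheme can be illustrated in pure Ruby. This is a sketch under my own naming (`LazyScope`, `resolver`), not the language-puppet implementation: a lambda stands in for the Haskell callback, and `method_missing` turns each unknown name in the template into a single resolver call.

```ruby
require 'erb'

# Illustrative sketch of on-demand variable resolution: instead of shipping
# every variable up front, the template scope asks a resolver callback only
# for the names the template actually uses. In language-puppet that callback
# lives on the Haskell side; here a Ruby lambda stands in for it.
class LazyScope
  def initialize(resolver)
    @resolver = resolver
  end

  # Any unknown name referenced by the template triggers one resolver call.
  def method_missing(name, *_args)
    @resolver.call(name.to_s)
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end

  def template_binding
    binding
  end
end

resolver = lambda do |name|
  { "hostname" => "web01" }.fetch(name, "unknown")
end

scope = LazyScope.new(resolver)
puts ERB.new("Listen <%= hostname %>").result(scope.template_binding)
# prints "Listen web01" -- only "hostname" was ever resolved
```

The point of the design is that a template referencing two variables causes two callbacks, instead of the full variable table being serialized and transferred regardless of what the template needs.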
The whole script now runs in less than 10 seconds (six if you omit the tunnelled accesses to PuppetDB). This is now fast enough to run before almost every commit, which was the goal. It will help make sure nothing gets (too) broken, especially with regard to exported resources.
The software is now stable enough that I will probably prepare a new binary release soon, along with a Debian-style repository.
¹ This is not a fair comparison, however. My script queries PuppetDB for facts over an SSH tunnel, whereas the Puppet master's PuppetDB is local. On the other hand, the Puppet master does work my script doesn’t, such as updating facts and reporting data to PuppetDB (in all fairness, my script updates a local PuppetDB-like database). I do not believe this accounts for a large fraction of those ten minutes, but I might be wrong. Also, the Puppet master has a faster CPU, and it does not run unit tests on the catalogs.