beders a day ago

> A Rama operation does not return a value to its caller. It emits values to its continuation. This is a critical distinction, as part of what makes Rama operations more general than functions is how they can emit multiple times, not emit at all, or emit asynchronously.
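
To make that distinction concrete, here is a minimal Python sketch (hypothetical helper names, not Rama's actual API) of operations that emit to a continuation rather than returning:

```python
# Hypothetical sketch: an "operation" takes a continuation and may
# invoke it any number of times, rather than returning one value.

def explode(xs, emit):
    # Emits once per element (multiple emits).
    for x in xs:
        emit(x)

def keep_even(x, emit):
    # Emits zero or one time, depending on the input.
    if x % 2 == 0:
        emit(x)

results = []
# Chain the operations by nesting continuations.
explode([1, 2, 3, 4], lambda x: keep_even(x, results.append))
print(results)  # → [2, 4]
```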

  • thom a day ago

    Which is obviously very similar to how transducers already work in Clojure, but they still lack some of the concurrency options of reducers. Getting all this on a smart, distributed runtime seems very promising.

    • dig1 20 hours ago

      There is a library called tesser [1] (by the Jepsen/Riemann author) that behaves like parallel transducers with more "native" Clojure syntax: with transducers you have to use "comp", while with tesser you use "->>" as you would with lazy sequences.

      Sadly, tesser is not advertised as well as it should be; I find it much more flexible than transducers. E.g. you can parallelize tesser code over a Spark/Hadoop cluster.

      [1] https://github.com/aphyr/tesser

      • thom 19 hours ago

        As I understand it Tesser doesn't support ordering which makes it tricky for many types of logic. I'd like to be able to control exactly which parts of a pipeline are parallelisable without too much ceremony, but I can't say I've used Rama in anger or that it makes this ergonomic.

      • aeonik 20 hours ago

        See the injest library as well; I've had a pretty good experience with its performance.

        https://github.com/johnmn3/injest

        I can squeeze more performance out of tesser, but injest gives me a surprising boost with very little ceremony most of the time.

    • mschaef a day ago

      > Getting all this on a smart, distributed runtime seems very promising.

      Hopefully it is.

      This CPS article is the first of the Rama blog posts where it seemed like there might be something there. The earlier posts - "I built Twitter-scale Twitter in 10kloc" - were never really all that convincing; the claim was simply too ambitious to take at face value.

      • thom 21 hours ago

        Oh I think there’s a lot of good stuff baked in there. The big idea downstream is that you have incrementally calculated, indexed data structures to query all the results of this fancy CPS logic. It’s all slightly esoteric even coming from a Clojure background but it ticks every box I want from a modern data platform, short of speaking SQL.

fire_lake a day ago

What is the best introductory post to Rama right now?

I would like to skip the marketing and understand how it compares to wiring together Kafka, Spark, MySQL, etc.

  • bbor 20 hours ago

    AFAICT it's not actually published yet? It's a bit of a confusing situation because you can "download Rama" but it's not "full", whatever that means. See https://redplanetlabs.com/learn-rama

    E: this download has "the full Rama API for use in simulated clusters within a single process", but also "Rama is currently available in a private beta." That's a highly unusual way to release what appears to be a Java library at the end of the day, but hopefully that's because it's unusually awesome! Looking forward to actual info some time in the future. I wonder if the "private beta" costs money...

    • nathanmarz 19 hours ago

      What we've released on our public Maven repository is a different build of Rama which can only be used for testing/experimentation in a single process, so it can't be used to run full clusters. It's API-equivalent to the full Rama build.

      This will change when we move out of private beta, when Rama will be free to use for production for small-scale applications.

      • grounder 17 hours ago

        Do you have an estimate of when you'll be out of private beta and generally available? Can you share any more about pricing, or about what you consider to be a "small-scale application"? Thanks!

        • nathanmarz 17 hours ago

          We're aiming to be out of private beta early next year. "Small-scale" basically means the kind of scale a single Postgres node + application server can handle.

robertlagrant 17 hours ago

This article is written for people who know Clojure, or at least the examples are. It might be nice to see the examples written in a non-LISP as well.

E.g.

  (?<-
    (ops/explode [1 2 3 4] :> *v)
    (println "Val:" *v))

Is (I believe) the equivalent of Python's

  for element in [1, 2, 3, 4]:
    print(element)
  • nathanmarz 15 hours ago

    That Python code has the same effect, but the equivalent Python in CPS would be:

      def explode(args, cont):
        for e in args:
          cont(e)

      explode([1, 2, 3, 4], print)

waffletower 17 hours ago

Could I politely suggest more Clojure-like naming? `deframaop -> defop`. You can always `(require 'com.rpl.rama :as rama)` and invoke with `(rama/defop ...)` for the desired level of clarity and improved readability.

  • nathanmarz 17 hours ago

    I named it like this so there would be consistency between deframaop and deframafn. Shortening deframafn like you suggest would be "deffn" or "deffunction", which would be very confusing. And I'd rather have deframaop + deframafn than defop + deframafn.

    • arunix 16 hours ago

      Why is it called Rama? (there may be an FAQ, but I couldn't find it)

      • nathanmarz 16 hours ago

        It's named after the Arthur C. Clarke book.

        • cryptonector 4 hours ago

          But do things in Rama the language come in threes?

          • nathanmarz 3 hours ago

            Yes. The second Rama language will have a co-author and will lose the purity of the first language by adding many keywords each with a lot of emotional backstory.

    • knubie 17 hours ago

      Why not rama/defop and rama/defn?

      • nathanmarz 16 hours ago

        That would make it so you can't do "use" on com.rpl.rama. Since Rama is a full language, doing a "use" on the namespace is generally preferred as otherwise you would have to write "rama/" everywhere, which is irritating.

        I also don't like overloading "defn" with something that's completely different. Also, a deframafn is more than a Clojure defn since it can emit to other output streams.

moomin a day ago

I feel like CPS is one of those tar pits smart developers fall into. It’s just a fundamentally unfriendly API, like mutexes. We saw this with node as well: eventually the language designers just sighed and added promises.

You’re better off with an asynchronous result stream, which is equivalent in power but much easier to reason about. C#’s got IAsyncEnumerable, I know that Rust is working on designing something similar. Even then, it can be hard to analyse the behaviour of multiple levels of asynchronous streams and passing pieces of information from the top level to the bottom level like a tag is a pain in the neck.
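
For illustration, the async result stream style described above might look like this in Python, using an async generator as a stand-in for IAsyncEnumerable (a sketch, not tied to any particular framework):

```python
import asyncio

# Sketch: the producer is an async generator; the consumer drives it.
# Chaining is explicit at the call site, not buried in callbacks.

async def explode(xs):
    for x in xs:
        await asyncio.sleep(0)  # stand-in for real async work
        yield x

async def main():
    out = []
    async for v in explode([1, 2, 3, 4]):
        out.append(v)
    return out

print(asyncio.run(main()))  # → [1, 2, 3, 4]
```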

  • cryptonector 4 hours ago

    IMO TFA makes much of CPS because that's how they chose to implement generators, but the main thing about the language is that it has generators.

    Now having generators is nothing new, but I don't want to take too much away from TFA, as there are some interesting things there. I'll limit myself to pointing out that the Icon programming language had generators and pervasive backtracking using CPS in the Icon-to-C compiler, and that other languages with generators and pervasive backtracking have been implemented with CPS as well as with bytecode VMs that don't have any explicit (internally) continuations. Examples include Prolog, Icon, and jq, to name just three, and now of course, Rama.

  • mschaef a day ago

    > I feel like CPS is one of those tar pits smart developers fall into. ... eventually the language designers just sighed and added promises.

    Bear with me, but raising kids taught me a lot about this kind of thing.

    Even at two or three years old, I could say things to my children that relied on them understanding sequence, selection, and iteration - the fundamentals of imperative programming. This early understanding of these basic concepts is why you can teach simple imperative programming to children in grade school.

    This puts the more advanced techniques (CPS, FP, etc.) at a disadvantage. A programmer graduating college and entering the workforce has had a lifetime of understanding and working with sequencing, etc. and comparatively very little exposure to the more advanced techniques.

    This is not to say it's not possible to learn and become skillful with these techniques, just that it happens later in life, arrives more slowly, and for many, mastery never arrives at all.

    • pyrale 21 hours ago

      I feel like these explanations based on cognitive development always end up with unprovable assertions which inevitably support their author's views. The same arguments exist about natural language, and they're always (unconvincingly) used to rationalize why language A is better than language B.

      In my experience, when you ask people to tell you what "basic" operations they do for e.g. multi-digit number additions or multiplications, you get many different answers, and it is not obvious that one is better than another. I don't see why it would be different for languages, and any attempt to prove something would have a high bar to pass.

      • mschaef 15 hours ago

        > I feel like these explanations based on cognitive development...they're always (unconvincingly) used to rationalize why language A is better than language B.

        I'm not arguing that one language is _better_ than another... just that people are exposed to some programming concepts sooner than others. That gives these ideas an incumbency advantage that can be hard to overcome.

        > any attempt to prove something would have a high bar to pass.

        Honestly, the best way to (dis)prove what I'm saying would be to put together a counterexample and get the ideas in broader use. That would get FP in the hands of more people that could really use it.

    • moomin 20 hours ago

      I take your point about mastery. Especially FP, where it's very clear that mastery of it is extremely powerful. On the other hand, there are some like our regular synchronization primitives where not even mastery will save you. Even experienced developers will make mistakes and find them harder to deal with than other higher-level abstractions. Where CPS fits on this curve, I don't know. I feel pretty confident about where FP and Mutexes sit. But I have yet to see something where I feel I'd rather use CPS than an async stream result.

      • mschaef 15 hours ago

        > Especially FP, where it's very clear that mastery of it is extremely powerful. On the other hand, there are some like our regular synchronization primitives where not even mastery will save you.

        This alludes to my biggest frustration with FP... it solves real problems and should be more widely used. But by the time people are exposed to it, they've been doing imperative programming since grade school. It's harder for FP to develop critical mass in that setting.

        At least, this is my theory of the case. I'd love counter examples or suggestions to make the situation better.

  • oersted a day ago

    I agree. I'm sure that CPS has much more robust theoretical roots and that it's more general and powerful, but in practice it doesn't often look much different from classic callback-hell.

    Generally, I prefer the coroutine/generator style, it is more explicit and straightforward syntax-wise. More importantly, it decouples operation execution from chaining. A function that emits multiple values in sync/async shouldn't be responsible for running the next function in the pipeline directly. It's better when the user of the interface has direct control over what function is run over which values and when, particularly for parallelizing pipelines.

    I do understand that Rama builds such a syntax on top of CPS, and a compiler that implements generators has a similar execution model (perhaps an explicit state-machine rather than leveraging the function stack to do the same thing implicitly).
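
The decoupling described above can be sketched with plain Python generators (hypothetical names): the producer yields values without knowing what runs next, and the caller wires the pipeline explicitly.

```python
# Sketch: generator style decouples producing values from deciding
# what runs next; the caller wires the pipeline explicitly.

def explode(xs):
    for x in xs:
        yield x

def double(stream):
    for x in stream:
        yield 2 * x

# The user of the interface controls what runs over which values:
pipeline = double(explode([1, 2, 3]))
print(list(pipeline))  # → [2, 4, 6]
```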

    • pyrale 20 hours ago

      That's because CPS is the callback hell.

      Promises are a mechanism that was devised to separate the composition mechanism and the function itself, much like shell pipes exist to separate the control flow from the called function.

      In this article, they implement a pipe-like mechanism, that avoids having to do "traditional" CPS. That is why they say the continuation is implicit. That being said, that mechanism goes further than that, and looks very much like Haskell's do-notation which enables programmers to use functional languages in an imperative style without knowing too much of the underlying implementation.

      • nathanmarz 18 hours ago

        The Cont monad in Haskell is only for single continuation targets and can't do branching/unification like Rama. That kind of behavior doesn't seem like it would express naturally or efficiently with just "do".

        • pyrale 15 hours ago

          Yes, Rama probably isn’t semantically comparable to one single monad.

          I was talking about the do notation as a way to sugar the syntax of cps monadic operations into a flat, imperative syntax. This is exactly what Rama is doing.

          If you look at a tutorial of what Haskell's do-notation desugars into, you’ll find the same CPS stuff described in this article.
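
As a rough illustration of that desugaring (a Python analogue with a trivially defined `unit`, not actual Haskell): a flat sequence of steps becomes nested continuation-passing calls.

```python
# Sketch of do-notation desugaring: each flat step becomes a nested
# callback, which is the CPS structure described in the article.

def unit(x, k):
    # Trivial "monadic" step: immediately pass x to the continuation k.
    k(x)

# Flat, do-notation-style intent:
#   x <- unit 1
#   y <- unit (x + 1)
#   return (x + y)
# desugars into nested CPS:

result = []
unit(1, lambda x:
    unit(x + 1, lambda y:
        result.append(x + y)))
print(result[0])  # → 3
```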

  • cryptonector 4 hours ago

    I've actually used hand-coded CPS in an evented program for C10K. Hand-coding CPS is a real pain, but CPS is usually used as an implementation detail, and as such it's not an API. In some cases you can get at an implicit continuation to then use it explicitly (call/cc comes to mind), and then it's an API, sure, but typically one does not have to use it.

  • greener_grass a day ago

    CPS might be the underlying structure, but that doesn't mean that CPS is the interface.

  • packetlost 19 hours ago

    CPS is one of the most obnoxious ways to write code.

    Unless you're Gleam, in which case it feels natural and looks pleasant.

  • andrewflnr 19 hours ago

    Yeah, CPS is best used as an implementation technique, which is how it's used here. I think they even use it to build a stream-like API for their "operations".

  • neonsunset 17 hours ago

    I found myself liking the F# way of consuming/producing IAsyncEnumerable's with taskseq. It's very terse and looks nice: https://github.com/fsprojects/FSharp.Control.TaskSeq?tab=rea...

    • moomin 16 hours ago

      Apparently Microsoft hold a patent on “yield!”, which makes it all the more frustrating that they haven’t included it in C#.

      • neonsunset 16 hours ago

        In F#, yield! is a computation expression, C#'s yield within methods that return IAsyncEnumerable<T> works more or less the same way.

kamma4434 a day ago

I don’t want to be a party pooper, but from my very cursory look at the page, I don’t think this will go much farther than a very small community. I feel like you’re adding a lot of complexity compared to the normal backend/frontend/webservices scenario that everybody already understands.

  • nathanmarz 19 hours ago

    Actually, we've eliminated a massive amount of the complexity of backend development. This is most pronounced at large scale, but it's true at small scale as well.

    Our Twitter-scale Mastodon example is literally 100x less code than Twitter wrote to build the equivalent (just the consumer product), and it's 40% less code than the official Mastodon implementation (which isn't scalable). We're seeing similar code reduction from private beta users who have rewritten their applications on top of Rama.

    Lines of code are a flawed metric of course, but when they're reduced by such large amounts, that says something. Being able to use the optimal data model for every one of your use cases, use your domain data directly, express fault-tolerant distributed computation with ease, and not have to engineer custom deployment routines has a massive effect on reducing complexity and code.

    Here's a post I wrote expanding on the fundamental complexities we eliminate from databases: https://blog.redplanetlabs.com/2024/01/09/everything-wrong-w...

    • btown 16 hours ago

      The original post makes so much more sense in this context! One of the "holy grails" in my mind is making CQRS and dataflow programming as easy to learn and maintain as existing imperative programming languages - and easy to weave into real-time UX.

      There are so many backend endpoints in the wild that do a bunch of things in a loop, many of which will require I/O or calls to slow external endpoints, transform the results with arbitrary code, and need to return the result to the original requestor. How do you do that in a minimal number of readable lines? Right now, the easiest answer is to give up on trying to do this in dataflow, define a function in an imperative programming language, maybe have it do some things locally in parallel with green threads (Node.js does this inherently, and Python+gevent makes this quite fluent as well), and by the end of that function you have the context of the original request as well as the results of your queries.

      But there's a duality between "request my feed" and "materialize/cache the most complex/common feeds" that's not taken into account here. The fact that the request was made is a thing that should kick off a set of updates to views, not necessarily on the same machine, that can then be re-correlated with the request. And to do that, you need a way of declaring a pipeline and tracking context through that pipeline.

      https://materialize.com is a really interesting approach here, letting you describe all of this in SQL as a pipeline of materialized views that update in real time, and compiling that into dataflow. But most programmers don't naturally describe this kind of business logic in SQL.

      Rama's CPS assignment syntax is really cool in this context. I do wish we could go beyond "this unlocks an entire paradigm to people who know Clojure" towards "this unlocks an entire paradigm to people who only know Javascript/Python" - but it's a massive step in the right direction!

  • diggan 21 hours ago

    With that mindset, should we just stop trying to improve anything regarding backend/frontend/webservices since "everybody already understand it"?

    • kimi 20 hours ago

      I second the OP - I'm not sure where the big prize is. I have a feeling that whoever wrote the article thinks there is a 10x (or 100x) improvement to be made, but I was not able to see it.

      I find the syntax very clunky, and I have been programming professional Clojure for at least 10 years. It reminds me of clojure.async - wonderful idea, but if you use the wrong sigil at the wrong place, you are dead in the water. Been there, done that - thanks but no thanks.

      OTOH I know who Nathan is, so I'm sure there is a gem hidden somewhere. But the article did not convince me that I should go the Rama way for my next webapp. I doubt the average JS programmer will be convinced. Maybe someone else will find the gem, polish it, and everybody will be using a derivative in 5 years.

      • nathanmarz 19 hours ago

        Well, this article is to help people understand just Rama's dataflow API, as opposed to an introduction to Rama for backend development.

        Rama does have a learning curve. If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it. Here are two examples of how elegant it is:

        This one does atomic bank transfers with cross-partition transactions, as well as keeping track of everyone's activity:

        https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

        This one does scalable time-series analytics, aggregating across multiple granularities and minimizing reads at query time by intelligently choosing buckets across multiple granularities:

        https://github.com/redplanetlabs/rama-demo-gallery/blob/mast...

        There are equivalent Java examples in that repository as well.

        • goostavos 17 hours ago

          This question is probably obvious if I knew what a microbatch or topology or depot was, but as a Rama outsider, is there a good high level mental model for what makes the cross-partition transactions work? From the comments that mention queuing and transaction order, is serialized isolation a good way to imagine what's going on behind the scenes or is that way off base?

          • nathanmarz 17 hours ago

            A depot is a distributed log of events that you append to as a user. In this case, there's one depot for appending "deposits" (an increase to one user's account) and another depot for appending "transfers" (an attempt to move funds from one account to another).

            A microbatch topology is a coordinated computation across the entire cluster. It reads a fixed amount of data from each partition of each depot and processes it all in batch. Changes don't become visible until all computation is finished across all partitions.

            Additionally, a microbatch topology always starts computation with the PStates (the indexed views that are like databases) at the state of the last microbatch. This means a microbatch topology has exactly-once semantics – it may need to reprocess if there's a failure (like a node dying), but since it always starts from the same state the results are as if there were no failures at all.

            Finally, all events on a partition execute in sequence. So when the code checks if the user has the required amount of funds for the transfer, there's no possibility of a concurrent deduction that would create a race condition that would invalidate the check.

            So in this code, it first checks if the user has the required amount of funds. If so, it deducts that amount. This is safe because it's synchronous with the check. The code then changes to the partition storing the funds for the target user and adds that amount to their account. If they're receiving multiple transfers, those will be added one at a time because only one event runs at a time on a partition.

            To summarize:

            - Colocated computation and storage eliminates race conditions

            - Microbatch topologies have exactly-once semantics due to starting computation at the exact same state every time regardless of failures or how much it progressed on the last attempt

            The docs have more detail on how this works: https://redplanetlabs.com/docs/~/microbatch.html#_operation_...
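
A toy single-process sketch of the check-then-deduct sequencing described above, with a plain dict standing in for PStates and ordinary function calls standing in for partition switches (illustrative only, not Rama's API):

```python
# Sketch: events on a partition run one at a time, so the balance
# check and the deduction are effectively atomic.

balances = {"alice": 100, "bob": 0}  # stand-in for per-partition PStates

def transfer(src, dst, amount):
    # Runs as a single event on src's "partition": no concurrent
    # deduction can invalidate the check.
    if balances[src] >= amount:
        balances[src] -= amount
        # "Partition switch": a second event on dst's partition.
        balances[dst] += amount
        return True
    return False

print(transfer("alice", "bob", 60))  # → True
print(transfer("alice", "bob", 60))  # → False (insufficient funds)
print(balances)  # → {'alice': 40, 'bob': 60}
```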

        • refulgentis 18 hours ago

          > If you think its API is "clunky", then you just haven't invested any time in learning and tinkering with it.

          Sigh.

      • stingraycharles 20 hours ago

        I would have expected better from HN than to shoot down smart people tinkering with potentially elegant solutions to complex problems. It’s something we should embrace.

        Having said that, as a long-term Clojure developer myself, I’m also not a big fan of this approach (I try to avoid libraries that use a lot of macros, and instead prefer a more “data driven” approach, which is also why I’m not a fan of spec), but I’m not one to judge.

      • bbor 20 hours ago

        TBF "this Clojure library has clunky syntax that makes it brittle" is a far more sophisticated and valid critique than "it's not built on Node so no one will use it" ;)

      • eduction 20 hours ago

        > It reminds me of clojure.async - wonderful idea, but if you use the wrong sigil at the wrong place, you are dead in the water.

        Isn’t that how any programming works? If you call the wrong function, pass the wrong var, typo a hash key etc etc the whole thing can blow up. Not sure how it’s a knock on core.async that you have to use the right macro or function in the right place. Are there async libraries that let you typo the name of their core components? (And yes some of the macros are named like “<!”, is that naming the issue?)

        • synthc 19 hours ago

          The difference is in how easy it is to detect the cause of problems. Mistakes like wrong function names are mostly easy to find and fix. Mistakes when using core.async can be very hard to track down.

          • eduction 16 hours ago

            Not at all my experience. Do you have any examples?

            OP called it "clojure.async." I question how much they've really used it.

            • kimi 3 hours ago

              Enough to keep wondering whether this spot calls for <! or <!!, and whether I'd be better off with a dead-stupid, surprise-free thread pool.

        • ValentinA23 16 hours ago

          No it is different because libraries such as core.async or Rama rely on inversion of control [1]: the framework is in charge of the control flow and code fed to the framework will be executed by some kind of black box. To achieve this, these frameworks build their own machinery on top of existing core facilities (normal functions, call stacks, etc) to implement similar concepts (rama ops for instance) one level above. The real issues arise when something goes wrong.

          If you're lucky you'll get an exception but it won't tell you anything about the process you described at the framework level using the abstractions it offers (like core.async channels). The exception will just tell you how the framework's "executor" failed at running some particular abstraction. You'll be able to follow the flow of the executor but not the flow of the process it executes. In other words the exception is describing what is happening one level of abstraction too low.

          If you're not lucky, the code you wrote will get stuck somewhere, but issuing a ^C from your REPL will have no effect because the problematic code runs in another thread or in another machine. The forced halting happens at the wrong level of abstraction too.

          These are serious obstacles because your only recourse is to bisect your code by commenting out portions of it just to identify where the problem arises. I personally have resorted to writing my own half-baked core.async debugger, implementing instrumentation of core.async primitives gradually, as I need them.

          Having said that, I don't think this is a fatal flaw of inversion of control. In fact, looking at the problem closely, I don't think the root issue is that these frameworks come with their own black-box execution systems. They are not black boxes, as shown by the stack traces they produce, which give a clear picture of their internals; they are grey boxes leaking info about one execution level into another. And this happens because these frameworks (talking about core.async specifically; maybe this isn't the case with Rama) should, but do not, come with their own exception system to handle errors and forced interruption. Lacking these facilities, they fall back on spitting out a trace about the executor instead of the executed process.

          What does implementing a new exception system entail?

          Case 1: your IoC framework does not modify the shape of execution - it's still a call-tree and there is a unique call-path leading to the error point - but it changes how execution happens, for instance by dislocating the code to run on different machines/threads. Then the goal is to aggregate those sparse code points that constitute the call-path at the framework's abstraction level. You'll deal with "synthetic exceptions" that still have the shape of a classical exception with a stack of function calls, except that these calls are in succession only in the framework's semantics; at a lower level, they are not.

          Case 2: the framework also changes the shape of execution. You're not dealing with a mere call-tree anymore; you're using a dataflow, a DAG. There is no longer a single call-path up to the error point, but potentially many. You need to replace the stack in your exception type with a graph-shaped trace, in addition to handling sparse code-point aggregation as in case 1.

          Aggregation puts in succession stack-trace elements that are distant one abstraction level lower, and hides parts of the code that are not relevant at this level; new exception types account for the different execution shapes.

          In addition to these two requirements, you need to find a way to stitch different exception types together, to bridge the gap between the executor process and the executed process, as well as between the executed process and any callbacks/continuations/predicates the user may provide using the language's native execution semantics.

          [1] https://en.wikipedia.org/wiki/Inversion_of_control

          • eduction 15 hours ago

            Yes, core.async is CSP style async (not to be confused with CPS programming style, which this article is about) and there is a learning curve. Particularly as the go macro cannot see across function boundaries. (Some of Rich Hickey's videos on it give an overview similar to what you wrote above.)

            My confusion was on the OP's statement about "sigils": "if you use the wrong sigil at the wrong place, you are dead in the water."

            So don't use the wrong sigil? There are all of two of them; I think OP means the parking take and blocking take macros. One is used inside go blocks and one outside. That was the easy part. The hard part was wrapping my head around how to efficiently program within the constraints imposed by core.async. But the machinery of how to do things (macros, functions) was very simple and easy to learn. You basically just need to learn "go", "<!" and "<!!". Eventually you may need ">!", "alts!", and "chan".

            • kimi 2 hours ago

              I am with you on this - it's not impossible, but is it maintainable? What if I break a leg? And when something blocks and you don't know why, who can debug it? That's why we went for a different approach.

              The problem with core.async is that it is an excellent PoC, but it does not actually solve the underlying problem, which is "Hey! I want a new thread here! And I want it cheap." Project Loom solves it. Of course, the problem is not something that could be solved within the land of bytecode.

            • ValentinA23 15 hours ago

                  (defn test-dbg7 [] ;; test buffers
                      (record "test-dbg.svg"
                              (let [c ^{:name "chan"} (async-dbg/chan 1)]
                                ^{:name "thread"}
                                (async-dbg/thread
                                  (dotimes [n 3]
                                    ^{:name "put it!"} (async-dbg/>!! c n))
                                  ;; THE BUG IS HERE. FORGOT TO CLOSE GODAMNIT
                                  #_(async-dbg/close! c))
                                (loop [x (async-dbg/<!! c)]
                                  (when x
                                    (println "-->" x)
                                    (recur ^{:name "take it!"} (async-dbg/<!! c)))))))
              
              The code above produces the following before hanging:

                  --> 0
                  --> 1
                  --> 2
              
              https://pasteboard.co/L4WjXavcFKaM.png

              In this test case, everything sits nicely within the same let statement, but these puts and reads to the same channel could be in different source files, making the bug hard to track.

              Once the bug is corrected the sequence diagram should look like this:

              https://pasteboard.co/CCyGZKUUkVFL.png

              • eduction 14 hours ago

                Ya, I also needed some time to wrap my head around async programming, but OP was talking about "use[ing] the wrong sigil at the wrong place" - that's not your stumbling block here. You forgot to close the channel, and you have a loop that by design reads from the channel eternally, so as long as the channel is open you're going to "hang". That doesn't have anything to do with mixing up "sigils"; it's just that async programming has unique challenges.

    • jgalt212 21 hours ago

      You're arguing change (a broadly defined term) is not necessarily bad, but the OP is arguing adding complexity (a type of change) is bad.

thunkingdeep a day ago

I’ve always thought that CPS is a good barometer for finding out whether a developer is talented, or merely THINKS they’re talented enough to design and/or implement these kinds of compiler components. This kind of thing, and CPS in particular, is so much trickier to nail down than it initially seems, even if you’ve written a compiler before. It’s up there in difficulty with automatic parallelization and loop transformation. I once tried to write a very small POC lisp with the idea of having all vectors of known sizes get map’d in parallel, and I never could nail it down.

Kudos to all involved. Clojure is such a mind bending tool. God only knows what it takes these people to maintain the guts of it all.