Coalton [1] is a Lisp language that has Hindley-Milner type inference with type classes. Its syntax actually resembles the prototype syntax from TFA pretty closely. The Coalton syntax-equivalent would be:
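Roughly something like this (a sketch from memory of Coalton's surface syntax, with illustrative definitions of an Option type and an Eq class):

    ;; sketch: how an Option-like type and an Eq-like class
    ;; are declared in Coalton
    (coalton-toplevel
      (define-type (Option :a)
        (Some :a)
        None)

      (define-class (Eq :a)
        (== (:a -> :a -> Boolean))))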
Of course, types and classes like Option and Eq are built in already.

Coalton is based on Common Lisp (and allows free mixing with CL code) rather than Scheme, though it is a Lisp-1 and would feel relatively at home to any Scheme programmer.
[1] https://github.com/coalton-lang/coalton
Very cool! I saw a number of projects pretty similar to this one, Typed Racket for example. I didn't really talk about it too much, but what I want to do is flip the direction. Coalton is based on CL, but what I would like is for Scheme to essentially be a set of Gouki functions with the signature top … -> top. Then, I want to see how fast I can make the interaction between them using different analyses like k-CFA.
Coalton is great. Indeed, it's often much easier to just implement on top of either SBCL or Chez Scheme; they are very fast, and chances are you are not going to make anything as performant or as well documented. However, reimplementation in Rust is not too bad, as that's where we need to be for small, fast, low-level runtimes. If the goal was 'just' their own language, I feel an implementation on top of CL/SBCL + Coalton would've been faster to build, easier to maintain, and probably far more performant. Less fun though!
Also a shout out to Steel (https://github.com/mattwparas/steel), another Scheme-on-Rust which is more active.
The Rust community has given us at least two Scheme runtimes, several JS runtimes, but no Clojure interpreter yet that I'm aware of. There are a few zombie projects squatting on the name (seems common in Rust) but no viable releases.
I expect that jank will be migrating from C++ to Rust, at some point. https://github.com/jank-lang/jank
That’s really interesting, as a long time clojurist and recent rust user. Can you expand on that, or is it too early days?
jank is the native Clojure dialect on LLVM, with seamless C++ interop. You get full interactive dev, REPL support, etc while having native binaries, ability to JIT compile C++ code along with your Clojure sources, ability to load arbitrary .o and .so files, etc.
It'll have its first alpha release this year. It's currently written in C++, but will be migrated to Rust piece by piece, either once we find a champion to lead it or once we reach Clojure parity and I can focus on it. C++ interop will remain, but we'll benefit from Rust's memory safety and improved dev tooling.
https://jank-lang.org/blog/2025-01-10-i-quit-my-job/
> REPL support
I randomly came across the talk from the founder, and their reasoning for this was awesome: basically creating a C++ REPL for the science community (and the rest of us by happy accident).
You're replying to them ;)
They would hardly gain anything and lose access to 40 years of libraries.
Aren't they calling C++ functions via LLVM? Their implementation language shouldn't affect their ability to _generate_ code that uses the C++ ABI.
If it is a Clojure dialect, implemented on top of a compiler construction kit written in C++, generating C++, and calling into C++ libraries, what is the benefit of rewriting part of it in Rust, complicating the whole toolchain, only to write some blog post about the rewrite?
If it is about additional safety, in any regard, self hosting would be the answer, not Rust.
One of the main goals of the project is to bring REPL workflows to C++: https://youtu.be/ncYlHfK25i0?si=rmzp5In3sv3IFlvA&t=965
there's no reason you couldn't do a clojure in rust
but the main implementation benefits from GC and making it easy to call host (java) methods. as far as i know, the core clojure data structures are kinda hard if you don't have a GC
The hashed trie structure is definitely more annoying to write without a general garbage collector. Reference counting works, though, and comes with a magic trick: a reference count of one means you hold the only reference, i.e. only one thread can see it, thus mutation is safe. Encoding that in the lifetime rules of Rust might be a nuisance. It works very well in C.
Rust has provisions for safely existing outside of the lifetime rules, which exist largely to accommodate refcounting schemes. You'd likely be using an Arc<RwLock<T>> here. Otherwise you can implement something on your own and express whatever C-style shenanigans you have in mind via the primitives in std::cell or std::sync.
you could do it in rust with Arc<T>.get_mut
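Or Arc::make_mut, which packages exactly that count-of-one trick: mutate in place if you're the sole owner, clone otherwise. A minimal sketch:

    use std::sync::Arc;

    fn main() {
        let mut node = Arc::new(vec![1, 2, 3]);

        // Count is 1: make_mut mutates in place, no clone.
        Arc::make_mut(&mut node).push(4);

        let shared = Arc::clone(&node);
        // Count is 2: make_mut clones first (copy-on-write), so the
        // old version stays visible through `shared`.
        Arc::make_mut(&mut node).push(5);

        assert_eq!(*shared, [1, 2, 3, 4]);
        assert_eq!(*node, [1, 2, 3, 4, 5]);
    }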
if you're implementing the clojure data structures, you wouldn't use a cell inside. semantically, they're immutable, and the operations on them all return copies that share nodes with the old one. if you know the old one won't be used, you can cheat and mutate it instead
the issue with using Arc for these data structures is that it's a fatter pointer (costs memory, causes cache pressure), and it requires atomics/fences. they also generate a ton of garbage in typical use-cases (unless you can do the mutation optimization). languages with a compacting GC also usually get a faster allocator
it's basically ideal conditions for a GC to outperform reference counting
I don't doubt that a good GC can generally outperform a simple refcounting scheme, but when it comes to the aforementioned count-of-one optimization, Rust already gives you that out of the box by dint of the fact that owned values are moved by default and bumping the refcount is an explicit operation, which means that in any given scope you can simply pass the handle on if you're done with it and subsequent functions can access it without needing to touch the counter at all.
rust is definitely better than other languages for refcounting, since you don't pay for it on reads
Lisps almost universally require or expect GC.
Yes, although the Common Lisp-derived ones, and the old ones from Xerox, Symbolics, TI and co., also have other capabilities available, especially for low-level programming, and all the common data structures, not only lists.
The only mature, production-used exception I can think of is Naughty Dog's Game Oriented Object Lisp/Game Oriented Assembly Lisp. EDIT: I was wrong; Wikipedia claims there are basic GC functions. I can't speak to the details, but I imagine that whatever they have would attempt to lift allocations out of the hot path, use arena allocation, or some similar mechanism.
That said, people have strong opinions on whether reference counting counts as GC. I say it does, but others are vehement that it does not. If it does not count as GC, I think Interlisp-D would qualify.
The Clojure community hasn't even closed the gap on the CLR implementation they've been working on for years, as far as I am aware.
The Helix editor, a popular alternative to Vim, is going to implement its plugin system in a Scheme-like language. Helix is also written in Rust.
https://github.com/helix-editor/helix/discussions/3806#discu...
Nice work! It's a good write-up, too.
Hygienic macro expansion is one of the things I still haven't implemented before. I remember a [talk][1] where Matthew Flatt illustrates the idea of sets of scopes, and choosing the largest intersection. I see in your implementation there are sets of "marks" in syntax objects, is that what's going on there?
I haven't played with rust, but when I do I'll be able to play with this scheme too.
[1]: https://www.youtube.com/watch?v=Or_yKiI3Ha4
I probably should have looked at this talk! Instead my reference was this PhD thesis[1] cited in the R6RS spec. I can’t say I read the whole thing, but my takeaway was to use a system of marks/antimarks. The basic idea is that at the point of macro expansion, the syntax object is “marked” (i.e. the mark is added to its set of marks) with a random integer. The environment of the macro is recorded at that point as well, along with the mark used. After expansion, the resulting syntax object is again marked with the same mark. When an identifier is marked twice with the same mark, the marks cancel each other out. The result is that identifiers introduced into the expander and those introduced by the expander are distinguishable.
[1]: https://www2.ccs.neu.edu/racket/pubs/dissertation-kohlbecker...
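For a concrete picture of the cancellation rule, here's a toy sketch in Rust (illustrative only, not the actual scheme-rs representation):

    use std::collections::HashSet;

    #[derive(Default)]
    struct Syntax {
        marks: HashSet<u64>, // active marks on this identifier
    }

    impl Syntax {
        // Applying the same mark twice cancels it out.
        fn mark(&mut self, m: u64) {
            if !self.marks.remove(&m) {
                self.marks.insert(m);
            }
        }
    }

    fn main() {
        let mut from_user = Syntax::default();  // passed into the macro
        let mut from_macro = Syntax::default(); // introduced by its output

        from_user.mark(42);  // the macro's input is marked
        // ...expansion happens, and the whole output is marked again...
        from_user.mark(42);  // cancels: this identifier came from the user
        from_macro.mark(42); // sticks: this one was introduced by the macro

        assert!(from_user.marks.is_empty());
        assert!(from_macro.marks.contains(&42));
    }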
The talk goes line by line through this code [1]; I've been transcribing and taking notes on this for the last week, for a somewhat similar project actually.
If you're interested, I've got a huge pile of papers and links collected here [2] that you might enjoy. I've read everything, but right now it's still a big ol' mush; there's a _ton_ of prior art!
Enjoy, good hacking!
[1]: https://github.com/mflatt/expander/tree/pico [2]: https://github.com/lygaret/gigi?tab=readme-ov-file#reading
Lovely pile of papers and links, thanks for sharing!
I have said this before, but it deserves repeating: When people say scheme is a simple language, they do not mean the hygienic macro system. Especially not something like psyntax/syntax-case.
I am very impressed! The implementation is crazy clear.
Thank you! I’m very proud of the macro expander, so comments like yours are really invigorating and justify the work I put into it
I think you make a pretty bad case for how embedding a Scheme interpreter is going to help with the pain points of async. Listing "stack traces full of tokio code" and then seemingly proposing to solve that by adding more glue to pollute the stack traces is especially weird.
Not an interpreter! A compiler :-)
Have you seen a stack trace originating from somewhere within tokio? Nearly all useful information is lost. My contention is that by isolating the functions that are required to be written in Rust, and then doing orchestration, spawning, etc. in Scheme, the additional debug information at runtime will make the source of errors much clearer.
I could be wrong! But hey there’s other reasons too. Being able to debug Rust functions dynamically is pretty cool, as well as being able to start/stop them like daemons.
On the other hand, you might wind up with an impressive Scheme implementation.
LuaJIT gets you almost all of Scheme's features except macros, and its speed is very close to native C.
Sounds like Greenspun's tenth rule.
Well, unlike Lisp and Scheme, Lua and LuaJIT are used in production: millions of video game consoles, embedded systems, etc.
Isn't this very website written in some sort of lispy goodness?
It seems like the intended application is live debugging. Lisp and Smalltalk have a rather nice history here. C, C++ boost and Rust tend more to be lands of contrasts.
Chris Kohlhepp has a great write up of embedding ECL in C++. The trick is to know about the C++ configure flag for building ECL. [0 with terminal screenshots, 1 on web archive without terminal screenshots]
Haskell people seem to like Lua. Just look at pandoc.
[0] https://isocpp.org/blog/2014/09/embedding-lisp
[1] https://web.archive.org/web/20200307212433/https://chriskohl...
I thought the same thing. It's a hard pivot from "async rust is hard" to "but if we add in Scheme it will be good!", with no real justification for it. I'm not a Scheme guy, but I don't see the connection.
What would be really nice is a new lisp which actually allows for incredibly interactive programming similar to common lisp, but which targets a runtime like the v8 engine maybe. Because I think a lot of people are missing out on the Smalltalk / Common Lisp experience.
Even Clojure and other lisps do not enable that level of interactive programming which the break loop in Common Lisp does.
if you are curious about the breakloop as I was: https://news.ycombinator.com/item?id=35693916
I actually built one of my earlier startups on Steel Bank Common Lisp, and I absolutely love the break loop. Building up a program by iterating in the break loop is just a magical experience, and I think one could go much further with a modern version of it where you can jump around the stack. You can almost do a bunch of these things in Common Lisp as well, but it needs a little bit of work, and it doesn't look and feel as modern as some of the other languages, so I feel there is an opportunity for a new language.
I'm more of an R7RS kinda guy, but I appreciate this, especially the types. Scheme allows for extremely non-linear memory access, which means you necessarily need dynamic memory management and garbage collection. IMO once you're at that point, there's little reason to stick with Rust, though it being the implementation language is interesting in itself. That being said, there are battle-tested Scheme implementations out there with fantastic FFI APIs that would probably work just as well, if you don't mind a small bit of C to glue things together.
This seems like a great idea and I support the effort. It was not clear to me on first read though that what was being proposed is not an extension to current async rust (compiling code to a series of state machines), but a completely alternative design utilizing a context switching runtime like Erlang or Go. If I interpreted that wrong please correct me.
Part of me wonders, considering that Rust is a systems programming language, how difficult it would be to write a runtime like that in Rust, so that there is no need for a second language.
So your goal would just be a different runtime for Rust?
To be clear, in this comment, I am referring to async in the broadest sense. I am specifically NOT referring to the Rust compiler keyword and runtimes (like tokio) that work with rust future state machines.
If I understand OP correctly, his current proposal is to create a different runtime for Rust. That runtime happens to be written in Scheme using his own compiler. It is meant to make writing async Rust applications easier by having simple Rust interop.
Given the author's implicit argument that an async runtime with a stack-based context-switching model can provide better debugging: is it possible to write something like that in Rust? That may be better for some use cases, as it wouldn't result in a multi-language project.
Unlikely. C and Rust have run-time environments. Just not as elaborate as the JVM for Java or a web browser for JS and WASM.
It could only be about adding a second run-time environment to the same operating system process. That’s what the question seems to be enquiring about.
And it’s about instrumenting the Rust run-time environment with a REPL and human-friendly run-time data structures.
Doesn’t seem like it.
It’s about interactive instrumentation of the runtime. Think lua in Haskell adding slight reflection and a REPL. In another comment I referenced Chris Kohlhepp’s write up.
Is the name Gouki a Street Fighter Reference? (For those not in the know, Gouki is the Japanese name for Akuma)
Yup. He’s also my GitHub avatar.
The idea behind the name is a reference to the language providing the user the maximal set of tools
I see, so Scheme is the Satsui no Hado…
To the author, if you're reading this:
> But while I thing that async Rust is well designed
At least one typo, thank you for not using an LLM to spit out an article. :)
I wonder if people are in a way misusing Rust by trying to use it to build everything. It's designed to be a systems language, one for writing OSes, drivers, core system services, low level parsers, network protocols, compilers, runtimes, etc.
There's no way a low-level systems language like this is going to be quite as ergonomic as a higher level more abstract garbage collected language.
IMO you've stumbled into one of the reasons Rust is amazing -- it can go to higher levels, for many good reasons. Whether you should is another question, and it will arguably never be as easy to write as python or javascript (to be fair it's also easy to write inconsistent code in those languages), but it can.
I wouldn't argue that Rust is as ergonomic as other languages, but it has enough pieces to build DSLs that make it look as ergonomic as other languages. An example of this is the front-end frameworks that have been spun in Rust, like Leptos[0].
Rust probably has no business being on the frontend, but the code snippets are convincing and look pretty reasonable. If the macro machinery and type stuff never blows up/forces you to touch internals, then does anyone care that it's a system language underneath?
[0]: https://book.leptos.dev/view/01_basic_component.html
> Rust probably has no business being on the frontend...
I would argue that if this were more common, they would have a lot of incentive to iron out the rough spots. Well, hopefully not in the way that makes it easier to have you download a 100+ megabyte WASM bundle to read a static blog post, but anyway.
Personally, if I'm writing a script for an actual purpose (as opposed to just coding for coding's sake) I reach for python and if that's too slow I write a wrapper around some C code to speed things up. If I'm just messing around I use C(++) because that's what I know -- which is where rust could easily fit if the learning curve wasn't so high that I don't see the effort as worth it.
Luckily, neither CPython, nor Lua, nor Perl, nor PHP, nor Ruby, nor Google Chrome are written in C.
Oh, wait.
Glad to see more language runtimes be available from Rust!
Looking forward to scheme-rs being able to benefit from the safety and speed Rust allows
It’s about instrumenting Rust’s existing run-time.
Are we reading the same article? It's about this project:
https://crates.io/crates/scheme-rs
So? There is instrumentation for async Rust being added. Just not in the Rust compiler and not in tokio, but in application-level code; so, as far as the user is concerned, it's added to "their" Rust run-time. A distinction that is mostly useless when programming Lisp. Hence Lisp.
Lisp is about solving your problem and getting the "language" out of the way. So you add what is missing. As long as it works it doesn’t even matter where it’s added.
Even C has a barebones runtime environment when compiled for a machine with an MMU and an OS, so not a microcontroller.
What is a Tokio function?
What is CPS?
CPS stands for continuation-passing style. It is common for compilers to represent programs in CPS, since it is a handy format for a compiler to reason about.
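For a quick flavor, in Scheme (illustrative only):

    ;; Direct style: the function returns its result.
    (define (square x) (* x x))

    ;; Continuation-passing style: instead of returning, the function
    ;; hands its result to an explicit continuation k.
    (define (square-cps x k)
      (k (* x x)))

    ;; (square-cps 5 display) prints 25. Control flow becomes ordinary
    ;; data, which is what makes CPS convenient for a compiler.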
Tokio is a Rust library for async - https://tokio.rs/
CPS is Continuation Passing Style - https://www.youtube.com/watch?v=MbtkL5_f6-4
Is that the correct video for CPS?
Yes, via an old joke. See https://wiki.call-cc.org/chicken-compilation-process#cps-con... .
> I believe that it is a remarkably well designed language
> and just overall a poor debugging experience make async Rust a frustrating experience for rapid iteration.
> is the result of conscious trade-offs, trade-offs that I believe were chosen correctly. I don’t believe that async Rust necessarily needs to change at all, although I don’t doubt that the experience could be improved.
That's rust for you. It's a remarkably well designed language. Except for all the ways it isn't. What's hilarious to me is none of these are even deep or hard to find issues. Right out of the gate.. cargo is a wart.. and async was designed inside out.
Remarkable. Indeed.
So, _of course_, let's use it to implement a _different_ language entirely. The niche is disappearing.
Languages are about trade-offs.
Compilers are. Languages are not.
Why do people put up with the Lisp syntax? I get that it's super simple for computers to parse, but it's definitely not pleasant for humans to parse, and humans are the ones reading/writing code.
I would have thought some very simple syntactic sugar would make it a lot nicer to work with. Has anyone done that?
People don’t put up with Lisp syntax, we actively enjoy it. Some of us find it very pleasant to read.
Most folks probably had a fair amount of difficulty as children associating letter shapes with sounds, and then with words. Languages such as English have all sorts of historical cruft in their orthography, syntax and grammar, and yet we write poetry in it anyway.
> simple for computers to parse
Syntaxes that are hard for computers to parse are also hard for humans to parse.
But, nobody literally parses Lisp syntax: you rely on indentation, just like when you're writing C, Java, shell scripts.
People working in Lisp like Lisp syntax; it's really easy to edit, very readable, very consistent.
Anything new in the language, whether coming from a new release of your dialect or someone's new project, is a list with a new keyword, and predictable syntax! Syntax that works with your editor, tooling, and readability intuitions.
If your work involves a lot of numerical formulas, the Lisp way of writing math isn't always the best. You might have documents for the code which use conventional math notation, and someone going back and forth between the doc and the code may have to deal with the translation.
Most software isn't math formulas. The language features that need ergonomics are those which address program organization.
There are ways to get infix math if you really want it in that code.
> Has anyone done that?
Over and over again, starting in the 1960's.
1. 1960s: the defunct "Lisp 2" project, run by John McCarthy himself. Algol-like syntax generating Lisp code under the hood.
2. 1970s: CGOL by Vaughan Pratt (of Pratt parser fame).
3. Others:

- Sweet Expressions by David A. Wheeler (for Scheme, appearing as a SRFI).
- Dylan
- infix.cl module for Common Lisp
- Racket language and its numerous #lang modules.

The common thread in "syntactic skins" for Lisp is twofold: they all either died, or turned into separate languages which distanced themselves from Lisp. None of the above skins received widespread use and are defunct, except maybe Racket's various #lang things.
However, Lisp has inspired languages with non-Lisp syntax, some of them with actual Lisp internals. For instance, the language R, known for its statistical capabilities, is built on a Lisp core. It has symbols, and cons-cell-based lists terminated by NIL, which are used for creating expressions. The cons cells have CAR and CDR fields!
The language Julia is bootstrapped out of something called Femtolisp, which is still buried in there.
> Syntaxes that are hard for computers to parse are also hard for humans to parse.
This is obviously untrue. People aren't computers. Obviously there's some relationship, but it's clearly not 1:1.
> it's really easy to edit
Is it? Kind of looks like a formatting nightmare to me, though presumably auto-formatters are pretty much a requirement (and I guess trivial to implement).
Very interesting comment anyway, thanks!
I would not work in any language without automatic indentation and formatting. Wrestling with these things by hand is unproductive.
Writing code in an editor not made for programming is not a lot of fun.
Even in classic Unix vi of Bill Joy heritage, there is a Lisp mode (:set lisp). (Not described in the POSIX description of vi, though.)
Vim has decent Lisp support out of the box. It's built on extensions over top the classic Lisp mode.
Some newer editors may be lacking in Lisp support, due to being produced by newer people who are not aware of Lisp.
However editors have to deal with nested parentheses for all sorts of languages!
For instance when you're editing a .c file in Vim, it does a decent "Lisp like" job of nested parentheses, in some regards.
You can probably hack a Lisp mode out of a C or JavaScript mode by trimming out a bunch of functionality and adding a thing or two.

If you remove the syntactic sugar like operators (which includes assignment and curly braces - it all desugars into function calls!), R is basically a lazily evaluated Lisp with S-expressions written as C-style function calls. By "lazily evaluated" here I mean that each argument is actually passed as the underlying S-expr + the environment in which it was created, so it can either be evaluated when its value is needed, or parsed if the function actually wants to look at the syntax tree instead (which covers a lot of the same ground as macros).
R is also far more popular than Lisp. According to TIOBE: https://www.tiobe.com/tiobe-index/ R is #15 (1.06%), Lisp is #26 (0.54%).
So the existence of R is evidence that Lisp's syntax is a serious problem. If a language providing syntactic sugar has a significantly increased adoption rate, that suggests that the bitter pill of your syntax is a problem :-).
Language adoption is a complicated story. Many Lispers won't agree with me anyway. But I do think the poor uptake of Lisp demands an explanation. Its capabilities are strong, so that's not it. Its lack of syntactic sugar is, to me, the obvious primary cause.
R isn't simply syntactic sugar over a Lisp-like runtime, to make it acceptable, but an implementation of an earlier language S, using Lisp techniques under the hood.
There’s this quip that there are also a lot of spaces in a newspaper.
Mostly, the parentheses are read like spaces.
In fact, I recommend to newbies that their Lisp syntax highlighting shades the parentheses close to your background color. You get the best of both worlds: they fade away, but you can use Structural Editing.
This is an observation you only see with people who don't use Lisps. It's a total non-issue. Editing is also easier than with syntax-heavy langs.
> This is an observation you only see with people who don't use Lisps.
Uhm.. yeah. Do you think there might be a very obvious reason for that?
Because they're not used to reading lisp?
I read these articles with Go or Rust code examples and just tune out because I don't use either of those languages so don't know enough to extract the wheat from the chaff, as they say.
Hell, I don't even know scheme enough to read/write code without having to look up the syntax and I've been hacking on a scheme interpreter off and on for years.
Something about different strokes for different folks...
I've written a lot of Lisp in my past. I'm used to reading Lisp. Still hate it. For example, everyone else has figured out that mathematical notation uses infix. Stock Lisp's inability to handle infix notation means most software developers will not consider using it seriously. Obviously not all will agree with me :-).
I will give my answer, noting that not everyone will agree with me :-).
The vast majority of software developers do not put up with Lisp syntax. The vast majority of software developers use a different programming language with a much better syntax, and never consider using Lisp for anything serious. For example, for most developers, a programming language is a laughable nonstarter if it cannot use infix notation for basic arithmetic, as that is the standard notation for mathematics worldwide and is taught to everyone.
As evidence, look at TIOBE: https://www.tiobe.com/tiobe-index/ ... you'll see "Lisp" is #26, lower than Classic Visual Basic, Kotlin, Ada, and SAS. Scheme and Clojure aren't even in the top 50, so adding them wouldn't change much. Lisps will never break the top 10 with their current syntax.
There is a reason for Lisp's syntax: it's homoiconic, enabling powerful macros. Lisp macros are incredibly powerful because you can manipulate programs as data; other macro systems generally don't hold a candle to Lisp's. The vast majority of "simple syntactic sugar" systems for Lisp lose homoiconicity, ruining Lisp's macros. This was even the problem for the M-expressions created by Lisp's original developer. Most of those syntactic improvements also tend NOT to be backwards-compatible.
There is a solution. I developed curly-infix expressions, SRFI-105 https://srfi.schemers.org/srfi-105/ Curly infix enables infix notation WITHOUT losing homoiconicity or requiring a specific meaning for symbols. On top of that I developed sweet-expressions SRFI-110 https://srfi.schemers.org/srfi-110/ . These are backwards-compatible (easing transition) and homoiconic (so macros keep working). There are no doubt other approaches, too.
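A quick taste of curly-infix, per the SRFI-105 spec (supported natively in GNU Guile via the #!curly-infix directive):

    {a + b}          ; reads as (+ a b)
    {x >= 0}         ; reads as (>= x 0)
    {n * {n - 1}}    ; reads as (* n (- n 1))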
In general getting anything accepted and used is hard.
Here's my personal take, though: Almost all programmers abandoned Lisps decades ago, in large part due to its terrible syntax. The few who still use Lisp tend to like its notation (otherwise they wouldn't be using Lisp). Since they like Lisp syntax, they don't see the problem. Cue the "this is fine" cartoon dog in a fire. This means that Lisps will never be seriously considered by most software developers today, due to their user-hostile syntax. Some will learn them, as a fun toy, but not use them seriously.
I think that is a terrible shame. Lisps have a lot going for them. For certain kinds of problems they can be an excellent tool.
I realize most people who seriously use Lisp will disagree with me. That's fine, you're entitled to your own opinion! But as long as Lisp has terrible syntax (according to most developers), it will continue to be relegated to a (relative) backwater. It will continue to be learned, but primarily as an "elegant weapon" from a bygone "civilized age" https://www.explainxkcd.com/wiki/index.php/297:_Lisp_Cycles
Even MIT's 6.001 (Structure and Interpretation of Computer Programs), which once taught powerful concepts using Scheme, has been replaced by a course using Python. Reason: "starting off with python makes an undergraduate’s initial experiences maximally productive in the current environment". Scheme did not help them be maximally productive. See: https://cemerick.com/blog/2009/03/24/why-mit-now-uses-python...
I like Lisp. I don't like Lisp syntax. Since most Lispers don't want to fix Lisp syntax, or don't believe it's a problem, the vast majority of software developers have decided to use a different programming language that has a syntax they prefer to use. The numbers from TIOBE make it clear Lisp isn't a common choice.
You'll no doubt get different views in the comments. :-)
In the back of my mind I'm wondering whether regular parentheses couldn't incorporate the curly infix syntax.
In particular, a Lisp-2 helps here. Suppose we have (x + y). If that is interpreted as arguments + and y passed to the function x, it makes no sense: there is no such function, and + has no variable binding. So we can take the second interpretation: swap the first two positions.
Even in a Lisp-1, we can simply have a rule that if the second position of a function call is a member of a set of math operators (which could be by symbol identity, without caring about the binding), we do the swap.
The users then just can't use + as a variable name in the second position of a function call.
This swap can be done during the macro-expanding code walk, so the interpreter or compiler only sees (+ x y), just like with brace expressions.
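A minimal sketch of that rule (illustrative only; a real code walker would recurse into subforms and respect quoting):

    (define infix-ops '(+ - * / < > <= >= =))

    ;; If the second element of a form is a known operator symbol,
    ;; swap it into the head position: (x + y) => (+ x y).
    (define (maybe-swap form)
      (if (and (pair? form)
               (pair? (cdr form))
               (memq (cadr form) infix-ops))
          (cons (cadr form) (cons (car form) (cddr form)))
          form))

    ;; (maybe-swap '(x + y)) => (+ x y)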
What makes you so sure that the placement in the TIOBE index is due to its syntax? That sounds like a weird argument to me.
By that logic, Lua, Scala and Elixir have even weirder syntax that people despise.