bigdict 4 days ago | root | parent | next |

nah, this is about codebases that are not themselves primarily lisp implementations

kazinator 4 days ago | root | parent | next |

This is correct. Greenspun's Tenth Rule is not meant to be interpreted as applying to projects that are consciously creating a Lisp implementation. It's about programs which are not meant to be language implementations at all reinventing, ad hoc, the infrastructure that in Lisps is designed according to established patterns. For instance, badly reinventing something that functions as symbols.
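
A hedged sketch (in Rust, since that's the context of the thread; not taken from any particular codebase) of the kind of ad-hoc machinery the rule has in mind: a hand-rolled symbol table of interned strings compared by numeric id, the thing Lisp gives you for free as symbols.

  use std::collections::HashMap;

  #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
  struct Symbol(u32);

  #[derive(Default)]
  struct Interner {
      ids: HashMap<String, Symbol>,
      names: Vec<String>,
  }

  impl Interner {
      // Return the existing id for `name`, or intern it and hand out a new one.
      fn intern(&mut self, name: &str) -> Symbol {
          if let Some(&sym) = self.ids.get(name) {
              return sym;
          }
          let sym = Symbol(self.names.len() as u32);
          self.ids.insert(name.to_string(), sym);
          self.names.push(name.to_string());
          sym
      }

      fn name(&self, sym: Symbol) -> &str {
          &self.names[sym.0 as usize]
      }
  }

  fn main() {
      let mut symbols = Interner::default();
      let a = symbols.intern("my-fn");
      let b = symbols.intern("my-fn");
      assert_eq!(a, b);                     // interned symbols compare by id
      assert_eq!(symbols.name(a), "my-fn"); // and can be turned back into names
  }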

Nevermark 4 days ago | root | parent |

I conjecture the line is not so easy to draw.

If you are creating a Lisp because you want to create a Lisp, like creating Lisps, or want to show off creating a Lisp, that obviously is not what the Law is about.

Furthermore, if you create a Lisp because you know the Law, know it is inevitable, and want to avoid the caveats and missed quality bars by doing so explicitly, well, then that also is not what the Law is about.

But if you are going about your business, focused on your independent goal, realize you need Lisp-like lists, and then 250 lines of code later realize you have created a solid Lisp unintentionally? Well, congrats on falling into the trap but not getting hurt!

Personally, I have created both Lisp and Forth multiple times for suitable projects where I wanted some flexible runtime scripting. I am not familiar with the standard versions or libraries of either and don’t need them.

Minimal foundation implementations are extremely easy to create, and eliminate any dependencies or sources of mystery.

Anyone know of any other well designed minimal languages?

pjmlp 4 days ago | root | parent |

Except that it has been something like 60 years since any proper Lisp implementation had only plain cons lists.

PhilipRoman 4 days ago | root | parent | prev | next |

Pretty sure it applies to Common Lisp itself too.

arnsholt 4 days ago | root | parent | next |

The corollary to Greenspun’s rule is that any sufficiently complicated Common Lisp program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Prolog.

Guthur 4 days ago | root | parent | prev |

I've now used both professionally. I look forward to hopefully using both in the same project. Though not holding my breath.

klrtaj 4 days ago | root | parent | prev | next |

A great example of the rule is that C++ rediscovers car/cdr at a glacial pace in the template language. In C++26 one can finally get the car of a typename parameter pack with Args...[0].

I've no idea why they don't introduce car/cdr functions and nil for empty parameter packs, and allow storing parameter packs, instead of the current syntax insanity.

gpderetta 4 days ago | root | parent | next |

C++ metaprogramming was done with cons cells already back in '98. The new syntax provides random access instead, which is completely different from linked lists.

otabdeveloper4 4 days ago | root | parent | prev |

Store where?

C++ templates are a lambda calculus, there's no notion of memory cells or state.

jasfas 4 days ago | root | parent |

In a struct! Pseudocode for extracting the types from subclasses and calling a multimethod on them:

  template <typename Result, typename Head, typename ...Tail>
  struct Foo : Visitor {
    Result res;
    UntypedList lst; // This does not exist!
    Tail... tail;    // This is not possible!
    Foo(UntypedList l, Head hd, Tail... tl) : lst(l), tail(tl...) {
      hd->accept(*this);
    }
    void visit(float *f) {
      res = Foo<Result, Tail...>(append(lst, f), tail...).res;
    }
  };
 
  // Base case that calls the multimethod omitted.
There are two issues here: you can't store the parameter pack in Foo(), which the continuation in visit() later needs. And you would have to use tuples and tuple_cat() instead of UntypedList.

Why can the compiler not infer the types in such an UntypedList? Why is there no car/cdr for tuples?

gpderetta 2 days ago | root | parent |

Not sure what UntypedList is or is supposed to do in the above. In fact I'm not sure what the code above is trying to do at all.

If you want a cons-based tuple, boost::tuple will work just fine (or you can implement your own, it is not hard). std::tuple is not (visibly) cons based.

ForOldHack 4 days ago | root | parent | prev | next |

HA!

"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."

HA HA!

dgfitz 4 days ago | root | parent | prev |

What defines "sufficiently complicated" I wonder? Seems like a shit definition.

kazinator 4 days ago | root | parent | next |

That's explicitly encoded right in the rule itself! When the program contains a bug-ridden, ad-hoc simulacrum of half (or more) of Common Lisp, then it has become sufficiently complicated. When it has less than half, it is not yet sufficiently complicated.

ithkuil 4 days ago | root | parent | prev | next |

"for any sufficiently X the Y" just means that if you don't observe Y right now, just increase the magnitude of X and you'll inevitably reach the conditions for Y to happen.

A famous use of that pattern is Arthur C. Clarke's "Any sufficiently advanced technology is indistinguishable from magic." As with OP's quote, this is also vague and imprecise, but the point is that there is a gradient: advancements in technology bring us towards a situation where the average person no longer understands how something works and it may as well be magic (without necessarily defining exactly when that transition happens).

munificent 4 days ago | root | parent | prev | next |

I like that the "sufficiently complicated" part bothered you, but not the "ad hoc", "informally-specified", "bug-ridden", or "slow" parts.

krick 4 days ago | root | parent |

No, I think it's pretty fair. One could argue about these, but except for "slow" these are quality qualifiers rather than quantity. So you either agree it's "bug-ridden" or not (i.e. the number and seriousness of bugs in it is negligible by whatever standards). And I think even "slow" can be discussed in the same manner: the actual speed is quantitative, of course, but in the end one either argues that this speed is "slow" or isn't. So, given some rhetorical skill on your opponent's part, it's at least possible for the statement to be proven false, if they convince you the implementation actually isn't slow or bug-ridden, or at least that there's no Lisp implementation in there at all.

But what is "sufficiently" complicated? Now it's a silly statement that just doesn't pass Popper criterion: even if nobody dares to deny your program is complicated, one can always say it isn't sufficiently complicated, hence the fact it doesn't contain Lisp-interpreter disproves nothing. A man of culture wouldn't use such definitions.

devjab 4 days ago | root | parent | next |

Isn’t that sort of the point? It makes the argument impossible to dispute. If it’s not slow or bug-ridden, then it’s just not sufficiently complicated. Which you’re correct to call out.

I think what makes it work is that Common Lisp very quickly becomes a faster way to do things because of how insanely efficient it is.

orwin 4 days ago | root | parent | prev |

My experience is: once you have enough different FSMs and reducers that you need to abstract over them, the 'sufficiently complicated' criterion is met.

celeritascelery 4 days ago | prev | next |

I tried doing something similar to this once. I ran into an issue where I couldn’t define symbols with a dash in them (DEFINE MY-FN…). This is because Rust will split the tokens on a dash. It was a small thing, but it meant I couldn’t just copy and paste snippets from a real lisp; I had to transform everything to underscores. Is it the same in your implementation?

quasigloam 4 days ago | root | parent | next |

Yes, at the moment I just assume that all atoms are Rust idents because that makes it easier to implement (I can just match against $x:ident), so it doesn't support dashes in atoms. I guess you could match against `$x:ident $(- $y:ident)*` instead? I think that should work; I'd have to change some details in some of the arms, but it seems like it would be possible.
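
A rough sketch (not from the project) of such a matcher, folding `ident (- ident)*` back into a single name at expansion time:

  macro_rules! atom_name {
      ($x:ident $(- $y:ident)*) => {
          concat!(stringify!($x) $(, "-", stringify!($y))*)
      };
  }

  fn main() {
      // Both calls produce "MY-FN": whitespace around `-` is invisible to
      // macro_rules, which is exactly the ambiguity raised downthread.
      assert_eq!(atom_name!(MY-FN), "MY-FN");
      assert_eq!(atom_name!(MY - FN), "MY-FN");
  }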

throwup238 4 days ago | root | parent | next |

Wouldn’t that match “(foo - bar)” as well as “(foo-bar)”? I don’t think you can get around token whitespace in macro_rules

quasigloam 4 days ago | root | parent | next |

Yes it would match both, this would require us to assume that "foo - bar" can only be an atom. It's not a great solution.

celeritascelery 4 days ago | root | parent | prev |

It would, but lisp has prefix operators, so you wouldn’t have to worry about it getting confused.

quasigloam 4 days ago | root | parent |

Although in a Lisp such as Scheme, you could pass around the negation operator in something like (map - '(1 2 3)), so it would be a valid concern that it might clash.

tmtvl 4 days ago | root | parent |

The problem with that is that there are spaces between map, -, and '(1 2 3). The only way to get spaces into a name is by using vertical bars:

  (defvar |do i 10| 1.100)

anothername12 4 days ago | root | parent | prev |

Does Rust have something like a reader macro where you can have arbitrary syntax for a bit?

throwup238 4 days ago | root | parent |

It's almost but not quite arbitrary. It still has to be a valid Rust token stream with matching braces/parens. Since it's whitespace insensitive with respect to tokens and "-" is its own token, "(foo-bar)" and "(foo - bar)" result in the same token stream minus the span information.

You can use proc_macro::Span.line()/column() [1] to track how much whitespace there is between tokens and merge them, but the macro will have to rename them to valid Rust identifiers ("foo-bar" to "foo_bar") and make sure there are no collisions.

[1] https://doc.rust-lang.org/proc_macro/struct.Span.html#method...
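
A rough, untested sketch of that idea, meant to live inside a proc-macro crate (it assumes the Span line/column/start/end accessors from the linked docs, which may need a recent toolchain). It folds an `ident - ident` sequence written with no whitespace into one underscore-joined identifier; collision checking and longer dash chains are left out.

  use proc_macro::{Ident, Span, TokenStream, TokenTree};

  // True if `b` starts exactly where `a` ends (same line, touching columns).
  fn touching(a: Span, b: Span) -> bool {
      a.end().line() == b.start().line() && a.end().column() == b.start().column()
  }

  // Fold `ident - ident` written without whitespace into a single ident.
  fn merge_dashed(input: TokenStream) -> TokenStream {
      let tokens: Vec<TokenTree> = input.into_iter().collect();
      let mut out: Vec<TokenTree> = Vec::new();
      let mut i = 0;
      while i < tokens.len() {
          match (tokens.get(i), tokens.get(i + 1), tokens.get(i + 2)) {
              (Some(TokenTree::Ident(x)), Some(TokenTree::Punct(p)), Some(TokenTree::Ident(y)))
                  if p.as_char() == '-'
                      && touching(x.span(), p.span())
                      && touching(p.span(), y.span()) =>
              {
                  // "foo-bar" becomes the identifier foo_bar (collisions are
                  // the caller's problem, as noted above).
                  out.push(TokenTree::Ident(Ident::new(&format!("{}_{}", x, y), x.span())));
                  i += 3;
              }
              (Some(t), _, _) => {
                  out.push(t.clone());
                  i += 1;
              }
              _ => break,
          }
      }
      out.into_iter().collect()
  }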

zozbot234 4 days ago | root | parent | prev |

> I ran into an issue where I couldn’t define symbols with a dash in them

I'm not seeing any issue? (DEFINE MYᜭFN...) works just fine.

gleenn 4 days ago | prev | next |

I wish there were a well-supported Lisp in Rust, not just the macros. I wonder how much memory safety you would retain or lose being based in Rust. Is it even possible to leverage the borrow checker in any sane way?

dokyun 4 days ago | root | parent | next |

Some Lisp compilers like SBCL are already capable of more extensive compile-time type checking, but it's information the programmer has to supply, and it tends to be part of the optimization stage rather than of normal, incremental development. Lisp is usually defined by its dynamic nature, and run-time type checking is a big part of that. Making the programmer worry up front about how objects are managed would conflict with the freedom and expressiveness one expects from such a system.

It also keeps the compilers themselves simpler in comparison: normal code without any extra declarations is safe by default, and some systems may ignore such declarations entirely because they are always 'safe' (say, if you're on a bytecode VM like CLISP, or a Lisp machine that does hardware type checking). SBCL compiles code quite quickly--and so I've heard, others tend to be even faster; the Rust compiler, on the other hand, is more likely to introduce a young programmer to the concept of thrashing.

I really see them as two mutually incompatible worlds, even though they may not seem to be at first glance. One thing to remember is that Lisp is essentially the flagship "The Right Thing" language, while C is the "Worse is Better" language. Rust is neither; it is something entirely different which I think is overdue for a name, perhaps something that reflects the losing characteristics of both philosophies.

(This isn't to discredit the OP article: it's still a cool hack!)

wild_egg 4 days ago | root | parent | next |

> perhaps something that reflects the losing characteristics of both philosophies.

Oof. With how much love Rust gets here, I didn't expect to see it being called out like that.

How about "The Worse Thing"?

BoingBoomTschak 4 days ago | root | parent | prev |

> Some Lisp compilers like SBCL are already capable of more extensive compile-time type checking, but it's information the programmer has to supply

Which is nice and all, but very much gimped by the glaring holes in CL's typing tooling: you can't create actual new types, only derived types (deftype) and struct/class types. The two consequences of that are that you can't type cons-based lists/trees (arguably THE Lisp data structure) because deftype can't be recursive and you can't create parametric types; parametricity is an internal thing only used for https://www.lispworks.com/documentation/HyperSpec/Body/t_arr... (and not even for hash-tables, which are completely untyped!).

dokyun 3 days ago | root | parent |

> you can't type cons-based lists/trees (arguably THE Lisp data structure) because deftype can't be recursive and you can't create parametric types

  (deftype list-of (type)
    `(cons ,type (or null (list-of ,type))))

  (typep '(1 2 3 4) '(list-of integer))
  => T
Most of the standard types from which derived types can be made are parametric types, which specialize on their arguments in different ways. They work the same way as macros. Once upon a time I wrote a type specifier that allows you to specialize on a lambda expression, using the ordinary 'satisfies' type, which only lets you provide named functions:

  (deftype satisfiesp (function)
    (let (predicate)
      (cond ((symbolp function)
             (setq predicate function))
            ((functionp function)
             (setq predicate (gensym))
             (setf (symbol-function predicate) function))
            ((and (consp function) (eq (car function) 'lambda))
             (setq predicate (gensym))
             (compile predicate function))
            (t (error "~S is neither a lambda expression, function or symbol." function)))
      `(satisfies ,predicate)))
Now you can even type-check on lambda expressions, and quasiquotation can sidestep the issue of capturing variables.

  (defvar foo 'bar)
  (typep 'bar `(satisfiesp (lambda (x) (eq x ',foo))))
  => T
https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node44.html

BoingBoomTschak 3 days ago | root | parent |

The CLHS sez "Recursive expansion of the type specifier returned as the expansion must terminate, including the expansion of type specifiers which are nested within the expansion." though. As expected, here on SBCL, your typep blows the stack.

> Most of the standard types from which derived types can be made are parametric types

I meant "parametric with a type parameter", but yeah.

> satisfies

I was (obviously, I hope) talking about static, not runtime typing.

dokyun 2 days ago | root | parent |

> The CLHS sez "Recursive expansion of the type specifier returned as the expansion must terminate, including the expansion of type specifiers which are nested within the expansion." though. As expected, here on SBCL, your typep blows the stack.

Weird, I tried it in CLISP and ECL and it worked fine. It looks like this issue was considered during standardization: https://www.lispworks.com/documentation/HyperSpec/Issues/iss... If you ask me that's a misfeature and not the Right Thing, but I was able to work around it by using a helper function and the type I wrote earlier, although it does lose something in comparison:

  (defun list-of-p (list type)
    (and (consp list)
         (typep (car list) type)
         (or (null (cdr list))
             (list-of-p (cdr list) type))))

  (deftype list-of (type)
    `(satisfiesp (lambda (x) (list-of-p x ',type))))
I will admit one of the double-edged swords of Common Lisp is that some of these rough edges are ultimately left up to the implementation, so there are a lot of very powerful features that are only `de-facto' standard or come with a lot of portability glue. However, many programs are also not designed with portability in mind. It would be nice if SBCL in particular could be made to adopt this use of types.

> I was (obviously, I hope) talking about static, not runtime typing.

The only difference between static and runtime typing would be the stage at which the types are evaluated, and since type specifiers work the same way as macros, they might be more interesting in that regard: they would have to consider what it means to run at both stages of evaluation.

BoingBoomTschak 2 days ago | root | parent |

> It would be nice if SBCL in particular could be made to adopt this use of types.

I agree, that would be an easy extension (at least from the user-facing API PoV) and would tremendously improve the language!

> The only difference between static and runtime typing would be the stage at which the types are evaluated, and being the same as macros they might be more interesting in that regard since they might have to consider what it means to run at both stages of evaluation.

That is the only difference, but that difference has important consequences for optimization.

krick 4 days ago | prev | next |

Everyone is supposed to be cheering about how "fun" this is, but every time I see something like it I can't help thinking that I hate the fact that it can be implemented in Rust. Rust never truly was a simple language, but I think it started as something way more manageable than what it has become.

zxexz 4 days ago | root | parent | next |

Rust is definitely not a simple language, I agree there.

I am quite likely a bit naïve here, but I'm having trouble understanding why you hate that this is possible. Now, the macro system definitely can generate near infinitely-complex code, but I'm not getting how implementing a sandboxed lisp using macros is a particularly potent example of the language being less manageable than at its genesis.

On another note, the fact that the type system is Turing-complete, like with C++ templates or the Haskell type system (variously dependent on which GHC language extensions you're using...), makes me want to see a lisp implemented that way!

quasigloam 4 days ago | root | parent |

Implementing a lisp in the type system would be fun; that's originally what this project was about, until I got distracted by macros. Awesome projects like fortraith (https://github.com/Ashymad/fortraith) already exist, and even far more useful projects like dimensioned (compile-time dimensional analysis using type-level numbers) are inspiring. These examples, although cool, are probably an even worse sign about Rust for krick than macro_rules being Turing complete.
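
Not from either project, just a hedged illustration of what "type-level numbers" means in practice: Peano naturals encoded as types, with addition carried out entirely by the trait solver at compile time.

  // Peano naturals as types: Zero, Succ<Zero>, Succ<Succ<Zero>>, ...
  struct Zero;
  struct Succ<N>(std::marker::PhantomData<N>);

  // Lower a type-level number back to a runtime value.
  trait Nat {
      const VALUE: u32;
  }

  impl Nat for Zero {
      const VALUE: u32 = 0;
  }

  impl<N: Nat> Nat for Succ<N> {
      const VALUE: u32 = N::VALUE + 1;
  }

  // Type-level addition: `<A as Add<B>>::Sum` is computed during type checking.
  trait Add<B> {
      type Sum;
  }

  impl<B> Add<B> for Zero {
      type Sum = B;
  }

  impl<A: Add<B>, B> Add<B> for Succ<A> {
      type Sum = Succ<A::Sum>;
  }

  fn main() {
      type Two = Succ<Succ<Zero>>;
      type Three = Succ<Two>;
      type Five = <Two as Add<Three>>::Sum;
      assert_eq!(<Five as Nat>::VALUE, 5);
  }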

zxexz 4 days ago | root | parent |

Aha, that's pretty cool! Didn't even scroll more than halfway down the page before my lizard brain saw something resembling Peano numbers.

Thanks for sharing your project, by the way - between this and the other one you linked, I think I know what my leisure time this weekend is going to consist of...

quasigloam 4 days ago | root | parent |

No problem! I had fun making it, for such a silly little project it's given me a surprising amount of joy. If you want to look at another cool project that's tangentially related, have a look at Sectorlisp.

zxexz 4 days ago | root | parent |

Oh, I'm very well acquainted with Sectorlisp - I even have a scrappy local package on nixos to build, boot and launch a qemu repl for it locally! Somewhere around here there's an X200 whose HDD probably still has it in the boot sector.

I think I objectively suck at lisp, but that doesn't preclude me from being a "Greenspun enjoyer" :) I think all of my projects that used Selenium ended up with a haphazardly-pieced-together lisp interpreter as their scripting tool after I realized whatever I was writing was effectively becoming a DSL in a .ini/.yaml/.toml file. These days, I mostly use nix, but my dream is a lispy NixOS (yes, I know of and have used Guix, but I really just want a lisp<>Nix 'native' compiler, and to be able to use it with nixpkgs without hassle).

pkolaczk 4 days ago | root | parent | prev | next |

> but I think it started as something way more manageable than what it has become.

Hard disagree here. The Rust team is continuously making the language easier to work with by removing limitations and making features more orthogonal. Notable examples are non-lexical lifetimes, impl Trait in return position, and async fn in traits. They have also removed features: before 1.0, the language even had GC'd references built in, with special syntax.

rapsey 4 days ago | root | parent | prev | next |

The only real change since 1.0 was async. If you want to live without async that is completely up to you. It is an entirely optional part of the language.

If you want a language guided by the principle of simplicity, Rust was never that, and you have plenty of options elsewhere.

xedrac 4 days ago | root | parent |

Except that many crates only provide an async interface. I actually use async for many things, but the whole colored functions thing is really annoying.

chickdilla 4 days ago | root | parent |

One trick to get out of this is to wrap any call to an asynchronous interface in pollster::block_on, which runs the future to completion right there, as if the call had been synchronous (so no color leaks out).
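
A minimal sketch of that pattern, assuming the pollster crate is a dependency; fetch_value is a made-up stand-in for some async-only crate API.

  async fn fetch_value() -> u32 {
      42
  }

  fn main() {
      // block_on drives the future to completion on the current thread, so
      // the caller stays an ordinary synchronous function.
      let value = pollster::block_on(fetch_value());
      println!("{value}");
  }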

xedrac 4 days ago | root | parent |

Yes, easy enough but annoying. Going the other direction (calling sync code from async) is a bit more problematic though. Now you've got to wrap things in spawn_blocking(), which isn't nearly as benign.
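
The other direction, sketched under the assumption of a tokio runtime (with the "rt-multi-thread" and "macros" features); slow_sync_work is a made-up stand-in for some blocking call.

  fn slow_sync_work() -> u32 {
      std::thread::sleep(std::time::Duration::from_millis(100));
      7
  }

  #[tokio::main]
  async fn main() {
      // spawn_blocking moves the call onto tokio's blocking thread pool so it
      // doesn't stall the async executor; the JoinHandle then has to be
      // awaited and its Result unwrapped, which is part of why it feels less
      // benign than block_on.
      let value = tokio::task::spawn_blocking(slow_sync_work).await.unwrap();
      println!("{value}");
  }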

huijzer 4 days ago | root | parent | prev | next |

Aren’t macros always very powerful but at the same time very tricky? I wouldn’t count the macro side as part of the language's complexity. It’s an add-on which you can but don’t have to use (I’m talking about creating macros here).

meindnoch 4 days ago | prev | next |

But C++ is not a sane language because templates are Turing-complete, right?

danschuller 4 days ago | root | parent | next |

C++ is not a sane language for anyone with a passing familiarity. At least Rust's macros aren't literal text substitutions, a move towards the light.

sham1 4 days ago | root | parent | next |

To be fair, neither are C preprocessor macros. They're instead token substitutions. It's not that much better, but they're not literally text substitution. They're at least more clever than just that.

They're also of course surprisingly powerful, at the expense of being very kludgy.

pjmlp 4 days ago | root | parent | prev |

C++ templates, coupled with compile-time programming and eventually static reflection, make metaprogramming easier than Rust's multiple macro languages, with their additional dependency on the syn crate.