The contextual reasoning angle is interesting. When designing effect systems for AI-native code, a core challenge is encoding "what this function is allowed to do given its calling context" -- classic first-order approaches force you to thread that context explicitly through every call site.<p>Higher-order hereditary Harrop formulas would express this more naturally: "this predicate holds within this local context of assumptions" without the boilerplate. The expert system example resonates beyond medicine. The same problem appears in AI agent orchestration: an agent needs to reason about which operations are permitted given its current session state, permission scope, and the calling chain. That's deeply contextual reasoning, and it's one reason I think λProlog is underexplored in agentic AI tooling.
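A minimal λProlog sketch of that "assumptions scoped to a context" idea, via the standard simply-typed lambda calculus type checker (constructor and predicate names here are the usual Miller-style illustrations, not from the article):

```prolog
kind tm type.
kind ty type.

type app  tm -> tm -> tm.
type abs  (tm -> tm) -> tm.   % object-level binder via a meta-level function
type arr  ty -> ty -> ty.

type typeof  tm -> ty -> o.

typeof (app M N) B :- typeof M (arr A B), typeof N A.
% The hereditary Harrop part: "pi x\ ... => ..." reads as
% "for a fresh x, assuming (typeof x A) locally, show the body has type B".
% No explicit typing context is threaded through any clause.
typeof (abs R) (arr A B) :- pi x\ typeof x A => typeof (R x) B.
```

Querying `typeof (abs x\ x) T` binds `T` to `arr A A`; the hypothetical assumption about `x` is discharged automatically when its scope closes, which is exactly the "no boilerplate context-passing" win.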
I am a huge fan of the work towards putting this in kanren as λKanren:<p><a href="https://www.proquest.com/openview/2a5f2e00e8df7ea3f1fd3e86195aba6a/1?pq-origsite=gscholar&cbl=18750&diss=y" rel="nofollow">https://www.proquest.com/openview/2a5f2e00e8df7ea3f1fd3e8619...</a><p>A few of my own experiments with unification over the binders as variables themselves show there is almost always a post-HM inference sitting there, but likely not one that works in total generality.<p>To me, that spot of trying binding unification in higher-order logic constraint equations is the most challenging and interesting problem, since it is almost always decidable, or decidably undecidable, in specific instances, but provably undecidable in general.<p>So what gives? Where is this boundary, and does it give a clue to bigger gains in higher-order unification? Is a more topological approach sitting just behind the veil for a much wider class of higher-order inference?<p>And what of optimal sharing in the presence of backtracking? Lamping's algorithm, when the unification variable is in the binder, has to have purely binding-attached path contexts, like closures. How does that get shared?<p>Fun to poke at, and maybe there's just enough modern interest in logic programming to get there too…
I did a few days of AoC in 2020 in λProlog (as a non-expert in the language), using the Elpi implementation. It provides a decent source of relatively digestible toy examples: <a href="https://github.com/shonfeder/aoc-2020" rel="nofollow">https://github.com/shonfeder/aoc-2020</a><p>(Caveat that I don't claim to be a λProlog or Elpi expert.)<p>All the examples showcase the typing discipline, which is novel relative to Prolog, and towards day 10, use of the lambda binders, hereditary Harrop formulas, and higher-order niceness shows up.
Did some modest development on Lambda Prolog back in 1999. I still have a vivid memory of feeling my brain expanding :) like it was rewiring how I approach programming and opening up new territory in my head.<p>It might sound weird and crazy, but it quite literally blew my mind at the time!
I think that might be my favorite department/lab website I've ever come across. Really fun. Doesn't at all align with the contemporary design status quo and it shows just how good a rich website can be on a large screen. Big fan.<p><a href="https://www.lix.polytechnique.fr/" rel="nofollow">https://www.lix.polytechnique.fr/</a>
I'm surprised how hard I had to dig for an actual example of syntax[1], so here you go.<p>[1]: <a href="https://www.lix.polytechnique.fr/~dale/lProlog/proghol/extract.html#htoc51" rel="nofollow">https://www.lix.polytechnique.fr/~dale/lProlog/proghol/extra...</a>
There is also an implementation of 99 Bottles of Beer on Rosetta Code: <a href="https://rosettacode.org/wiki/99_bottles_of_beer#Lambda_Prolog" rel="nofollow">https://rosettacode.org/wiki/99_bottles_of_beer#Lambda_Prolo...</a>
I have written stuff in Prolog, but I find this lambda Prolog syntax very difficult to grok.
There are some examples in this tutorial PDF:<p><a href="https://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/felty-tutorial-lprolog97.pdf" rel="nofollow">https://www.lix.polytechnique.fr/Labo/Dale.Miller/lProlog/fe...</a>
Christ... it's incomprehensible... I guess that one's staying in academia :P
So brainfuck x lisp
There is a great overview of λProlog from 1988: <a href="https://repository.upenn.edu/bitstreams/e91f803b-8e75-4f3c-9bf5-10ce15fe8186/download" rel="nofollow">https://repository.upenn.edu/bitstreams/e91f803b-8e75-4f3c-9...</a>
Learning how to implement Prolog in pg's On Lisp was a fun way to spend multiple weeks programming. Doing this again this year should be a lot of fun.
I remember learning it in university. It's a really weird language to reason with, IMO. But really fun. However, I've heard the performance is not that good if you want to make e.g. game AIs with it.
First of all, it helps to actually use a proper compiled Prolog implementation like SWI-Prolog.<p>Second, you really need to understand and fine-tune cuts and other search-optimization primitives.<p>Finally, in what concerns game AIs: they are a mixture of algorithms and heuristics, and a single-paradigm language like Prolog (first-order logic) can't be a tool for all nails.
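A tiny sketch of what "fine-tune cuts" buys you in standard Prolog (illustrative predicates, not from any particular codebase):

```prolog
% Without the cut, max/3 would leave a useless choice point:
% on backtracking the second clause could be retried needlessly.
max(X, Y, X) :- X >= Y, !.
max(X, Y, Y) :- X < Y.

% Pruning search itself: commit to the first solution found.
first_even(Low, High, N) :-
    between(Low, High, N),   % generate candidates on backtracking
    0 is N mod 2,            % test
    !.                       % cut: stop searching after the first hit
```

For example, `first_even(3, 9, N)` enumerates 3, then 4, succeeds with `N = 4`, and the cut discards the remaining candidates instead of keeping the choice point alive.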
The term "AI" has changed in recent years, but if you mean classic game logic such as complex rules and combinatorial opponents, then there's plenty of Prolog game code on GitHub, e.g. for Poker and other card or board games. Prolog is also as natural a choice for adventure puzzles as it gets, with inventory items and complicated conditions to advance the game. In fact, Amzi! Prolog uses adventure game coding as the topic for its classic (1980s) introductory Prolog learning book, Adventure in Prolog ([1]). Based on a cursory look, most code in that book should run just fine on a modern ISO Prolog engine ([2]) in your browser.<p>[1]: <a href="https://www.amzi.com/AdventureInProlog/advtop.php" rel="nofollow">https://www.amzi.com/AdventureInProlog/advtop.php</a><p>[2]: <a href="https://quantumprolog.sgml.net" rel="nofollow">https://quantumprolog.sgml.net</a>
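In that adventure-game spirit, the core mechanics really are just a few dynamic facts and rules (a minimal sketch with made-up names, not code from the Amzi! book):

```prolog
:- dynamic at/2, holding/1.

at(key, cellar).
at(player, hall).

% take(+Item): succeeds only if the item is in the player's room,
% then moves it into the inventory.
take(Item) :-
    at(player, Room),
    at(Item, Room),
    retract(at(Item, Room)),
    assertz(holding(Item)).

% A "complicated condition to advance the game" is just another rule:
can_open(door) :- holding(key).
```

The whole game loop then reduces to reading a command and calling the matching predicate, with backtracking handling "is this action possible here?" for free.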
With λProlog in particular I think it probably finds most of its use in specifying and reasoning about systems/languages/logics, e.g. with Abella. I don't think many people are running it in production as an implementation language.
I also learned Prolog in university.<p>In the Classic AI course we had to implement game AI algorithms (A*, alpha-beta pruning, etc.), with one specific assignment in Prolog. After trying for a while, I got frustrated and asked the teacher if I could do it in Ruby instead. He agreed: he was the kind of person who just couldn't say no; he was too nice for his own good. I still feel bad about it.<p>Rest In Peace, Alexandre.
> It's a really weird language to reason with IMO<p>I know you likely mean regular Prolog, but that's actually fairly easy and intuitive to reason with (code dependent). Lambda Prolog is much, much harder to reason about IMO and there's a certain intractability to it because of just how complex the language is.
λProlog or Prolog? Probably Prolog I guess?
When I downloaded the example programs, they opened up in my music player but didn't play anything
(1987)
I'm curious to see how AI is going to reshape research in programming languages. Statically typed languages with expressive type systems should become even more relevant, for instance.
Why do you think that?
Because the type system gives you correctness properties and fast feedback for the coding agent. It is much faster to type-check the code than to, say, write and run unit tests.<p>One possible disadvantage of static types is that they can make the code more verbose, but agents really don't care; quite the opposite.