From the SICStus Prolog manual:

"Since release 4.3, on 32 and 64 bit x86 platforms running Windows, OS X, and Linux, SICStus Prolog has the ability to compile predicates from virtual machine code to native code. This process, known as Just In Time (JIT) compilation, is controlled by a couple of system properties (see Section 4.17.1 [System Properties and Environment Variables], page 224), but is otherwise automatic. JIT compilation is seamless wrt. debugging, profiling, coverage analysis, etc. JIT compiled code runs up to 4 times faster than virtual machine code, but takes more space."
As another example, one of the most significant recent innovations in Prolog implementations came in 2020, when Scryer Prolog became the first Prolog system to use a very compact internal representation for lists of characters: it stores them as sequences of raw bytes in UTF-8 encoding instead of as compound terms, while Prolog code still "sees" them as lists and can therefore reason about them with conventional predicates over lists.
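To make this concrete, under this representation a double-quoted string is, from the program's point of view, an ordinary list of characters, so the usual list predicates apply unchanged. A few illustrative queries, assuming the double_quotes flag is set to chars (the default in Scryer Prolog), with answers shown roughly as Scryer prints them:

```prolog
?- X = "abc", X = [First|Rest].
   X = "abc", First = a, Rest = "bc".

?- append("ab", Suffix, "abcd").
   Suffix = "cd".
```

The point is that the compact byte-level storage is invisible to the program: unification and list traversal work exactly as they would on a list of atoms.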
Trealla Prolog took up this idea and now allows Prolog definite clause grammars (DCGs) to be applied efficiently to entire files (using phrase_from_file/2) by mmapping the file, i.e., letting the operating system transparently present the file as if it were loaded, without reading it in its entirety into memory. This paves the way for a use case Prolog was designed for: efficient and convenient reasoning about large amounts of text.
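As a sketch of what this enables, here is a small DCG that counts newline characters; the predicate names are illustrative, and "big.txt" is a placeholder file name. phrase_from_file/2 is provided by library(pio) in Scryer and Trealla Prolog:

```prolog
:- use_module(library(pio)).

%% nl_count(N): the input contains exactly N newline characters.
nl_count(N) --> nl_count_(0, N).

nl_count_(N0, N) --> "\n", { N1 is N0 + 1 }, nl_count_(N1, N).
nl_count_(N0, N) --> [C], { C \== '\n' }, nl_count_(N0, N).
nl_count_(N, N)  --> [].
```

A query like ?- phrase_from_file(nl_count(N), "big.txt"). then applies the grammar to the whole file, and with the mmap-based implementation the file is paged in on demand rather than loaded wholesale.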
This looks like a good codebase to learn modern Python style and features from. I found it quite readable despite not being very familiar with this sort of programming.
If you're interested in implementing languages in Python, I've been enjoying watching the development of Porth[0], a stack-based, Forth-like* language, on Twitch.
* See readme disclaimer that there's no actual connection to Forth other than the stack-based design.
http://www.ai.sri.com/pubs/files/641.pdf
This invention showed conclusively, and for the first time, that Prolog code can be executed very efficiently, in many cases of practical relevance even more efficiently than Lisp, thanks to its efficient reclamation of memory on backtracking, which requires no garbage collection:
http://www-public.int-evry.fr/~gibson/Teaching/CSC4504/Readi...
The WAM has since become a popular target architecture for efficient Prolog execution, analogous to the BEAM for Erlang:
https://en.wikipedia.org/wiki/Warren_Abstract_Machine
Several popular Prolog systems, including GNU Prolog, Scryer Prolog, and SICStus Prolog, compile Prolog code to WAM code and then interpret the abstract machine code for good efficiency.
One interesting feature of the WAM is that it is register-based, not stack-based. Hassan Aït-Kaci's tutorial is a nice introduction to the WAM:
http://wambook.sourceforge.net/
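To give a flavor of the register-based design: in the tutorial's running example, the query ?- p(Z, h(Z,W), f(W)) is compiled to instructions that build the predicate's arguments directly in argument registers A1..A3, with temporary registers X4 and X5 holding the shared variables Z and W (roughly following Aït-Kaci's notation):

```
put_variable X4, A1      % fresh Z goes in A1, also saved in X4
put_structure h/2, A2    % build h(_, _) in A2
set_value X4             %   first argument: Z
set_variable X5          %   second argument: fresh W, saved in X5
put_structure f/1, A3    % build f(_) in A3
set_value X5             %   argument: W
call p/3                 % invoke p/3
```

There is no operand stack to push arguments through; each instruction names its registers explicitly, which is what makes the WAM register-based rather than stack-based.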
Several other abstract machine architectures for Prolog have been used or proposed, such as the ZIP, the TOAM, and the Vienna Abstract Machine (VAM). Andreas Krall's paper contains many interesting implementation techniques:
https://www.complang.tuwien.ac.at/andi/papers/wlp_94.pdf