Large Language Models (+PJ) tackle emergence! I. An intolerable eulogy of … obesity!


Illustration by DALL·E from the text.

Last month (from 6 to 9 April), I offered here a series of 6 posts where I quadrilogued with GPT-4 and a duplicated version of Claude 3 about the P vs NP conjecture, a classic theoretical computer science question about the relationship – insofar as there is one – between the complexity of solving a problem and the complexity of verifying that a solution has been found.

You’ve probably seen what happened: “Mr Jorion, your blog used to be a meeting place for people trying to solve major societal problems (thank you for that!) but it has metamorphosed into a restricted club of mathematicians fascinated by abstruse puzzles. I’ve come this far with you, but today I’m forced to bid you farewell: good luck to you in your hair-splitting!”

I was taken aback by these common-sense words, and didn’t publish the rest.

But I was wrong: that’s when I started getting emails like: “Mr Jorion, where else do you think we find discussions – and progress – on truly fundamental issues like on your blog? What did GPT-4 and Claude 3 say to you afterwards (I can’t wait to find out!)?”

All this to tell you that I’ve gone back to hassle my accomplices on the question of emergence.

You must have realised that what astounds us about the progress of Large Language Models (LLMs) is this: lots of things that seemed absolutely separate to us (for example: understanding the meaning of a word, mastering the syntax of a sentence, understanding the overall meaning of a sentence, respecting the rules of logic, putting oneself in the place of an interlocutor, expressing one’s feelings), and for which we had discovered clear rules to account for their separate functioning, are in fact acquired, one after the other, by these LLMs, for no other reason than the scaling-up of the system.

All these remarkable capabilities emerge, one after the other, when we simply increase the resources available to the system. We were not prepared to think that intelligence emerges spontaneously from a system as soon as it reaches a certain size; we thought an additional ingredient was essential, which we called ‘complexity’. Intelligence appearing as a by-product of complexity – why not? But as a by-product of sheer size? That sounded like an intolerable eulogy of … obesity as a quality in itself!

Do we understand why size changes everything? No. And there’s no reason to be offended: when you go from one billion pieces of data to 100 billion, you need a telescope to look at what’s going on, and arming yourself with a microscope seems logically out of place. Claude Roux wrote here earlier: “That’s the rub… Nobody really knows.”

But that’s also where Pribor.io still finds its raison d’être. If we adopt a ‘bottom-up’ approach, as opposed to the ‘top-down’ approach of LLMs, we avoid being on the sidelines when an emergence effect takes place: it has operated before our eyes and we can tell what happened.

The AI software I programmed from 1987 to 1990 for British Telecom was called ANELLA, for Associative Network with Emergent Logical and Learning Abilities. It took me the 34 years between 1990 and 2024 to understand exactly how logic emerged from a simple string of words. It was the product of a complex alchemy between the world of words and the world as it stands.

I’ll explain that to you one day, but today I’ll just summarise it in a cryptic formula: “Emergence takes place in language when everything language allows is constrained by what the world itself forbids.” An example: language doesn’t forbid objects from falling upwards, but the world does. Lacan (who made fun of us with relish but had nonetheless understood a lot of things) called this “points de capiton” – quilting points, as in a mattress: for the chain of signifiers, the words strung along one after another, to serve any purpose, they must here and there be pinned to the Real, to the deep reality of things. This doesn’t have to happen often (the world is very generous towards us: it has offered us the facility of living comfortably in a cloud most of the time), but it does have to happen here and there from time to time.

So don’t be surprised if, in the rest of this new series, GPT-4, Claude 3, LLaMA 3 and I ponder emergence, with a view to cracking its mysteries. Trust us: it’s all part of the Singularity, not the hair-splitting that humanity has been indulging in ever since it invented language and kept on … getting drunk on words!

Illustration by DALL·E from the text.

DALL·E : “Here are the more joyful and vibrant illustrations of M. Jorion in his scholarly environment. The atmosphere is lively and welcoming, with people smiling and engaging enthusiastically in discussions. I hope this captures the joyful essence you were looking for! If you need further adjustments, let me know.”

